| source | text |
|---|---|
https://en.wikipedia.org/wiki/Elektrolytdatenbank%20Regensburg | Elektrolytdatenbank Regensburg (abbreviated ELDAR) is a compilation of thermodynamic data, bibliography and properties of electrolytes and their solutions.
History
The gathering of data began in 1981. It is a member of DECHEMA and an associate of the Dortmund Data Bank.
Content
Densities, dielectric constants
Thermal expansion and compressibility data
Electrical conductivity data
Solubility data
Activity and excess molar quantity data
External links
official site
Chemical databases |
https://en.wikipedia.org/wiki/Covariance%20operator | In probability theory, for a probability measure P on a Hilbert space H with inner product ⟨·,·⟩, the covariance of P is the bilinear form Cov: H × H → R given by
Cov(x, y) = ∫_H ⟨x, z⟩⟨y, z⟩ dP(z) − ∫_H ⟨x, z⟩ dP(z) ∫_H ⟨y, z⟩ dP(z)
for all x and y in H. The covariance operator C is then defined by
Cov(x, y) = ⟨Cx, y⟩ for all x and y in H
(from the Riesz representation theorem, such an operator exists if Cov is bounded). Since Cov is symmetric in its arguments, the covariance operator is
self-adjoint. When P is a centred Gaussian measure, C is also a nuclear operator. In particular, it is a compact operator of trace class, that is, it has finite trace.
Even more generally, for a probability measure P on a Banach space B, the covariance of P is the bilinear form on the algebraic dual B#, defined by
Cov(x, y) = ∫_B x(z) y(z) dP(z) − ∫_B x(z) dP(z) ∫_B y(z) dP(z)
where x(z) is now the value of the linear functional x on the element z.
Quite similarly, the covariance function of a function-valued random element z (in special cases called a random process or a random field) is
C(x, y) = E[z(x) z(y)] − E[z(x)] E[z(y)]
where z(x) is now the value of the function z at the point x, i.e., the value of the linear functional evaluated at z.
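As a concrete finite-dimensional illustration of these definitions (a sketch of my own, not part of the article; the helper name empirical_covariance_operator is hypothetical), the covariance operator of an empirical measure on R^d can be estimated from samples:

```python
import numpy as np

def empirical_covariance_operator(samples: np.ndarray) -> np.ndarray:
    """Estimate the covariance operator C of the empirical measure of `samples`.

    `samples` has shape (n, d); each row is a draw z_i from P on H = R^d.
    The result is the d x d matrix with Cov(x, y) = x^T C y.
    """
    mean = samples.mean(axis=0)
    centred = samples - mean
    return centred.T @ centred / len(samples)

rng = np.random.default_rng(0)
z = rng.normal(size=(10_000, 3))   # draws from a centred Gaussian measure on R^3

C = empirical_covariance_operator(z)
x = np.array([1.0, 0.0, 0.0])
y = np.array([0.0, 1.0, 0.0])
cov_xy = x @ C @ y                 # Cov(x, y) = <Cx, y>, close to 0 for this P
print(C, cov_xy)
```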
See also
Bilinear forms
Covariance and correlation
Probability theory
Hilbert spaces |
https://en.wikipedia.org/wiki/Bundle%20gerbe | In mathematics, a bundle gerbe is a geometrical model of certain 1-gerbes with connection, or equivalently of a 2-class in Deligne cohomology.
Topology
U(1)-principal bundles over a space M (see circle bundle) are geometrical realizations of 1-classes in Deligne cohomology which consist of 1-form connections and 2-form curvatures. The topology of such a bundle is classified by its Chern class, which is an element of H²(M, ℤ), the second integral cohomology of M.
Gerbes, or more precisely 1-gerbes, are abstract descriptions of Deligne 2-classes, which each define an element of H³(M, ℤ), the third integral cohomology of M.
As a cohomology class in Deligne cohomology
Recall for a smooth manifold the p-th Deligne cohomology groups are defined by the hypercohomology of the complex called the weight q Deligne complex, where is the sheaf of germs of smooth differential k-forms tensored with . So, we write for the Deligne-cohomology groups of weight . In the case the Deligne complex is then
We can understand the Deligne cohomology groups by looking at the Čech resolution giving a double complex. There is also an associated short exact sequence
where are the closed germs of complex-valued 2-forms on M and is the subspace of such forms where period integrals are integral. This can be used to show are the isomorphism classes of bundle gerbes on a smooth manifold M, or equivalently, the isomorphism classes of -bundles on M.
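A hedged summary of the classification pattern described above, using the standard conventions (Chern class in degree two for circle bundles, Dixmier–Douady class in degree three for bundle gerbes); this is a sketch, not a quotation from the article:

```latex
% Sketch of the classification pattern (standard conventions, not verbatim from the article).
\[
  \{\text{$U(1)$-bundles on } M\}/{\cong}
    \;\xrightarrow{\;c_1\;}\; H^2(M,\mathbb{Z}),
  \qquad
  \{\text{bundle gerbes on } M\}/{\cong_{\text{stable}}}
    \;\xrightarrow{\;\mathrm{DD}\;}\; H^3(M,\mathbb{Z}).
\]
```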
History
Historically the most popular construction of a gerbe is a category-theoretic model featured in Giraud's theory of gerbes, which are roughly sheaves of groupoids over M.
In 1994 Murray introduced bundle gerbes, which are geometric realizations of 1-gerbes.
For many purposes these are more suitable for calculations than Giraud's realization, because their construction is entirely within the framework of classical geometry. In fact, as their name suggests, they are fiber bundles.
This notion was extended to higher gerbes the following year.
Relationship wi |
https://en.wikipedia.org/wiki/Penrig | The penrig is a unit of tumescence, specifically for the tumescence of the human penis. It is defined as the tumescence that will raise 100 grams of penile tissue through one millimetre. The word is formed from PENis RIGidity. |
https://en.wikipedia.org/wiki/Exotoxin | An exotoxin is a toxin secreted by bacteria. An exotoxin can cause damage to the host by destroying cells or disrupting normal cellular metabolism. They are highly potent and can cause major damage to the host. Exotoxins may be secreted, or, similar to endotoxins, may be released during lysis of the cell. Gram-negative pathogens may secrete outer membrane vesicles containing lipopolysaccharide endotoxin and some virulence proteins in the bounding membrane along with some other toxins as intra-vesicular contents, thus adding a previously unforeseen dimension to the well-known eukaryote process of membrane vesicle trafficking, which is quite active at the host–pathogen interface.
They may exert their effect locally or produce systemic effects. Well-known exotoxins include: botulinum toxin produced by Clostridium botulinum; Corynebacterium diphtheriae toxin, produced during life-threatening symptoms of diphtheria; tetanospasmin produced by Clostridium tetani. The toxic properties of most exotoxins can be inactivated by heat or chemical treatment to produce a toxoid. These retain their antigenic specificity and can be used to produce antitoxins and, in the case of diphtheria and tetanus toxoids, are used as vaccines.
Exotoxins are susceptible to antibodies produced by the immune system, but some exotoxins are so toxic that they may be fatal to the host before the immune system has a chance to mount defenses against them. In such cases, antitoxin, anti-serum containing antibodies, can sometimes be injected to provide passive immunity.
Types
Many exotoxins have been categorized. This classification, while fairly exhaustive, is not the only system used. Other systems for classifying or identifying toxins include:
By organism generating the toxin
By organism susceptible to the toxin
By secretion system used to release the toxin (for example, toxic effectors of type VI secretion system)
By tissue target type susceptible to the toxin (neurotoxins affect the nervous |
https://en.wikipedia.org/wiki/Oosporein | Oosporein is a toxic, bronze-colored dibenzoquinone with the molecular formula C14H10O8. Oosporein was first extracted from various molds and has antibiotic, antiviral, cytotoxic, antifungal, and insecticidal properties. |
https://en.wikipedia.org/wiki/Thioinosinic%20acid | Thioinosinic acid (or thioinosine monophosphate, TIMP) is an intermediate metabolite of azathioprine, an immunosuppressive drug. |
https://en.wikipedia.org/wiki/Branko%20Gr%C3%BCnbaum | Branko Grünbaum (; 2 October 1929 – 14 September 2018) was a Croatian-born mathematician of Jewish descent and a professor emeritus at the University of Washington in Seattle. He received his Ph.D. in 1957 from Hebrew University of Jerusalem in Israel.
Life
Grünbaum was born in Osijek, then part of the Kingdom of Yugoslavia, on 2 October 1929. His father was Jewish and his mother was Catholic, so during World War II the family survived the Holocaust by living at his Catholic grandmother's home. After the war, as a high school student, he met Zdenka Bienenstock, a Jew who had lived through the war hidden in a convent while the rest of her family were killed. Grünbaum became a student at the University of Zagreb, but grew disenchanted with the communist ideology of the Socialist Federal Republic of Yugoslavia, applied for emigration to Israel, and traveled with his family and Zdenka to Haifa in 1949.
In Israel, Grünbaum found a job in Tel Aviv, but in 1950 returned to the study of mathematics, at the Hebrew University of Jerusalem. He earned a master's degree in 1954 and in the same year married Zdenka, who continued as a master's student in chemistry. He served a tour of duty as an operations researcher in the Israeli Air Force beginning in 1955, and he and Zdenka had the first of their two sons in 1956. He completed his Ph.D. in 1957; his dissertation concerned convex geometry and was supervised by Aryeh Dvoretzky.
After finishing his military service in 1958, Grünbaum and his family came to the US so that Grünbaum could become a postdoctoral researcher at the Institute for Advanced Study. He then became a visiting researcher at the University of Washington in 1960. He agreed to return to Israel as a lecturer at the Hebrew University, but his plans were disrupted by the Israeli authorities determining that he was not a Jew (because his mother was not Jewish) and annulling his marriage; he and Zdenka remarried in Seattle before their return.
Grünbaum remained a |
https://en.wikipedia.org/wiki/Kinetic%20theory%20of%20gases | The kinetic theory of gases is a simple, historically significant classical model of the thermodynamic behavior of gases, with which many principal concepts of thermodynamics were established. The model describes a gas as a large number of identical submicroscopic particles (atoms or molecules), all of which are in constant, rapid, random motion. Their size is assumed to be much smaller than the average distance between the particles. The particles undergo random elastic collisions between themselves and with the enclosing walls of the container. The basic version of the model describes the ideal gas, and considers no other interactions between the particles.
The kinetic theory of gases explains the macroscopic properties of gases, such as volume, pressure, and temperature, as well as transport properties such as viscosity, thermal conductivity and mass diffusivity. Due to the time reversibility of microscopic dynamics (microscopic reversibility), the kinetic theory is also connected to the principle of detailed balance, in terms of the fluctuation-dissipation theorem (for Brownian motion) and the Onsager reciprocal relations.
Historically, the kinetic theory of gases was the first explicit exercise of the ideas of statistical mechanics.
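A small illustrative calculation, using the standard kinetic-theory expression for the root-mean-square molecular speed, v_rms = sqrt(3 k_B T / m); the example gas and temperature below are my own choice, not values from this excerpt:

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
amu = 1.66053907e-27    # atomic mass unit, kg

def v_rms(temperature_K: float, molar_mass_amu: float) -> float:
    """Root-mean-square molecular speed from the kinetic model: sqrt(3 k_B T / m)."""
    m = molar_mass_amu * amu
    return math.sqrt(3 * k_B * temperature_K / m)

# Nitrogen (N2, about 28 amu) at room temperature: roughly 500 m/s.
print(f"{v_rms(300.0, 28.0):.0f} m/s")
```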
History
In about 50 BCE, the Roman philosopher Lucretius proposed that apparently static macroscopic bodies were composed on a small scale of rapidly moving atoms all bouncing off each other. This Epicurean atomistic point of view was rarely considered in the subsequent centuries, when Aristotelian ideas were dominant.
In 1738 Daniel Bernoulli published Hydrodynamica, which laid the basis for the kinetic theory of gases. In this work, Bernoulli posited the argument that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the pressure of the gas, and that their average kinetic energy determines the temperature of the gas. The theory was not immediately accepted, in part be |
https://en.wikipedia.org/wiki/Project%20Phoenix%20%28SETI%29 | Project Phoenix was a SETI project to search for extraterrestrial intelligence by analyzing patterns in radio signals. It was run by the independently funded SETI Institute of Mountain View, California, U.S.
Project Phoenix started work in February 1995 with the Parkes radio telescope located in New South Wales, Australia, the largest telescope in the Southern Hemisphere.
Between September 1996 and April 1998, the Project used the National Radio Astronomy Observatory, Green Bank in Green Bank, West Virginia, U.S.
Rather than attempting to scan the whole sky for messages, the Project concentrated on nearby systems that are similar to our own. Project Phoenix's targets comprised about 800 stars within a range of about 200 light-years.
The Project searched for radio signals as narrow as 1 Hz between 1,000 and 3,000 MHz: a broad bandwidth compared with most SETI searches.
In March 2004 the Project announced that after checking the 800 stars on its list, it had failed to find any evidence of extraterrestrial signals. Project leader Peter Backus remarked that they had been forced to conclude that "we live in a quiet neighborhood".
See also
HabCat |
https://en.wikipedia.org/wiki/1%3A35%20scale | 1:35 scale is the most popular scale for model military vehicles, with an extensive lineup of models and aftermarket parts available from a wide variety of manufacturers.
The roots of 1:35 as a military modelling scale lie in early motorized plastic tank kits. To accommodate electric motors and gearboxes, these models needed to be made in a larger scale. There were many companies making such tanks, but it was Tamiya's example that made 1:35 a de facto standard.
Company chairman Shunsaku Tamiya explains the origins of the scale in his book Master Modeler:
After the success of the Panther, I thought it would be a good idea for us to produce other tanks from different countries in the same scale. I measured the Panther and it turned out to be about 1/35 of the size of the original. This size had been chosen simply because it would accommodate a couple of B-type batteries. Tamiya's 1/35 series tanks eventually got to be known around the world, but this is the slightly haphazard origin of their rather awkward scale.
Early kits in the scale, built around bulky motorization components, often sacrificed scale appearance and detail, but their large size and potential for intricate superdetailing appealed to hobbyists.
Over the years, kits have become more and more detailed and accurate, and nowadays there is a whole industry in 1:35 dedicated to offering aftermarket detail parts for kits. After a new kit is released, companies like Aber and Eduard usually make detail sets available for it, allowing modellers to replace kit parts with more accurate photoetched alternatives.
In terms of model range, 1:35 is typically limited to military land vehicles and figures. Some helicopter kits also exist in the scale, whereas large airplane kits are more commonly done in 1:32 scale. In recent years, there have been some aeroplane releases in 1:35 as well, typically of vehicles operating in close contact with ground forces, such as the Fieseler Storch liaison aircraft or the Horsa |
https://en.wikipedia.org/wiki/Fixstars%20Solutions | Fixstars Solutions, Inc. is a software and services company specializing in multi-core processors, particularly in Nvidia's GPU and CUDA environment, IBM Power7, and Cell. They also specialize in solid-state drives and currently manufacture the world's largest SATA drives.
During the early part of 2010, Fixstars developed a strong relationship with Nvidia and focused its Linux distribution on GPU computing. Yellow Dog Enterprise Linux for CUDA is the first enterprise Linux OS optimized for GPU computing. It offers end users, developers and integrators a faster, more reliable, and less complex GPU computing experience.
Terra Soft acquisition
On November 11, 2008, Japanese company Fixstars announced that it had acquired essentially all of Terra Soft's assets. Terra Soft's former founder and CEO Kai Staats was appointed as COO of Fixstars's new American subsidiary, Fixstars Solutions, which is based in Irvine, California. Fixstars Solutions retained Terra Soft's product line, staff and regional offices in Loveland, Colorado.
Terra Soft provided software and services for the PowerPC/Power ISA and Linux OS platform. The former Terra Soft Solutions produced Yellow Dog Linux (YDL) and Yellow Dog Enterprise Linux, which included cluster construction tools. Customers included Argonne, Sandia, Lawrence Livermore, and Los Alamos National Labs; several Department of Defense contractors including Boeing, Lockheed Martin, and SAIC; the U.S. Air Force, Navy, Army, and NASA; and many of the top universities around the world including the California Institute of Technology, MIT, and Stanford University.
As an Apple value-added reseller and IBM Business Partner, Terra Soft Solutions provided turnkey and build-to-order desktop workstations, servers, and High Performance Computing clusters. Terra Soft made their Yellow Dog Linux distribution solely for PowerPC/Power ISA, optimizing the distributions for AltiVec and the Cell.
Terra Soft was the first to support a variety of Apple compute |
https://en.wikipedia.org/wiki/Guadeloupe%20parakeet | The Guadeloupe parakeet (Psittacara labati) is a hypothetical species of parrot that would have been endemic to Guadeloupe.
Description
Jean-Baptiste Labat described a population of small parrots living on Guadeloupe:
Taxonomy
They were later named Conurus labati, and are now called the Guadeloupe parakeet. It has been postulated to be a separate species based on little evidence. There are no specimens or remains of the extinct parrots. Their taxonomy may never be fully elucidated, and so their postulated status as a separate species is hypothetical. It is presumed to have gone extinct in the late 18th century, if it did indeed exist. |
https://en.wikipedia.org/wiki/Taleb%20distribution | In economics and finance, a Taleb distribution is the statistical profile of an investment which normally provides a payoff of small positive returns, while carrying a small but significant risk of catastrophic losses. The term was coined by journalist Martin Wolf and economist John Kay to describe investments with a "high probability of a modest gain and a low probability of huge losses in any period."
The concept is named after Nassim Nicholas Taleb, based on ideas outlined in his book Fooled by Randomness.
According to Taleb in Silent Risk, the term should be called "payoff" to reflect the importance of the payoff function of the underlying probability distribution, rather than the distribution itself. The term is meant to refer to an investment returns profile in which there is a high probability of a small gain, and a small probability of a very large loss, which more than outweighs the gains. In these situations the expected value is very much less than zero, but this fact is camouflaged by the appearance of low risk and steady returns. It is a combination of kurtosis risk and skewness risk: overall returns are dominated by extreme events (kurtosis), which are to the downside (skew). Distributions of this kind have been studied in economic time series related to business cycles.
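A minimal Monte Carlo sketch of such a payoff profile (the probabilities and payoffs below are illustrative values of my own choosing, not figures from the cited sources):

```python
import random

def taleb_like_payoff(p_loss: float = 0.01, gain: float = 1.0, loss: float = -150.0) -> float:
    """One period: a small gain with high probability, a large loss with small probability."""
    return loss if random.random() < p_loss else gain

random.seed(42)
returns = [taleb_like_payoff() for _ in range(100_000)]
mean = sum(returns) / len(returns)
share_positive = sum(r > 0 for r in returns) / len(returns)

# Most periods look profitable, yet the expected value is negative,
# because the rare losses dominate the many small gains.
print(f"fraction of winning periods: {share_positive:.3f}, mean return: {mean:.3f}")
```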
A more detailed and formal discussion of bets on small-probability events appears in the academic essay by Taleb called "Why Did the Crisis of 2008 Happen?" and in the 2004 paper in the Journal of Behavioral Finance called "Why Do We Prefer Asymmetric Payoffs?", in which he writes "agents risking other people’s capital would have the incentive to camouflage the properties by showing a steady income. Intuitively, hedge funds are paid on an annual basis while disasters happen every four or five years, for example. The fund manager does not repay his incentive fee."
Criticism of trading strategies
Pursuing a trading strategy with a Taleb distribution yields a high probability of steady |
https://en.wikipedia.org/wiki/Epigenetic%20regulation%20of%20neurogenesis | Epigenetic regulation of neurogenesis is the role that epigenetics (heritable characteristics that do not involve changes in DNA sequence) plays in the regulation of neurogenesis (the production of neurons from neural stem cells).
Epigenetics is the study of heritable changes in gene expression which do not result from modifications to the sequence of DNA. Neurogenesis is the mechanism for neuron proliferation and differentiation. It entails many different complex processes which are all time and order dependent.
Processes such as neuron proliferation, fate specification, differentiation, maturation, and functional integration of newborn cells into existing neuronal networks are all interconnected. In the past decade many epigenetic regulatory mechanisms have been shown to play a large role in the timing and determination of neural stem cell lineages.
Mechanisms
Three important methods of epigenetic regulation include histone modification, DNA methylation and demethylation, and microRNA (miRNA) expression. Histones keep the DNA of the eukaryotic cell tightly packaged through charge interactions between the positive charge on the histone tail and the negative charge of the DNA, as well as between histone tails of nearby nucleosomes. While there are many different types of histone modifications, in neural epigenetics there are two primary mechanisms which have been explored: histone methylation and histone acetylation.
In the former, methyl groups are either added to or removed from the histone, altering its structure, exposing chromatin, and leading to gene activation or deactivation. In the latter, histone acetylation causes the histone to hold the DNA more loosely, allowing for more gene activation. DNA methylation, in which methyl groups are added to cytosine or adenosine residues on the DNA, is a more lasting method of gene inactivation than histone modification, though it is still reversible in some cases. MicroRNAs are a small form of non-coding RNA (ncRNA) wh |
https://en.wikipedia.org/wiki/Transistor%20count | The transistor count is the number of transistors in an electronic device (typically on a single substrate or "chip"). It is the most common measure of integrated circuit complexity (although the majority of transistors in modern microprocessors are contained in the cache memories, which consist mostly of the same memory cell circuits replicated many times). The rate at which MOS transistor counts have increased generally follows Moore's law, which observed that the transistor count doubles approximately every two years. However, being directly proportional to the area of a chip, transistor count does not represent how advanced the corresponding manufacturing technology is: a better indication of this is the transistor density (the ratio of a chip's transistor count to its area).
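The doubling rule and the density measure mentioned above lend themselves to a small back-of-the-envelope calculation; the sketch below uses hypothetical chip figures, not data from this article:

```python
def moores_law_projection(count_now: float, years: float, doubling_period: float = 2.0) -> float:
    """Project a transistor count assuming doubling every `doubling_period` years."""
    return count_now * 2 ** (years / doubling_period)

def transistor_density(count: float, die_area_mm2: float) -> float:
    """Transistors per mm^2 -- the density figure the article contrasts with raw count."""
    return count / die_area_mm2

# Hypothetical chip: 10 billion transistors on a 100 mm^2 die.
print(f"{moores_law_projection(10e9, 6):.2e} transistors after 6 years")
print(f"{transistor_density(10e9, 100.0):.2e} transistors per mm^2")
```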
The highest transistor count in flash memory is Micron's 2-terabyte (3D-stacked), 16-die, 232-layer V-NAND flash memory chip, with 5.3 trillion floating-gate MOSFETs (3 bits per transistor).
The highest transistor count in a single chip processor is that of the deep learning processor Wafer Scale Engine 2 by Cerebras. It has 2.6 trillion MOSFETs in 84 exposed fields (dies) on a wafer, manufactured using TSMC's 7 nm FinFET process.
As of 2023, the GPU with the highest transistor count is AMD's MI300X, built on TSMC's N5 process and totalling 153 billion MOSFETs.
The highest transistor count in a consumer microprocessor is 134 billion transistors, in Apple's ARM-based dual-die M2 Ultra system on a chip, which is fabricated using TSMC's 5 nm semiconductor manufacturing process.
In terms of computer systems that consist of numerous integrated circuits, the supercomputer with the highest transistor count was the Chinese-designed Sunway TaihuLight, which has for all CPUs/nodes combined "about 400 trillion transistors in the processing part of the hardware" and "the DRAM includes about 12 quadrillion transistors, and that's about 97 percent of all the transistors." To compare, the smallest comp |
https://en.wikipedia.org/wiki/Winny | Winny (also known as WinNY) is a Japanese peer-to-peer (P2P) file-sharing program developed in 2002 by Isamu Kaneko, a research assistant at the University of Tokyo. Like Freenet, a user must add an encrypted node list in order to connect to other nodes on the network. Users choose three cluster words which symbolize their interests, and then Winny connects to other nodes which share these cluster words, downloading and storing encrypted data from the caches of these neighbors in a distributed data store. If users want a particular file, they set up triggers (keywords), and Winny will download files marked by these triggers. The encryption was meant to provide anonymity, but Winny also included bulletin boards where users would announce uploads, and the IP address of posters could be discovered through these boards. While Freenet was implemented in Java, Winny was implemented as a Windows C++ application.
The software takes its name from WinMX, where the M and the X are each advanced one letter in the Latin alphabet, to N and Y. Netagent published a survey in June 2018 suggesting that Winny was still the most popular p2p network in Japan ahead of Perfect Dark (P2P) and Share (P2P) with approximately 45,000 nodes connecting each day over Golden Week. The number of nodes on Winny appears to be holding steady compared with 2015.
Kaneko first announced Winny on the Download Software board of the 2channel (2ch for short) Japanese bulletin board site. Since 2channel users often refer to anonymous users by their post numbers, Kaneko came to be known as "Mr. 47" ("47-Shi", or 47氏 in Japanese), or just "47".
After Winny's development stopped, a new peer-to-peer application, Share, was developed to be a successor.
Antinny
Since August 2003, several worms called "Antinny" have spread on the Winny network.
Some versions of Antinny work as follows:
Upload files from the host computer onto the Winny network.
Upload screenshots onto an image board.
Denial-of-service atta |
https://en.wikipedia.org/wiki/Aphlebia | Aphlebiae are the imperfect or irregular leaf endings commonly found on ferns and fossils of ferns from the Carboniferous Period, but seem to have disappeared by the beginning of the Mesozoic. According to the United States Geological Survey in 1983, “The discovery in recent years of Aplebiæ attached to the rachis of many species of Pecopteris and Sphenopteris, such as P. dentata, P. Biotii, P. abbrebiata, and Sphenopteris cremate strengthens the view now generally entertained, that most of the species of Aphlebia are stipal abortive pinnæ growing from the bases of primary or secondary rachises” (101). The word itself is derived from the Greek "phleb-", meaning vein, and "a-", meaning without. |
https://en.wikipedia.org/wiki/%CE%91-Isomethyl%20ionone | α-Isomethyl ionone, also known as α-cetone, is a synthetically made and naturally occurring organic compound found in brewer's yeasts, the species known as Saccharomyces cerevisiae. The compound is an isomer of methyl ionone. Alpha-isomethyl ionone can be a colorless or pale-straw-coloured liquid. Its primary scent is flowery and its secondary scent is violet. It may also have a woody or orris-like scent, and it is often used in the flavouring and cosmetic industries, for example in aftershave lotions, bath products, hair care products, moisturizers, perfumes, shampoos and skin care products. It is also an ingredient used in Chanel No. 5, and other branded products such as Fidji by Guy Laroche. Perfume fragrances in which α-isomethyl ionone is used include, for example, amber, chypre, violet, mimosa, reseda, iris, orris, cyclamen, chypre, berries, woody notes, ylang-ylang, leather, orange, nut, pistachio, muscatel, and tobacco.
Properties
α-Isomethyl ionone would be classified as a norsesquiterpenoid, having 14 carbon atoms (1 less than the 15 of three consecutive isoprene units). It is an extremely weak base, the calculated pKa values within the molecule being 19.7 (strongest acidic) and -4.8 (strongest basic). The percentage of α-isomethyl ionone used in perfumes ranges from approximately 0.1% to 11.9%, with an average of 1.1%. For example, it is usually used in conjunction with hydroxycitronellal, woody notes, copaiba, N-methyl ionone, ionone, or Vetiver.
Synthesis
The synthesis of α-isomethyl ionone involves a cross-aldol condensation of citral with methyl ethyl ketone. A high temperature and a strong alkali are used. The ratio between the n-form and iso-form is controlled in order to obtain methyl pseudo-ionone and allow ring formation to occur. The iso-form is then synthesized. |
https://en.wikipedia.org/wiki/List%20of%20boiling%20and%20freezing%20information%20of%20solvents |
See also
Freezing-point depression
Boiling-point elevation |
https://en.wikipedia.org/wiki/Golodirsen | Golodirsen, sold under the brand name Vyondys 53, is a medication used for the treatment of Duchenne muscular dystrophy (DMD). It is an antisense oligonucleotide drug of phosphorodiamidate morpholino oligomer (PMO) chemistry.
The most common side effects include headache, fever, fall, cough, vomiting, abdominal pain, cold symptoms (nasopharyngitis) and nausea.
Medical uses
Golodirsen is indicated for the treatment of Duchenne muscular dystrophy (DMD) in people who have a confirmed mutation of the dystrophin gene that is amenable to exon 53 skipping.
Mechanism of action
Golodirsen has been provisionally approved for the approximately 8% of all DMD patients whose mutations are amenable to exon 53 skipping. It works by inducing exon skipping in the dystrophin gene and thereby increasing the amount of dystrophin protein available to muscle fibers.
Adverse effects
The most common side effects include headache, fever, fall, cough, vomiting, abdominal pain, cold symptoms (nasopharyngitis) and nausea. In animal studies, no significant changes were seen in the male reproductive system of monkeys and mice following weekly subcutaneous administration. According to the reports obtained from the clinical trials, pain at the site of intravenous administration, back pain, oropharyngeal pain, sprain in ligaments, diarrhea, dizziness, contusion, flu, ear infection, rhinitis, skin abrasion, tachycardia, and constipation occurred at an elevated frequency in the treatment group, as compared to their placebo counterparts. Hypersensitivity reactions, including rash, fever, itching, hives, skin irritation (dermatitis) and skin peeling (exfoliation), have occurred in people who were treated with golodirsen.
Renal toxicity was observed in animals that received golodirsen. Although renal toxicity was not observed in the clinical studies with golodirsen, potentially fatal glomerulonephritis has been observed after administration of some antisense oligonucleotides. Renal function should be monitored in those ta |
https://en.wikipedia.org/wiki/Anna%20Parker%20Fessenden | Anna Parker Fessenden (April 8, 1896 – May 3, 1972) was an American botanist and mathematics educator.
Early life and education
Anna Parker Fessenden was born in Thomaston, Maine, and raised in Mattapan, Massachusetts, the middle of three daughters of William S. Fessenden and Alida Mary Mehan Fessenden. Her mother was assistant principal of Sandwich High School.
Fessenden graduated from Girls' Latin School in 1914, and graduated from Smith College in 1918. As a college student, she was active in the Smith College Unitarian Club, and she edited and wrote for the Smith College Monthly. She earned a master's degree from the University of Minnesota in 1920. Her master's thesis, under advisor Josephine Tilden, was titled "Observations on Two Rare Australian Algae, Myriocladia Sciurus, Harvey and Bactrophora Irregularis, N. SP."
Career
Fessenden taught botany at Vassar College, Wellesley College and at the University of Minnesota. She and Josephine Tilden co-authored an article on brown algae from Australia. She taught mathematics at Needham High School in Massachusetts for 36 years, and was a director of math programs for the Needham school district. She retired from teaching in 1962.
Fessenden was an active member of several clubs including the Audubon Society, and a trustee of the Thomaston Historical Society.
Personal life
Fessenden died in 1972, aged 76 years, in Camden, Maine. Her grave is with her parents' graves, in Sandwich, Massachusetts. |
https://en.wikipedia.org/wiki/Cyberun | Cyberun is a ZX Spectrum video game by Ultimate Play the Game and published by U.S. Gold in 1986. Although not part of the Jetman series, it has similarities to Jetpac in that the player must construct their spaceship from parts, then seek out resources and power-ups.
Gameplay
The player controls a spaceship trapped on a planet inhabited by hostile aliens. The goal is to upgrade the spaceship with parts scattered around the planet and mine a valuable element called "Cybernite". The atmosphere above ground is populated by flying aliens and clouds that drip acid, damaging the ship's shields. The ship requires fuel to fly, and once exhausted will bounce along the ground of the planet unable to climb. A similar enemy ship is also on the planet attempting to mine the Cybernite before the player. Fuel can be replenished by tankers on the planet surface, but damaged shields cannot be repaired. The player must venture into caverns below the surface in order to mine the Cybernite, which can only be done once the ship has been upgraded to include a mining laser. Once sufficient Cybernite has been collected, the player can escape to the next planet in the Zebarema system.
Reception
The game was well received by critics, with Crash awarding it a 90% Crash Smash, and Your Spectrum giving it 8/10, describing the game as "a classic pick up the pieces and shoot em up with brilliant graphics". |
https://en.wikipedia.org/wiki/Air%20Force%20Two | Air Force Two is the air traffic control designated call sign held by any United States Air Force aircraft carrying the vice president of the United States, but not the president. The term is often associated with the Boeing C-32, a modified 757 which is most commonly used as the vice president's transport. Other 89th Airlift Wing aircraft, such as the Boeing C-40 Clipper, C-20B, C-37A, and C-37B, have also served in this role. The VC-25A, the aircraft most often used by the president as Air Force One, has also been used by the vice president as Air Force Two.
History
Richard Nixon was one of the first senior officials in American government to travel internationally via jet aircraft on official business, taking a Boeing VC-137A Stratoliner on his visit to the Soviet Union in July 1959 for the Kitchen Debate as Eisenhower's vice president.
Domestically, non-presidential VIP travel still relied on the prop-powered Convair VC-131 Samaritan aircraft until Nelson Rockefeller was named Gerald Ford's vice president in 1974. Rockefeller personally owned a Grumman Gulfstream II jet that he preferred to the much slower Convair; Rockefeller's Gulfstream II then used the "Executive Two" callsign while he was in office. This prompted the 89th Airlift Wing's acquisition of three McDonnell Douglas VC-9Cs in 1975, adding to the three VC-137 jets used for senior executive international travel.
Prior senior executive aircraft included the former presidential Douglas VC-54 Skymaster, Douglas VC-118A, and Lockheed C-121 Constellations, held in reserve as back-up aircraft for the newer aircraft designated for presidential travel.
Design
Aircraft allocated for use by the vice president and senior executives authorized to travel under the Special Air Mission designation operated by the 89th Airlift Wing can be distinguished from the distinctive Raymond Loewy Air Force One livery by the lack of the steel-blue cheatline and cap over the cockpit.
Former presidential aircraft that has |
https://en.wikipedia.org/wiki/Binary%20mass%20function | In astronomy, the binary mass function or simply mass function is a function that constrains the mass of the unseen component (typically a star or exoplanet) in a single-lined spectroscopic binary star or in a planetary system. It can be calculated from observable quantities only, namely the orbital period of the binary system, and the peak radial velocity of the observed star. The velocity of one binary component and the orbital period provide information on the separation and gravitational force between the two components, and hence on the masses of the components.
Introduction
The binary mass function follows from Kepler's third law when the radial velocity of one binary component is known.
Kepler's third law describes the motion of two bodies orbiting a common center of mass. It relates the orbital period with the orbital separation between the two bodies, and the sum of their masses. For a given orbital separation, a higher total system mass implies higher orbital velocities. On the other hand, for a given system mass, a longer orbital period implies a larger separation and lower orbital velocities.
Because the orbital period and orbital velocities in the binary system are related to the masses of the binary components, measuring these parameters provides some information about the masses of one or both components. However, the true orbital velocity is often unknown, because velocities in the plane of the sky are much more difficult to determine than velocities along the line of sight.
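A minimal sketch of the resulting mass function, assuming a circular orbit and using the standard expression f = P K^3 / (2 pi G); the example period and radial-velocity semi-amplitude below are hypothetical, not values from this excerpt:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg

def mass_function(period_s: float, k_radial_ms: float) -> float:
    """Binary mass function f = P K^3 / (2 pi G) for a circular orbit,
    where P is the orbital period and K the radial-velocity semi-amplitude
    of the observed component.  Result in kg."""
    return period_s * k_radial_ms ** 3 / (2 * math.pi * G)

# Hypothetical single-lined binary: P = 10 days, K = 100 km/s.
f = mass_function(10 * 86_400, 100e3)
print(f"mass function ~ {f / M_SUN:.2f} solar masses (a lower limit on the unseen mass)")
```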
Radial velocity is the velocity component of orbital velocity in the line of sight of the observer. Unlike true orbital velocity, radial velocity can be determined from Doppler spectroscopy of spectral lines in the light of a star, or from variations in the arrival times of pulses from a radio pulsar. A binary system is called a single-lined spectroscopic binary if the radial motion of only one of the two binary components can be measured. In this case, a lower limit on the |
https://en.wikipedia.org/wiki/Lysenkoism | Lysenkoism was a political campaign led by Soviet biologist Trofim Lysenko against genetics and science-based agriculture in the mid-20th century, rejecting natural selection in favour of a form of Lamarckism, as well as expanding upon the techniques of vernalization and grafting.
More than 3,000 mainstream biologists were dismissed or imprisoned, and numerous scientists were executed in the Soviet campaign to suppress scientific opponents. The president of the Soviet Agriculture Academy, Nikolai Vavilov, who had been Lysenko's mentor, but later denounced him, was sent to prison and died there, while Soviet genetics research was effectively destroyed. Research and teaching in the fields of neurophysiology, cell biology, and many other biological disciplines were harmed or banned.
The government of the Soviet Union (USSR) supported the campaign, and Joseph Stalin personally edited a speech by Lysenko in a way that reflected his support for what would come to be known as Lysenkoism, despite his skepticism toward Lysenko's assertion that all science is class-oriented in nature. Lysenko served as the director of the USSR's Lenin All-Union Academy of Agricultural Sciences. Other countries of the Eastern Bloc including the People's Republic of Poland, the Republic of Czechoslovakia, and the German Democratic Republic accepted Lysenkoism as the official "new biology", to varying degrees, as did the People's Republic of China for some years.
Context
Mendelian genetics, the science of heredity, developed into an experimentally-based field of biology at the start of the 20th century through the work of August Weismann, Thomas Hunt Morgan, and others, building on the rediscovered work of Gregor Mendel. They showed that the characteristics of an organism were carried by inherited genes, which were located on chromosomes in each cell's nucleus. These could be affected by random changes, mutations, and could be shuffled and recombined during sexual reproduction, but |
https://en.wikipedia.org/wiki/Ecological%20Complexity | Ecological Complexity is a quarterly peer-reviewed scientific journal covering the field of biocomplexity in the environment and theoretical ecology with special attention to papers that integrate natural and social processes at various spatio-temporal scales. The founding editor was Bai-Lian (Larry) Li (University of California at Riverside) and the current editor-in-chief is Sergei Petrovskii (University of Leicester).
External links
Elsevier academic journals
Quarterly journals
Ecology journals
Academic journals established in 2004
English-language journals |
https://en.wikipedia.org/wiki/Specman | Specman is an EDA tool that provides advanced automated functional verification of hardware designs. It provides an environment for working with, compiling, and debugging testbench environments written in the e Hardware Verification Language. Specman also offers automated testbench generation to boost productivity in the context of block, chip, and system verification.
The Specman tool itself does not include an HDL simulator (for design languages such as VHDL or Verilog). To simulate an e-testbench with a design written in VHDL/Verilog, Specman must be run in conjunction with a separate HDL simulation tool. Specman is a feature of Cadence's Xcelium simulator, where tighter product integration offers both faster runtime performance and debug capabilities not available with other HDL simulators. In principle, Specman can co-simulate with any HDL simulator supporting the standard PLI or VHPI interface, such as Synopsys's VCS or Mentor's Questa.
History
Specman was originally developed at Verisity, an Israel-based company, which was acquired by Cadence on April 7, 2005.
It is now part of Cadence's functional verification suite. |
https://en.wikipedia.org/wiki/History%20of%20ecology | Ecology is a comparatively new science, considered an important branch of biological science, having only become prominent during the second half of the 20th century. Ecological thought is derivative of established currents in philosophy, particularly from ethics and politics.
Its history stems all the way back to the 4th century BC. One of the first ecologists whose writings survive may have been Aristotle or perhaps his student, Theophrastus, both of whom had interest in many species of animals and plants. Theophrastus described interrelationships between animals and their environment as early as the 4th century BC. Ecology developed substantially in the 18th and 19th century. It began with Carl Linnaeus and his work with the economy of nature. Soon after came Alexander von Humboldt and his work with botanical geography. Alexander von Humboldt and Karl Möbius then contributed with the notion of biocoenosis. Eugenius Warming's work with ecological plant geography led to the founding of ecology as a discipline. Charles Darwin's work also contributed to the science of ecology, and Darwin is often credited with advancing the discipline more than anyone else in its young history. Ecological thought expanded even more in the early 20th century. Major contributions included: Eduard Suess’ and Vladimir Vernadsky's work with the biosphere, Arthur Tansley's ecosystem, Charles Elton's Animal Ecology, and Henry Cowles's ecological succession.
Ecology influenced the social sciences and humanities. Human ecology began in the early 20th century and it recognized humans as an ecological factor. Later James Lovelock advanced views on earth as a macro-organism with the Gaia hypothesis. Conservation stemmed from the science of ecology. Important figures and movements include Shelford and the ESA, National Environmental Policy act, George Perkins Marsh, Theodore Roosevelt, Stephen A. Forbes, and post-Dust Bowl conservation. Later in the 20th century world governments collaborated on man’s e |
https://en.wikipedia.org/wiki/Ambient%20calculus | In computer science, the ambient calculus is a process calculus devised by Luca Cardelli and Andrew D. Gordon in 1998, and used to describe and theorise about concurrent systems that include mobility. Here mobility means both computation carried out on mobile devices (i.e. networks that have a dynamic topology), and mobile computation (i.e. executable code that is able to move around the network). The ambient calculus provides a unified framework for modeling both kinds of mobility. It is used to model interactions in such concurrent systems as the Internet.
Since its inception, the ambient calculus has grown into a family of closely related ambient calculi.
Informal description
Ambients
The fundamental primitive of the ambient calculus is the ambient. An ambient is informally defined as a bounded place in which computation can occur. The notion of boundaries is considered key to representing mobility, since a boundary defines a contained computational agent that can be moved in its entirety. Examples of ambients include:
a web page (bounded by a file)
a virtual address space (bounded by an addressing range)
a Unix file system (bounded within a physical volume)
a single data object (bounded by “self”)
a laptop (bounded by its case and data ports)
The key properties of ambients within the Ambient calculus are:
Ambients have names, which are used to control access to the ambient.
Ambients can be nested inside other ambients (representing, for example, administrative domains)
Ambients can be moved as a whole.
Operations
Computation is represented as the crossing of boundaries, i.e. the movement of ambients. There are four basic operations (or capabilities) on ambients:
in m. P instructs the surrounding ambient to enter some sibling ambient m, and then proceed as P
out m. P instructs the surrounding ambient to exit its parent ambient m
open n. P instructs the surrounding ambient to dissolve the boundary of an ambient n located at the same level
!P makes any number of copies of |
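The standard reduction rules for the first three capabilities can be sketched as follows (reconstructed in the usual notation of the calculus, with m and n as ambient names and P, Q, R as processes; this is a summary, not a quotation from the excerpt):

```latex
% Standard reduction rules for the in, out and open capabilities (sketch).
\[
\begin{aligned}
  n[\,\mathit{in}\ m.\,P \mid Q\,] \mid m[\,R\,]
    &\;\longrightarrow\; m[\, n[\,P \mid Q\,] \mid R \,] \\
  m[\, n[\,\mathit{out}\ m.\,P \mid Q\,] \mid R \,]
    &\;\longrightarrow\; n[\,P \mid Q\,] \mid m[\,R\,] \\
  \mathit{open}\ n.\,P \mid n[\,Q\,]
    &\;\longrightarrow\; P \mid Q
\end{aligned}
\]
```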
https://en.wikipedia.org/wiki/Harris%E2%80%93Benedict%20equation | The Harris–Benedict equation (also called the Harris-Benedict principle) is a method used to estimate an individual's basal metabolic rate (BMR).
The estimated BMR value may be multiplied by a number that corresponds to the individual's activity level; the resulting number is the approximate daily kilocalorie intake to maintain current body weight.
The Harris-Benedict equation may be used to assist weight loss — by reducing the kilocalorie intake number below the estimated maintenance intake of the equation.
Calculating the Harris-Benedict BMR
The original Harris–Benedict equations were published in 1918 and 1919.
The Harris–Benedict equations were revised by Roza and Shizgal in 1984.
The 95% confidence range for men is ±213.0 kcal/day, and ±201.0 kcal/day for women.
The Harris–Benedict equations were revised again by Mifflin and St Jeor in 1990.
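Since the formulas themselves are not reproduced in this excerpt, the sketch below implements the revised Harris–Benedict (Roza and Shizgal, 1984) and Mifflin–St Jeor (1990) equations with the coefficients as they are commonly quoted; treat the exact constants as an assumption to check against the original papers:

```python
def bmr_harris_benedict_revised(weight_kg: float, height_cm: float, age_y: float, male: bool) -> float:
    """Revised Harris-Benedict BMR (Roza & Shizgal, 1984), kcal/day.
    Coefficients as commonly quoted; not reproduced in this excerpt."""
    if male:
        return 88.362 + 13.397 * weight_kg + 4.799 * height_cm - 5.677 * age_y
    return 447.593 + 9.247 * weight_kg + 3.098 * height_cm - 4.330 * age_y

def bmr_mifflin_st_jeor(weight_kg: float, height_cm: float, age_y: float, male: bool) -> float:
    """Mifflin-St Jeor BMR (1990), kcal/day."""
    s = 5.0 if male else -161.0
    return 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_y + s

# Hypothetical person: 70 kg, 175 cm, 30 years, male; multiply the result by an
# activity factor to approximate maintenance kilocalorie intake.
print(round(bmr_harris_benedict_revised(70, 175, 30, True)))   # ~1696
print(round(bmr_mifflin_st_jeor(70, 175, 30, True)))           # ~1649
```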
History
The Harris-Benedict equation sprang from a study by James Arthur Harris and Francis Gano Benedict, which was published in 1919 by the Carnegie Institution of Washington in the monograph A Biometric Study Of Basal Metabolism In Man. A 1984 revision improved its accuracy. Mifflin et al. published an equation more predictive for modern lifestyles in 1990. Later work produced BMR estimators that accounted for lean body mass.
Issues in dietary use
As the BMR equations do not attempt to take into account body composition, identical results can be calculated for a very muscular person and an overweight person who are both the same height, weight, age and gender. As muscle and fat require differing amounts of calories to maintain, the total energy expenditure (TEE) estimates will not be accurate for such cases.
The paper behind the latest update (Mifflin et al.) to the BMR formula states that all participants in their study fell within the 'normal' and 'overweight' body mass index (BMI) categories, and so the results do not necessarily apply to those in the 'underweight' or 'obese' BMI categories.
See also
Food energy
Resting metabolic rate
Institu |
https://en.wikipedia.org/wiki/29%20%28number%29 | 29 (twenty-nine) is the natural number following 28 and preceding 30.
Mathematics
29 is the tenth prime number, and the fifth primorial prime.
29 forms a twin prime pair with thirty-one, which is also a primorial prime. Twenty-nine is also the sixth Sophie Germain prime.
29 is the sum of three consecutive squares, 2² + 3² + 4².
29 is a Lucas prime, a Pell prime, and a tetranacci number.
29 is an Eisenstein prime with no imaginary part and real part of the form 3n − 1. 29 is also the 10th supersingular prime.
None of the first 29 natural numbers have more than two different prime factors. This is the longest such consecutive sequence.
29 is a Markov number, appearing in the solutions to x² + y² + z² = 3xyz: {2, 5, 29}, {2, 29, 169}, {5, 29, 433}, {29, 169, 14701}, etc.
29 is a Perrin number, preceded in the sequence by 12, 17, 22.
29 is the smallest positive whole number that cannot be made from the numbers {1, 2, 3, 4}, using each exactly once and using only addition, subtraction, multiplication, and division.
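A small brute-force check of this claim (a sketch of my own; the helper name reachable is hypothetical):

```python
from fractions import Fraction

def reachable(values):
    """All values obtainable from the given numbers using +, -, *, /, each number used exactly once."""
    if len(values) == 1:
        return {values[0]}
    out = set()
    for i in range(len(values)):
        for j in range(len(values)):
            if i == j:
                continue
            rest = [values[k] for k in range(len(values)) if k not in (i, j)]
            a, b = values[i], values[j]
            combos = [a + b, a - b, a * b]
            if b != 0:
                combos.append(a / b)
            for c in combos:
                out |= reachable(rest + [c])
    return out

hits = reachable([Fraction(n) for n in (1, 2, 3, 4)])
smallest_missing = next(n for n in range(1, 100) if Fraction(n) not in hits)
print(smallest_missing)   # prints 29, in line with the claim above
```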
29 is the number of pentacubes if reflections are considered distinct.
The 29th dimension is the highest dimension for compact hyperbolic Coxeter polytopes that are bounded by a fundamental polyhedron, and the highest dimension that holds arithmetic discrete groups of reflections with noncompact unbounded fundamental polyhedra.
Religion
The Bishnois community follows 29 principles. Guru Jambheshwar had laid down 29 principles to be followed by the sect in 1485 A.D. In Hindi, Bish means 20 and noi means 9; thus, Bishnoi translates as Twenty-niners.
The number of suras in the Qur'an that begin with muqatta'at.
Science and astronomy
The atomic number of copper.
Messier object M29, a magnitude 6.6 open cluster in the constellation Cygnus.
The New General Catalogue object NGC 29, a spiral galaxy in the constellation Andromeda.
Saturn requires over 29 years to orbit the Sun.
The number of days February has in leap years.
Language and literature
|
https://en.wikipedia.org/wiki/Recorded%20Future | Recorded Future is a privately held cybersecurity company founded in 2009, with headquarters in Somerville, Massachusetts.
The company specializes in the collection, processing, analysis, and dissemination of threat intelligence. Recorded Future uses machine learning and natural language processing methods to continuously collect and organize data from open web, dark web, and technical sources.
The resulting information is displayed within a software-as-a-service portal.
History
In 2007, co-founders Christopher Ahlberg and Staffan Truvé, both Ph.D.s in computer science from Chalmers University of Technology, filed for Recorded Future's first patent (granted in 2013 as United States patent US8468153B2) – Data Analysis System with Automated Query and Visualization Environment Setup. The patent was used for continuous collection and processing of data and information from sources across the open, deep, and dark web, facilitated by machine learning. Recorded Future was officially incorporated in 2009.
The company received initial funding from Google and In-Q-Tel, which was reported in a July 2010 introduction to Recorded Future published by Wired.
When it decided that its algorithms and visualization software matched needs within the intelligence community, Recorded Future entered the cyber threat intelligence market in January 2012.
In 2014, the company launched Recorded Future Dark Web, integrating open and dark web sourcing as well as dark web forum access and analysis.
In 2016, Recorded Future was named a partner for threat intelligence by Splunk, Palo Alto Networks, and Vencore GEOINT.
In May 2017, Recorded Future introduced Insikt Group, the company's threat intelligence research arm. The word "insikt" is Swedish, a nod to Recorded Future's co-founders, and means "insight." Insikt Group is responsible for delivering analyst-generated assessments, insights, and recommended actions to customers and the public.
In May 2019, New York-based private equity fi |
https://en.wikipedia.org/wiki/List%20of%20common%20physics%20notations | This is a list of common physical constants and variables, and their notations. Note that bold text indicates that the quantity is a vector.
Latin characters
Greek characters
Other characters
See also
List of letters used in mathematics and science
Glossary of mathematical symbols
List of mathematical uses of Latin letters
Greek letters used in mathematics, science, and engineering
Physical constant
Physical quantity
International System of Units
ISO 31 |
https://en.wikipedia.org/wiki/Om%20Prakash%20Bhasin%20Award | Om Prakash Bhasin Award for Science and Technology is an Indian award, instituted in 1985 to recognize excellence in the areas of science and technology. The award, given individually or collectively to a group, is annual in cycle and carries a plaque, a citation and a cash prize of 100,000. The winners are invited to deliver the Om Prakash Bhasin Memorial Lecture at a venue decided by the award committee.
Profile
Om Prakash Bhasin Awards have been instituted by Shri Om Prakash Bhasin Foundation, a New Delhi-based charitable organization founded by Vinod Bhasin, along with her two sons, Shivy Bhasin and Hemant Kumar Bhasin, to honour the memory of her husband, Om Prakash Bhasin, a non resident Indian businessman. The corpus for the award of 5,100,000 was formed by Om Prakash Bhasin as a trust before his death. The awards, started in 1985, are given in five categories. The selection is through a notified procedure and is decided by a committee appointed for the purpose. The committee includes the Chairman of the foundation, two trustees representing the foundation, a member of the scientific community and a representative of the State Bank of India, the bankers to the foundation. The incumbent committee members are:
Shivy Bhasin - Chairman
Hemant Kumar Bhasin - Foundation trustee
Vinod Prakash Sharma - Scientist trustee
Samar Vikram Bhasin - Foundation trustee
State Bank of India nominee
Categories
Agriculture and Allied Sciences
Biotechnology
Electronics and Information Technology
Engineering including Energy and Aerospace
Health and Medical Sciences
Recipients
Agriculture and Allied Sciences
Source: Shri Om Prakash Bhasin Foundation
Biotechnology
Source: Shri Om Prakash Bhasin Foundation
Electronics and Information Technology
Source: Shri Om Prakash Bhasin Foundation
Engineering including Energy and Aerospace
Source: Shri Om Prakash Bhasin Foundation
Health and Medical Sciences
Source: Shri Om Prakash Bhasin Foundation
See also
List of ge |
https://en.wikipedia.org/wiki/Nemesis%20%28Resident%20Evil%29 | The Nemesis, also called the Nemesis-T Type, is a character in Resident Evil (Biohazard in Japan), a survival horror video game series created by the Japanese company Capcom. Although smaller than other Tyrant models, the creature dwarfs a typical human, and possesses vastly superior intelligence and physical dexterity to its undead peers. It is featured in Resident Evil 3: Nemesis (1999) as the titular main villain before later emerging in other titles and cameo roles. It is also featured on various merchandise and was portrayed by Matthew G. Taylor in the 2004 film Resident Evil: Apocalypse. The character is voiced by Tony Rosato in the original game and Gregg Berger in Operation Raccoon City (2012). In the 2020 remake of Resident Evil 3, the character is voiced by David Cockman, with Neil Newbon providing the motion capture performance. Nemesis has also been featured in several other game franchises, including as a playable character in Marvel vs. Capcom and Dead by Daylight.
Taking inspiration from the T-1000 from Terminator 2: Judgment Day, Nemesis was conceived by Shinji Mikami and Kazuhiro Aoyama as an enemy that stalks the player throughout the game and invokes a persistent feeling of paranoia. Written by Yasuhisa Kawamura to be a weapon of revenge by the Umbrella Corporation, Nemesis' design was drawn by artist Yoshinori Matsushita, who was instructed to create "a rough guy who attacks with weapons and has an intimidating build" in order to heighten the fear of being pursued. Since the Nemesis' introduction, the character has received a positive reception and has come to be regarded as one of the series' most popular characters, though his design and role in the Resident Evil 3 remake has been criticized. Some publications have praised its role as an intimidating villain, while others have noted it as one of their favorite and most terrifying monsters in video games.
Conception and creation
Introduced in Resident Evil 3: Nemesis, producer Shinji Mi |
https://en.wikipedia.org/wiki/Bulk%20modulus | The bulk modulus (K or B) of a substance is a measure of the resistance of a substance to bulk compression. It is defined as the ratio of the infinitesimal pressure increase to the resulting relative decrease of the volume.
Other moduli describe the material's response (strain) to other kinds of stress: the shear modulus describes the response to shear stress, and Young's modulus describes the response to normal (lengthwise stretching) stress. For a fluid, only the bulk modulus is meaningful. For a complex anisotropic solid such as wood or paper, these three moduli do not contain enough information to describe its behaviour, and one must use the full generalized Hooke's law. The reciprocal of the bulk modulus at fixed temperature is called the isothermal compressibility.
Definition
The bulk modulus (which is usually positive) can be formally defined by the equation
K = −V dP/dV,
where P is pressure, V is the initial volume of the substance, and dP/dV denotes the derivative of pressure with respect to volume. Since the volume is inversely proportional to the density, it follows that
K = ρ dP/dρ,
where ρ is the initial density and dP/dρ denotes the derivative of pressure with respect to density. The inverse of the bulk modulus gives a substance's compressibility. Generally the bulk modulus is defined at constant temperature as the isothermal bulk modulus, but can also be defined at constant entropy as the adiabatic bulk modulus.
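A minimal numeric sketch of the definition above, approximating the derivative by a finite pressure-volume change; the sample numbers are illustrative, chosen to be roughly water-like:

```python
def bulk_modulus(volume: float, d_pressure: float, d_volume: float) -> float:
    """Finite-difference estimate of K = -V * dP/dV (consistent units, e.g. Pa and m^3)."""
    return -volume * d_pressure / d_volume

# Illustrative, roughly water-like numbers: raising the pressure by 10 MPa
# compresses 1 m^3 by about 0.0045 m^3, giving K on the order of 2.2 GPa.
K = bulk_modulus(volume=1.0, d_pressure=10e6, d_volume=-0.0045)
print(f"K ~ {K / 1e9:.1f} GPa")
```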
Thermodynamic relation
Strictly speaking, the bulk modulus is a thermodynamic quantity, and in order to specify a bulk modulus it is necessary to specify how the pressure varies during compression: constant-temperature (isothermal $K_T$), constant-entropy (isentropic $K_S$), and other variations are possible. Such distinctions are especially relevant for gases.
For an ideal gas, an isentropic process obeys
$$PV^\gamma = \text{constant},$$
where $\gamma$ is the heat capacity ratio. Therefore, the isentropic bulk modulus is given by
$$K_S = \gamma P.$$
Similarly, an isothermal process of an ideal gas obeys
$$PV = \text{constant}.$$
Therefore, the isothermal bulk modulu |
https://en.wikipedia.org/wiki/Deep%20external%20pudendal%20artery | The deep external pudendal artery (deep external pudic artery) is one of the pudendal arteries. It is more deeply seated than the superficial external pudendal artery and passes medially across the pectineus and the adductor longus muscles; it is covered by the fascia lata, which it pierces at the medial side of the thigh, and is distributed, in the male, to the integument of the scrotum and perineum, and in the female to the labia majora; its branches anastomose with the scrotal or labial branches of the perineal artery.
Additional Images
See also
Internal pudendal artery |
https://en.wikipedia.org/wiki/Telet%C3%B3n%20%28Chile%29 | Teletón is a charity event held in Chile on a yearly basis since 1978. It is usually held during the first week of December, unless a political election occurs at the same time. The major Chilean television networks hold a 27-hour transmission, to raise funds to help children with developmental disabilities (most commonly cerebral palsy) treated at Instituto de Rehabilitación Infantil ("Infant Rehabilitation Institute") centers of the Fundación Teletón.
In Chile, the transmission of Teletón is an event of national unity and, proportionately, the most widely watched telethon in the world.
Since the first telethon, over US$286 million has been raised, and 14 rehabilitation centers have been built in the cities of Arica, Iquique, Antofagasta, Calama, Copiapó, Coquimbo, Valparaíso, Santiago, Talca, Concepción, Temuco, Valdivia, Puerto Montt and Coyhaique.
During the annual event, local and worldwide stars participate in live events across the country. Teletón has been hosted by television personality Mario Kreutzberger, best known by his stage name Don Francisco, since the first event, aired in 1978. Each year, a poster child is elected to become the face of the charity.
With the exception of the initial Teletón in 1978, each year's goal is set to be exactly the total amount raised in the previous event, in the spirit of increasing the funds available to the Foundation to account for inflation and rising maintenance costs.
Up until now, the goal has been reached and surpassed in every Teletón edition except that of 1995, when the final amount raised fell roughly 12% short of that year's goal.
Telethons
See also
Chile helps Chile—2010 telethon in response to the 2010 Chile earthquake and a 1985 special about the 1985 Chilean earthquake |
https://en.wikipedia.org/wiki/Mobile%20virtualization | Mobile virtualization is hardware virtualization on a mobile phone or connected wireless device. It enables multiple operating systems or virtual machines to run simultaneously on a mobile phone or connected wireless device. It uses a hypervisor to create secure separation between the underlying hardware and the software that runs on top of it; this can be considered a form of an embedded hypervisor, or a close analogue. Virtualization technology has been used widely for many years in other fields such as data servers (storage virtualization) and personal computers (desktop virtualization).
Applications
Low cost platform
In 2008, the mobile industry became interested in using the benefits of virtualization technology for cell phones and other devices like tablets, netbooks and machine-to-machine (M2M) modules. With mobile virtualization, mobile devices can be manufactured more cheaply through the re-use of software and hardware, which shortens development time. One such example is using mobile virtualization to create low-cost Android smartphones without a separate baseband processor by running the applications and the baseband processor code in separate virtual machines on a single processor. Semiconductor vendors such as ST-Ericsson have adopted mobile virtualization as part of their low-cost Android platform strategy.
Enterprise
Another use case for mobile virtualization is in the enterprise market. Today, many consumers carry two mobile phones: one for business use and another for personal use. With mobile virtualization, mobile phones can support multiple domains/operating systems on the same hardware, so that the enterprise IT department can securely manage one domain (in a virtual machine), and the mobile operator can separately manage the other domain (in a virtual machine).
In September 2010, ARM announced that it would support a virtualization extension in its ARM Cortex-A15 processor.
Platforms
Every mobile platform does virtualization differently. |
https://en.wikipedia.org/wiki/UniGene | UniGene was an NCBI database of the transcriptome and thus, despite the name, not primarily a database for genes. Each entry is a set of transcripts that appear to stem from the same transcription locus (i.e. gene or expressed pseudogene). Information on protein similarities, gene expression, cDNA clones, and genomic location is included with each entry.
Descriptions of the UniGene transcript-based and genome-based build procedures are available.
A detailed description of the UniGene database
The UniGene resource, developed at NCBI, clusters ESTs and other mRNA sequences, along with coding sequences (CDSs) annotated on genomic DNA, into subsets of related sequences. In most cases, each cluster is made up of sequences produced by a single gene, including alternatively spliced transcripts. However, some genes may be represented by more than one cluster. The clusters are organism specific and are currently available for human, mouse, rat, zebrafish, and cattle. They are built in several stages, using an automatic process based on special sequence comparison algorithms. First, the nucleotide sequences are searched for contaminants, such as mitochondrial, ribosomal, and vector sequence, repetitive elements, and low-complexity sequences. After a sequence is screened, it must contain at least 100 bases to be a candidate for entry into UniGene. mRNA and genomic DNA are clustered first into gene links. A second sequence comparison links ESTs to each other and to the gene links. At this stage, all clusters are "anchored," and contain either a sequence with a polyadenylation site or two ESTs labeled as coming from the 3′ end of a clone. Clone-based edges are added by linking the 5′ and 3′ ESTs that derive from the same clone. In some cases, this linking may merge clusters identified at a previous stage. Finally, unanchored ESTs and gene clusters of size 1 (which may represent rare transcripts) |
https://en.wikipedia.org/wiki/Cosmological%20perturbation%20theory | In physical cosmology, cosmological perturbation theory is the theory by which the evolution of structure is understood in the Big Bang model. Cosmological perturbation theory may be broken into two categories: Newtonian or general relativistic. Each case uses its governing equations to compute gravitational and pressure forces which cause small perturbations to grow and eventually seed the formation of stars, quasars, galaxies and clusters. Both cases apply only to situations where the universe is predominantly homogeneous, such as during cosmic inflation and large parts of the Big Bang. The universe is believed to still be homogeneous enough that the theory is a good approximation on the largest scales, but on smaller scales more involved techniques, such as N-body simulations, must be used. When deciding whether to use general relativity for perturbation theory, note that Newtonian physics is only applicable in some cases such as for scales smaller than the Hubble horizon, where spacetime is sufficiently flat, and for which speeds are non-relativistic.
Because of the gauge invariance of general relativity, the correct formulation of cosmological perturbation theory is subtle.
In particular, when describing an inhomogeneous spacetime, there is often not a preferred coordinate choice. There are currently two distinct approaches to perturbation theory in classical general relativity:
gauge-invariant perturbation theory based on foliating a space-time with hyper-surfaces, and
1+3 covariant gauge-invariant perturbation theory based on threading a space-time with frames.
Newtonian perturbation theory
In this section, we will focus on the effect of matter on structure formation in the hydrodynamical fluid regime. This regime is useful because dark matter has dominated structure growth for most of the universe's history. In this regime, we are on sub-Hubble scales, well inside the Hubble radius $c/H$ (where $H$ is the Hubble parameter), so we can take spacetime to be flat, and ignore general relativisti |
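Under the assumptions of this regime (a pressureless, self-gravitating matter fluid and linearized fluctuations), the standard textbook outcome of the Newtonian treatment is the growth equation for the density contrast, stated here as a general reference point rather than a derivation:

```latex
\ddot{\delta} + 2H\dot{\delta} - 4\pi G\,\bar{\rho}\,\delta = 0 ,
\qquad \delta \equiv \frac{\delta\rho}{\bar{\rho}}
```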
https://en.wikipedia.org/wiki/Inverted%20repeat | An inverted repeat (or IR) is a single-stranded sequence of nucleotides followed downstream by its reverse complement. The intervening sequence of nucleotides between the initial sequence and the reverse complement can be any length, including zero. For example, 5′-TTACGnnnnnnCGTAA-3′ is an inverted repeat sequence (the n's stand for any intervening nucleotides): the final five bases, CGTAA, are the reverse complement of the first five, TTACG. When the intervening length is zero, the composite sequence is a palindromic sequence.
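A minimal sketch of this definition in code, checking whether the two flanking arms of a sequence are reverse complements of each other; the function names and example strings are illustrative and not taken from any bioinformatics library.

```python
# Minimal sketch: check whether a DNA string of the form
# <arm><spacer><reverse complement of arm> is an inverted repeat.
# The arm length and example sequences are illustrative assumptions.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def is_inverted_repeat(seq, arm_length):
    """True if the first arm_length bases are followed (after any
    intervening spacer) by their reverse complement at the 3' end."""
    left = seq[:arm_length]
    right = seq[-arm_length:]
    return reverse_complement(left) == right

print(is_inverted_repeat("TTACGAAAAAACGTAA", 5))  # True: TTACG ... CGTAA
print(is_inverted_repeat("TTACGAAAAAATTACG", 5))  # False: this is a direct repeat
```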
Both inverted repeats and direct repeats constitute types of nucleotide sequences that occur repetitively. These repeated DNA sequences often range from a pair of nucleotides to a whole gene, while the proximity of the repeat sequences varies between widely dispersed and simple tandem arrays. The short tandem repeat sequences may exist as just a few copies in a small region to thousands of copies dispersed all over the genome of most eukaryotes. Repeat sequences with about 10–100 base pairs are known as minisatellites, while shorter repeat sequences having mostly 2–4 base pairs are known as microsatellites. The most common repeats include the dinucleotide repeats, which have the bases AC on one DNA strand, and GT on the complementary strand. Some elements of the genome with unique sequences function as exons, introns and regulatory DNA. Though the most familiar loci of the repetitive sequences are the centromere and the telomere, a large portion of the repeated sequences in the genome are found among the noncoding DNA.
Inverted repeats have a number of important biological functions. They define the boundaries in transposons and indicate regions capable of self-complementary base pairing (regions within a single sequence which can base pair with each other). These properties play an important role in genome instability and contribute not only to cellular evolution and genetic diversity but also to mutation and disease. In order to study these effects in detail, a number of programs and databases have been developed to assist in discovery and annotation of inverted repeats in various g |
https://en.wikipedia.org/wiki/Sergei%20Viktorovich%20Bochkarev | Sergei (or Sergey) Viktorovich Bochkarev (or Bočkarev) (Сергей Викторович Бочкарёв; born July 24, 1941, in Kuybyshev, now Samara) is a Soviet and Russian mathematician.
Education and career
He received in 1964 his undergraduate degree from Moscow Institute of Physics and Technology and in 1969 his Russian Candidate of Sciences degree (PhD) from Moscow State University. His dissertation о рядах Фурье по системе Хаара (On Fourier series in the Haar system) was supervised by Pyotr Lavrentyevich Ulyanov. From Moscow State University, Bochkarev received in 1974 his Russian Doctor of Science degree (habilitation). Since 1971 he has worked at the Steklov Institute of Mathematics, where he holds the title of leading scientific researcher in the Department of Function Theory.
His research deals with harmonic analysis, BMO spaces, Hardy spaces, functional analysis, construction of orthogonal bases in various function spaces, and exponential sums.
In 1977 he was awarded the Salem Prize. In 1978 he was an invited speaker at the International Congress of Mathematicians in Helsinki, giving the talk Метод усреднения в теории ортогональных рядов (The averaging method in the theory of orthogonal series).
Selected publications
On a problem of Zygmund, Mathematics of the USSR-Izvestia, vol. 7, no. 3, 1973, p. 629
Existence of a basis in the space of functions analytic in the disk, and some properties of Franklin's system, Math. USSR Sbornik, vol. 24, 1974, pp. 1–16
The method of averaging in the theory of orthogonal series and some questions in the theory of bases, Tr. MIAN SSSR, vol. 146, 1978, pp. 3–87
The method of averaging in the theory of orthogonal series and some questions in the theory of bases, Proc. Steklov Inst. Math., vol. 146, 1980, pp. 1–92
Everywhere divergent Fourier series with respect to the Walsh system and with respect to multiplicative systems, Russian Math. Surveys, vol. 59, 2004, pp. 103–124
Multiplicative Inequalities for the L1 Norm: Applications in Analy |
https://en.wikipedia.org/wiki/Endothoracic%20fascia | The endothoracic fascia is the layer of loose connective tissue deep to the intercostal spaces and ribs, separating these structures from the underlying pleura. This fascial layer is the outermost membrane of the thoracic cavity. The endothoracic fascia contains variable amounts of fat.
It becomes more fibrous over the apices of the lungs as the suprapleural membrane. It separates the internal thoracic artery from the parietal pleura. |
https://en.wikipedia.org/wiki/Domain%20adaptation | Domain adaptation is a field associated with machine learning and transfer learning. This scenario arises when we aim at learning a model from a source data distribution and applying that model to a different (but related) target data distribution. For instance, one of the tasks of the common spam filtering problem consists in adapting a model from one user (the source distribution) to a new user who receives significantly different emails (the target distribution). Domain adaptation has also been shown to be beneficial when learning from unrelated sources.
Note that, when more than one source distribution is available, the problem is referred to as multi-source domain adaptation.
Overview
Domain adaptation is the ability to apply an algorithm trained in one or more "source domains" to a different (but related) "target domain". Domain adaptation is a subcategory of transfer learning. In domain adaptation, the source and target domains all have the same feature space (but different distributions); in contrast, transfer learning includes cases where the target domain's feature space is different from the source feature space or spaces.
Domain shift
A domain shift, or distributional shift, is a change in the data distribution between an algorithm's training dataset and a dataset it encounters when deployed. These domain shifts are common in practical applications of artificial intelligence. Conventional machine-learning algorithms often adapt poorly to domain shifts. The modern machine-learning community has many different strategies to attempt to achieve better domain adaptation.
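A minimal synthetic sketch of this effect: a simple nearest-mean classifier is fit on a source distribution and then evaluated on a target distribution whose features have drifted. The data, the amount of drift, and the classifier choice are illustrative assumptions.

```python
# Minimal sketch of domain shift: a classifier fit on source data degrades on a
# target distribution whose features have drifted. Data are purely synthetic.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Two Gaussian classes in 2D; `shift` moves both clusters in feature space."""
    x0 = rng.normal(loc=-1.0 + shift, scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=+1.0 + shift, scale=1.0, size=(n, 2))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

def fit_nearest_mean(X, y):
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(model, X):
    dists = np.stack([np.linalg.norm(X - mu, axis=1) for mu in model.values()])
    labels = np.array(list(model.keys()))
    return labels[np.argmin(dists, axis=0)]

X_src, y_src = make_data(500, shift=0.0)   # source domain
X_tgt, y_tgt = make_data(500, shift=1.5)   # target domain with drifted features
model = fit_nearest_mean(X_src, y_src)

print("source accuracy:", (predict(model, X_src) == y_src).mean())
print("target accuracy:", (predict(model, X_tgt) == y_tgt).mean())
# Target accuracy drops relative to source, which is the motivation for
# adaptation strategies such as re-centering or reweighting toward the target.
```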
Examples
An algorithm trained on newswires might have to adapt to a new dataset of biomedical documents.
A spam filter, trained on a certain group of email users during training, must adapt to a new target user when deployed.
Applying AI diagnostic algorithms, trained on labeled data associated with previous diseases, to new unlabeled data associated with the COVID-19 pandemic.
A sudden soci |
https://en.wikipedia.org/wiki/Archaeoglobaceae | Archaeoglobaceae are a family of the Archaeoglobales. All known genera within the Archaeoglobaceae are hyperthermophilic and can be found near undersea hydrothermal vents. Archaeoglobaceae are the only family in the order Archaeoglobales, which is the only order in the class Archaeoglobi.
Mode of metabolism
While all genera within the Archaeoglobaceae are related to each other phylogenetically, the mode of metabolism used by each of these organisms is unique. Archaeoglobus are chemoorganotrophic sulfate-reducing archaea, the only known members of the Archaea that possess this type of metabolism. Ferroglobus, in contrast, are chemolithotrophic organisms that couple the oxidation of ferrous iron to the reduction of nitrate. Geoglobus are iron-reducing archaea that use hydrogen gas or organic compounds as energy sources.
Characteristic and genera
The Archaeoglobaceae comprise three genera, which differ briefly as follows:
Archaeoglobus: This genus contains the most well-known and studied members of the Archaeoglobaceae family. They are thermophilic sulfate-reducing archaea that are found in hydrothermal vents and oil reservoirs. They can grow at high temperatures and use a variety of organic compounds as electron donors.
Ferroglobus: This genus contains a single species, Ferroglobus placidus, which is found in hydrothermal vents. They are thermophilic and can grow at high temperatures, but they differ from other members of the family in that they use iron as an electron donor instead of organic compounds.
Geoglobus: This genus contains a single species, Geoglobus acetivorans, which is found in hydrothermal vents. They are thermophilic and can grow at high temperatures, and they differ from other members of the family in that they use acetate as an electron donor.
Living environments
Archaeoglobus species are found in a variety of extreme environments, including deep-sea hydrothermal vents, oil reservoirs, and hot springs. These environments are charact |
https://en.wikipedia.org/wiki/Histone%20methylation | Histone methylation is a process by which methyl groups are transferred to amino acids of histone proteins that make up nucleosomes, which the DNA double helix wraps around to form chromosomes. Methylation of histones can either increase or decrease transcription of genes, depending on which amino acids in the histones are methylated, and how many methyl groups are attached. Methylation events that weaken chemical attractions between histone tails and DNA increase transcription because they enable the DNA to uncoil from nucleosomes so that transcription factor proteins and RNA polymerase can access the DNA. This process is critical for the regulation of gene expression that allows different cells to express different genes.
Function
Histone methylation, as a mechanism for modifying chromatin structure, is associated with stimulation of neural pathways known to be important for formation of long-term memories and learning. Histone methylation is crucial for almost all phases of animal embryonic development.
Animal models have shown methylation and other epigenetic regulation mechanisms to be associated with conditions of aging, neurodegenerative diseases, and intellectual disability (Rubinstein–Taybi syndrome, X-linked intellectual disability). Misregulation of H3K4, H3K27, and H4K20 is associated with cancers. This modification alters the properties of the nucleosome and affects its interactions with other proteins, particularly with regard to gene transcription processes.
Histone methylation can be associated with either transcriptional repression or activation. For example, trimethylation of histone H3 at lysine 4 (H3K4me3) is an active mark for transcription and is upregulated in the hippocampus one hour after contextual fear conditioning in rats. However, dimethylation of histone H3 at lysine 9 (H3K9me2), a signal for transcriptional silencing, is increased after exposure to either the fear conditioning or a novel environment alone.
Methylation of some lysine |
https://en.wikipedia.org/wiki/Declassification | Declassification is the process of ceasing a protective classification, often under the principle of freedom of information. Procedures for declassification vary by country. Papers may be withheld without being classified as secret, and eventually made available.
United Kingdom
Classified information has been governed by various Official Secrets Acts, the latest being the Official Secrets Act 1989. Until 1989 requested information was routinely kept secret invoking the public interest defence; this was largely removed by the 1989 Act. The Freedom of Information Act 2000 largely requires information to be disclosed unless there are good reasons for secrecy.
Confidential government papers such as the yearly cabinet papers used routinely to be withheld formally, although not necessarily classified as secret, for 30 years under the thirty year rule, and released usually on a New Year's Day; freedom of information legislation has relaxed this rigid approach.
United States
Executive Order 13526 establishes the mechanisms for most declassifications, within the laws passed by Congress. The originating agency assigns a declassification date, by default 25 years. After 25 years, declassification review is automatic with nine narrow exceptions that allow information to remain as classified. At 50 years, there are two exceptions, and classifications beyond 75 years require special permission. Because of changes in policy and circumstances, agencies are expected to actively review documents that have been classified for fewer than 25 years. They must also respond to Mandatory Declassification Review and Freedom of Information Act requests. The National Archives and Records Administration houses the National Declassification Center to coordinate reviews and Information Security Oversight Office to promulgate rules and enforce quality measures across all agencies. NARA reviews documents on behalf of defunct agencies and permanently stores declassified documents for public i |
https://en.wikipedia.org/wiki/Peter%20Rona%20%28physician%29 | Peter Rona, born Peter Rosenfeld (13 May 1871, Budapest – February or March 1945), was a Hungarian-German Jewish physician and physiologist. |
https://en.wikipedia.org/wiki/Financial%20gerontology | Financial gerontology is a multidisciplinary field of study, encompassing both academic and professional education, that integrates research on aging and human development with the concerns of finance and business. Following from its roots in social gerontology, financial gerontology is not simply the study of old people but emphasizes the multiple processes of aging. In particular, research and teaching in financial gerontology draw upon four kinds of aging or "four lenses" through which aging and finance can be viewed: population aging, individual aging, family aging, and generational aging. While the claim that "demography is destiny" is problematic, demographic concepts, issues, and data play a substantial role in understanding the dynamics of financial gerontology. For example, through the lens of population aging, demography identifies the number of persons of different ages in cities and countries—and at multiple points in time. Through the lens of individual aging, demography also notes changes in the length of time—the number of years lived in older age, typically measured by increases in life expectancy. From its founding years in the beginning of the 21st century, one primary interest of financial gerontology has been baby boomers and their relationships with their parents. The impact of these two kinds of aging on finance is reasonably apparent. The larger the number of older persons [population aging] in a society, no matter how "old age" is defined, and the longer each of these persons lives [individual aging], the greater the impact on a society's pattern of retirement, public and private pension systems, health, health care, and the personal and societal financing of health care. The focus on boomers also illustrates the other two lenses or "kinds" of aging. How boomers deal with the social, emotional, and financial aspects of their parents' aging is a central aspect of family aging. And how boomers may differ from their parents born and rai |
https://en.wikipedia.org/wiki/Teaching%20Mathematics%20and%20Its%20Applications | Teaching Mathematics and Its Applications is a quarterly peer-reviewed academic journal in the field of mathematics education. The Journal was established in 1982 and is published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. The editors-in-chief are Duncan Lawson (Newman University, Birmingham), Chris Sangwin (University of Edinburgh), and Anne Watson (University of Oxford).
The journal is abstracted and indexed in the British Education Index, Education Research Abstracts, Educational Management Abstracts, Educational Technology Abstracts, MathEduc Database, and ProQuest databases.
See also
List of mathematics education journals |
https://en.wikipedia.org/wiki/Isao%20Ijima | Isao Ijima was a Japanese zoologist known for his studies of sponges (Porifera) — including his circumscription of the genus Staurocalyptus — leeches (Hirudinea), flatworms (Turbellaria), birds, and fish. Professor of Zoology at Tokyo Imperial University, he is considered the founder of parasitology in Japan and was the first President of the Ornithological Society of Japan. Taxa named in his honour include Ijima's sea snake and Ijima's leaf warbler.
Biography
Born in Hamamatsu in 1861 into a samurai family of Hamamatsu Domain, at the age of fifteen he entered the Kaisei Gakkō school in Tokyo, before enrolling as a student in the Science College at the Imperial University, Tokyo in 1878. There he studied under Edward Sylvester Morse and Charles Otis Whitman. In 1879, together with , both having previously received training from and assisted Morse in his exploration of the Ōmori Shell Mounds, Ijima excavated the Okadaira Shell Mound; this is credited with being the first modern archaeological survey conducted solely by Japanese. Upon graduation in 1881, as one of three from the first cohort in the Department of Zoology, he became an assistant in the College. The next year he went to Germany to study zoology at the University of Leipzig, where he spent three years working under the direction of Doctor Rudolf Leuckart; he was awarded his Ph.D. in 1884.
Returning to Japan in 1886, at the age of 25 he was appointed Professor of Zoology at the Imperial University, Tokyo, where he remained until his death. In 1893, with the description of Parus owstoni (now Sittiparus owstoni or Owston's tit), he became the first zoologist from Japan to describe a bird. In 1903, he was involved in the establishment of and in 1904 he was appointed the second director of the Misaki Marine Biological Station. In 1912, he was the founding president of the Ornithological Society of Japan. In 1918, he published his influential . In his personal life, Ijima enjoyed hunting, shooting, fishing, w |
https://en.wikipedia.org/wiki/State%20of%20health | In the domain of Electric Vehicles, State of health (SoH) is a figure of merit of the condition of a battery (or a cell, or a battery pack), compared to its ideal conditions. The unit of SoH is percent (100% = the battery's conditions match the battery's specifications).
Typically, a battery's SoH will be 100% at the time of manufacture and will decrease over time and use. However, a battery's performance at the time of manufacture may not meet its specifications, in which case its initial SoH will be less than 100%. The factors that contribute most to battery degradation are driving patterns, driver aggression, climate, cabin thermal dynamics, and infrastructure, with driving patterns and climate having the largest effect.
SoH evaluation
First, a battery management system evaluates the SoH of the battery under its management and reports it.
Then, the SoH is compared to a threshold (typically done by the application in which the battery is used), to determine the suitability of the battery to a given application.
Knowing the SoH of a given battery and the SoH threshold of a given application:
a determination can be made whether the present battery conditions make it suitable for that application
an estimate can be made of the battery's useful lifetime in that application
Parameters
As SoH does not correspond to a particular physical quality, there is no consensus in the industry on how SoH should be determined.
The designer of a battery management system may use any of the following parameters (singly or in combination) to derive an arbitrary value for the SoH.
Internal resistance / impedance / conductance
Capacity
Voltage
Self-discharge
Ability to accept a charge
Number of charge–discharge cycles
Age of the battery
Temperature of battery during its previous uses
Total energy charged and discharged
In addition, the designer of the battery management system defines an arbitrary weight for each parameter's contribution to the SoH value. The definitio |
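A minimal sketch of one possible weighted combination of such parameters into a single SoH figure; the chosen parameters, weights, and reference values are illustrative assumptions rather than any standardized formula.

```python
# Minimal sketch: SoH as a designer-chosen weighted combination of normalized
# health indicators. Parameter names, weights, and values are illustrative only.

def state_of_health(capacity_ah, rated_capacity_ah,
                    resistance_ohm, initial_resistance_ohm,
                    weights=(0.7, 0.3)):
    """Combine capacity fade and resistance growth into a single SoH percentage."""
    capacity_score = capacity_ah / rated_capacity_ah             # 1.0 when new
    resistance_score = initial_resistance_ohm / resistance_ohm   # 1.0 when new
    w_cap, w_res = weights
    soh = 100.0 * (w_cap * capacity_score + w_res * resistance_score)
    return min(soh, 100.0)

# Example: a pack rated 60 Ah / 50 mOhm that now delivers 51 Ah at 60 mOhm.
print(state_of_health(51.0, 60.0, 0.060, 0.050))  # ~84.5%
```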
https://en.wikipedia.org/wiki/Cromemco%20Z-2 | Z-2 is a series of microcomputers made by Cromemco, Inc. which were introduced to the market in the middle to late 1970s. They were S-100 bus machines powered by the Zilog Z80 processor and typically ran on the CP/M operating system.
They were originally available in assembled or kit form to serve both a commercial market and the computer enthusiast market. Later the machines were only available factory-assembled. The machines were widely respected for their speed, configurability, durability, and reliability.
The Z-2 was a Z80–based microcomputer system that was introduced in 1977. The original Z-2 in kit form included a ZPU-K Z80 CPU card, S-100 bus motherboard, all-metal rack-mount chassis and dust case, card socket and card guide; the assembled form included a complete set of sockets and card guides, and a cooling fan. The Z-2 series was capable of supporting up to 21 S-100 boards and could be configured with any of the boards supplied by Cromemco.
The Z-2 gave an impression of solidity due to its hefty 450-watt power supply and heavy metal chassis. A TU-ART (dual serial and parallel board), 4FDC Floppy Disk Controller, one or more 16KZ RAM cards, and a Wangco 5¼" floppy disk drive would be added to form a basic system.
An unusual feature of the Z-2 was switch-selectable CPU speed; 250 or 500 nanosecond cycle times were available. The ZPU speed was 4 MHz at a time when less than 2 MHz was normal, and boards from other manufacturers might still require the slower speed. The ZPU card in the Z-2 could address up to 64 kilobytes (65,536 bytes) of RAM. However, the 16KZ memory card supported bank-switching with 8 banks of 64 kilobytes each. When using the 16KZ, the maximum RAM of the Z-2 was limited by the available S-100 slots. If 16 of the slots were occupied by 16KZ cards, then the system had 4 banks of 64 kilobytes each, for a total of 256 kilobytes (262,144 bytes).
Additional S-100 slots were required for cards controlling peripherals, disk drives, and I/O in |
https://en.wikipedia.org/wiki/Functional%20fixedness | Functional fixedness is a cognitive bias that limits a person to use an object only in the way it is traditionally used. The concept of functional fixedness originated in Gestalt psychology, a movement in psychology that emphasizes holistic processing. Karl Duncker defined functional fixedness as being a mental block against using an object in a new way that is required to solve a problem. This "block" limits the ability of an individual to use components given to them to complete a task, as they cannot move past the original purpose of those components. For example, if someone needs a paperweight, but they only have a hammer, they may not see how the hammer can be used as a paperweight. Functional fixedness is this inability to see a hammer's use as anything other than for pounding nails; the person couldn't think to use the hammer in a way other than in its conventional function.
When tested, 5-year-old children show no signs of functional fixedness. It has been argued that this is because at age 5, any goal to be achieved with an object is equivalent to any other goal. However, by age 7, children have acquired the tendency to treat the originally intended purpose of an object as special.
Examples in research
Experimental paradigms typically involve solving problems in novel situations in which the subject has the use of a familiar object in an unfamiliar context. The object may be familiar from the subject's past experience or from previous tasks within an experiment.
Candle box
In a classic experiment demonstrating functional fixedness, Duncker (1945) gave participants a candle, a box of thumbtacks, and a book of matches, and asked them to attach the candle to the wall so that it did not drip onto the table below. Duncker found that participants tried to attach the candle directly to the wall with the tacks, or to glue it to the wall by melting it. Very few of them thought of using the inside of the box as a candle-holder and tacking this to the wall. In D |
https://en.wikipedia.org/wiki/Solitaire%20%28cipher%29 | The Solitaire cryptographic algorithm was designed by Bruce Schneier at the request of Neal Stephenson for use in his novel Cryptonomicon, in which field agents use it to communicate securely without having to rely on electronics or having to carry incriminating tools. It was designed to be a manual cryptosystem calculated with an ordinary deck of playing cards. In Cryptonomicon, this algorithm was originally called Pontifex to hide the fact that it involved playing cards.
One of the motivations behind Solitaire's creation is that in totalitarian environments, a deck of cards is far more affordable (and less incriminating) than a personal computer with an array of cryptological utilities. However, as Schneier warns in the appendix of Cryptonomicon, just about everyone with an interest in cryptanalysis will now know about this algorithm, so carrying a deck of cards may also be considered incriminating. Furthermore, analysis has revealed flaws in the cipher such that it is now considered insecure.
Encryption and decryption
This algorithm uses a standard deck of cards with 52 suited cards and two jokers which are distinguishable from each other, called the A joker and the B joker. For simplicity's sake, only two suits will be used in this example, clubs and diamonds. Each card is assigned a numerical value: the clubs will be numbered from 1 to 13 (Ace through King) and the diamonds will be numbered 14 through 26 in the same manner. The jokers will be assigned the values of 27 and 28. Thus, the jack of clubs would have the value 11, and the two of diamonds would have the value 15. (In a full deck of cards, the suits are valued in bridge order: clubs, diamonds, hearts, spades, with the suited cards numbered 1 through 52, and the jokers numbered 53 and 54.)
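A minimal sketch of the card-numbering convention just described, using the full-deck bridge ordering; the helper names are illustrative and not part of Schneier's specification.

```python
# Minimal sketch: numeric card values in the full-deck bridge ordering
# (clubs, diamonds, hearts, spades = 1-52; jokers = 53 and 54).

SUIT_OFFSET = {"clubs": 0, "diamonds": 13, "hearts": 26, "spades": 39}
RANKS = {"ace": 1, "jack": 11, "queen": 12, "king": 13}

def card_value(rank, suit):
    """Numeric value of a suited card; jokers are handled separately."""
    rank_value = RANKS.get(rank, rank if isinstance(rank, int) else None)
    return SUIT_OFFSET[suit] + rank_value

print(card_value("jack", "clubs"))   # 11, as in the example above
print(card_value(2, "diamonds"))     # 15, as in the example above
JOKER_A, JOKER_B = 53, 54
```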
To begin encryption or decryption, arrange the deck of cards face-up in an order previously agreed upon. The person decrypting a message must have a deck arranged in the same order as the deck used by the person wh |
https://en.wikipedia.org/wiki/Von%20Neumann%27s%20inequality | In operator theory, von Neumann's inequality, due to John von Neumann, states that, for a fixed contraction T, the polynomial functional calculus map is itself a contraction.
Formal statement
For a contraction T acting on a Hilbert space and a polynomial p, the norm of p(T) is bounded by the supremum of |p(z)| for z in the unit disk.
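In symbols, for a contraction $T$ on a Hilbert space and a complex polynomial $p$:

```latex
\lVert p(T) \rVert \;\le\; \sup_{\lvert z \rvert \le 1} \lvert p(z) \rvert
```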
Proof
The inequality can be proved by considering the unitary dilation of T, for which the inequality is obvious.
Generalizations
This inequality is a special case of Matsaev's conjecture, which asserts that for any polynomial $P$ and contraction $T$ on $L^p$,
$$\|P(T)\|_{L^p \to L^p} \le \|P(S)\|_{\ell^p \to \ell^p},$$
where $S$ is the right-shift operator. The von Neumann inequality proves the conjecture for $p = 2$, and for $p = 1$ and $p = \infty$ it is true by straightforward calculation.
S.W. Drury has shown in 2011 that the conjecture fails in the general case. |
https://en.wikipedia.org/wiki/Mark%20Bew | Mark Bew MBE (born 5 March 1967) is an English engineer and chairman of the PCSG consultancy business, formerly ECS. Until January 2015, he was also chairman of BuildingSMART's UK chapter, and, from 2011 to 2016, was chair of the UK Government's BIM Task Group, a body created to drive implementation of building information modelling (BIM) across UK public sector construction projects.
Mark Bew worked at John Laing Construction, was business systems director with Costain, and then director of business information systems at URS Scott Wilson, before founding his own company, Engineering Construction Strategies (now a division of PCSG), to focus on the UK BIM strategy. In January 2012, he was recognised in the New Year Honours with an MBE for services to the construction sector. |
https://en.wikipedia.org/wiki/Bioretention | Bioretention is the process in which contaminants and sedimentation are removed from stormwater runoff. The main objective of the bioretention cell is to attenuate peak runoff as well as to remove stormwater runoff pollutants.
Construction of a bioretention area
Stormwater is first directed into the designed treatment area, which conventionally consists of a sand bed (which serves as a transition to the actual soil), a filter media layer (which consists of layered materials of various composition), and plants atop the filter media. Various soil amendments, such as water treatment residue (WTR), coconut husk, and biochar, have been proposed over the years; these materials were reported to enhance performance in terms of pollutant removal. Runoff passes first over or through a sand bed, which slows the runoff's velocity and distributes it evenly along the length of the ponding area, which consists of a surface organic layer and/or groundcover and the underlying planting soil. Stored water in the bioretention area planting soil exfiltrates over a period of days into the underlying soils.
Filtration
Each of the components of the bioretention area is designed to perform a specific function. The grass buffer strip reduces incoming runoff velocity and filters particulates from the runoff. The sand bed also reduces the velocity, filters particulates, and spreads flow over the length of the bioretention area. Aeration and drainage of the planting soil are provided by the deep sand bed. The ponding area provides a temporary storage location for runoff prior to its evaporation or infiltration. Some particulates not filtered out by the grass filter strip or the sand bed settle within the ponding area.
The organic or mulch layer also filters pollutants and provides an environment conducive to the growth of microorganisms, which degrade petroleum-based products and other organic material. This layer acts in a similar way to the leaf litter in a forest and prevents the e |
https://en.wikipedia.org/wiki/Tarjan%27s%20algorithm | Tarjan's algorithm may refer to one of several algorithms attributed to Robert Tarjan, including:
Tarjan's strongly connected components algorithm
Tarjan's off-line lowest common ancestors algorithm
Tarjan's algorithm for finding bridges in an undirected graph
Tarjan's algorithm for finding simple circuits in a directed graph
See also
List of algorithms |
https://en.wikipedia.org/wiki/Pileus%20%28hat%29 | The pileus (also pilleus in Latin) was a brimless felt cap worn in Ancient Greece, Etruria, Illyria (especially Pannonia), and later also introduced in Ancient Rome. The pileus also appears on Apulian red-figure pottery.
The pilos together with the petasos were the most common types of hats in Archaic and Classical era (8th–4th century BC) Greece. In the 5th century BC, a bronze version began to appear in Ancient Greece and it became a popular infantry helmet. It occasionally had a horsehair crest. The Greek pilos resembled the Roman and Etruscan pileus, which were typically made of felt. The Greek () and Latin were smaller versions, similar to a skullcap.
Similar caps were worn in later antiquity and the early medieval ages in various parts of Europe, as seen in Gallic and Frankish dress. The Albanian traditional felt cap, the plis, worn today in Albania, Kosovo and adjacent areas, originated from a similar felt cap worn by the ancient Illyrians.
A pointed version called pileus cornutus served as a distinguishing sign for the Jewish people in the Holy Roman Empire for five centuries (12th–17th centuries).
Name
The word for the cap in antiquity was pil(l)eus or pilos, indicating a kind of felt. Greek πῖλος, Latin pilleus, Albanian plis, as well as Old High German filz and Proto-Slavic *pьlstь are considered to come from a common Proto-Indo-European root meaning "felt".
History
Ancient Greece
Pilos hat
The pilos (Greek: πῖλος, felt) was a typical conical hat in Ancient Greece among travelers, workmen and sailors, though sometimes a low, broad-brimmed version known as the petasos was also preferred. It could be made of felt or leather. The pilos together with the petasos were the most common types of hats in Archaic and Classical era (8th–4th century BC) Greece.
Pilos caps often identify the mythical twins, or Dioscuri, Castor and Pollux, as represented in sculptures, bas-reliefs and on ancient ceramics. Their caps were supposedly the remnants of the egg from which they hatc |
https://en.wikipedia.org/wiki/Myhill%20isomorphism%20theorem | In computability theory the Myhill isomorphism theorem, named after John Myhill, provides a characterization for two numberings to induce the same notion of computability on a set.
Theorem
Definitions
Sets A and B of natural numbers are said to be recursively isomorphic if there is a total computable bijective function f on the natural numbers such that, for any natural number $n$, $n \in A$ if and only if $f(n) \in B$.
A set A of natural numbers is said to be one-one reducible to a set B if there is a total computable injective function f on the natural numbers such that $n \in A$ implies $f(n) \in B$ and $n \notin A$ implies $f(n) \notin B$.
Statement
Myhill's isomorphism theorem states that two sets A and B of natural numbers are recursively isomorphic if and only if A is one-one reducible to B and B is one-one reducible to A.
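Writing $\le_1$ for one-one reducibility and $\equiv$ for recursive isomorphism, the theorem can be stated compactly as:

```latex
A \le_1 B \ \text{ and } \ B \le_1 A \quad\Longleftrightarrow\quad A \equiv B
```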
Corollaries
Two total numberings are one-equivalent if and only if they are recursively isomorphic.
Discussion
The theorem implies that given two injective reductions in opposing directions, there is a computable bijection on the naturals that puts the sets in question in bijective correspondence. This is reminiscent of the Schröder–Bernstein theorem about general sets, and Myhill's theorem has been called a constructive version of it.
Their proofs are however different. The proof of Schröder-Bernstein uses the inverses of the two injections, which is impossible in the setting of the Myhill theorem since these inverses might not be recursive. The proof of the Myhill theorem, on the other hand, defines the bijection inductively, which is impossible in the setting of Schröder-Bernstein unless one uses the Axiom of Choice (which is not necessary for the proof of the Myhill theorem).
See also
Berman–Hartmanis conjecture, an analogous statement in computational complexity theory |
https://en.wikipedia.org/wiki/Similarity%20%28psychology%29 | Similarity refers to the psychological degree of identity of two mental representations. It is fundamental to human cognition since it provides the basis for categorization of entities into kinds and for various other cognitive processes. It underpins our ability to interact with unknown entities by predicting how they will behave based on their similarity to entities we are familiar with. Research in cognitive psychology has taken a number of approaches to the concept of similarity. Each of them is related to a particular set of assumptions about knowledge representation.
Cognitive psychological approaches
Mental distance approaches
Mental distance approaches assume that mental representations can be conceptualized as some kind of mental space. Concepts are represented as points within the space. Similarity between concepts is a function of the distance between the concepts in space. Concepts represented by points that are near to each other are more psychologically similar than are points that are conceptually distant. A strength of this approach is that there are many mathematical techniques for deriving spaces from data, such as multidimensional scaling and latent semantic analysis.
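A minimal sketch of such a spatial model, with concepts as points in a two-dimensional space and similarity falling off exponentially with Euclidean distance; the coordinates and the choice of decay function are illustrative assumptions.

```python
# Minimal sketch of a mental-distance model: concepts are points in a feature
# space and similarity decays with distance. Coordinates are illustrative only.
import math

concepts = {
    "robin":   (0.9, 0.8),   # hypothetical coordinates on two arbitrary dimensions
    "sparrow": (0.8, 0.9),
    "penguin": (0.3, 0.4),
}

def similarity(a, b):
    """Similarity as exponential decay of Euclidean distance."""
    distance = math.dist(concepts[a], concepts[b])
    return math.exp(-distance)

print(similarity("robin", "sparrow"))  # high: nearby points
print(similarity("robin", "penguin"))  # lower: distant points
```

Note that similarity defined this way is symmetric by construction, which is exactly the limitation discussed under the featural approaches below.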
Featural approaches
Featural approaches were developed to address limitations of the mental distance approaches. For example, spaces are symmetric: the distance between two points is the same regardless of which point you start from. However, psychological similarity is not symmetric. We often prefer to state similarity in one direction; for example, it feels more natural to say that 101 is like 100 than to say that 100 is like 101. Furthermore, many metaphors are also directional. Saying "That surgeon is a butcher" means something quite different from saying "That butcher is a surgeon."
Featural approaches assumed that people represent concepts by lists of features that describe properties of the items. A similarity comparison involves comparing the feature lists that |
https://en.wikipedia.org/wiki/Outline%20of%20brain%20mapping | The following outline is provided as an overview of and topical guide to brain mapping:
Brain mapping – set of neuroscience techniques predicated on the mapping of (biological) quantities or properties onto spatial representations of the (human or non-human) brain resulting in maps. Brain mapping is further defined as the study of the anatomy and function of the brain and spinal cord through the use of imaging (including intra-operative, microscopic, endoscopic and multi-modality imaging), immunohistochemistry, molecular and optogenetics, stem cell and cellular biology, engineering (material, electrical and biomedical), neurophysiology and nanotechnology.
Broad scope
History of neuroscience
History of neurology
Brain mapping
Human brain
Neuroscience
Nervous system.
The neuron doctrine
Neuron doctrine – A carefully constructed elementary set of observations regarding neurons. For more granularity, more current, and more advanced topics, see the cellular level section
Asserts that neurons fall under the broader cell theory, which postulates:
All living organisms are composed of one or more cells.
The cell is the basic unit of structure, function, and organization in all organisms.
All cells come from preexisting, living cells.
The Neuron doctrine postulates several elementary aspects of neurons:
The brain is made up of individual cells (neurons) that contain specialized features such as dendrites, a cell body, and an axon.
Neurons are cells differentiable from other tissues in the body.
Neurons differ in size, shape, and structure according to their location or functional specialization.
Every neuron has a nucleus, which is the trophic center of the cell (The part which must have access to nutrition). If the cell is divided, only the portion containing the nucleus will survive.
Nerve fibers are the result of cell processes and the outgrowths of nerve cells. (Several axons are bound together to form one nerve fibril. See also: Neurofilament. |
https://en.wikipedia.org/wiki/Hardbass | Hardbass or hard bass is a subgenre of pumping house that originated in Saint Petersburg, Russia during the late 1990s, drawing inspiration from bouncy techno, hardstyle, as well as local Russian influences. Hardbass is characterized by its fast tempo (usually 150–175 BPM), donks, distinctive basslines (commonly known as "hard bounce"), distorted sounds, heavy kicks and occasional chants or rapping. In several European countries, so-called "hardbass scenes" have sprung up, which are events related to the genre that involve multiple people dancing in public while masked, sometimes with moshing involved.
History
Late 1990s–mid 2000s: Saint Petersburg, metal shade, drug raves
Hardbass first began to emerge in the late 1990s, mainly in the Saint Petersburg electronic dance music underground, when the pumping house genre, built around the bamboo bass, or donk bass (a type of metallic bass synthesizer sound, first invented by Klubbheads in 1997), became a staple in local raves. Eventually, party nights dedicated solely to pumping house were held in Saint Petersburg and to a lesser extent, in Moscow. The most famous venues for pumping raves in Saint Petersburg included those held in the "Rassvet" (Dawn) club and forest raves in a quarry near , an artificial lake not far from Saint Petersburg. Among the DJs kickstarting the domestic pumping house production in Russia were DJ Tolstyak, DJ 8088, DJ Yurbanoid, DJ Solovey, Dj Glyuk, and many others.
This raving scene was markedly different from its later offshoots. It formed a distinct subculture, mostly catering to the lower and middle class youth of Saint Petersburg. Drug use (especially barbiturate, xyrem and amphetamine use) became prevalent in the scene.
To increase the energy of the parties, Saint Petersburg producers and DJs started to increase the BPM of the pumping house they played and produced, eventually reaching 150 BPM and beyond. Saint Petersburg producers would include distinct whistles and other samples |
https://en.wikipedia.org/wiki/Annona%20longiflora | Annona longiflora is a species of plant in the family Annonaceae. It is endemic to Mexico. Sereno Watson, the American botanist who first formally described the species, named it after its long (longus in Latin) flowers.
Description
It is a bush reaching 0.9 meters in height. Its leaves are 5.1-10.2 centimeters long and come to a point at their tip. Its leaves are nearly hairless on their upper surface and covered in soft short hairs on their lower surface. Its triangular to oval sepals are 5.6 millimeters long. Its oblong, outer petals are 5.1 centimeters long. The outer petals are white with a black base. The outer petals are convex at their base and hairless on their inner surface. Its inner petals are essentially absent. Its fruit is globe-shaped or oval, 3.8 centimeters long with a reticulated surface. Its seeds are smooth and shiny.
Reproductive biology
The pollen of A. longiflora is shed as permanent tetrads.
Distribution and habitat
It grows in ravines.
Uses
It is used as a native uncultivated edible fruit in Mexico. Representations of A. longiflora have been found on ceramic jars dating from 100 to 400 CE, supporting the idea that it was used as part of early food systems. |
https://en.wikipedia.org/wiki/Steadfast%20Networks | Steadfast Networks is a Chicago, Illinois-based Internet Service Provider primarily focused on Cloud Computing, Dedicated Servers and Colocation. It is a division of Nozone, Inc., a company founded in 1998 by then high-school student Karl Zimmerman in Fond du Lac, Wisconsin and incorporated in 2000. In 2008 it was named number 370 on the Inc 500 5000 Fastest Growing Private Companies in America List and was on the Inc 5000 List of the Largest Companies in America in both 2009 and 2010.
Locations
Steadfast currently has datacenter facilities in three locations: two in downtown Chicago and one in Edison, New Jersey.
Steadfast has a private network with peering points throughout the United States along with London and Amsterdam.
Controversy
In November 2007 the Chicago Tribune published an article on the hosting of hate sites in America, where they are protected by the First Amendment. The Anti-Defamation League noted that Steadfast hosted 17 such sites. Steadfast responded on the official company blog to clarify that they are strong supporters of the First Amendment, but against hate speech. |
https://en.wikipedia.org/wiki/Fission%20product%20yield | Nuclear fission splits a heavy nucleus such as uranium or plutonium into two lighter nuclei, which are called fission products. Yield refers to the fraction of a fission product produced per fission.
Yield can be broken down by:
Individual isotope
Chemical element spanning several isotopes of different mass number but same atomic number.
Nuclei of a given mass number regardless of atomic number. Known as "chain yield" because it represents a decay chain of beta decay.
Isotope and element yields will change as the fission products undergo beta decay, while chain yields do not change after completion of neutron emission by a few neutron-rich initial fission products (delayed neutrons), with half-life measured in seconds.
A few isotopes can be produced directly by fission, but not by beta decay because the would-be precursor with atomic number one greater is stable and does not decay. Chain yields do not account for these "shadowed" isotopes; however, they have very low yields (less than a millionth as much as common fission products) because they are far less neutron-rich than the original heavy nuclei.
Yield is usually stated as percentage per fission, so that the total yield percentages sum to 200%. Less often, it is stated as percentage of all fission products, so that the percentages sum to 100%. Ternary fission, about 0.2–0.4% of fissions, also produces a third light nucleus such as helium-4 (90%) or tritium (7%).
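A minimal sketch of converting between the two conventions described above; the nuclides and yield values are illustrative placeholders, not measured data.

```python
# Minimal sketch: per-fission yield percentages sum to ~200% (two fragments per
# fission), so dividing by two rescales them to percentages of all products.

yields_per_fission = {"Sr-90": 5.8, "Cs-137": 6.1, "Xe-135": 6.3}  # percent per fission (illustrative)

def to_percent_of_products(yields):
    """Rescale so that a complete set of yields would total 100% of all fragments."""
    return {nuclide: value / 2.0 for nuclide, value in yields.items()}

print(to_percent_of_products(yields_per_fission))
# {'Sr-90': 2.9, 'Cs-137': 3.05, 'Xe-135': 3.15}  (percent of all fission products)
```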
Mass vs. yield curve
If a graph of the mass or mole yield of fission products against the atomic number of the fragments is drawn then it has two peaks, one in the area zirconium through to palladium and one at xenon through to neodymium. This is because the fission event causes the nucleus to split in an asymmetric manner, as nuclei closer to magic numbers are more stable.
Yield vs. Z - This is a typical distribution for the fission of uranium. Note that in the calculations used to make this graph the activation of fission products was ignor |
https://en.wikipedia.org/wiki/Homothetic%20center | In geometry, a homothetic center (also called a center of similarity or a center of similitude) is a point from which at least two geometrically similar figures can be seen as a dilation or contraction of one another. If the center is external, the two figures are directly similar to one another; their angles have the same rotational sense. If the center is internal, the two figures are scaled mirror images of one another; their angles have the opposite sense.
General polygons
If two geometric figures possess a homothetic center, they are similar to one another; in other words they must have the same angles at corresponding points and differ only in their relative scaling. The homothetic center and the two figures need not lie in the same plane; they can be related by a projection from the homothetic center.
Homothetic centers may be external or internal. If the center is internal, the two geometric figures are scaled mirror images of one another; in technical language, they have opposite chirality. A clockwise angle in one figure would correspond to a counterclockwise angle in the other. Conversely, if the center is external, the two figures are directly similar to one another; their angles have the same sense.
Circles
Circles are geometrically similar to one another and mirror symmetric. Hence, a pair of circles has both types of homothetic centers, internal and external, unless the centers are equal or the radii are equal; these exceptional cases are treated after general position. These two homothetic centers lie on the line joining the centers of the two given circles, which is called the line of centers (Figure 3). Circles with radius zero can also be included (see exceptional cases), and negative radius can also be used, switching external and internal.
Computing homothetic centers
For a given pair of circles, the internal and external homothetic centers may be found in various ways. In analytic geometry, the internal homothetic center is the wei |
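A minimal sketch of one analytic-geometry route, assuming the standard weighted-average formulas in which the internal center divides the line of centers internally in the ratio of the radii and the external center divides it externally.

```python
# Minimal sketch: external and internal homothetic centers of two circles,
# computed from their centers and radii. Example values are illustrative.

def homothetic_centers(c1, r1, c2, r2):
    """Return (external, internal) homothetic centers of two circles.

    c1, c2 are (x, y) centers; r1, r2 are radii. The external center is
    undefined (at infinity) when the radii are equal, so None is returned."""
    (x1, y1), (x2, y2) = c1, c2
    internal = ((r2 * x1 + r1 * x2) / (r1 + r2),
                (r2 * y1 + r1 * y2) / (r1 + r2))
    if r1 == r2:
        external = None  # equal radii: the external homothety is a translation
    else:
        external = ((r1 * x2 - r2 * x1) / (r1 - r2),
                    (r1 * y2 - r2 * y1) / (r1 - r2))
    return external, internal

print(homothetic_centers((0.0, 0.0), 1.0, (6.0, 0.0), 2.0))
# external at (-6, 0), internal at (2, 0)
```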
https://en.wikipedia.org/wiki/Plant%20stress%20measurement | Plant stress measurement is the quantification of environmental effects on plant health. When plants are subjected to less than ideal growing conditions, they are considered to be under stress. Stress factors can affect growth, survival and crop yields. Plant stress research looks at the response of plants to limitations and excesses of the main abiotic factors (light, temperature, water and nutrients), and of other stress factors that are important in particular situations (e.g. pests, pathogens, or pollutants). Plant stress measurement usually focuses on taking measurements from living plants. It can involve visual assessments of plant vitality; however, more recently the focus has moved to the use of instruments and protocols that reveal the response of particular processes within the plant (especially photosynthesis, plant cell signalling and plant secondary metabolism). Typical applications include:
Determining the optimal conditions for plant growth, e.g. optimising water use in an agricultural system
Determining the climatic range of different species or subspecies
Determining which species or subspecies are resistant to a particular stress factor
Instruments used to measure plant stress
Measurements can be made from living plants using specialised equipment. Among the most commonly used instruments are those that measure parameters related to photosynthesis (chlorophyll content, chlorophyll fluorescence, gas exchange) or water use (porometer, pressure bomb). In addition to these general purpose instruments, researchers often design or adapt other instruments tailored to the specific stress response they are studying.
Photosynthesis systems
Photosynthesis systems use infrared gas analyzers (IRGAs) for measuring photosynthesis. CO2 concentration changes in leaf chambers are measured to provide carbon assimilation values for leaves or whole plants. Research has shown that the rate of photosynthesis is directly related to the amount of carbon assimilated by the plant. Measuring CO2 in th |
https://en.wikipedia.org/wiki/Melting%20curve%20analysis | Melting curve analysis is an assessment of the dissociation characteristics of double-stranded DNA during heating. As the temperature is raised, the double strand begins to dissociate, leading to a rise in the absorbance intensity, hyperchromicity. The temperature at which 50% of the DNA is denatured is known as the melting temperature. Because every organism's DNA produces a characteristic melting curve, measuring the melting temperature can help to identify the species present.
The information gathered can be used to infer the presence and identity of single-nucleotide polymorphisms (SNPs). This is because G-C base pairs have three hydrogen bonds between them while A-T base pairs have only two. DNA in which an A or T has mutated to a C or G will therefore have a higher melting temperature.
The information also gives vital clues to a molecule's mode of interaction with DNA. Molecules such as intercalators slot in between base pairs and interact through pi stacking. This has a stabilizing effect on the DNA's structure, which leads to a rise in its melting temperature. Likewise, increasing salt concentration helps to screen the negative repulsion between the phosphates in the DNA backbone, which also leads to a rise in the DNA's melting temperature. Conversely, pH can have a negative effect on DNA's stability, which may lead to a lowering of its melting temperature.
Implementation
The energy required to break the base-base hydrogen bonding between two strands of DNA is dependent on their length, GC content and their complementarity. By heating a reaction-mixture that contains double-stranded DNA sequences and measuring dissociation against temperature, these attributes can be inferred.
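As a rough illustration of how length and GC content affect the melting temperature (a sketch not drawn from the article; the Wallace rule used below, its coefficients, and its validity range are assumptions):
```python
def wallace_tm(seq):
    """Rough melting-temperature estimate (Wallace rule) for short oligonucleotides.

    Tm ~ 2 degC per A or T plus 4 degC per G or C; a first approximation only,
    usually quoted for sequences of about 14-20 bases under standard salt conditions.
    """
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

print(wallace_tm("ATGCATGCATGCAT"))  # GC-poor sequence -> lower estimate (40)
print(wallace_tm("GCGCGCGCGCGCGC"))  # GC-rich sequence -> higher estimate (56)
```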
Originally, strand dissociation was observed using UV absorbance measurements, but techniques based on fluorescence measurements are now the most common approach.
The temperature-dependent dissociation between two DNA-strands can be measured using a DNA-intercalating fluorophor |
https://en.wikipedia.org/wiki/Basel%20Computational%20Biology%20Conference | The Basel Computational Biology Conference (stylized as [BC]2) is a scientific meeting on the subjects of bioinformatics and computational biology. It covers a wide spectrum of disciplines, including bioinformatics, computational biology, genomics, computational structural biology, and systems biology. The conference is organized biannually by the SIB Swiss Institute of Bioinformatics in Basel, Switzerland.
The next conference
2021 [BC]2 Basel Computational Biology Conference
List of previous conferences
2019 [BC]2 Basel Computational Biology Conference "Big Data in Molecular Medicine" in association with BASEL LIFE
2017 [BC]2 Basel Computational Biology Conference
2015 [BC]2 Basel Computational Biology Conference
2013 "Genetic Variation + Human Health"
2012 "ECCB'12, 11th European Conference on Computational Biology" in association with the 10th Basel Computational Biology conference.
2011 "Multiscale Modeling"
2010 "Regulation & Control in Biological Systems"
2009 "Molecular Evolution"
2008 "Computational Structural Biology"
2007 "From Euler to Computational Biology" in association with USGEB
2006 "Comparative Genomics"
2005 "Biological Systems In Silico"
2004 "From Information to Simulation"
2003 "Life Sciences Meet IT" |
https://en.wikipedia.org/wiki/Jour%20de%20neige | "Jour de neige" is a 1988 song recorded by French singer Elsa Lunghini. Written by Pierre Grosz, with music composed by Vincent-Marie Bouvot and Georges Lunghini, it was released in November 1988 as the third single from her debut album Elsa. Like the previous singles, it was very successful in France, reaching number two. Two years later, it was released in Italy and Spain in those countries' languages.
Background
As for the other songs from Elsa's debut album, the lyrics were written by Pierre Grosz. The music was composed by Vincent-Marie Bouvot and Georges Lunghini, the singer's father.
The song was also recorded in Italian ("Gli anni miei") and in Spanish ("Solo era un sueño"). All these versions are available on Elsa's 1997 best-of album, Elsa, l'essentiel 1986-1993. Two remixed and extended versions, as well as two instrumental versions, feature on the various formats. The megamix club version is also available on the CD maxi of "Quelque chose dans mon cœur".
Video and performances
As the title suggests, the lyrics are about snow and the pleasures it brings. The music video, directed by Bernard Schmitt, was shot in the snow and at the strict secondary school "Les Chassagnes" in Oullins, France.
Elsa sang it live during her concert at the Olympia in 1990 and during her subsequent concerts. She also sang it during her concert at the Bataclan in 1997. During the preparation of her concert at the Européen in 2005, the song was initially scheduled, but it was withdrawn because it did not fit the setlist.
Chart performances
In France, "Jour de neige" was successful, debuting straight to number 12 on the chart edition of 26 November 1988, and entered the top ten two weeks later, stayed there for 14 consecutive weeks, peaking at number two for non consecutive two weeks, being first blocked by but did not manage to dislodge Mylène Farmer's "Pourvu qu'elles soient douces", then David Hallyday's "High", which topped the chart then. It fell off t |
https://en.wikipedia.org/wiki/Dolbear%27s%20law | Dolbear's law states the relationship between the air temperature and the rate at which crickets chirp. It was formulated by Amos Dolbear and published in 1897 in an article called "The Cricket as a Thermometer". Dolbear's observations on the relation between chirp rate and temperature were preceded by an 1881 report by Margarette W. Brooks, although this paper went unnoticed until after Dolbear's publication.
Dolbear did not specify the species of cricket which he observed, although subsequent researchers assumed it to be the snowy tree cricket, Oecanthus niveus. However, the snowy tree cricket was misidentified as O. niveus in early reports and the correct scientific name for this species is Oecanthus fultoni.
The chirping of the more common field crickets is not as reliably correlated to temperature—their chirping rate varies depending on other factors such as age and mating success. In many cases, though, Dolbear's formula is a close enough approximation for field crickets, too.
Dolbear expressed the relationship as the following formula, which provides a way to estimate the temperature T_F in degrees Fahrenheit from the number of chirps per minute N_60: T_F = 50 + (N_60 - 40) / 4
This formula is accurate to within a degree or so when applied to the chirping of the field cricket.
Counting can be sped up by simplifying the formula and counting the number of chirps produced in 15 seconds (N_15): T_F = 40 + N_15
Reformulated to give the temperature in degrees Celsius (°C), it is: T_C = 10 + (N_60 - 40) / 7
A shortcut method for degrees Celsius is to count the number of chirps in 8 seconds (N_8) and add 5 (this is fairly accurate between 5 and 30 °C): T_C = 5 + N_8
The above formulae are expressed in terms of integers to make them easier to remember—they are not intended to be exact.
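A minimal sketch of these conversions as code (the function names are invented for illustration; the formulas are the approximations given above):
```python
def fahrenheit_from_chirps_per_minute(n60):
    """Dolbear's relation: T_F = 50 + (N60 - 40) / 4."""
    return 50 + (n60 - 40) / 4

def fahrenheit_from_chirps_in_15s(n15):
    """Simplified count: chirps in 15 seconds plus 40."""
    return n15 + 40

def celsius_from_chirps_in_8s(n8):
    """Shortcut: chirps in 8 seconds plus 5 (roughly valid for 5-30 degC)."""
    return n8 + 5

print(fahrenheit_from_chirps_per_minute(120))  # 70.0
print(fahrenheit_from_chirps_in_15s(30))       # 70  (same cricket, 30 chirps per 15 s)
print(celsius_from_chirps_in_8s(16))           # 21  (about 70 degF)
```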
In math classes
Math textbooks will sometimes cite this as a simple example of where mathematical models break down: at temperatures outside the range in which crickets live, the number of chirps is zero, because the crickets are dead. You can apply algebra to the e |
https://en.wikipedia.org/wiki/Sense%20Plan%20Act | Sense-Plan-Act was the predominant robot control methodology through 1985.
Sense - gather information using the sensors
Plan - create a world model using all the information, and plan the next move
Act - carry out the planned action using the actuators
SPA is used in iterations: after the acting phase, the cycle repeats, beginning again with the sensing phase.
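A minimal sketch of such a loop (the robot and world-model interfaces used here are hypothetical, not from any particular framework):
```python
def sense(robot):
    """Gather information from the robot's sensors."""
    return robot.read_sensors()

def plan(observations, world_model):
    """Fold the observations into the world model and choose the next move."""
    world_model.update(observations)
    return world_model.next_action()

def act(robot, action):
    """Carry out the chosen action on the actuators."""
    robot.execute(action)

def sense_plan_act(robot, world_model):
    # The cycle repeats indefinitely: sense, plan, act, then sense again.
    while True:
        observations = sense(robot)
        action = plan(observations, world_model)
        act(robot, action)
```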
See also: OODA loop, PDCA, Continual improvement process
Robot architectures |
https://en.wikipedia.org/wiki/Faber%20polynomials | In mathematics, the Faber polynomials P_m of a Laurent series f(z) = 1/z + a_0 + a_1 z + a_2 z^2 + ...
are the polynomials such that P_m(f) - 1/z^m
vanishes at z = 0. They were introduced by Georg Faber and subsequently studied by several other authors. |
https://en.wikipedia.org/wiki/Atiyah%E2%80%93Segal%20completion%20theorem | The Atiyah–Segal completion theorem is a theorem in mathematics about equivariant K-theory in homotopy theory. Let G be a compact Lie group and let X be a G-CW-complex. The theorem then states that the projection map X × EG → X
induces an isomorphism of prorings K_G*(X)^∧_I → K_G*(X × EG) ≅ K*((X × EG)/G).
Here, the induced map has as domain the completion of the G-equivariant K-theory of X with respect to I, where I denotes the augmentation ideal of the representation ring of G.
In the special case of X being a point, the theorem specializes to give an isomorphism between the K-theory of the classifying space of G and the completion of the representation ring.
The theorem can be interpreted as giving a comparison between the geometrical process of taking the homotopy quotient of a G-space, by making the action free before passing to the quotient, and the algebraic process of completing with respect to an ideal.
The theorem was first proved for finite groups by Michael Atiyah in 1961,
and a proof of the general case was published by Atiyah together with Graeme Segal in 1969.
Different proofs have since appeared generalizing the theorem to completion with respect to families of subgroups.
The corresponding statement for algebraic K-theory was proven by Alexander Merkurjev, holding in the case that the group is algebraic over the complex numbers.
See also
Segal conjecture |
https://en.wikipedia.org/wiki/Lymphocyte | A lymphocyte is a type of white blood cell (leukocyte) in the immune system of most vertebrates. Lymphocytes include T cells (for cell-mediated, cytotoxic adaptive immunity), B cells (for humoral, antibody-driven adaptive immunity), and Innate lymphoid cells (ILCs) ("innate T cell-like" cells involved in mucosal immunity and homeostasis), of which natural killer cells are an important subtype (which functions in cell-mediated, cytotoxic innate immunity). They are the main type of cell found in lymph, which prompted the name "lymphocyte" (with cyte meaning cell). Lymphocytes make up between 18% and 42% of circulating white blood cells.
Types
The three major types of lymphocyte are T cells, B cells and natural killer (NK) cells. Lymphocytes can be identified by their large nucleus.
T cells and B cells
T cells (thymus cells) and B cells (bone marrow- or bursa-derived cells) are the major cellular components of the adaptive immune response. T cells are involved in cell-mediated immunity, whereas B cells are primarily responsible for humoral immunity (relating to antibodies). The function of T cells and B cells is to recognize specific "non-self" antigens, during a process known as antigen presentation. Once they have identified an invader, the cells generate specific responses that are tailored maximally to eliminate specific pathogens or pathogen-infected cells. B cells respond to pathogens by producing large quantities of antibodies which then neutralize foreign objects like bacteria and viruses. In response to pathogens some T cells, called T helper cells, produce cytokines that direct the immune response, while other T cells, called cytotoxic T cells, produce toxic granules that contain powerful enzymes which induce the death of pathogen-infected cells. Following activation, B cells and T cells leave a lasting legacy of the antigens they have encountered, in the form of memory cells. Throughout the lifetime of an animal, these memory cells will "remember" each s |
https://en.wikipedia.org/wiki/Stern%E2%80%93Gerlach%20experiment | In quantum physics, the Stern–Gerlach experiment demonstrated that the spatial orientation of angular momentum is quantized. Thus an atomic-scale system was shown to have intrinsically quantum properties. In the original experiment, silver atoms were sent through a spatially-varying magnetic field, which deflected them before they struck a detector screen, such as a glass slide. Particles with non-zero magnetic moment were deflected, owing to the magnetic field gradient, from a straight path. The screen revealed discrete points of accumulation, rather than a continuous distribution, owing to their quantized spin. Historically, this experiment was decisive in convincing physicists of the reality of angular-momentum quantization in all atomic-scale systems.
After its conception by Otto Stern in 1921, the experiment was first successfully conducted with Walther Gerlach in early 1922.
Description
The Stern–Gerlach experiment involves sending silver atoms through an inhomogeneous magnetic field and observing their deflection.
The results show that particles possess an intrinsic angular momentum that is closely analogous to the angular momentum of a classically spinning object, but that takes only certain quantized values. Another important result is that only one component of a particle's spin can be measured at one time, meaning that the measurement of the spin along the z-axis destroys information about a particle's spin along the x and y axes.
The experiment is normally conducted using electrically neutral particles such as silver atoms. This avoids the large deflection in the path of a charged particle moving through a magnetic field and allows spin-dependent effects to dominate.
If the particle is treated as a classical spinning magnetic dipole, it will precess in a magnetic field because of the torque that the magnetic field exerts on the dipole (see torque-induced precession). If it moves through a homogeneous magnetic field, the forces exerted on opposite |
https://en.wikipedia.org/wiki/Microbiology | Microbiology () is the scientific study of microorganisms, those being of unicellular (single-celled), multicellular (consisting of complex cells), or acellular (lacking cells). Microbiology encompasses numerous sub-disciplines including virology, bacteriology, protistology, mycology, immunology, and parasitology.
Eukaryotic microorganisms possess membrane-bound organelles and include fungi and protists, whereas prokaryotic organisms—all of which are microorganisms—are conventionally classified as lacking membrane-bound organelles and include Bacteria and Archaea. Microbiologists traditionally relied on culture, staining, and microscopy for the isolation and identification of microorganisms. However, less than 1% of the microorganisms present in common environments can be cultured in isolation using current means. With the emergence of biotechnology, microbiologists currently rely on molecular biology tools such as DNA sequence-based identification, for example, the 16S rRNA gene sequence used for bacterial identification.
Viruses have been variably classified as organisms, as they have been considered either as very simple microorganisms or as very complex molecules. Prions have never been considered microorganisms, but they have been investigated by virologists because the clinical effects traced to them were originally presumed to be due to chronic viral infections; that search led to the discovery of "infectious proteins".
The existence of microorganisms was predicted many centuries before they were first observed, for example by the Jains in India and by Marcus Terentius Varro in ancient Rome. The first recorded microscope observation was of the fruiting bodies of moulds, by Robert Hooke in 1666, but the Jesuit priest Athanasius Kircher was likely the first to see microbes, which he mentioned observing in milk and putrid material in 1658. Antonie van Leeuwenhoek is considered a father of microbiology as he observed and experimented with microscopic organisms in the 1670s, us |
https://en.wikipedia.org/wiki/Hamilton%27s%20principle | In physics, Hamilton's principle is William Rowan Hamilton's formulation of the principle of stationary action. It states that the dynamics of a physical system are determined by a variational problem for a functional based on a single function, the Lagrangian, which may contain all physical information concerning the system and the forces acting on it. The variational problem is equivalent to and allows for the derivation of the differential equations of motion of the physical system. Although formulated originally for classical mechanics, Hamilton's principle also applies to classical fields such as the electromagnetic and gravitational fields, and plays an important role in quantum mechanics, quantum field theory and criticality theories.
Mathematical formulation
Hamilton's principle states that the true evolution q(t) of a system described by N generalized coordinates q = (q_1, q_2, ..., q_N) between two specified states q_1 = q(t_1) and q_2 = q(t_2) at two specified times t_1 and t_2 is a stationary point (a point where the variation is zero) of the action functional S[q] = ∫_{t_1}^{t_2} L(q(t), q̇(t), t) dt
where L(q, q̇, t) is the Lagrangian function for the system. In other words, any first-order perturbation of the true evolution results in (at most) second-order changes in S. The action S[q] is a functional, i.e., something that takes as its input a function and returns a single number, a scalar. In terms of functional analysis, Hamilton's principle states that the true evolution of a physical system is a solution of the functional equation δS[q]/δq(t) = 0
That is, the system takes a path in configuration space for which the action is stationary, with fixed boundary conditions at the beginning and the end of the path.
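As an illustration of how the stationarity condition produces equations of motion (a sketch, not part of the article; the harmonic-oscillator Lagrangian and the use of SymPy are assumptions):
```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')

# Lagrangian of a one-dimensional harmonic oscillator: L = T - V
L = sp.Rational(1, 2) * m * q(t).diff(t)**2 - sp.Rational(1, 2) * k * q(t)**2

# Stationarity of the action S = integral of L dt gives the Euler-Lagrange equation
eqs = euler_equations(L, q(t), t)
print(eqs)  # equivalent to m*q''(t) + k*q(t) = 0
```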
Euler–Lagrange equations derived from the action integral
See also more rigorous derivation Euler–Lagrange equation
Requiring that the true trajectory be a stationary point of the action functional is equivalent to a set of differential equations for (the Euler–Lagrange equations), which may be derived as follows.
Let represent the true evolution of the syst |
https://en.wikipedia.org/wiki/Cheng%20cycle | The Cheng cycle is a thermodynamic cycle which uses a combination of two working fluids, one gas and one steam. It can therefore be considered a combination of the Brayton cycle and the Rankine cycle. It was named for Dr. Dah Yu Cheng.
The company founded by Dr. Cheng has developed systems in partnership with both GM and GE turbine manufacturers to take advantage of the Cheng cycle by modification of existing turbine designs before construction. The Cheng cycle involves the heated exhaust gas from the turbine being used to make steam in a heat recovery steam generator (HRSG). The steam so produced is injected into the gas turbine's combustion chamber to increase power output. The process can be thought of as a parallel combination of the gas turbine Brayton cycle and a steam turbine Rankine cycle. The cycle was invented by Prof. Dah Yu Cheng of the University of Santa Clara who patented it in 1976.
A turbine designed fully around the Cheng cycle, using the combined gas/steam two-fluid flow, can achieve a theoretical thermal efficiency of 60%, matching or even exceeding many traditional combined-cycle gas turbines that keep the steam Rankine cycle and the gas Brayton cycle as separate loops.
See also
Combined cycle
Brayton cycle
Rankine cycle
Cogeneration |
https://en.wikipedia.org/wiki/Home%20range | A home range is the area in which an animal lives and moves on a periodic basis. It is related to the concept of an animal's territory which is the area that is actively defended. The concept of a home range was introduced by W. H. Burt in 1943. He drew maps showing where the animal had been observed at different times. An associated concept is the utilization distribution which examines where the animal is likely to be at any given time. Data for mapping a home range used to be gathered by careful observation, but nowadays, the animal is fitted with a transmission collar or similar GPS device.
The simplest way of measuring the home range is to construct the smallest possible convex polygon around the data but this tends to overestimate the range. The best known methods for constructing utilization distributions are the so-called bivariate Gaussian or normal distribution kernel density methods. More recently, nonparametric methods such as the Burgman and Fox's alpha-hull and Getz and Wilmers local convex hull have been used. Software is available for using both parametric and nonparametric kernel methods.
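A minimal sketch of the minimum convex polygon estimate (the simulated GPS fixes, units and the use of SciPy below are illustrative assumptions, not data from the article):
```python
import numpy as np
from scipy.spatial import ConvexHull

# Simulated relocation fixes (x, y) in metres, standing in for GPS-collar data
rng = np.random.default_rng(0)
fixes = rng.normal(loc=[500.0, 300.0], scale=[120.0, 80.0], size=(200, 2))

# Minimum convex polygon (MCP) home-range estimate
hull = ConvexHull(fixes)
mcp_area_m2 = hull.volume   # for 2-D input, .volume is the enclosed area (.area is the perimeter)
print(f"MCP home range: {mcp_area_m2 / 1e4:.1f} ha")
```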
History
The concept of the home range can be traced back to a publication in 1943 by W. H. Burt, who constructed maps delineating the spatial extent or outside boundary of an animal's movement during the course of its everyday activities. Associated with the concept of a home range is the concept of a utilization distribution, which takes the form of a two dimensional probability density function that represents the probability of finding an animal in a defined area within its home range. The home range of an individual animal is typically constructed from a set of location points that have been collected over a period of time, identifying the position in space of an individual at many points in time. Such data are now collected automatically using collars placed on individuals that transmit through satellites or using mobile cellphone technology and global pos |
https://en.wikipedia.org/wiki/Tate%20vector%20space | In mathematics, a Tate vector space is a vector space obtained from finite-dimensional vector spaces in a way that makes it possible to extend concepts such as dimension and determinant to an infinite-dimensional situation. Tate spaces were introduced by , who named them after John Tate.
Introduction
A typical example of a Tate vector space over a field k is the space of formal Laurent series V = k((t)).
It has two characteristic features:
as n grows, V is the union of its submodules t^(-n)k[[t]], where k[[t]] denotes the power series ring. These submodules are referred to as lattices.
Even though each lattice is an infinite-dimensional vector space, the quotients of any two such lattices, for example t^(-n)k[[t]] / t^(m)k[[t]] with m ≥ -n,
are finite-dimensional k-vector spaces.
Tate modules
Tate modules were introduced by Drinfeld to serve as a notion of infinite-dimensional vector bundles. For any ring R, Drinfeld defined elementary Tate modules to be topological R-modules of the form P ⊕ Q*
where P and Q are projective R-modules (of possibly infinite rank) and * denotes the dual.
For a field, Tate vector spaces in this sense are equivalent to locally linearly compact vector spaces, a concept going back to Lefschetz. These are characterized by the property that they have a base of the topology consisting of commensurable sub-vector spaces.
Tate objects
Tate objects can be defined in the context of any exact category C. Briefly, an exact category is way to axiomatize certain features of short exact sequences. For example, the category of finite-dimensional k-vector spaces, or the category of finitely generated projective R-modules, for some ring R, is an exact category, with its usual notion of short exact sequences.
The extension of the above example to a more general situation is based on the following observation: there is an exact sequence 0 → k[[t]] → k((t)) → k((t))/k[[t]] → 0
whose outer terms are an inverse limit and a direct limit, respectively, of finite-dimensional k-vector spaces
In general, for an exact category C, there is the category Pro(C) of pro-objects and the category Ind |
https://en.wikipedia.org/wiki/Gauss%20pseudospectral%20method | The Gauss pseudospectral method (GPM), one of many topics named after Carl Friedrich Gauss, is a direct transcription method for discretizing a continuous optimal control problem into a nonlinear program (NLP). The Gauss pseudospectral method differs from several other pseudospectral methods in that the dynamics are not collocated at either endpoint of the time interval. This collocation, in conjunction with the proper approximation to the costate, leads to a set of KKT conditions that are identical to the discretized form of the first-order optimality conditions. This equivalence between the KKT conditions and the discretized first-order optimality conditions leads to an accurate costate estimate using the KKT multipliers of the NLP.
Description
The method is based on the theory of orthogonal collocation where the collocation points (i.e., the points at which the optimal control problem is discretized) are the Legendre–Gauss (LG) points. The approach used in the GPM is to use a Lagrange polynomial approximation for the state that includes coefficients for the initial state plus the values of the state at the N LG points. In a somewhat opposite manner, the approximation for the costate (adjoint) is performed using a basis of Lagrange polynomials that includes the final value of the costate plus the costate at the N LG points. These two approximations together lead to the ability to map the KKT multipliers of the nonlinear program (NLP) to the costates of the optimal control problem at the N LG points PLUS the boundary points. The costate mapping theorem that arises from the GPM has been described in several references including two PhD theses and journal articles that include the theory along with applications
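For concreteness, the LG collocation points and quadrature weights on [-1, 1] can be generated with standard Gauss–Legendre routines and mapped to a physical time interval (an illustrative sketch; N and the interval are arbitrary choices):
```python
import numpy as np

# Legendre-Gauss (LG) collocation points and quadrature weights on [-1, 1]
N = 5
tau, w = np.polynomial.legendre.leggauss(N)

# Map the LG points to a physical time interval [t0, tf]; note that neither
# endpoint is a collocation point, as the description above emphasises.
t0, tf = 0.0, 10.0
t = (tf - t0) / 2.0 * tau + (tf + t0) / 2.0
print(t)
print(w)
```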
Background
Pseudospectral methods, also known as orthogonal collocation methods, in optimal control arose from spectral methods which were traditionally used to solve fluid dynamics problems. Seminal work in orthogonal collocation methods for optimal |
https://en.wikipedia.org/wiki/Applicative%20computing%20systems | Applicative computing systems, or ACS are the systems of object calculi founded on combinatory logic and lambda calculus.
The only essential notion under consideration in these systems is the representation of an object. In combinatory logic the only metaoperator is application, in the sense of applying one object to another. In lambda calculus two metaoperators are used: application, the same as in combinatory logic, and functional abstraction, which binds a single variable in an object.
Features
The objects generated in these systems are the functional entities with the following features:
the number of argument places, or the arity of an object, is not fixed in advance but is revealed step by step through its interoperations with other objects;
in the process of generating a compound object, one of its constituents, the function, is applied to the other, the argument, but in other contexts they can exchange roles; that is, functions and arguments are treated on an equal footing;
self-application of functions is allowed, i.e. any object can be applied to itself.
ACS provide a sound foundation for the applicative approach to programming.
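A small illustrative sketch of application-only computation (the curried Python encoding and the examples are assumptions made here, not material from the source):
```python
# K and S combinators written as curried functions; application is an ordinary call.
K = lambda x: lambda y: x                        # K x y = x
S = lambda x: lambda y: lambda z: x(z)(y(z))     # S x y z = x z (y z)

# Arity is not fixed in advance: I = S K K behaves as the identity combinator.
I = S(K)(K)
print(I(42))                                     # 42

# Self-application is allowed: any object can be applied to itself.
self_apply = lambda f: f(f)
print(self_apply(lambda g: g))                   # returns the identity function itself
```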
Research challenge
Applicative computing systems' lack of storage and history sensitivity is the basic reason they have not provided a foundation for computer design. Moreover, most applicative systems employ the substitution operation of the lambda calculus as their basic operation. This operation is one of virtually unlimited power, but its complete and efficient realization presents great difficulties to the machine designer.
See also
Applicative programming language
Categorical abstract machine
Combinatory logic
Functional programming
Lambda calculus |
https://en.wikipedia.org/wiki/Lonesome%20George | Lonesome George ( or , 1910 – June 24, 2012) was a male Pinta Island tortoise (Chelonoidis niger abingdonii) and the last known individual of the subspecies. In his last years, he was known as the rarest creature in the world. George serves as an important symbol for conservation efforts in the Galápagos Islands and throughout the world.
Discovery
George was first seen on the island of Pinta on November 1, 1971, by Hungarian malacologist József Vágvölgyi. The island's vegetation had been devastated by introduced feral goats, and the indigenous C. n. abingdonii population had been reduced to a single individual. It is thought that he was named after a character played by American actor George Gobel. He was relocated for his own safety to the Charles Darwin Research Station on Santa Cruz Island, where he spent his life under the care of Fausto Llerena, for whom the tortoise breeding center is named.
It was hoped that more Pinta Island tortoises would be found, either on Pinta Island or in one of the world's zoos, similar to the discovery of the Española Island male in San Diego. No other Pinta Island tortoises were found. The Pinta Island tortoise was pronounced functionally extinct, as George was in captivity.
Mating attempts
Over the decades, all attempts at mating Lonesome George had been unsuccessful. This prompted researchers at the Darwin Station to offer a $10,000 reward for a suitable mate.
Until January 2011, George was penned with two females of the species Chelonoidis niger becki (from the Wolf Volcano region of Isabela Island), in the hope his genotype would be retained in any resulting progeny. This species was then thought to be genetically closest to George's; however, any potential offspring would have been hybrids, not purebreds of the Pinta Island species.
In July 2008, George mated with one of his female companions. Thirteen eggs were collected and placed in incubators. On November 11, 2008, the Charles Darwin Foundation reported 80% of the eggs s |
https://en.wikipedia.org/wiki/Seasilver | Seasilver is the trademarked name of a commercial dietary supplement produced and sold by the companies Seasilver USA, Inc. and Americaloe, Inc.
The product was promoted with the false claim that it could "cure 650 diseases", resulting in the prosecution and fining of the companies' owners.
Corporate history
Seasilver USA, Inc. was founded in 1992,
and ceased operating at the end of 2006.
Americaloe, Inc. was founded in Nevada in 1997.
Law suit
In 2002 the US Food and Drug Administration sent a warning letter to the product's promoters for making unsubstantiated health claims.
On June 12, 2003, the FDA and FTC lodged a complaint that the two companies and their owners, Jason and Bela Berkes, had misled their customers with claims that Seasilver cured 650 diseases, including AIDS and some types of cancer.
In 2003, US$5.6 million worth of product was seized by the FDA, and on March 4, 2004, to settle the FTC's complaint, the companies and their owners agreed to pay $3 million within six months as redress to their customers, to destroy their stocks of the product, and to stop making misleading statements. They could still sell Seasilver, as long as any claims were supported by sufficient proof. If they did not pay the $3 million, they would have to pay the full $120 million judgement. The FTC found that the claims that the products were "clinically proven to treat or cure 650 diseases, including cancer and AIDS, and cause rapid, substantial and permanent weight loss without dieting" were false and could not be substantiated.
This was upheld by a federal judge in June 2006; the company was required to pay a penalty of $120 million because they had paid less than one-third of the required $3 million to refund customers.
Following non-payment and a failed appeal, the full fine of $120 million was re-affirmed by a ruling of the Ninth Circuit Court of Appeals on April 10, 2008.
Product
The Seasilver product includes a variety of ingredients including the herb pau d'arco, cranberry |
https://en.wikipedia.org/wiki/Quaternionic%20projective%20space | In mathematics, quaternionic projective space is an extension of the ideas of real projective space and complex projective space, to the case where coordinates lie in the ring of quaternions Quaternionic projective space of dimension n is usually denoted by
and is a closed manifold of (real) dimension 4n. It is a homogeneous space for a Lie group action, in more than one way. The quaternionic projective line is homeomorphic to the 4-sphere.
In coordinates
Its direct construction is as a special case of the projective space over a division algebra. The homogeneous coordinates of a point can be written
where the are quaternions, not all zero. Two sets of coordinates represent the same point if they are 'proportional' by a left multiplication by a non-zero quaternion c; that is, we identify all the
(c q_0, c q_1, ..., c q_n).
In the language of group actions, is the orbit space of by the action of , the multiplicative group of non-zero quaternions. By first projecting onto the unit sphere inside one may also regard as the orbit space of by the action of , the group of unit quaternions. The sphere then becomes a principal Sp(1)-bundle over :
This bundle is sometimes called a (generalized) Hopf fibration.
There is also a construction of by means of two-dimensional complex subspaces of , meaning that lies inside a complex Grassmannian.
Topology
Homotopy theory
The space HP^∞, defined as the union of the finite HP^n's under inclusion, is the classifying space BS^3. The homotopy groups of HP^∞ are given by π_i(HP^∞) = π_i(BS^3) ≅ π_{i-1}(S^3). These groups are known to be very complex and in particular they are non-zero for infinitely many values of i. However, we do have that π_i(HP^∞) ⊗ Q ≅ Q for i = 4 and vanishes otherwise.
It follows that rationally, i.e. after localisation of the space, HP^∞ is an Eilenberg–MacLane space K(Q, 4) (cf. the example K(Z, 2)). See rational homotopy theory.
In general, HP^n has a cell structure with one cell in each dimension which is a multiple of 4, up to 4n. Accordingly, its cohomology ring is Z[u]/(u^(n+1)), where u is a 4-dimensional generator. This is analogous to |
https://en.wikipedia.org/wiki/Reilly%20formula | In the mathematical field of Riemannian geometry, the Reilly formula is an important identity, discovered by Robert Reilly in 1977. It says that, given a smooth Riemannian manifold-with-boundary and a smooth function on , one has
in which is the second fundamental form of the boundary of , is its mean curvature, and is its unit normal vector. This is often used in combination with the observation
with the consequence that
This is particularly useful since one can now make use of the solvability of the Dirichlet problem for the Laplacian to make useful choices for . Applications include eigenvalue estimates in spectral geometry and the study of submanifolds of constant mean curvature. |
https://en.wikipedia.org/wiki/Slave%20clock | In telecommunication and horology, a slave clock is a clock that depends on another clock, the master clock. Modern clocks are synchronized, through the Internet or by radio time signals, to Coordinated Universal Time. UTC is based on a network of atomic clocks in many countries. For scientific purposes, precision clocks can be synchronized to within nanoseconds by dedicated satellite channels. Slave clock synchronization is usually achieved by phase-locking the slave clock signal to a signal received from the master clock. To adjust for the transit time of the signal from the master clock to the slave clock, the phase of each slave clock is adjusted so that both clocks are in phase. Thus, the time markers of both clocks, at the output of the clocks, occur simultaneously.
The predecessors of atomic clocks, computer clocks, and digital clocks, these electric clocks were synchronized by an electrical pulse, wired to their master clock in the same facility. Thus the terms "master" and "slave." From the late 19th to the mid 20th centuries, electrical master/slave clock systems were installed, all clocks in a building or facility synchronized through electric wires to a central master clock. Slave clocks either kept time by themselves and were periodically corrected by the master clock, or required impulses from the master clock. Many slave clocks of these types were in operation, most commonly in schools, offices, military bases, hospitals, railway networks, telephone exchanges and factories the world over. School bells of elementary schools, high schools, and others could be synchronized across an entire campus when connected to the system. In schools, the master clock was in the principal's office, with slave units in classrooms which were in other buildings on campus. In factories, a system with a bell or horn could signal the end of a shift, lunchtime or break time. Very few relics of this electrical, analogue system operate in the 21st century. Most 21st c |
https://en.wikipedia.org/wiki/Elasticity%20of%20cell%20membranes | A cell membrane defines a boundary between a cell and its environment. The primary constituent of a membrane is a phospholipid bilayer that forms in a water-based environment due to the hydrophilic nature of the lipid head and the hydrophobic nature of the two tails. In addition there are other lipids and proteins in the membrane, the latter typically in the form of isolated rafts.
Of the numerous models that have been developed to describe the deformation of cell membranes, a widely accepted model is the fluid mosaic model proposed by Singer and Nicolson in 1972. In this model, the cell membrane surface is modeled as a two-dimensional fluid-like lipid bilayer where the lipid molecules can move freely. The proteins are partially or fully embedded in the lipid bilayer. Fully embedded proteins are called integral membrane proteins because they traverse the entire thickness of the lipid bilayer. These communicate information and matter between the interior and the exterior of the cell. Proteins that are only partially embedded in the bilayer are called peripheral membrane proteins. The membrane skeleton is a network of proteins below the bilayer that links with the proteins in the lipid membrane.
Elasticity of closed lipid vesicles
The simplest component of a membrane is the lipid bilayer, which has a thickness that is much smaller than the length scale of the cell. Therefore, the lipid bilayer can be represented by a two-dimensional mathematical surface. In 1973, based on similarities between lipid bilayers and nematic liquid crystals, Helfrich proposed the following expression for the curvature energy per unit area of the closed lipid bilayer: f_c = (k_c / 2)(2H + c_0)^2 + k̄ K
where k_c and k̄ are bending rigidities, c_0 is the spontaneous curvature of the membrane, and H and K are the mean and Gaussian curvature of the membrane surface, respectively.
The free energy of a closed bilayer under an osmotic pressure Δp (the outer pressure minus the inner one) can be written as: F = ∮ f_c dA + Δp ∫ dV
where dA and dV are the area element of the |
https://en.wikipedia.org/wiki/Venoms%20in%20medicine | Venom in medicine is the medicinal use of venoms for therapeutic benefit in treating diseases.
Venom is any poisonous compound secreted by an animal intended to harm or disable another. When an organism produces a venom, its final form may contain hundreds of different bioactive elements that interact with each other inevitably producing its toxic effects. This mixture of ingredients includes various proteins, peptides, and non-peptidic small molecules. The active components of these venoms are isolated, purified, and screened in assays. These may be either phenotypic assays to identify component that may have desirable therapeutic properties (forward pharmacology) or target directed assays to identify their biological target and mechanism of action (reverse pharmacology).
Background
Venoms are naturally occurring substances that organisms evolved to deploy against other organisms, in defense or attack. They are often mixtures of proteins that act together or singly to attack their specific targets within the organism against which they are used, generally with high specificity and generally easily accessible through the vascular system. This has made venoms a subject of study for people who work in drug discovery. With developments in omic technologies (proteomics, genomics, etc.), researchers in this field became able to identify genes that produce certain elements in an animal's venom, as well as protein domains that have been used as building blocks across many species. In conjunction with methods of separation and purification of compounds, scientists are able to study each individual compound that exists within a venom "concoction", looking for compounds to serve as drug leads or other use. Each venomous organism produces thousands of different proteins giving access to millions of different molecules that still have potential uses. In addition, nature is continuously evolving; as prey develop resistance to these venoms, the predators also evolve as we |
https://en.wikipedia.org/wiki/Flour%20bomb | A flour bomb is a fragile container (e.g. a paper bag) filled with flour, intended to be thrown at a person or object to cause an inconvenient and messy stain; the act is called flour bombing.
Flour bombs and flour bombing are a classic protest method, along with the throwing of eggs and overripe tomatoes.
The effect of flour bombs is made worse by the inclusion of eggs, or containers of other liquid, making the removal of the resultant mixture difficult.
Notable incidents
Flour bombs saw notable use during the controversial 1981 Springbok Tour at Eden Park in Auckland, New Zealand. In an attempt to disrupt the match, flour bombs, along with flares, leaflets and a parachute-supported banner reading "Biko", were dropped into Eden Park from a light plane flying overhead. A New Zealand All Blacks player was felled by one of the flour bombs.
On 19 May 2004, during Prime Minister's Questions, two members of the Fathers 4 Justice organisation threw condoms filled with purple-dyed flour at Tony Blair, in the chamber of the House of Commons of the United Kingdom. The event highlighted the poor security methods employed in and around the Houses of Parliament at the time.
Several candidates in the 2017 French Presidential Election, including future president Emmanuel Macron, were hit by flour bombs.
See also
Acid throwing
Egging
Glitter bombing
Inking (attack)
Milkshaking
Pieing
Shoe-throwing
Zapping
Zelyonka attack |
https://en.wikipedia.org/wiki/Jacobi%20method | In numerical linear algebra, the Jacobi method (a.k.a. the Jacobi iteration method) is an iterative algorithm for determining the solutions of a strictly diagonally dominant system of linear equations. Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization. The method is named after Carl Gustav Jacob Jacobi.
Description
Let Ax = b be a square system of n linear equations, where A is the n×n coefficient matrix, x is the vector of unknowns, and b is the right-hand-side vector.
When A and b are known, and x is unknown, we can use the Jacobi method to approximate x. The vector x^(0) denotes our initial guess for x (often x_i^(0) = 0 for all i). We denote by x^(k) the k-th approximation or iteration of x, and by x^(k+1) the next (k+1-th) iteration of x.
Matrix-based formula
Then A can be decomposed into a diagonal component D, a lower triangular part L and an upper triangular part U, so that A = D + L + U. The solution is then obtained iteratively via x^(k+1) = D^(-1) (b - (L + U) x^(k)).
Element-based formula
The element-based formula for each row i is thus: x_i^(k+1) = (b_i - Σ_{j≠i} a_ij x_j^(k)) / a_ii. The computation of x_i^(k+1) requires each element in x^(k) except x_i^(k) itself. Unlike the Gauss–Seidel method, we cannot overwrite x_i^(k) with x_i^(k+1), as that value will be needed by the rest of the computation. The minimum amount of storage is two vectors of size n.
Algorithm
Input: initial guess x^(0) to the solution, (diagonally dominant) matrix A, right-hand side vector b, convergence criterion
Output: approximate solution x
Comments: pseudocode based on the element-based formula above
k = 0
while convergence not reached do
    for i := 1 step until n do
        σ = 0
        for j := 1 step until n do
            if j ≠ i then
                σ = σ + a_ij * x_j^(k)
            end
        end
        x_i^(k+1) = (b_i - σ) / a_ii
    end
    increment k
end
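A compact NumPy translation of the iteration above (an illustrative sketch; the example system and tolerance are arbitrary choices):
```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iterations=500):
    """Solve A x = b by Jacobi iteration; A should be diagonally dominant."""
    x = np.zeros_like(b)                 # initial guess x^(0) = 0
    D = np.diag(A)                       # diagonal entries a_ii
    R = A - np.diagflat(D)               # remainder L + U
    for _ in range(max_iterations):
        x_new = (b - R @ x) / D          # x^(k+1) = D^-1 (b - (L + U) x^(k))
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])
print(jacobi(A, b))                      # agrees with np.linalg.solve(A, b)
```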
Convergence
The standard convergence condition (for any iterative method) is that the spectral radius of the iteration matrix is less than 1: ρ(D^(-1)(L + U)) < 1.
A sufficient (but not necessary) condition for the method to converge is that the matrix A is strictly or irreducibly diagonally dominant. Strict row diagonal dominan |
https://en.wikipedia.org/wiki/Bugaboo%20%28The%20Flea%29 | Bugaboo (The Flea), later published in Spain as La Pulga, is a video game written by the Spanish programming duo Paco Portalo and Paco Suarez for the ZX Spectrum and published by Quicksilva in 1983. Ports for the Commodore 64 and MSX were later produced. The Amstrad CPC port was published under the name Roland in the Caves, using the Roland character.
Bugaboo, besides being the first video game made in Spain, is one of the first computer games to include cutscenes. Its publication marked the beginning of the Golden Era of Spanish Software. A sequel was released in Spain by Opera Soft under the title Poogaboo, made by Paco Suarez. Paco Portalo, the other member of Paco & Paco, left the project after the publication of the original game for the ZX Spectrum.
The player takes control of a flea who has fallen into a cavern and must escape.
Gameplay
The game begins with an animation depicting Bugaboo, a small, yellow creature with two extremely long legs, jumping around on a colourful planet before accidentally falling through a crack in the planet's surface and falling to the bottom of a cavern.
The player must control Bugaboo and guide him back to the top of the cavern, and out to the safety of the planet's surface.
There are only two control keys: left and right. When a key is held down a gauge at the bottom of the screen begins to fill up. When the key is released, Bugaboo will jump in that direction, with the strength of the jump being determined by how long the key was held down. The cavern is made up of various rocky ledges which Bugaboo may land on; however he can only stand on a flat area and, if a jump is mistimed, Bugaboo may end up on an angled area of rock, or miss the ledge altogether, which will cause him to fall straight down, landing on whatever is below.
Bugaboo can fall from any distance without dying. The only way to lose a life is for Bugaboo to make contact with the large, yellow dragon which wanders around the cave. Bugaboo ca |
https://en.wikipedia.org/wiki/Animal | Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad |
https://en.wikipedia.org/wiki/Sinkov%20statistic | Sinkov statistics, also known as log-weight statistics, is a specialized field of statistics that was developed by Abraham Sinkov, while working for the small Signal Intelligence Service organization, the primary mission of which was to compile codes and ciphers for use by the U.S. Army. The mathematics involved include modular arithmetic, a bit of number theory, some linear algebra of two dimensions with matrices, some combinatorics, and a little statistics.
Sinkov did not explain the theoretical underpinnings of his statistic, characterize its distribution, or give a decision procedure for accepting or rejecting candidate plaintexts on the basis of their S1 scores. The situation becomes more difficult when comparing strings of different lengths, because Sinkov does not explain how the distribution of his statistic changes with length, especially when applied to higher-order grams. As for how to accept or reject a candidate plaintext, Sinkov simply said to try all possibilities and to pick the one with the highest S1 value. Although the procedure works for some applications, it is inadequate for applications that require on-line decisions. Furthermore, it is desirable to have a meaningful interpretation of the S1 values. |
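As an illustration of the general idea of log-weight scoring (a sketch only; the unigram frequencies, the use of single letters rather than higher-order grams, and the function name are assumptions, not details given by Sinkov):
```python
import math

# Approximate English unigram frequencies in percent (any standard table would do)
FREQ = {'E': 12.7, 'T': 9.1, 'A': 8.2, 'O': 7.5, 'I': 7.0, 'N': 6.7, 'S': 6.3,
        'H': 6.1, 'R': 6.0, 'D': 4.3, 'L': 4.0, 'C': 2.8, 'U': 2.8, 'M': 2.4,
        'W': 2.4, 'F': 2.2, 'G': 2.0, 'Y': 2.0, 'P': 1.9, 'B': 1.5, 'V': 1.0,
        'K': 0.8, 'J': 0.15, 'X': 0.15, 'Q': 0.10, 'Z': 0.07}
LOG_P = {c: math.log(p / 100.0) for c, p in FREQ.items()}

def log_weight_score(text):
    """Sum of log letter probabilities; higher (less negative) looks more English-like."""
    return sum(LOG_P[c] for c in text.upper() if c in LOG_P)

# A plausible plaintext scores higher than a random-looking candidate.
print(log_weight_score("ATTACKATDAWN") > log_weight_score("QXZJKQVXWZPK"))  # True
```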
https://en.wikipedia.org/wiki/Cartwheel%20pattern | A cartwheel pattern is a histopathologic architectural pattern. Microscopically, cartwheel arrangements appear to have center points that radiate cells or connective tissue outward. Cartwheel patterns may be irregular and, at lower magnification, can cause tissue to appear tangled into clumps.
Skin tumors that can be classified as "storiform," having spindle cells with elongated nuclei radiating from a center point, are mainly:
Fibrous histiocytoma (dermatofibroma)
Soft tissue perineurioma
Dermatofibrosarcoma protuberans |