| source | text |
|---|---|
https://en.wikipedia.org/wiki/COWSEL | COWSEL (COntrolled Working SpacE Language) is a programming language designed between 1964 and 1966 by Robin Popplestone. It was based on a reverse Polish notation (RPN) form of the language Lisp, combined with some ideas from Combined Programming Language (CPL).
COWSEL was initially implemented on a Ferranti Pegasus computer at the University of Leeds and on a Stantec Zebra at the Bradford Institute of Technology. Later, Rod Burstall implemented it on an Elliot 4120 at the University of Edinburgh.
COWSEL was renamed POP-1 in the summer of 1966, and development continued under that name from then on.
Example code
function member
lambda x y
comment Is x a member of list y;
define y atom then *0 end
y hd x equal then *1 end
y tl -> y repeat up
Reserved words (keywords) were also underlined in the original printouts; Popplestone produced this early form of syntax highlighting by underscoring on a Friden Flexowriter.
See also
POP-2 programming language
POP-11 programming language
Poplog programming environment |
https://en.wikipedia.org/wiki/MAGPIE | MAGPIE (Mega Ampere Generator for Plasma Implosion Experiments) is a pulsed power generator based at Imperial College London, United Kingdom. The generator was originally designed to produce a current pulse with a maximum of 1.8 million amperes in 240 nanoseconds (150 nanoseconds rise time). At present the machine is operated at a maximum current of approximately 1.4 million amperes and functions as a z-pinch facility.
The generator consists of four voltage multipliers (Marx generators), each containing 24 capacitors. At the maximum charging voltage of 100 kilovolts, an output voltage of 2.4 million volts is produced and delivered into the load section. The pulses have a rise time of 150 ns and can be delivered into a high-impedance load through a 1.25 Ω final line.
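The quoted figures are mutually consistent, which the following back-of-the-envelope sketch checks. It assumes the ideal Marx relation (capacitors charged in parallel, discharged in series, so erected voltage ≈ n × charging voltage) and a simple Ohm's-law bound on the current; real pulsed-power circuits involve losses and pulse shaping not modeled here.

```python
# Rough consistency check of the MAGPIE numbers quoted above.
n_caps = 24        # capacitors per Marx generator
v_charge = 100e3   # maximum charging voltage per capacitor, in volts

# Ideal Marx generator: parallel charge, series discharge.
v_out = n_caps * v_charge
print(v_out)  # 2400000.0 V, i.e. the quoted 2.4 million volts

# Ohm's-law estimate of the peak current through the 1.25-ohm final line,
# an upper bound close to the 1.8 MA design maximum.
i_est = v_out / 1.25
print(i_est / 1e6)  # ~1.92 MA
```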
Research at the MAGPIE generator historically focused on inertial confinement fusion, but the machine has more recently been adapted for studies of laboratory astrophysics. In particular, the study of astrophysical jets in young stellar objects (see Herbig–Haro object) has been motivated by improved observational capabilities in recent years. The simulation of such large-scale events has been undertaken at MAGPIE both computationally, through the GORGON code, and experimentally by means of the generator.
MAGPIE is one of several similar pulsed power machines worldwide, of which the largest and most powerful is the Z-machine at Sandia National Laboratories, Albuquerque, New Mexico.
See also
List of plasma (physics) articles |
https://en.wikipedia.org/wiki/Compton%20wavelength | The Compton wavelength is a quantum mechanical property of a particle, defined as the wavelength of a photon whose energy is the same as the rest energy of that particle (see mass–energy equivalence). It was introduced by Arthur Compton in 1923 in his explanation of the scattering of photons by electrons (a process known as Compton scattering).
The standard Compton wavelength $\lambda$ of a particle of mass $m$ is given by
$$\lambda = \frac{h}{mc},$$
where $h$ is the Planck constant and $c$ is the speed of light.
The corresponding frequency $f$ is given by
$$f = \frac{mc^2}{h},$$
and the angular frequency $\omega$ is given by
$$\omega = \frac{mc^2}{\hbar}.$$
The CODATA 2018 value for the Compton wavelength of the electron is 2.42631023867(73)×10⁻¹² m. Other particles have different Compton wavelengths.
Reduced Compton wavelength
The reduced Compton wavelength (barred lambda, denoted below by $\bar{\lambda}$) is defined as the Compton wavelength divided by $2\pi$:
$$\bar{\lambda} = \frac{\lambda}{2\pi} = \frac{\hbar}{mc},$$
where $\hbar$ is the reduced Planck constant.
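The definitions above can be checked numerically. This sketch hardcodes the CODATA 2018 values ($h$ and $c$ are exact in the SI; the electron mass is the CODATA 2018 recommended value) rather than pulling them from a library:

```python
import math

# CODATA 2018 constants (h and c are exact by definition in the SI)
h = 6.62607015e-34        # Planck constant, J*s
hbar = h / (2 * math.pi)  # reduced Planck constant
c = 299792458.0           # speed of light, m/s
m_e = 9.1093837015e-31    # electron mass, kg (CODATA 2018)

lam = h / (m_e * c)         # standard Compton wavelength of the electron
lam_bar = hbar / (m_e * c)  # reduced Compton wavelength

print(lam)      # ~2.4263e-12 m, matching the CODATA value quoted above
print(lam_bar)  # ~3.8616e-13 m
```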
Role in equations for massive particles
The inverse reduced Compton wavelength is a natural representation for mass on the quantum scale, and as such, it appears in many of the fundamental equations of quantum mechanics. The reduced Compton wavelength appears in the relativistic Klein–Gordon equation for a free particle:
$$\nabla^2\psi - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\psi = \left(\frac{mc}{\hbar}\right)^2\psi$$
It appears in the Dirac equation (the following is an explicitly covariant form employing the Einstein summation convention):
$$-i\gamma^\mu\partial_\mu\psi + \left(\frac{mc}{\hbar}\right)\psi = 0$$
The reduced Compton wavelength is also present in Schrödinger's equation, although this is not readily apparent in traditional representations of the equation. The following is the traditional representation of Schrödinger's equation for an electron in a hydrogen-like atom:
$$i\hbar\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi - \frac{Ze^2}{4\pi\epsilon_0 r}\psi$$
Dividing through by $\hbar c$ and rewriting in terms of the fine-structure constant $\alpha = e^2/(4\pi\epsilon_0\hbar c)$, one obtains:
$$\frac{i}{c}\frac{\partial\psi}{\partial t} = -\frac{\bar{\lambda}}{2}\nabla^2\psi - \frac{Z\alpha}{r}\psi$$
Distinction between reduced and non-reduced
The reduced Compton wavelength is a natural representation of mass on the quantum scale and is used in equations that pertain to inertial mass, such as the Klein–Gordon and Schrödinger's equations.
Equations that pertain to the wavelengths of photons interacting with ma |
https://en.wikipedia.org/wiki/No-communication%20theorem | In physics, the no-communication theorem or no-signaling principle is a no-go theorem from quantum information theory which states that, during measurement of an entangled quantum state, it is not possible for one observer, by making a measurement of a subsystem of the total state, to communicate information to another observer. The theorem is important because, in quantum mechanics, quantum entanglement is an effect by which certain widely separated events can be correlated in ways that, at first glance, suggest the possibility of communication faster-than-light. The no-communication theorem gives conditions under which such transfer of information between two observers is impossible. These results can be applied to understand the so-called paradoxes in quantum mechanics, such as the EPR paradox, or violations of local realism obtained in tests of Bell's theorem. In these experiments, the no-communication theorem shows that failure of local realism does not lead to what could be referred to as "spooky communication at a distance" (in analogy with Einstein's labeling of quantum entanglement as requiring "spooky action at a distance" on the assumption of QM's completeness).
Informal overview
The no-communication theorem states that, within the context of quantum mechanics, it is not possible to transmit classical bits of information by means of carefully prepared mixed or pure states, whether entangled or not. The theorem gives only a sufficient condition: it states that if the Kraus matrices of the two operations commute, then no communication through the quantum entangled states is possible, and this applies to all communication. From the perspective of relativity and quantum field theory, faster-than-light or "instantaneous" communication is likewise disallowed.
Being only a sufficient condition, the theorem leaves open further cases in which communication is not allowed, as well as cases in which it may still be possible to communicate through the quantum channel by encoding more than the classical information.
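The core claim can be illustrated numerically for the simplest case (this is a minimal sketch, not the general proof): for a Bell pair shared by Alice and Bob, Bob's reduced density matrix is the maximally mixed state and is unchanged by any projective measurement Alice performs, so Alice's choice of measurement carries no signal. The measurement is modeled with projective Kraus operators acting on Alice's qubit only; real amplitudes are used so transposes suffice for adjoints.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2); qubit 1 is Alice's, qubit 2 Bob's.
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)

def partial_trace_A(rho):
    """Trace out Alice's qubit, leaving Bob's 2x2 reduced density matrix."""
    r = rho.reshape(2, 2, 2, 2)       # indices (a, b, a', b')
    return np.einsum('abad->bd', r)   # sum over a = a'

rho_B_before = partial_trace_A(rho)

# Alice measures in an arbitrary basis rotated by angle t (Kraus operators
# act on her qubit only; identity on Bob's side).
t = 0.7
u0 = np.array([np.cos(t), np.sin(t)])
u1 = np.array([-np.sin(t), np.cos(t)])
I2 = np.eye(2)
K0 = np.kron(np.outer(u0, u0), I2)
K1 = np.kron(np.outer(u1, u1), I2)
rho_after = K0 @ rho @ K0.T + K1 @ rho @ K1.T

rho_B_after = partial_trace_A(rho_after)
print(np.allclose(rho_B_before, rho_B_after))  # True: Bob sees no change
print(np.allclose(rho_B_before, I2 / 2))       # True: maximally mixed
```

Whatever angle t Alice chooses, Bob's local statistics stay identical, which is exactly why entanglement correlations cannot be used for signaling.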
In |
https://en.wikipedia.org/wiki/Mirror%20neuron | A mirror neuron is a neuron that fires both when an organism acts and when the organism observes the same action performed by another. Thus, the neuron "mirrors" the behavior of the other, as though the observer were itself acting. Mirror neurons are not always physiologically distinct from other types of neurons in the brain; their main differentiating factor is their response patterns. By this definition, such neurons have been directly observed in humans and primate species, and in birds.
In humans, brain activity consistent with that of mirror neurons has been found in the premotor cortex, the supplementary motor area, the primary somatosensory cortex, and the inferior parietal cortex. The function of the mirror system in humans is a subject of much speculation. Birds have been shown to have imitative resonance behaviors and neurological evidence suggests the presence of some form of mirroring system.
To date, no widely accepted neural or computational models have been put forward to describe how mirror neuron activity supports cognitive functions.
The subject of mirror neurons continues to generate intense debate. In 2014, Philosophical Transactions of the Royal Society B published a special issue entirely devoted to mirror neuron research. Some researchers speculate that mirror systems may simulate observed actions, and thus contribute to theory of mind skills, while others relate mirror neurons to language abilities. Neuroscientists such as Marco Iacoboni have argued that mirror neuron systems in the human brain help humans understand the actions and intentions of other people. In addition, Iacoboni has argued that mirror neurons are the neural basis of the human capacity for emotions such as empathy.
Discovery
In the 1980s and 1990s, neurophysiologists Giacomo Rizzolatti, Giuseppe Di Pellegrino, Luciano Fadiga, Leonardo Fogassi, and Vittorio Gallese at the University of Parma placed electrodes in the ventral premotor cortex of the macaque monkey to study |
https://en.wikipedia.org/wiki/Zener%20pinning | Zener pinning is the influence of a dispersion of fine particles on the movement of low- and high-angle grain boundaries through a polycrystalline material. Small particles act to prevent the motion of such boundaries by exerting a pinning pressure which counteracts the driving force pushing the boundaries. Zener pinning is very important in materials processing as it has a strong influence on recovery, recrystallization and grain growth.
Origin of the pinning force
A boundary is an imperfection in the crystal structure and as such is associated with a certain quantity of energy. When a boundary passes through an incoherent particle then the portion of boundary that would be inside the particle essentially ceases to exist. In order to move past the particle some new boundary must be created, and this is energetically unfavourable. While the region of boundary near the particle is pinned, the rest of the boundary continues trying to move forward under its own driving force. This results in the boundary becoming bowed between those points where it is anchored to the particles.
Mathematical description
The figure illustrates a boundary intersecting with an incoherent particle of radius $r$. The pinning force acts along the line of contact between the boundary and the particle, i.e., a circle of diameter $d = 2r\cos\theta$, where $\theta$ describes the position of the contact line on the particle. The force per unit length of boundary in contact is $\gamma\sin\theta$, where $\gamma$ is the interfacial energy. Hence, the total force acting on the particle-boundary interface is
$$F = (2\pi r\cos\theta)(\gamma\sin\theta) = \pi r\gamma\sin 2\theta.$$
The maximum restraining force occurs when $\theta = 45°$, so $F_{max} = \pi r\gamma$.
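A quick numerical sketch of the force expression F = πrγ sin 2θ confirms the maximum at 45°. The particle radius and interfacial energy below are illustrative values, not from the text:

```python
import math

def pinning_force(r, gamma, theta):
    """Total pinning force (N) on a boundary meeting a spherical particle
    of radius r (m) at angle theta (rad); gamma is the interfacial
    energy (J/m^2). F = pi * r * gamma * sin(2*theta)."""
    return math.pi * r * gamma * math.sin(2 * theta)

r = 1e-7      # 100 nm particle (illustrative)
gamma = 0.5   # typical grain-boundary energy, J/m^2 (illustrative)

# Scan theta from 0 to 90 degrees; the force peaks at 45 degrees.
f_max = max(pinning_force(r, gamma, math.radians(d)) for d in range(91))
print(f_max)               # ~1.571e-7 N
print(math.pi * r * gamma) # the analytic maximum F_max = pi*r*gamma
```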
In order to determine the pinning force resulting from a given dispersion of particles, Clarence Zener made several important assumptions:
The particles are spherical.
The passage of the boundary does not alter the particle-boundary interaction.
Each particle exerts the maximum pinning force on the boundary, regardless of contact position.
The contacts between particles and boundaries are completely random.
The number density of particles on the boundary is |
https://en.wikipedia.org/wiki/All-or-nothing%20transform | In cryptography, an all-or-nothing transform (AONT), also known as an all-or-nothing protocol, is an encryption mode which allows the data to be understood only if all of it is known. AONTs are not encryption, but frequently make use of symmetric ciphers and may be applied before encryption. In exact terms, "an AONT is an unkeyed, invertible, randomized transformation, with the property that it is hard to invert unless all of the output is known."
Algorithms
The original AONT, the package transform, was described by Ronald L. Rivest in his 1997 paper "All-Or-Nothing Encryption and The Package Transform". The transform that Rivest proposed involved preprocessing the plaintext by XORing each plaintext block with that block's index encrypted by a randomly chosen key, then appending one extra block computed by XORing that random key and the hashes of all the preprocessed blocks. The result of this preprocessing is called the pseudomessage, and it serves as the input to the encryption algorithm. Undoing the package transform requires hashing every block of the pseudomessage except the last, XORing all the hashes with the last block to recover the random key, and then using the random key to convert each preprocessed block back into its original plaintext block. In this way, it's impossible to recover the original plaintext without first having access to every single block of the pseudomessage.
Although Rivest's paper only gave a detailed description of the package transform as it applies to CBC mode, it can be implemented using a cipher in any mode. Therefore, there are multiple variants: the package ECB transform, package CBC transform, etc.
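The preprocessing described above can be sketched in a few lines. This is a simplified illustration, not Rivest's exact construction: SHA-256 stands in for the block cipher E, the per-block hash keys are just the block indices, and the names `package`/`unpackage` are hypothetical.

```python
import hashlib
import secrets

BLOCK = 32  # 32-byte blocks, matching SHA-256's output size

def _prf(key: bytes, data: bytes) -> bytes:
    # Stand-in for E(key, data); Rivest's paper uses a real block cipher.
    return hashlib.sha256(key + data).digest()

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def package(blocks):
    """Package transform: len(blocks)+1 pseudomessage blocks."""
    key = secrets.token_bytes(BLOCK)  # randomly chosen key
    # XOR each plaintext block with its encrypted index.
    pseudo = [_xor(m, _prf(key, i.to_bytes(4, 'big')))
              for i, m in enumerate(blocks)]
    # Extra block: the key XORed with the hashes of all preprocessed blocks.
    last = key
    for i, t in enumerate(pseudo):
        last = _xor(last, _prf(i.to_bytes(4, 'big'), t))
    return pseudo + [last]

def unpackage(pseudo):
    """Inverse: recover the key from the last block, then each plaintext."""
    *body, last = pseudo
    key = last
    for i, t in enumerate(body):
        key = _xor(key, _prf(i.to_bytes(4, 'big'), t))
    return [_xor(t, _prf(key, i.to_bytes(4, 'big')))
            for i, t in enumerate(body)]

msg = [secrets.token_bytes(BLOCK) for _ in range(3)]
assert unpackage(package(msg)) == msg  # round-trips with all blocks present
```

Because the key is recoverable only as the XOR of the final block with the hash of every other block, altering or withholding any single pseudomessage block scrambles the recovered key and hence every plaintext block, which is the all-or-nothing property.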
In 1999 Victor Boyko proposed another AONT, provably secure under the random oracle model.
Apparently at about the same time, D. R. Stinson proposed a different implementation of AONT, without any cryptographic assumptions. This implementation is a linear transform, perhaps highlighting some security weakness of the original de |
https://en.wikipedia.org/wiki/Saturday%20Night%20Seder | The Saturday Night Seder was a Passover Seder held on April 11, 2020 by StoryCourse in response to the COVID-19 pandemic, intended to provide relief and support to the public and to raise funds for efforts to combat the spread of the virus. The seder was sponsored by BuzzFeed and aired on their Tasty YouTube channel.
Overview
The seder was hosted by Jason Alexander on the fourth night of Passover. The Saturday Seder coincided with the COVID-19 pandemic, which resulted in many physical seders being canceled throughout the world. The seder aimed to raise funds to benefit the CDC Foundation's Coronavirus Emergency Response Fund. In total, the seder raised more than $2.9 million for charity.
The seder covered the story of the Jewish Exodus from Egypt in a humorous light. It featured both Jews and non-Jews.
Participants
Appearances
Pamela Adlon
Julie Klausner
Fran Drescher
David Wolpe*
Kendell Pinkney
Mayim Bialik
Dana Benson*
Ilana and Eliot Glazer
Debra Messing
Richard Kind
Judith Light
Amichai Lau-Lavie*
Michael Solomonov
Dan Levy
Andy Cohen
Nick Kroll
Finn Wolfhard
Joshua Malina
Judy Gold
Michael Zegen
Jimmy Wolk
D'Arcy Carden
Billy Eichner
Reza Aslan
Tan France
Beanie Feldstein
Isaac Mizrahi
Sarah Hurwitz
Jessica Chaffin (as Ronna Glickman)
Chuck Schumer
Nina West
Mordechai Lightstone*
Alex Edelman
Sharon Brous*
Sarah Silverman
Bette Midler
Harvey Fierstein
Whoopi Goldberg
Dulcé Sloan
Liz Feldman
Adam Kantor
Camryn Manheim
Milo Manheim
Busy Philipps
Seth Rudetsky
Ari Shapiro
Leigh Silverman
* Rabbis who appeared in the seder.
Performances
Broadcast
The Saturday Night Seder could be seen on BuzzFeed's Tasty YouTube channel and was simulcast on Saturday Night Seder's website and the CDC Foundation's website. In total, more than 1 million people watched the Saturday Night Seder.
Performance of "When You Believe"
Following its broadcast, the program became notable for Cynthia Erivo and Shoshana Bean's performance of "When You Believe" from the DreamWorks film The Prince of Eg |
https://en.wikipedia.org/wiki/Joan%20Mott%20Prize%20Lecture | The Joan Mott Prize Lecture is a prize lecture awarded annually by The Physiological Society in honour of Joan Mott.
Laureates
Laureates of the award have included:
- Intestinal absorption of sugars and peptides: from textbook to surprises
See also
Physiological Society Annual Review Prize Lecture |
https://en.wikipedia.org/wiki/Karoubi%20conjecture | In mathematics, the Karoubi conjecture is a conjecture by Max Karoubi that the algebraic and topological K-theories coincide on C*-algebras spatially tensored with the algebra of compact operators. It was proved by Andrei Suslin and Mariusz Wodzicki. |
https://en.wikipedia.org/wiki/Extraterrestrial%20sample%20curation | The curation of extraterrestrial samples (astromaterials) obtained by sample-return missions takes place at facilities specially designed to preserve both the sample integrity and protect the Earth. Astromaterials are classified as either non-restricted or restricted, depending on the nature of the Solar System body. Non-restricted samples include the Moon, asteroids, comets, solar particles and space dust. Restricted bodies include planets or moons suspected of having either past or present environments habitable to microscopic life, and samples from them must therefore be treated as extremely biohazardous.
Overview
Spacecraft instruments are subject to mass and power constraints, in addition to the limitations imposed by the extreme environment of outer space on the sensitive science instruments, so bringing extraterrestrial material to Earth is desired for extensive scientific analyses. For the purpose of planetary protection, astromaterial samples brought to Earth by sample-return missions must be received and curated in a specially-designed and equipped biocontainment facility that must also double as a cleanroom to preserve the science value of the samples.
Samples brought from non-restricted bodies such as the Moon, asteroids, comets, solar particles and space dust are processed at specialized facilities rated Biosafety Level 3 (BSL-3). Samples brought to Earth from a planet or moon suspected of having either past or present environments habitable to microscopic life make it a Category V body, and must be curated at facilities rated Biosafety Level 4 (BSL-4), as agreed in Article IX of the Outer Space Treaty. However, the existing BSL-4 facilities in the world do not meet the complex requirements for simultaneously ensuring the preservation of the sample and the protection of Earth. While existing BSL-4 facilities deal primarily with fairly well-known organisms, a BSL-4 facility focused on extraterrestrial samples must pre-plan the systems carefully while being mindful t
https://en.wikipedia.org/wiki/Surrogate%20decision-maker | A surrogate decision maker, also known as a health care proxy or agent, is an advocate for incompetent patients. If a patient is unable to make decisions for themselves about personal care, some agent must make decisions for them. If there is a durable power of attorney for health care, the agent appointed by that document is authorized to make health care decisions within the scope of authority granted by the document. If people have court-appointed guardians with authority to make health care decisions, the guardian is the authorized surrogate.
Background
At the 1991 Annual Meeting of the American Medical Association, the AMA adopted the report of the Council on Ethical and Judicial Affairs known as, "Decisions to Forgo Life-Sustaining Treatment for Incompetent Patients." The recommendations of the report were the basis for amendments to Opinion 2.20 known as, "Withholding or Withdrawing Life-Sustaining Medical Treatment." The report itself provides guidelines for physicians who may have to identify a surrogate decision maker, assist a surrogate (proxy) in making decisions for incompetent patients, and resolve conflicts that may arise between decision makers, or between the decision maker's choice and medically appropriate options. Since the first incorporation of these guidelines to the AMA Code of Medical Ethics, the council has deferred to Opinion 2.20 to address inquiries involving surrogate decision making, even though the guidelines presented in this Opinion refer only to decisions made near the end of life.
With continued discussion concerning health care preferences for all patients, including those who are incompetent, greater options have been made available to secure health care directives. The involvement of third parties in a patient's health becomes more likely in decisions that may occur in instances other than the end of life.
In addition, the council recognizes that there is a spectrum of decision-making capacity ranging from immaturity, to |
https://en.wikipedia.org/wiki/Self-modifying%20code | In computer science, self-modifying code (SMC or SMoC) is code that alters its own instructions while it is executing – usually to reduce the instruction path length and improve performance or simply to reduce otherwise repetitively similar code, thus simplifying maintenance. The term is usually only applied to code where the self-modification is intentional, not in situations where code accidentally modifies itself due to an error such as a buffer overflow.
Self-modifying code can involve overwriting existing instructions or generating new code at run time and transferring control to that code.
Self-modification can be used as an alternative to the method of "flag setting" and conditional program branching, used primarily to reduce the number of times a condition needs to be tested.
The method is frequently used for conditionally invoking test/debugging code without requiring additional computational overhead for every input/output cycle.
The modifications may be performed:
only during initialization – based on input parameters (when the process is more commonly described as software 'configuration' and is somewhat analogous, in hardware terms, to setting jumpers for printed circuit boards). Alteration of program entry pointers is an equivalent indirect method of self-modification, but requiring the co-existence of one or more alternative instruction paths, increasing the program size.
throughout execution ("on the fly") – based on particular program states that have been reached during the execution
In either case, the modifications may be performed directly to the machine code instructions themselves, by overlaying new instructions over the existing ones (for example: altering a compare and branch to an unconditional branch or alternatively a 'NOP').
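The "generating new code at run time and transferring control to that code" variant can be sketched at a high level. This hedged Python example (names are illustrative, not from the text) specializes a function by generating its source at run time, baking constants in so the inner code needs no per-call flag tests or lookups:

```python
# Runtime code generation: build a specialized function as source text,
# compile it, and transfer control to it. The one-time generation cost
# buys a tighter body with the constants embedded directly.

def make_scaler(factor, offset):
    # The constants are baked into the generated source, so the returned
    # function performs no dictionary lookups or conditionals at call time.
    src = f"def scale(x):\n    return x * {factor} + {offset}\n"
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["scale"]

scale = make_scaler(3, 7)
print(scale(10))  # 37
```

Just-in-time compilers and template engines use the same idea on a larger scale, emitting machine code or bytecode specialized to observed run-time values.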
In the IBM System/360 architecture, and its successors up to z/Architecture, an EXECUTE (EX) instruction logically overlays the second byte of its target instruction with the low-order 8 bits of register 1. T |
https://en.wikipedia.org/wiki/Taper%20burn%20mark | Taper burn marks are deep flame-shaped scorch marks often found on the timber beams of early modern houses. They were originally thought to have been accidental scorches from a taper candle, but research suggests that most marks may have been made deliberately, as there is clear patterning of the activity. They are theorised to have been made as part of a folk superstition, at the time thought to protect the building from fire and lightning.
They are often found around entrances to the home such as fireplaces, doors and windows.
Over 80 such marks have been discovered in the Tower of London.
See also
Apotropaic mark
Apotropaic magic |
https://en.wikipedia.org/wiki/KIF9 | Kinesin family member 9 (KIF9), also known as kinesin-9, is a human protein encoded by the KIF9 gene. It is part of the kinesin family of motor proteins.
Function
The beating of the flagella in sperm is regulated by KIF9 activity. |
https://en.wikipedia.org/wiki/Max%20Planck%20Medal | The Max Planck medal is the highest award of the German Physical Society, the world's largest organization of physicists, for extraordinary achievements in theoretical physics. The prize has been awarded annually since 1929, with few exceptions, and usually to a single person. The winner is awarded a gold medal and a hand-written parchment.
In 1943 it was not possible to manufacture the gold medal because the Berlin foundry was hit by a bomb. The board of directors of the German Physical Society decided to manufacture the medals in a substitute metal and to deliver the gold medals later.
The highest award of the German Physical Society for outstanding results in experimental physics is the Stern–Gerlach Medal.
List of recipients
2023 Rashid A. Sunyaev
2022 Annette Zippelius
2021 Alexander Markovich Polyakov
2020 Andrzej Buras
2019 Detlef Lohse
2018 Juan Ignacio Cirac
2017 Herbert Spohn
2016 Herbert Wagner
2015 Viatcheslav Mukhanov
2014 David Ruelle
2013 Werner Nahm
2012 Martin Zirnbauer
2011 Giorgio Parisi
2010 Dieter Vollhardt
2009 Robert Graham
2008 Detlev Buchholz
2007 Joel Lebowitz
2006 Wolfgang Götze
2005 Peter Zoller
2004 Klaus Hepp
2003 Martin Gutzwiller
2002 Jürgen Ehlers
2001 Jürg Fröhlich
2000 Martin Lüscher
1999 Pierre Hohenberg
1998 Raymond Stora
1997 Gerald E. Brown
1996 Ludvig Faddeev
1995 Siegfried Grossmann
1994 Hans-Jürgen Borchers
1993 Kurt Binder
1992 Elliott H. Lieb
1991 Wolfhart Zimmermann
1990 Hermann Haken
1989 Bruno Zumino
1988 Valentine Bargmann
1987 Julius Wess
1986 Franz Wegner
1985 Yoichiro Nambu
1984 Res Jost
1983 Nicholas Kemmer
1982 Hans-Arwed Weidenmüller
1981 Kurt Symanzik
1980 not awarded
1979 Markus Fierz
1978 Paul Peter Ewald
1977 Walter Thirring
1976 Ernst Stueckelberg
1975 Gregor Wentzel
1974 Léon Van Hove
1973 Nikolay Bogolyubov
1972 Herbert Fröhlich
1971 not awarded
1970 Rudolf Haag
1969 Freeman Dyson
1968 Walter Heitler
1967 Harry Lehmann
1966 Gerhart Lüders
1965 not awarded
1964 Samuel Goudsmit and George Uhlenbeck
1 |
https://en.wikipedia.org/wiki/Biased%20random%20walk%20on%20a%20graph | In network science, a biased random walk on a graph is a time path process in which an evolving variable jumps from its current state to one of various potential new states; unlike in a pure random walk, the probabilities of the potential new states are unequal.
Biased random walks on a graph provide an approach for the structural analysis of undirected graphs in order to extract their symmetries when the network is too complex, or when it is not large enough to be analyzed by statistical methods. The concept has attracted the attention of many researchers and data companies over the past decade, especially in transportation and social networks.
Model
Many different representations of biased random walks on graphs have been devised, depending on the particular purpose of the analysis. A common representation of the mechanism for undirected graphs is as follows:
On an undirected graph, a walker takes a step from the current node, $j$, to node $i$. Assuming that each node has an attribute $\alpha_i$, the probability of jumping from node $j$ to $i$ is given by:
$$P_{j\to i} = \frac{\alpha_i W_{ji}}{\sum_k \alpha_k W_{jk}},$$
where $W_{ji}$ represents the topological weight of the edge going from $j$ to $i$.
In fact, the steps of the walker are biased by the factor $\alpha_i$, which may differ from one node to another.
Depending on the network, the attribute $\alpha_i$ can be interpreted differently. It might be the attraction of a person in a social network, it might be betweenness centrality, or it might be an intrinsic characteristic of a node. In the case of a fair random walk on a graph, $\alpha_i$ is one for all nodes.
In the case of shortest-path random walks, $\alpha_i$ is the total number of shortest paths between all pairs of nodes that pass through node $i$; the walker thus prefers nodes with higher betweenness centrality, which is defined as:
$$b_i = \sum_{s\neq i\neq t} \frac{\sigma_{st}(i)}{\sigma_{st}},$$
where $\sigma_{st}$ is the number of shortest paths between nodes $s$ and $t$, and $\sigma_{st}(i)$ is the number of those paths that pass through $i$.
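The jump rule can be sketched directly from its definition: weight each neighbor by its attribute times the edge weight, normalize, and sample. This is a minimal illustration with a hypothetical toy graph and attribute values, not a representation from the literature:

```python
import random

def biased_step(adj, alpha, j):
    """One step of a biased random walk from node j.
    adj[j][i] is the topological weight W_ji; alpha[i] biases node i."""
    nodes = list(adj[j])
    weights = [alpha[i] * adj[j][i] for i in nodes]
    total = sum(weights)
    probs = [w / total for w in weights]  # P(j -> i), sums to 1
    return random.choices(nodes, weights=probs)[0]

# Toy triangle graph with unit edge weights; node 2 is twice as
# "attractive" via its alpha, so the walker visits it more often.
adj = {0: {1: 1.0, 2: 1.0}, 1: {0: 1.0, 2: 1.0}, 2: {0: 1.0, 1: 1.0}}
alpha = {0: 1.0, 1: 1.0, 2: 2.0}

random.seed(0)
walk = [0]
for _ in range(5):
    walk.append(biased_step(adj, alpha, walk[-1]))
print(walk)
```

From node 0 the walker moves to node 2 with probability 2/3 and to node 1 with probability 1/3; setting every alpha to 1 recovers the fair random walk.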
Based on the above equation, the recurrence time to a node in the biased walk is given by:
Applications
There are a variety of applications using biased random walks on gr |
https://en.wikipedia.org/wiki/Symbian%20Foundation | The Symbian Foundation was a non-profit organisation that stewarded the Symbian operating system for mobile phones, which previously had been owned and licensed by Symbian Ltd. The Symbian Foundation never directly developed the platform, but evangelised, co-ordinated and ensured compatibility. It also provided key services to its members and the community, such as collecting, building and distributing Symbian source code. During its time it competed against the Open Handset Alliance and the LiMo Foundation.
Operational phase (2009-2010)
The Foundation was founded by Nokia, Sony Ericsson, NTT DoCoMo, Motorola, Texas Instruments, Vodafone, LG Electronics, Samsung Electronics, STMicroelectronics and AT&T. Due to a change in their device strategy, LG and Motorola left the Foundation board soon after its creation. They were later replaced by Fujitsu and Qualcomm Innovation Center.
During its operational phase (from 2009 to 2010), it also provided:
platform development kits and tools
documentation and example code
discussion forums and mailing lists
application signing (Symbian Signed)
application distribution (Symbian Horizon)
idea gathering and feedback (Symbian Ideas)
an annual conference (Symbian Exchange and Exposition, abbreviated "SEE")
Members
The Symbian Foundation invited companies to join as members, and attracted over 200, from a large number of categories:
Device manufacturers (e.g. Nokia, Fujitsu)
Financial services companies (e.g. Visa)
Semiconductor vendors (e.g. ARM, Broadcom)
Mobile network operators (e.g. China Mobile, Vodafone, AT&T)
Software companies
Professional services firms
Closure of Symbian Foundation
Following "a change in focus for some of [the] funding board members", the Symbian Foundation announced in November 2010 that it would transition to "a legal entity responsible for licensing software and other intellectual property", with no operational responsibilities or staff. The transition is a result of changes in global |
https://en.wikipedia.org/wiki/Armodafinil | Armodafinil (trade name Nuvigil) is the enantiopure compound of the eugeroic modafinil (Provigil). It consists of only the (R)-(−)-enantiomer of the racemic modafinil. Armodafinil is produced by the pharmaceutical company Cephalon Inc. and was approved by the U.S. Food and Drug Administration (FDA) in June 2007. In 2016, the FDA granted Mylan rights for the first generic version of Cephalon's Nuvigil to be marketed in the U.S.
Because armodafinil has a longer half-life than modafinil does, it may be more effective at improving wakefulness in patients with excessive daytime sleepiness.
Medical uses
Armodafinil is currently FDA-approved to treat excessive daytime sleepiness associated with obstructive sleep apnea, narcolepsy, and shift work disorder. It is commonly used off-label to treat attention deficit hyperactivity disorder, chronic fatigue syndrome, and major depressive disorder, and has been repurposed as an adjunctive treatment for bipolar disorder. It has been shown to improve vigilance in air traffic controllers; however, in the United States, wakefulness-promoting medications such as Provigil (modafinil) and Nuvigil (armodafinil) are not approved by the FAA for civilian controllers or pilots.
Psychiatry
Bipolar disorder
Armodafinil, along with racemic modafinil, has been repurposed as an adjunctive treatment for acute depression in people with bipolar disorder. Meta-analytic evidence showed that add-on modafinil and armodafinil were more effective than placebo on response to treatment, clinical remission, and reduction in depressive symptoms, with only minor side effects, but the effect sizes are small and the quality of evidence has to be considered low, limiting the clinical relevance of current evidence. The current dosage for bipolar disorder is 150 mg once daily. Paradoxical tiredness and sleepiness have also been observed in some cases.
Schizophrenia
In June 2010, it was revealed that a phase II study of armodafinil as an adjunctive therapy in adults with schizo |
https://en.wikipedia.org/wiki/Twinkie%20the%20Kid | Twinkie the Kid is the mascot for Twinkies, Hostess's golden cream-filled snack cakes. He is a registered trademark of Hostess Brands. He made his debut in 1971. He has appeared on product packaging, in commercials and as related collectible merchandise, except for a brief period between 1988 and 1990.
Description
Twinkie the Kid is an anthropomorphized Twinkie appearing as a wrangler. He wears boots, gloves, a kerchief with hearts, and a ten-gallon hat with the words "Twinkie the Kid" on the band. He was created by Denny Lesser, a route delivery driver for Hostess in the San Fernando Valley. He designed the mascot and his wife made the costume that he used for a traveling promotional campaign.
Animated commercial appearances
The character appeared in animated TV advertisements for Twinkies in the 1970s, voiced by Allen Swift.
See also
Captain Cupcake
Chauncey Chocodile
Fruit Pie the Magician
Notes
Cartoon mascots
Food advertising characters
Male characters in advertising
Fictional food characters
Fictional cowboys and cowgirls
Hostess Brands
Mascots introduced in 1971 |
https://en.wikipedia.org/wiki/116th%20meridian%20east | The meridian 116° east of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Asia, the Indian Ocean, Australasia, the Southern Ocean, and Antarctica to the South Pole.
The 116th meridian east forms a great circle with the 64th meridian west.
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 116th meridian east passes through:
{| class="wikitable plainrowheaders"
! scope="col" width="130" | Co-ordinates
! scope="col" | Country, territory or sea
! scope="col" | Notes
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arctic Ocean
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Laptev Sea
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Sakha Republic — Peschanyy Island
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Laptev Sea
| style="background:#b0e0e6;" |
|-valign="top"
|
! scope="row" |
| Sakha Republic — Irkutsk Oblast — from Republic of Buryatia — from Irkutsk Oblast — from Zabaykalsky Krai — from Republic of Buryatia — from Zabaykalsky Krai — from
|-
|
! scope="row" |
|
|-valign="top"
|
! scope="row" |
| Inner Mongolia
|-
|
! scope="row" |
|
|-valign="top"
|
! scope="row" |
| Inner Mongolia — Hebei – from Beijing – from Hebei – from Shandong – from Henan – for about 8 km from Shandong – from Henan – from Anhui – from Hubei – for about 10 km from Anhui – from Hubei – from Jiangxi – from , passing just east of Nanchang (at ) — Fujian – from Guangdong – from
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | South China Sea
| style="background:#b0e0e6;" | Passing through the disputed Spratly Islands
|-
|
! scope="row" |
| Sabah – on the island of Borneo
|-
|
! scope="row" |
| North Kalimantan — East Kalimantan — South Kalimantan
|-valign="top"
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | J |
https://en.wikipedia.org/wiki/Solidarity%20logo | The Solidarity logo, designed by Jerzy Janiszewski in 1980, is considered an important example of the Polish Poster School. The logo was awarded the Grand Prix of the Biennale of Posters, Katowice 1981. By that time it was already well known in Poland and had become an internationally recognized icon.
According to the artist, the letters were designed to represent united individuals. This characteristic font, colloquially known as solidaryca ("Solidaric"), was implemented many times in posters and other pieces of art in different contexts. Notable examples include a film poster for Man of Iron by Andrzej Wajda and, in 1989, a poster by Tomasz Sarnecki designed for the first (semi-)free elections in Poland. |
https://en.wikipedia.org/wiki/Kantorovich%20theorem | The Kantorovich theorem, or Newton–Kantorovich theorem, is a mathematical statement on the semi-local convergence of Newton's method. It was first stated by Leonid Kantorovich in 1948. It is similar in form to the Banach fixed-point theorem, although it states existence and uniqueness of a zero rather than of a fixed point.
Newton's method constructs a sequence of points that, under certain conditions, will converge to a solution of an equation or of a system of equations. The Kantorovich theorem gives conditions on the initial point of this sequence. If those conditions are satisfied, then a solution exists close to the initial point and the sequence converges to that solution.
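The iteration just described can be sketched in code for a scalar equation. The example function f(x) = x² − 2, the starting point, and the tolerance below are arbitrary illustrative choices, not part of the theorem.

```python
# Newton's method for a scalar equation f(x) = 0 -- an illustrative
# sketch of the iteration the Kantorovich theorem analyzes. The example
# function f(x) = x^2 - 2 and the starting point x0 = 1.0 are arbitrary.

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x_{k+1} = x_k - f(x_k)/f'(x_k) until |f(x_k)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)
    raise RuntimeError("Newton iteration did not converge")

root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)  # close to sqrt(2) ~ 1.41421356
```

Starting from x0 = 1.0 the iterate stabilizes after only a handful of steps, illustrating why a semi-local result such as the Kantorovich theorem, which certifies convergence from a concrete initial point, is useful in practice.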
Assumptions
Let be an open subset and a differentiable function with a Jacobian that is locally Lipschitz continuous (for instance if is twice differentiable). That is, it is assumed that for any there is an open subset such that and there exists a constant such that for any
holds. The norm on the left is some operator norm that is compatible with the vector norm on the right. This inequality can be rewritten to only use the vector norm. Then for any vector the inequality
must hold.
Now choose any initial point . Assume that is invertible and construct the Newton step
The next assumption is that not only the next point but the entire ball is contained inside the set . Let be the Lipschitz constant for the Jacobian over this ball (assuming it exists).
As a last preparation, construct recursively, as long as it is possible, the sequences , , according to
Statement
Now if then
a solution of exists inside the closed ball and
the Newton iteration starting in converges to with at least linear order of convergence.
A statement that is more precise but slightly more difficult to prove uses the roots of the quadratic polynomial
,
and their ratio
Then
a solution exists inside the closed ball
it is unique inside the bigger ball
and the convergenc |
https://en.wikipedia.org/wiki/The%20Game%20%28mind%20game%29 | The Game is a mind game in which the objective is to avoid thinking about The Game itself. Thinking about The Game constitutes a loss, which must be announced each time it occurs. It is impossible to win most versions of The Game. Depending on the variation, it is held that the whole world, or all those who are aware of the game, are playing it at all times. Tactics have been developed to increase the number of people who are aware of The Game, and thereby increase the number of losses.
Origin
The origins of The Game are uncertain. The most common hypothesis is that The Game derives from another mental game, Finchley Central. While the original version of Finchley Central involves taking turns to name stations, in 1976 some members of the Cambridge University Science Fiction Society (CUSFS) developed a variant in which the first person to think of the titular station loses. The game in this form demonstrates ironic processing, in which attempts to suppress or avoid certain thoughts make those thoughts more common or persistent than they would be at random.
How this became simplified into The Game is unknown; one hypothesis is that once it spread outside the Greater London area, among people who are less familiar with London stations, it morphed into its self-referential form. The creators of "LoseTheGame.net", a website which aims to catalogue information relating to the phenomenon, have received messages from multiple former members of the CUSFS commenting on the similarity between the Finchley Central variant and the modern Game. The first known reference to The Game is a blog post from 2002 – the author states that they "found out about it online about 6 months ago".
The Game is most commonly spread through the internet, such as via Facebook or Twitter, or by word of mouth.
Gameplay
There are three commonly reported rules to The Game:
Everyone in the world is playing The Game. (This is alternatively expressed as, "Everybody in the world who knows about Th |
https://en.wikipedia.org/wiki/The%20Fairyland%20Story | The Fairyland Story is a platform arcade video game developed and published by Taito in 1985. In the game, the player controls the witch Ptolemy, with the objective being to clear the screen of all enemies. Ptolemy can use her wand to turn the enemies into large cakes, which she can then push off of platforms onto other enemies, squashing them and awarding bonus points. Various items that increase Ptolemy's projectile radius or kill multiple enemies at once also appear throughout the stages.
Gameplay has been compared to later Taito platform games, such as Bubble Bobble and The NewZealand Story. It has been ported to various home systems, and has seen releases in various Taito compilations.
Gameplay
The Fairyland Story is a platform arcade game. The player controls the witch Ptolemy through a series of single-screen stages, with the objective being to defeat all of the enemies on each screen. Ptolemy's main weapon is her projectile magic, which will temporarily transform the enemies into large cakes. While in a "caked" form, the enemies can be destroyed either by further magic attacks or by being dropped off a platform, possibly squashing other enemies below. Squashing more than one enemy results in an award of more points and, sometimes, in extra bonuses. 2000 points are awarded for squashing an enemy below a cake, with each additional enemy doubling the number of points awarded. If two or more enemies are killed at once in one spot, a coin will appear there; collecting it awards additional points, and each further coin collected multiplies the points awarded, as long as the player does not lose a life.
Ptolemy's enemies are based upon typical fantasy beings. These include Orcs, pig-like soldiers; Salamanders, dragon-like creatures that can breathe fire; Wizards, mages that can make Ptolemy shrink and disappear; Clerics, bishops who can multiply themselves; Golems; and Wraiths, hooded creatures that can phase through Ptolemy's magic. If Ptolemy takes too
https://en.wikipedia.org/wiki/Rejection%20of%20evolution%20by%20religious%20groups | Recurring cultural, political, and theological rejection of evolution by religious groups exists regarding the origins of the Earth, of humanity, and of other life. In accordance with creationism, species were once widely believed to be fixed products of divine creation, but since the mid-19th century, evolution by natural selection has been established by the scientific community as an empirical scientific fact.
Any such debate is universally considered religious, not scientific, by professional scientific organizations worldwide: in the scientific community, evolution is accepted as fact, and efforts to sustain the traditional view are universally regarded as pseudoscience. While the controversy has a long history, today it has retreated to be mainly over what constitutes good science education, with the politics of creationism primarily focusing on the teaching of creationism in public education. Among majority-Christian countries, the debate is most prominent in the United States, where it may be portrayed as part of a culture war. Parallel controversies also exist in some other religious communities, such as the more fundamentalist branches of Judaism and Islam. In Europe and elsewhere, creationism is less widespread (notably, the Catholic Church and Anglican Communion both accept evolution), and there is much less pressure to teach it as fact.
Christian fundamentalists reject the evidence of common descent of humans and other animals as demonstrated in modern paleontology, genetics, histology and cladistics and those other sub-disciplines which are based upon the conclusions of modern evolutionary biology, geology, cosmology, and other related fields. They argue for the Abrahamic accounts of creation, and, in order to attempt to gain a place alongside evolutionary biology in the science classroom, have developed a rhetorical framework of "creation science". In the landmark Kitzmiller v. Dover, the purported basis of scientific creationism was judged to be a |
https://en.wikipedia.org/wiki/Biointerface | A biointerface is the region of contact between a biomolecule, cell, biological tissue or living organism, or organic material considered living, and another biomaterial or inorganic/organic material. The motivation for biointerface science stems from the urgent need to increase the understanding of interactions between biomolecules and surfaces. The behavior of complex macromolecular systems at materials interfaces is important in the fields of biology, biotechnology, diagnostics, and medicine. Biointerface science is a multidisciplinary field in which biochemists who synthesize novel classes of biomolecules (peptide nucleic acids, peptidomimetics, aptamers, ribozymes, and engineered proteins) cooperate with scientists who have developed the tools to position biomolecules with molecular precision (proximal probe methods, nano- and micro-contact methods, e-beam and X-ray lithography, and bottom-up self-assembly methods), scientists who have developed new spectroscopic techniques to interrogate these molecules at the solid–liquid interface, and people who integrate these into functional devices (applied physicists, analytical chemists and bioengineers). Well-designed biointerfaces would facilitate desirable interactions by providing optimized surfaces where biological matter can interact with other inorganic or organic materials, such as by promoting cell and tissue adhesion onto a surface.
Topics of interest include, but are not limited to:
Neural interfaces
Cells in engineered microenvironments and regenerative medicine
Computational and modeling approaches to biointerfaces
Membranes and membrane-based biosensing
Peptides, carbohydrates and DNA at biointerfaces
Pathogenesis and pathogen detection
Molecularly designed interfaces
Nanotube/nanoparticle interfaces
Related fields for biointerfaces are biomineralization, biosensors, medical implants, and so forth.
Nanostructure interfaces
Nanotechnology is a rapidly growing field that has allowed for the crea |
https://en.wikipedia.org/wiki/Chronology%20of%20the%20universe | The chronology of the universe describes the history and future of the universe according to Big Bang cosmology.
Research published in 2015 estimates the earliest stages of the universe's existence as taking place 13.8 billion years ago, with an uncertainty of around 21 million years at the 68% confidence level.
Outline
Chronology in five stages
For the purposes of this summary, it is convenient to divide the chronology of the universe, since it originated, into five parts. It is generally considered meaningless or unclear whether time existed before this chronology:
The very early universe
The first picosecond (10⁻¹² seconds) of cosmic time. It includes the Planck epoch, during which currently established laws of physics may not have applied; the emergence in stages of the four known fundamental interactions or forces—first gravitation, and later the electromagnetic, weak and strong interactions; and the accelerated expansion of the universe due to cosmic inflation.
Tiny ripples in the universe at this stage are believed to be the basis of large-scale structures that formed much later. Different stages of the very early universe are understood to different extents. The earlier parts are beyond the grasp of practical experiments in particle physics but can be explored through the extrapolation of known physical laws to extreme high temperatures.
The early universe
This period lasted around 370,000 years. Initially, various kinds of subatomic particles are formed in stages. These particles include almost equal amounts of matter and antimatter, so most of it quickly annihilates, leaving a small excess of matter in the universe.
At about one second, neutrinos decouple; these neutrinos form the cosmic neutrino background (CνB). If primordial black holes exist, they are also formed at about one second of cosmic time. Composite subatomic particles emerge—including protons and neutrons—and from about 2 minutes, conditions are suitable for nucleosynthesis: around 25% of the |
https://en.wikipedia.org/wiki/Multicloud | Multicloud (also written as multi-cloud or multi cloud) refers to a company utilizing multiple cloud computing services from various public vendors within a single, heterogeneous architecture. This approach enhances cloud infrastructure capabilities and optimizes costs. It also refers to the distribution of cloud assets, software, applications, etc. across several cloud-hosting environments. With a typical multicloud architecture utilizing two or more public clouds as well as multiple private clouds, a multicloud environment aims to eliminate the reliance on any single cloud provider and thereby alleviate vendor lock-in.
For instance, an enterprise may use separate cloud providers for infrastructure (IaaS), platform (PaaS), software (SaaS), and function (FaaS) services. In the case of infrastructure services, it may use different providers for different workloads, deploy a single workload load-balanced across multiple providers (active-active), or deploy a single workload on one provider with a backup on another (active-passive).
Advantages and challenges
There are several advantages to using a multicloud approach, including the ability to negotiate better pricing with cloud providers, the ability to quickly switch to another provider if needed, and the ability to avoid vendor lock-in. Multicloud can also be a good way to hedge against the risks of obsolescence, as it allows an organization to rely on multiple vendors and open standards, which can prolong the life of its systems.
Additional benefits of the multicloud architecture include adherence to local policies that require certain data to be physically present within a given area or country, geographical distribution of processing so that requests are served from a physically closer cloud unit (which in turn reduces latency), and protection against disasters.
Various issues and challenges also present themselves in a multicloud environment. Security and governance are more complicated, and more "moving parts" may create resiliency issues.
Difference be |
https://en.wikipedia.org/wiki/Asymptotic%20theory%20%28statistics%29 | In statistics, asymptotic theory, or large sample theory, is a framework for assessing properties of estimators and statistical tests. Within this framework, it is often assumed that the sample size n may grow indefinitely; the properties of estimators and tests are then evaluated under the limit of n → ∞. In practice, a limit evaluation is considered to be approximately valid for large finite sample sizes too.
Overview
Most statistical problems begin with a dataset of size n. The asymptotic theory proceeds by assuming that it is possible (in principle) to keep collecting additional data, so that the sample size grows infinitely, i.e. n → ∞. Under the assumption, many results can be obtained that are unavailable for samples of finite size. An example is the weak law of large numbers. The law states that for a sequence of independent and identically distributed (IID) random variables X1, X2, …, if one value is drawn from each random variable and the average of the first n values is computed as X̄n, then the X̄n converge in probability to the population mean as n → ∞.
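The weak law of large numbers just stated can be illustrated numerically. The choice of the uniform(0, 1) distribution, the seed, and the sample sizes below are arbitrary assumptions for this sketch.

```python
# Illustrative sketch of the weak law of large numbers: sample means of
# IID uniform(0, 1) draws approach the population mean 0.5 as n grows.
# The distribution, seed, and sample sizes are arbitrary choices.
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def sample_mean(n):
    """Average of n IID uniform(0, 1) draws."""
    return sum(random.random() for _ in range(n)) / n

for n in (10, 1_000, 100_000):
    print(n, sample_mean(n))
# The deviation |mean - 0.5| tends to shrink as n increases.
```

This is only an empirical illustration: convergence in probability concerns the distribution of X̄n over repeated experiments, not any single run.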
In asymptotic theory, the standard approach is n → ∞. For some statistical models, slightly different approaches of asymptotics may be used. For example, with panel data, it is commonly assumed that one dimension in the data remains fixed, whereas the other dimension grows: T = constant and N → ∞, or vice versa.
Besides the standard approach to asymptotics, other alternative approaches exist:
Within the local asymptotic normality framework, it is assumed that the value of the "true parameter" in the model varies slightly with n, such that the n-th model corresponds to θn = θ + h/√n. This approach lets us study the regularity of estimators.
When statistical tests are studied for their power to distinguish against alternatives that are close to the null hypothesis, it is done within the so-called "local alternatives" framework: the null hypothesis is H0: θ = θ0 and the alternative is H1: θ = θ0 + h/√n. This approach is especially popular for the unit root tests.
There are models where the |
https://en.wikipedia.org/wiki/Dorsal%20carpal%20branch%20of%20the%20ulnar%20artery | The dorsal carpal branch of the ulnar artery arises from the ulnar artery immediately above the pisiform bone, and winds backward beneath the tendon of the flexor carpi ulnaris; it passes across the dorsal surface of the carpus beneath the extensor tendons, to anastomose with a corresponding branch of the radial artery.
Immediately after its origin, it gives off a small branch, which runs along the ulnar side of the fifth metacarpal bone, and supplies the ulnar side of the dorsal surface of the little finger. |
https://en.wikipedia.org/wiki/Sensory%20stimulation%20therapy | Sensory stimulation therapy (SST) is an experimental therapy that aims to use neural plasticity mechanisms to aid in the recovery of somatosensory function after stroke or cognitive ageing. Stroke and cognitive ageing are well-known sources of cognitive loss, the former by neuronal death, the latter by weakening of neural connections. SST stimulates a specific sense at a specific frequency. Research suggests that this technique may reverse cognitive ageing by up to 30 years, and may selectively improve or impair two-point discrimination thresholds.
History and motivation
By 2025, it is estimated that 34 million people in the United States will have dementia. It is therefore important to establish effective treatments that reduce or eliminate such symptoms. Among modern non-pharmacological treatments, psychosocial therapies are an important intervention. With psychosocial therapies such as massage, aromatherapy, multi-sensory stimulation, music therapy, and reality orientation, treatment of dementia and dementia-related diseases has become possible in a less traditional, non-pharmacological form. It was once believed that the brain was largely unchanging and that its function was decided at a young age. Along this train of thought, cognitive loss from strokes and ageing was viewed as unrecoverable. Functional localization is a theory which suggests that each section of the brain has a specific function, and that loss of a section equates to permanent loss of that function. Traditional models even specialize between hemispheres of the brain and describe "artistic" and "logical" sections of the brain. This fatalistic outlook has been dramatically challenged by the more recent paradigm of brain plasticity.
Brain plasticity refers to the ability of the brain to restructure itself, form new connections, or adjust the strength of existing connections. The current paradigm allows for conceptualization of brain t
https://en.wikipedia.org/wiki/Pertactin | In molecular biology, pertactin (PRN) is a highly immunogenic virulence factor of Bordetella pertussis, the bacterium that causes pertussis. Specifically, it is an outer membrane protein that promotes adhesion to tracheal epithelial cells. PRN is purified from Bordetella pertussis and is used for the vaccine production as one of the important components of acellular pertussis vaccine.
A large part of the N-terminus of the pertactin protein is composed of beta helix repeats. This region of the pertactin protein is secreted through the C-terminal autotransporter. The N-terminal signal sequence promotes the secretion of PRN into the periplasm through the bacterial secretion system (Sec) and, consequently, the translocation into the outer membrane, where it is proteolytically cleaved. The loops in the right-handed β-helix of the N-terminus that protrude out of the cell surface (region R1) contain the sequence repeats Gly-Gly-Xaa-Xaa-Pro and the RGD domain Arg-Gly-Asp. This RGD domain allows PRN to function as an adhesin and invasin, binding to integrins on the outer membrane of the cell. Another loop of the extending β-helix is region 2 (R2), which contains Pro-Gln-Pro (PQP) repeats towards the C-terminus. Understanding of this protein's contribution to immunity is still incomplete. Reports suggest that R1 and R2 are immunogenic regions; however, recent studies regarding genetic variation of those regions suggest otherwise.
In B. bronchiseptica
In vivo, pertactin adheres only to ciliated epithelial cells in B. bronchiseptica infection; in vitro, however, PRN does not adhere to either cell type. PRN does help provide resistance against a hyperinflammatory response of innate immunity for B. bronchiseptica. With respect to adaptive immunity, studies show that PRN plays a role in combating neutrophil-mediated clearance of B. bronchiseptica. |
https://en.wikipedia.org/wiki/Kleiner%20Perkins | Kleiner Perkins, formerly Kleiner Perkins Caufield & Byers (KPCB), is an American venture capital firm which specializes in investing in incubation, early stage and growth companies. Since its founding in 1972, the firm has backed entrepreneurs in over 900 ventures, including America Online, Amazon.com, Tandem Computers, Compaq, Electronic Arts, JD.com, Square, Genentech, Google, Netscape, Sun Microsystems, Nest, Palo Alto Networks, Synack, Snap, AppDynamics, and Twitter. By 2019 it had raised around $9 billion in 19 venture capital funds and four growth funds.
Kleiner Perkins is headquartered in Menlo Park in Silicon Valley, with offices in San Francisco and Shanghai, China.
The New York Times described Kleiner Perkins as "perhaps Silicon Valley's most famous venture firm". The Wall Street Journal called it one of the "largest and most established" venture capital firms and Dealbook named it "one of Silicon Valley's top venture capital providers."
History
The firm was formed in 1972 as Kleiner Perkins and was renamed Kleiner, Perkins, Caufield & Byers (KPCB) when Caufield and Byers became partners. Based in Menlo Park, California, it focused on seed, early-stage, and growth companies. The firm is named after its four founding partners: Eugene Kleiner, Tom Perkins, Frank J. Caufield, and Brook Byers. Kleiner was a founder of Fairchild Semiconductor, and Perkins was an early Hewlett-Packard executive. Byers joined in 1977. It was the first venture capital firm to open an office on Sand Hill Road and is credited with creating the cluster of venture capital firms in that area.
Located in Menlo Park, California, Kleiner Perkins had access to the growing technology industries in the area. By the early 1970s, there were many semiconductor companies based in the Santa Clara Valley as well as early computer firms using their devices and programming and service companies. Venture capital firms suffered a temporary downturn in 1974, when the stoc |
https://en.wikipedia.org/wiki/Mutation%20rate | In genetics, the mutation rate is the frequency of new mutations in a single gene or organism over time. Mutation rates are not constant and are not limited to a single type of mutation; there are many different types of mutations. Mutation rates are given for specific classes of mutations. Point mutations are a class of mutations which are changes to a single base. Missense and nonsense mutations are two subtypes of point mutations. The rate of these types of substitutions can be further subdivided into a mutation spectrum, which describes the influence of the genetic context on the mutation rate.
There are several natural units of time for each of these rates, with rates being characterized either as mutations per base pair per cell division, per gene per generation, or per genome per generation. The mutation rate of an organism is an evolved characteristic and is strongly influenced by the genetics of each organism, in addition to strong influence from the environment. The upper and lower limits to which mutation rates can evolve are the subject of ongoing investigation. The mutation rate also varies over the genome: rates differ across DNA and RNA and even within a single gene.
When the mutation rate in humans increases, certain health risks can occur, for example, cancer and other hereditary diseases. Having knowledge of mutation rates is vital to understanding the future of cancers and many hereditary diseases.
Background
Different genetic variants within a species are referred to as alleles, therefore a new mutation can create a new allele. In population genetics, each allele is characterized by a selection coefficient, which measures the expected change in an allele's frequency over time. The selection coefficient can either be negative, corresponding to an expected decrease, positive, corresponding to an expected increase, or zero, corresponding to no expected change. The distribution of fitness effects of new mutations is an important parameter in po |
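The notion of a selection coefficient described above can be illustrated with a standard one-locus haploid selection model. The model choice and the parameter values below are assumptions of this sketch, not taken from the article.

```python
# Sketch: expected allele-frequency change per generation in a simple
# one-locus haploid selection model (an assumed illustration, not a
# model specified in the article). Allele A has relative fitness 1 + s,
# the alternative allele has fitness 1, and p is the frequency of A.

def next_freq(p, s):
    """Expected frequency of A in the next generation."""
    mean_fitness = 1.0 + p * s          # p*(1+s) + (1-p)*1
    return p * (1.0 + s) / mean_fitness

p = 0.1
print(next_freq(p, 0.05) > p)   # positive s: expected increase -> True
print(next_freq(p, 0.0) == p)   # zero s: no expected change -> True
print(next_freq(p, -0.05) < p)  # negative s: expected decrease -> True
```

The sign of the selection coefficient s thus determines the direction of the expected change in allele frequency, matching the definition in the text.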
https://en.wikipedia.org/wiki/M%2Cn%2Ck-game | An m,n,k-game is an abstract board game in which two players take turns in placing a stone of their color on an m-by-n board, the winner being the player who first gets k stones of their own color in a row, horizontally, vertically, or diagonally. Thus, tic-tac-toe is the 3,3,3-game and free-style gomoku is the 15,15,5-game. An m,n,k-game is also called a k-in-a-row game on an m-by-n board.
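The win condition of an m,n,k-game can be sketched in code. The board representation (a dict mapping coordinates to a player's stone) is an assumption of this sketch, not a standard notation.

```python
# Sketch: does `player` have k stones in a row on an m-by-n board?
# The board is assumed to be a dict mapping (row, col) -> player id.

def has_k_in_a_row(board, player, k):
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]  # -, |, \, /
    for (r, c), p in board.items():
        if p != player:
            continue
        for dr, dc in directions:
            # Check a run of k cells starting at (r, c) in this direction.
            if all(board.get((r + i * dr, c + i * dc)) == player
                   for i in range(k)):
                return True
    return False

# Tic-tac-toe (the 3,3,3-game): X completes the main diagonal.
board = {(0, 0): "X", (1, 1): "X", (2, 2): "X", (0, 1): "O", (0, 2): "O"}
print(has_k_in_a_row(board, "X", 3))  # True
```

Because the board size never enters the function, the same check works for tic-tac-toe (3,3,3) and free-style gomoku (15,15,5) alike.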
The m,n,k-games are mainly of mathematical interest. One seeks to find the game-theoretic value, the result of the game with perfect play. This is known as solving the game.
Strategy stealing argument
A standard strategy stealing argument from combinatorial game theory shows that in no m,n,k-game can there be a strategy that assures that the second player will win (a second-player winning strategy). This is because an extra stone given to either player in any position can only improve that player's chances. The strategy stealing argument assumes that the second player has a winning strategy and demonstrates a winning strategy for the first player. The first player makes an arbitrary move, to begin with. After that, the player pretends that they are the second player and adopts the second player's winning strategy. They can do this as long as the strategy doesn't call for placing a stone on the 'arbitrary' square that is already occupied. If this happens, though, they can again play an arbitrary move and continue as before with the second player's winning strategy. Since an extra stone cannot hurt them, this is a winning strategy for the first player. The contradiction implies that the original assumption is false, and the second player cannot have a winning strategy.
This argument says nothing about whether a particular game is a draw or a win for the first player. Also, it does not actually give a strategy for the first player.
Applying results to different board sizes
A useful notion is a "weak (m,n,k) game", where k-in-a-row by the second player does not end the game |
https://en.wikipedia.org/wiki/Waru%20Waru | Waru Waru is an Aymara term for the agricultural technique developed by pre-Hispanic people in the Andes region of South America from Ecuador to Bolivia; this regional agricultural technique is also referred to as camellones in Spanish. Functionally similar agricultural techniques have been developed in other parts of the world, all of which fall under the broad category of raised field agriculture.
This type of altiplano field agriculture consists of parallel canals alternated by raised planting beds, which would be strategically located on floodplains or near a water source so that the fields could be properly irrigated. These flooded fields were composed of soil that was rich in nutrients due to the presence of aquatic plants and other organic materials. Through the process of mounding up this soil to create planting beds, natural, recyclable fertilizer was made available in a region where nitrogen-rich soils were rare. By trapping solar radiation during the day, this raised field agricultural method also protected crops from freezing overnight. These raised planting beds were irrigated very efficiently by the adjacent canals which extended the growing season significantly, allowing for more food yield. Waru Waru were able to yield larger amounts of food than previous agricultural methods due to the overall efficiency of the system.
This technique is dated to around 300 B.C., and is most commonly associated with the Tiwanaku culture of the Lake Titicaca region in southern Bolivia, who used this method to grow crops like potatoes and quinoa. This type of agriculture also created artificial ecosystems, which attracted other food sources such as fish and lake birds. Past cultures in the Lake Titicaca region likely utilized these additional resources as a subsistence method. It combines raised beds with irrigation channels to prevent damage by soil erosion during floods. These fields ensure both collecting of water (either fluvial water, rainwater or phreatic |
https://en.wikipedia.org/wiki/Integrin%20alpha-1 | Integrin alpha-1, also CD49a, is an integrin alpha subunit encoded in humans by the gene ITGA1. It makes up half of the α1β1 integrin heterodimer. CD49a can bind a number of ligands, including collagen IV and collagen I.
CD49a has been implicated as a marker of tissue resident memory T cells, where it may be coexpressed with other markers CD103 and CD69. It has been shown to affect the motility of T cells. |
https://en.wikipedia.org/wiki/Non-uniform%20random%20variate%20generation | Non-uniform random variate generation or pseudo-random number sampling is the numerical practice of generating pseudo-random numbers (PRN) that follow a given probability distribution.
Methods are typically based on the availability of a uniformly distributed PRN generator. Computational algorithms are then used to manipulate a single random variate, X, or often several such variates, into a new random variate Y such that these values have the required distribution.
The first methods were developed for Monte Carlo simulations in the Manhattan Project, published by John von Neumann in the early 1950s.
Finite discrete distributions
For a discrete probability distribution with a finite number n of indices at which the probability mass function f takes non-zero values, the basic sampling algorithm is straightforward. The interval [0, 1) is divided into n intervals [0, f(1)), [f(1), f(1) + f(2)), ... The width of interval i equals the probability f(i).
One draws a uniformly distributed pseudo-random number X and searches for the index i of the corresponding interval. The index i determined in this way is distributed according to f(i).
Formalizing this idea becomes easier by using the cumulative distribution function F(i) = f(1) + f(2) + ... + f(i).
It is convenient to set F(0) = 0. The n intervals are then simply [F(0), F(1)), [F(1), F(2)), ..., [F(n − 1), F(n)). The main computational task is then to determine i for which F(i − 1) ≤ X < F(i).
This can be done by different algorithms:
Linear search, computational time linear in n.
Binary search, computational time logarithmic in n.
Indexed search, also called the cutpoint method.
Alias method, computational time is constant, using some pre-computed tables.
There are other methods that cost constant time.
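As a sketch of the table-based approach described above (Python; function names are illustrative), one can precompute the cumulative sums F(1), ..., F(n) once and then locate the interval containing each uniform draw by binary search, giving O(log n) time per sample:

```python
import bisect
import itertools
import random

def make_sampler(probabilities):
    """Sampler for a finite discrete distribution: precompute the
    cumulative sums F(i), then find the interval containing a uniform
    variate by binary search."""
    cumulative = list(itertools.accumulate(probabilities))
    def sample():
        x = random.random() * cumulative[-1]   # uniform on [0, F(n))
        return bisect.bisect_right(cumulative, x)  # index of the interval containing x
    return sample

# Example: f = (0.2, 0.5, 0.3) over indices 0, 1, 2
sampler = make_sampler([0.2, 0.5, 0.3])
draws = [sampler() for _ in range(100_000)]
```

Because the weights need not sum exactly to 1, the draw is scaled by the final cumulative sum; with unnormalized weights the same code still samples proportionally.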
Continuous distributions
Generic methods for generating independent samples:
Rejection sampling for arbitrary density functions
Inverse transform sampling for distributions whose CDF is known
Ratio of uniforms, combining a change of variables and rejection |
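Inverse transform sampling is easiest to see for a distribution whose CDF inverts in closed form. A minimal Python sketch for the exponential distribution (illustrative, not from the article): the CDF F(x) = 1 − exp(−λx) gives F⁻¹(u) = −ln(1 − u)/λ, so applying F⁻¹ to a uniform variate yields an exponential one.

```python
import math
import random

def sample_exponential(rate):
    """Inverse transform sampling: map a uniform variate U through
    the inverse CDF of the exponential distribution, F^-1(U) =
    -ln(1 - U) / rate."""
    u = random.random()              # U ~ Uniform[0, 1)
    return -math.log(1.0 - u) / rate

samples = [sample_exponential(2.0) for _ in range(200_000)]
```

The sample mean should converge to 1/λ (here 0.5), which makes the method easy to sanity-check.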
https://en.wikipedia.org/wiki/Lambda-CDM%20model | The Lambda-CDM, Lambda cold dark matter or ΛCDM model is a mathematical model of the Big Bang theory with three major components:
a cosmological constant denoted by lambda (Λ) associated with dark energy,
the postulated cold dark matter, and
ordinary matter.
It is frequently referred to as the standard model of Big Bang cosmology because it is the simplest model that provides a reasonably good account of:
the existence and structure of the cosmic microwave background
the large-scale structure in the distribution of galaxies
the observed abundances of hydrogen (including deuterium), helium, and lithium
the accelerating expansion of the universe observed in the light from distant galaxies and supernovae
The model assumes that general relativity is the correct theory of gravity on cosmological scales. It emerged in the late 1990s as a concordance cosmology, after a period of time when disparate observed properties of the universe appeared mutually inconsistent, and there was no consensus on the makeup of the energy density of the universe.
The ΛCDM model can be extended by adding cosmological inflation, quintessence, and other areas of speculation and research in cosmology.
Some alternative models challenge the assumptions of the ΛCDM model. Examples of these are modified Newtonian dynamics, entropic gravity, modified gravity, theories of large-scale variations in the matter density of the universe, bimetric gravity, scale invariance of empty space, and decaying dark matter (DDM).
Overview
The ΛCDM model includes an expansion of metric space that is well documented both as the red shift of prominent spectral absorption or emission lines in the light from distant galaxies and as the time dilation in the light decay of supernova luminosity curves. Both effects are attributed to a Doppler shift in electromagnetic radiation as it travels across expanding space. Although this expansion increases the distance between objects that are not under shared gravitation |
https://en.wikipedia.org/wiki/Soil%20pH | Soil pH is a measure of the acidity or basicity (alkalinity) of a soil. Soil pH is a key characteristic that can be used to analyze soils both qualitatively and quantitatively. pH is defined as the negative logarithm (base 10) of the activity of hydrogen ions (H+ or, more precisely, hydronium ions, H3O+) in a solution. In soils, it is measured in a slurry of soil mixed with water (or a salt solution, such as 0.01 M CaCl2), and normally falls between 3 and 10, with 7 being neutral. Acid soils have a pH below 7 and alkaline soils have a pH above 7. Ultra-acidic soils (pH < 3.5) and very strongly alkaline soils (pH > 9) are rare.
Soil pH is considered a master variable in soils as it affects many chemical processes. It specifically affects plant nutrient availability by controlling the chemical forms of the different nutrients and influencing the chemical reactions they undergo. The optimum pH range for most plants is between 5.5 and 7.5; however, many plants have adapted to thrive at pH values outside this range.
Classification of soil pH ranges
The United States Department of Agriculture Natural Resources Conservation Service classifies soil pH ranges as follows:
pH 0 to 6: acidic; pH 7: neutral; pH 8 and above: alkaline.
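The coarse classification can be expressed as a small sketch (Python; the function name is illustrative, and the thresholds are taken only from this article — the full USDA-NRCS scheme is finer-grained):

```python
def classify_soil_ph(ph: float) -> str:
    """Coarse soil-pH classifier using only the thresholds quoted in
    this article; the actual USDA-NRCS scheme has more subdivisions."""
    if not 0.0 <= ph <= 14.0:
        raise ValueError("pH outside the 0-14 scale")
    if ph < 3.5:
        return "ultra-acidic (rare)"
    if ph < 7.0:
        return "acidic"
    if ph == 7.0:
        return "neutral"
    if ph <= 9.0:
        return "alkaline"
    return "very strongly alkaline (rare)"
```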
Determining pH
Methods of determining pH include:
Observation of soil profile: certain profile characteristics can be indicators of either acid, saline, or sodic conditions. Examples are:
Poor incorporation of the organic surface layer with the underlying mineral layer – this can indicate strongly acidic soils;
The classic podzol horizon sequence, since podzols are strongly acidic: in these soils, a pale eluvial (E) horizon lies under the organic surface layer and overlies a dark B horizon;
Presence of a caliche layer indicates the presence of calcium carbonates, which are present in alkaline conditions;
Columnar structure can be an indicator of sodic condition.
Observation of predominant flora. Calcifuge plants (those that prefer an acidic s |
https://en.wikipedia.org/wiki/Split%20supersymmetry | In particle physics, split supersymmetry is a proposal for physics beyond the Standard Model.
History
It was proposed separately in three papers. The first, by James Wells in June 2003, took a more modest form that mildly relaxed the assumption of naturalness in the Higgs potential. In May 2004, Nima Arkani-Hamed and Savas Dimopoulos argued that naturalness in the Higgs sector may not be an accurate guide for proposing new physics beyond the Standard Model, and that supersymmetry may be realized in a different fashion that preserves gauge coupling unification and provides a dark matter candidate. In June 2004, Gian Giudice and Andrea Romanino argued from a general point of view that if one wants gauge coupling unification and a dark matter candidate, split supersymmetry is one among only a few theories that qualify.
Overview
The new light (~TeV) particles in split supersymmetry (beyond the Standard Model's particles) are the gauginos and Higgsinos, together with a finely tuned light Higgs boson.
The Lagrangian for split supersymmetry is constrained by the existence of high-energy supersymmetry. There are five couplings in split supersymmetry: the Higgs quartic coupling and four Yukawa couplings between the Higgsinos, the Higgs and the gauginos. The couplings are set by one parameter, tan β, at the scale where the supersymmetric scalars decouple. Beneath the supersymmetry breaking scale, these five couplings evolve through the renormalization group equations down to the TeV scale. At a future linear collider, these couplings could be measured at the 1% level and then renormalization-group evolved up to high energies to show that the theory is supersymmetric at an exceedingly high scale.
Long Lived Gluinos
The striking feature of split supersymmetry is that the gluino becomes a quasi-stable particle with a lifetime that could be up to 100 seconds long. A gluino that lived longer than this would disrupt Big Bang nucleosynthesis or would have been observed as an additional source of cosmic gamma rays. The gluino is long lived because it can only de |
https://en.wikipedia.org/wiki/Chloroxine | Chloroxine (trade name Capitrol; also known as kloroxin, dichlorchinolinol, chlorquinol or halquinol; Latin: cloroxinum, dichlorchinolinolum) is an antibacterial drug. Oral formulations (under trade names such as Endiaron) are used in infectious diarrhea, disorders of the intestinal microflora (e.g. after antibiotic treatment), giardiasis, and inflammatory bowel disease. It is also useful for dandruff and seborrheic dermatitis, as used in shampoos (Capitrol) and dermal creams (Valpeda, Triaderm).
Mechanism of action
Chloroxine has bacteriostatic, fungistatic, and antiprotozoal properties. It is effective against Streptococci, Staphylococci, Candida, Candida albicans, Shigella, and Trichomonads.
Adverse effects
Adverse effects are rare, but oral administration may cause nausea and vomiting. It may also cause skin irritation.
Pregnancy and lactation
The FDA lists chloroxine in Pregnancy Category C (risk cannot be ruled out) because no pregnancy studies on the medication have been performed with animals or humans. For this reason, use of chloroxine oral or topical during pregnancy or when breast-feeding is not recommended.
History
Chloroxine was first prepared in 1888 by A. Hebebrand. |
https://en.wikipedia.org/wiki/Regulated%20power%20supply | A regulated power supply is an embedded circuit that converts unregulated AC (alternating current) into constant DC (direct current), using a rectifier to convert the AC supply into DC. Its function is to supply a stable voltage (or, less often, current) to a circuit or device that must be operated within certain power-supply limits. The output from the regulated power supply may be alternating or unidirectional, but is nearly always DC. The type of stabilization used may be restricted to ensuring that the output remains within certain limits under various load conditions, or it may also include compensation for variations in its own supply source. The latter is much more common today.
Applications
D.C. variable bench supply
A bench power supply usually refers to a power supply capable of supplying a variety of output voltages useful for bench-testing electronic circuits, possibly with continuous variation of the output voltage, or just some preset voltages. Some have multiple selectable current/voltage-limit ranges, in which the maximum voltage and maximum current available typically trade off against each other.
A laboratory ("lab") power supply normally implies an accurate bench power supply, while a balanced or tracking power supply refers to twin supplies for use when a circuit requires both positive and negative supply rails).
Types
Variable bench power supplies exist both as linear (transformer first) and switched-mode power supply (full-bridge rectifier first), each with a different set of benefits and disadvantages:
Linear
The linear type produces only very little noise (or "ripple voltage") and is less prone to external electromagnetic and radio-frequency interference (EMI, RFI), making it preferable for audio equipment, radio-related applications and powering delicate circuitry. Linear power supplies also have fewer failure-prone parts, which increases longevity, and have a quicker transient response. Linear variable bench power supplies are the older of the two designs, dating back at least to |
https://en.wikipedia.org/wiki/Offline%20private%20key%20protocol | The Offline Private Key Protocol (OPKP) is a cryptographic protocol to prevent unauthorized access to back up or archive data. The protocol results in a public key that can be used to encrypt data and an offline private key that can later be used to decrypt that data.
The protocol is based on three rules regarding the key. An offline private key should:
not be stored with the encrypted data (obviously)
not be kept by the organization that physically stores the encrypted data, to ensure privacy
not be stored on the same system as the original data: this avoids the possibility that theft of the private key alone would give access to all data at the storage provider, and it avoids losing the key in the same data loss that made a restore necessary in the first place
To comply with these rules, the offline private key protocol uses a method of asymmetric key wrapping.
Security
As the protocol does not provide rules on the strength of the encryption methods and keys to be used, the security of the protocol depends on the actual cryptographic implementation. When used in combination with strong encryption methods, the protocol can provide extreme security.
Operation
Initially:
a client program (program) on a system (local system) with data to back up or archive generates a random private key PRIV
program creates a public key PUB based on PRIV
program stores PUB on the local system
program presents PRIV to user who can store the key, e.g. printed as a trusted paper key, or on a memory card
program destroys PRIV on the local system
When archiving or creating a backup, for each session or file:
program generates a one-time random key OTRK
program encrypts data using OTRK and a symmetric encryption method
program encrypts the (optionally padded) key OTRK using PUB to OTRKCR
program stores the OTRKCR and the encrypted data to a server
program destroys OTRK on the local system
program dest |
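The session flow above can be walked through in a dependency-free toy (Python). Note loudly: the "public key" operations here are symmetric stand-ins built from hashing, NOT real asymmetric cryptography, so this sketch does not provide the public/private separation of the actual protocol; it only illustrates the data flow. A real implementation would wrap OTRK with a genuine asymmetric scheme such as RSA-OAEP.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR against a SHA-256-based keystream.
    Applying it twice with the same key recovers the input."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# --- initial setup: PRIV generated once; PUB derived and kept locally
priv = secrets.token_bytes(32)                  # offline private key (toy)
pub = hashlib.sha256(b"wrap" + priv).digest()   # toy stand-in for a public key

# --- per session: a one-time random key encrypts the payload,
# --- then is itself wrapped under PUB before upload
otrk = secrets.token_bytes(32)
payload = b"backup payload"
ciphertext = keystream_xor(otrk, payload)
otrk_cr = keystream_xor(pub, otrk)              # wrapped one-time key (OTRKCR)

# --- restore: rederive PUB from the offline PRIV, unwrap OTRK, decrypt
recovered_otrk = keystream_xor(hashlib.sha256(b"wrap" + priv).digest(), otrk_cr)
plaintext = keystream_xor(recovered_otrk, ciphertext)
```

The point of the real protocol is that only PRIV (kept offline) can unwrap OTRKCR; in this toy anyone holding PUB could, which is exactly the property an asymmetric wrap restores.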
https://en.wikipedia.org/wiki/XScale | XScale is a microarchitecture for central processing units initially designed by Intel implementing the ARM architecture (version 5) instruction set. XScale comprises several distinct families: IXP, IXC, IOP, PXA and CE (see more below), with some later models designed as system-on-a-chip (SoC). Intel sold the PXA family to Marvell Technology Group in June 2006. Marvell then extended the brand to include processors with other microarchitectures, like Arm's Cortex.
The XScale architecture is based on the ARMv5TE ISA without the floating-point instructions. XScale uses a seven-stage integer and an eight-stage memory super-pipelined microarchitecture. It is the successor to the Intel StrongARM line of microprocessors and microcontrollers, which Intel acquired from DEC's Digital Semiconductor division as part of a settlement of a lawsuit between the two companies. Intel used the StrongARM to replace its ailing line of outdated RISC processors, the i860 and i960.
All the generations of XScale are 32-bit ARMv5TE processors manufactured with a 0.18 μm or 0.13 μm (as in IXP43x parts) process and have a 32 KB data cache and a 32 KB instruction cache. First- and second-generation XScale cores also have a 2 KB mini data cache (which Intel claimed "avoids 'thrashing' of the D-Cache for frequently changing data streams"). Products based on the third-generation XScale have up to 512 KB unified L2 cache.
Processor families
The XScale core is used in a number of microcontroller families manufactured by Intel and Marvell:
Application processors (with the prefix PXA). There are four generations of XScale application processors, described below: PXA210/PXA25x, PXA26x, PXA27x, and PXA3xx.
I/O processors (with the prefix IOP).
Network processors (with the prefix IXP).
Control plane processors (with the prefix IXC).
Consumer electronics processors (with the prefix CE).
There are also standalone processors: the 80200 and 80219 (targeted primarily at PCI applications).
PX |
https://en.wikipedia.org/wiki/Dear%20enemy%20effect | The dear enemy effect or dear enemy recognition is an ethological phenomenon in which two neighbouring territorial animals become less aggressive toward one another once territorial borders are well established. As territory owners become accustomed to their neighbours, they expend less time and energy on defensive behaviors directed toward one another. However, aggression toward unfamiliar neighbours remains the same. Some authors define the dear enemy effect as territory residents displaying lower levels of aggression toward familiar neighbours than toward unfamiliar individuals who are non-territorial "floaters".
The dear enemy effect has been observed in a wide range of animals including mammals, birds, reptiles, amphibians, fish and invertebrates. It can be modulated by factors such as the location of the familiar and unfamiliar animal, the season, and the presence of females.
The effect is the converse of the nasty neighbour effect, in which some species are more aggressive towards their neighbours than towards unfamiliar strangers.
Function
The ultimate function of the dear enemy effect is to increase the individual fitness of the animal expressing the behaviour. This increase in fitness is achieved by reducing the time, energy or risk of injury unnecessarily incurred by defending a territory or its resources (e.g. mate, food, space) against a familiar animal with its own territory; the territory-holder already knows about the abilities of the neighbour, and also knows that the neighbour is unlikely to try to take over the territory because it already has one.
Mechanism
The interaction between two neighbours can be modelled as an iterated prisoner's dilemma game. In this view, a territory owner that acts non-aggressively towards a neighbour can be thought of as cooperating, while a territory owner that acts aggressively towards its neighbour can be considered to have defected. A necessary condition for the prisoner’s dilemma game to hold is that |
https://en.wikipedia.org/wiki/Cartouche | In Egyptian hieroglyphs, a cartouche is an oval with a line at one end tangent to it, indicating that the text enclosed is a royal name. The first examples of the cartouche are associated with pharaohs at the end of the Third Dynasty, but the feature did not come into common use until the beginning of the Fourth Dynasty under Pharaoh Sneferu. While the cartouche is usually vertical with a horizontal line, it can be horizontal, with a vertical line at the end (in the direction of reading), if that makes the name fit better. The ancient Egyptian word for cartouche was shenu, and the cartouche was essentially an expanded shen ring. Demotic script reduced the cartouche to a pair of brackets and a vertical line.
Of the five royal titularies, it was the prenomen (the throne name) and the "Son of Ra" titulary (the so-called nomen, the name given at birth) that were enclosed by a cartouche.
At times amulets took the form of a cartouche displaying the name of a king and were placed in tombs. Archaeologists often find such items important for dating a tomb and its contents. Cartouches were formerly only worn by pharaohs. The oval surrounding their name was meant to protect them from evil spirits in life and after death. The cartouche has become a symbol representing good luck and protection from evil.
The term "cartouche" was first applied by French soldiers who fancied that the symbol they saw so frequently repeated on the pharaonic ruins they encountered resembled a muzzle-loading firearm's paper powder cartridge (cartouche in French).
As a hieroglyph, a cartouche can represent the Egyptian-language word for "name". It is listed as no. V10 in Gardiner's Sign List.
The cartouche in half-section, Gardiner no. V11 (as seen below), has a separate meaning in the Egyptian language as a determinative for actions and nouns dealing with items: "to divide", "to exclude".
The cartouche hieroglyph is used as a determinative for Egyptian language šn-(sh)n, for "circuit", or "ring"-(like the shen ring o |
https://en.wikipedia.org/wiki/Lag%20%28video%20games%29 | In computers, lag is delay (latency) between the action of the user (input) and the reaction of the server supporting the task, which has to be sent back to the client.
The player's ability to tolerate lag depends on the type of game being played. For instance, a strategy game or a turn-based game with a slow pace may have a high threshold or even be mostly unaffected by high lag. A game with twitch gameplay such as a first-person shooter or a fighting game with a considerably faster pace may require a significantly lower lag to provide satisfying gameplay.
Ping time
Ping time is the network delay for a round trip between a player's client and the game server as measured with the ping utility or equivalent. Ping time is an average time measured in milliseconds (ms). The lower one's ping is, the lower the latency is and the less lag the player will experience. High ping and low ping are commonly used terms in online gaming, where high ping refers to a ping that causes a severe amount of lag; while any level of ping may cause lag, severe lag is usually indicated by a ping of over 100 ms. This usage is a gaming cultural colloquialism and is not commonly found or used in professional computer networking circles. In games where timing is key, such as first-person shooter and real-time strategy games, a low ping is always desirable, as a low ping means smoother gameplay by allowing faster updates of game data between the players' clients and game server.
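The round-trip measurement behind a ping number can be sketched with a loopback UDP echo (Python; all names are illustrative, and a real game client would time packets to its remote game server rather than a local echo):

```python
import socket
import threading
import time

def run_echo_server(sock):
    """Minimal UDP echo loop standing in for a game server."""
    while True:
        data, addr = sock.recvfrom(64)
        if data == b"stop":
            break
        sock.sendto(data, addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))                  # OS-assigned port on loopback
addr = server.getsockname()
threading.Thread(target=run_echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)
samples_ms = []
for seq in range(20):
    start = time.perf_counter()
    client.sendto(seq.to_bytes(4, "big"), addr)
    client.recvfrom(64)                        # wait for the echo to come back
    samples_ms.append((time.perf_counter() - start) * 1000.0)

ping_ms = sum(samples_ms) / len(samples_ms)    # average round trip in milliseconds
client.sendto(b"stop", addr)
```

Over loopback this averages well under a millisecond; over the Internet the same measurement yields the tens-to-hundreds of milliseconds discussed above.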
High latency can cause lag. Game servers may disconnect a client if the latency is too high and may pose a detriment to other players' gameplay. Similarly, client software will often mandate disconnection if the latency is too high. High ping may also cause servers to crash due to instability.
In some first-person shooter games, a high ping may cause the player to unintentionally gain unfair advantages, such as disappearing from one location and instantaneously reappearing in another, simulating the effect of teleport |
https://en.wikipedia.org/wiki/Band%20%28order%20theory%29 | In mathematics, specifically in order theory and functional analysis, a band in a vector lattice X is a subspace A of X that is solid and such that for all S ⊆ A for which sup S exists in X, we have sup S ∈ A.
The smallest band containing a subset S of X is called the band generated by S in X.
A band generated by a singleton set {x} is called a principal band.
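Restated in symbols (a compact rendering of the definitions above, with X the vector lattice):

```latex
A \subseteq X \text{ is a band} \iff
  A \text{ is solid, and } \sup S \in A
  \text{ for every } S \subseteq A \text{ such that } \sup S \text{ exists in } X,
\qquad
S^{\perp} = \{\, x \in X : |x| \wedge |s| = 0 \ \text{for all } s \in S \,\},
```

where S⊥ is the set of elements disjoint from S, used in the examples that follow.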
Examples
For any subset S of a vector lattice X, the set S⊥ of all elements of X disjoint from S is a band in X.
If Lp(μ) (1 ≤ p ≤ ∞) is the usual space of real-valued functions used to define Lp spaces, then Lp(μ) is countably order complete (that is, each of its countable subsets that is bounded above has a supremum) but in general is not order complete.
If N is the vector subspace of all μ-null functions, then N is a solid subset of Lp(μ) that is not a band.
Properties
The intersection of an arbitrary family of bands in a vector lattice X is a band in X.
See also |
https://en.wikipedia.org/wiki/Headless%20software | Headless software (e.g. "headless Linux") is software capable of working on a device without a graphical user interface. Such software receives inputs and provides output through other interfaces, like a network or serial port, and is common on servers and embedded devices.
The term "headless" is most often used when the ordinary version of the program requires that a graphics card or similar graphical interface device be present. For instance, the absence of a graphics card, mouse or keyboard may cause an initialization process that assumes their presence to fail, or the graphics card may be relied upon to render an image offline that is later served over the network.
A headless computer (for example, and most commonly, a server) may be missing many of the system libraries that support the display of graphical interfaces. Software that expects these libraries may fail to start or even to compile if such libraries are not present.
Headless agents and games
Video games typically use a headless server for simulation of a multiplayer environment.
Additionally, headless clients can be used to automate testing, play as NPC AIs, or integrate with an external artificial human companion system.
Headless simulations of games are used to accelerate the rate of gradient descent in machine learning, for example, by enabling large batches of simulation to be run in parallel.
Headless rendering
When no physical screen is present, software can still be used to render images for many applications.
In a headless website configuration, the frontend presentation is server-side rendered.
Headless rendering is also used in films and generation of synthetic data. For example, Blender provides Command Line Rendering.
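For instance, a Blender scene can be rendered with no display attached by invoking it in background mode (file and output paths here are illustrative):

```shell
# Render frame 1 of scene.blend off-screen; no GUI or display server needed.
blender --background scene.blend \
        --render-output /tmp/frames/frame_#### \
        --render-frame 1
```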
See also
OS-level virtualization
Secure Shell
Headless browser
Headless computer
Headless content management system
Compositing window manager |
https://en.wikipedia.org/wiki/TranSMART | tranSMART is an open-source data warehouse designed to store large amounts of clinical data from clinical trials, as well as data from basic research, so that it can be interrogated together for translational research. It is also designed to be used by many people, across organizations. It was developed by Johnson & Johnson, in partnership with Recombinant Data Corporation. The platform was released in Jan 2012 and has been governed by the tranSMART Foundation since its initiation in 2013. In May 2017, the tranSMART Foundation merged with the i2b2 Foundation to create an organization with the key mission to advance the field of precision medicine.
The tranSMART platform has been adopted and evaluated by numerous pharmaceutical companies, not-for-profits and patient advocacy groups, academics, governmental organisations and service providers. At the Bio-IT World industry conference both the Innovative Medicines Initiative's U-BIOPRED project and The Michael J. Fox Foundation were awarded a Best Practices Award for their application of the platform.
tranSMART is built on top of the i2b2 clinical data warehouse and leverages the i2b2 star schema for modelling clinical and low-dimensional data. High-dimensional omics data is stored in dedicated tables where each of the data types (e.g., gene expression, SNP or metabolomics) retains its specific data structure. Both the Oracle and PostgreSQL database management systems are supported for its data storage.
tranSMART 17.1
Development project
Researchers reported missing functionalities in the earlier versions of tranSMART (version 16.2 and before), which restricted the capabilities of the tool and opportunities for research. In response, tranSMART Foundation brought together four leading pharmaceutical companies – Pfizer, Sanofi, Abbvie and Roche – together in October 2016 to help sponsor a joint project to develop the new functionality in a coherent way and improve tranSMART. The Hyve BV was the IT company responsibl |
https://en.wikipedia.org/wiki/Alpha-2%20adrenergic%20receptor | The alpha-2 (α2) adrenergic receptor (or adrenoceptor) is a G protein-coupled receptor (GPCR) associated with the Gi heterotrimeric G-protein. It consists of three highly homologous subtypes, including α2A-, α2B-, and α2C-adrenergic. Some species other than humans express a fourth α2D-adrenergic receptor as well. Catecholamines like norepinephrine (noradrenaline) and epinephrine (adrenaline) signal through the α2-adrenergic receptor in the central and peripheral nervous systems.
Cellular localization
The α2A adrenergic receptor is localised in the following central nervous system (CNS) structures:
Brainstem (especially the locus coeruleus, as a presynaptic & somatodendritic autoreceptor)
Midbrain
Hypothalamus
Olfactory system
Hippocampus
Spinal cord
Cerebral cortex
Cerebellum
Septum
Whereas the α2B adrenergic receptor is localised in the following CNS structures:
Thalamus
Pyramidal layer of the hippocampus
Cerebellar Purkinje layer
and the α2C adrenergic receptor is localised in the CNS structures:
Midbrain
Thalamus
Amygdala
Dorsal root ganglia
Olfactory system
Hippocampus
Cerebral cortex
Basal ganglia
Substantia nigra
Ventral tegmentum
Effects
The α2-adrenergic receptor is classically located on vascular prejunctional terminals where it inhibits the release of norepinephrine (noradrenaline) in a form of negative feedback. It is also located on the vascular smooth muscle cells of certain blood vessels, such as those found in skin arterioles or on veins, where it sits alongside the more plentiful α1-adrenergic receptor. The α2-adrenergic receptor binds both norepinephrine released by sympathetic postganglionic fibers and epinephrine (adrenaline) released by the adrenal medulla, binding norepinephrine with slightly higher affinity. It has several general functions in common with the α1-adrenergic receptor, but also has specific effects of its own. Agonists (activators) of the α2-adrenergic receptor are frequently used in anaesthesia where |
https://en.wikipedia.org/wiki/Ordinal%20number | In set theory, an ordinal number, or ordinal, is a generalization of ordinal numerals (first, second, nth, etc.) aimed at extending enumeration to infinite sets.
A finite set can be enumerated by successively labeling each element with the least natural number that has not been previously used. To extend this process to various infinite sets, ordinal numbers are defined more generally as linearly ordered labels that include the natural numbers and have the property that every set of ordinals has a least element (this is needed for giving a meaning to "the least unused element"). This more general definition allows us to define an ordinal number (omega) that is greater than every natural number, along with ordinal numbers , , etc., which are even greater than .
A linear order such that every non-empty subset has a least element is called a well-order. The axiom of choice implies that every set can be well-ordered, and given two well-ordered sets, one is isomorphic to an initial segment of the other. So ordinal numbers exist and are essentially unique.
Ordinal numbers are distinct from cardinal numbers, which measure the size of sets. Although the distinction between ordinals and cardinals is not always apparent on finite sets (one can go from one to the other just by counting labels), they are very different in the infinite case, where different infinite ordinals can correspond to sets having the same cardinal. Like other kinds of numbers, ordinals can be added, multiplied, and exponentiated, although none of these operations are commutative.
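The failure of commutativity already shows up at ω. For example:

```latex
1 + \omega = \omega \neq \omega + 1,
\qquad
2 \cdot \omega = \omega \neq \omega \cdot 2 = \omega + \omega .
```

Prepending one element to an infinite increasing sequence leaves its order type unchanged, whereas appending one creates a new greatest element; similarly, ω copies of a pair interleave back into ω, while two copies of ω stacked end to end do not.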
Ordinals were introduced by Georg Cantor in 1883 in order to accommodate infinite sequences and classify derived sets, which he had previously introduced in 1872 while studying the uniqueness of trigonometric series.
Ordinals extend the natural numbers
A natural number (which, in this context, includes the number 0) can be used for two purposes: to describe the size of a set, or to describe the position of an element in a se |
https://en.wikipedia.org/wiki/Act%201%20adaptor%20protein | Act 1 adaptor protein (Act1) is an essential intermediate in the interleukin-17 (IL-17) pathway. IL-17 is a pro-inflammatory cytokine important for tissue inflammation in host defense against infection and in autoimmune disease. It is produced by CD4+ T cells, in particular Th17 cells. There are six subtypes of IL-17, IL-17A to IL-17F, with nearly identical structures. The cytokines interact homotypically, but IL-17A and IL-17F are also capable of heterotypic interaction.
Each cytokine has its own receptor, IL-17RA to IL-17RE, and their pathways are still under investigation. It has been shown that these receptors do not use MyD88 and IRAK in their signaling pathways. Instead they use the adaptor protein Act1 and TRAF-family proteins to activate nuclear factor-kappa B (NF-κB), a transcription factor involved in the immune response (see the pathway explanation below). Act1 has several binding sites, which can physically attach the different components in order to activate them.
Act1 is crucial in the IL-17 signaling pathway. Moreover, this protein is only used in cells expressing CD40 and CD40L, which are also from the tumor necrosis factor receptor superfamily and, more importantly, are expressed by B cells. It can also be expressed by other cell types such as epithelial cells, monocytes, basophils, dendritic cells, fibroblasts, smooth muscle cells and endothelial cells. Therefore, Act1 has a very wide impact on the immune system.
Act1 malfunction could induce autoimmunity (see below).
Structure
Some studies have been done in order to understand the structure of each subunit, and their importance in the protein function:
C-Terminal SEFIR domain (residues 394 to 574): This domain is shared by the Act1 protein and the IL-17R. It has been shown that the two components interact physically through this domain. It enables the IL-17 pathway to start by direct contact between the receptor and th |
https://en.wikipedia.org/wiki/Frontal%20lobe | The frontal lobe is the largest of the four major lobes of the brain in mammals, and is located at the front of each cerebral hemisphere (in front of the parietal lobe and the temporal lobe). It is separated from the parietal lobe by a groove between tissues called the central sulcus, and from the temporal lobe by a deeper groove called the lateral sulcus (Sylvian fissure). The most anterior rounded part of the frontal lobe (though not well-defined) is known as the frontal pole, one of the three poles of the cerebrum.
The frontal lobe is covered by the frontal cortex. The frontal cortex includes the premotor cortex and the primary motor cortex – parts of the motor cortex. The front part of the frontal cortex is covered by the prefrontal cortex. The nonprimary motor cortex is a functionally defined portion of the frontal lobe.
There are four principal gyri in the frontal lobe. The precentral gyrus is directly anterior to the central sulcus, running parallel to it and contains the primary motor cortex, which controls voluntary movements of specific body parts. Three horizontally arranged subsections of the frontal gyrus are the superior frontal gyrus, the middle frontal gyrus, and the inferior frontal gyrus. The inferior frontal gyrus is divided into three parts – the orbital part, the triangular part and the opercular part.
The frontal lobe contains most of the dopaminergic neurons in the cerebral cortex. The dopaminergic pathways are associated with reward, attention, short-term memory tasks, planning, and motivation. Dopamine tends to limit and select sensory information coming from the thalamus to the forebrain.
Structure
The frontal lobe is the largest lobe of the brain and makes up about a third of the surface area of each hemisphere. On the lateral surface of each hemisphere, the central sulcus separates the frontal lobe from the parietal lobe. The lateral sulcus separates the frontal lobe from the temporal lobe.
The frontal lobe can be divided into a later |
https://en.wikipedia.org/wiki/MirOS%20BSD | MirOS BSD (originally called MirBSD) is a free and open source operating system which started as a fork of OpenBSD 3.1 in August 2002. It was intended to maintain the security of OpenBSD with better support for European localisation. Since then it has also incorporated code from other free BSD descendants, including NetBSD, MicroBSD and FreeBSD. Code from MirOS BSD was also incorporated into ekkoBSD, and when ekkoBSD ceased to exist, artwork, code and developers ended up working on MirOS BSD for a while.
Unlike the three major BSD distributions, MirOS BSD supports only the x86 and SPARC architectures.
One of the project's goals was to be able to port the MirOS userland to run on the Linux kernel, hence the deprecation of the MirBSD name in favour of MirOS.
History
MirOS BSD originated as OpenBSD-current-mirabilos, an OpenBSD patchkit, but soon grew into a project of its own after some differences in opinion between the OpenBSD project leader Theo de Raadt and Thorsten Glaser. Despite the fork, MirOS BSD was synchronised with the ongoing development of OpenBSD, as well as NetBSD and other BSD flavours, thus inheriting most of OpenBSD's good security history.
One goal was to provide a faster integration cycle for new features and software than OpenBSD. According to the developers, "controversial decisions are often made differently from OpenBSD; for instance, there won't be any support for SMP in MirOS". There will also be a more tolerant software inclusion policy, and "the end result is, hopefully, a more refined BSD experience".
Another goal of MirOS BSD was to create a more "modular" base BSD system, similar to Debian. While MirOS Linux (Linux kernel + BSD userland) was discussed by the developers sometime in 2004, it has not materialised.
Features
Development snapshots are combined live and installation CDs for the x86 and SPARC architectures on one medium, via the DuaLive technology.
Latest snapshots have been extended to further boot a grml (a Linux-based rescue system, x86 only) via |
https://en.wikipedia.org/wiki/VARAN | VARAN (Versatile Automation Random Access Network) is a Fieldbus Ethernet-based industrial communication system.
VARAN is a wired data network technology for local data networks (LAN) with the main application in the field of automation technology. It enables the exchange of data in the form of data frames between all LAN connected devices (controllers, input/output devices, drives, etc.).
VARAN includes the definitions for types of cables and connectors, describes the physical signalling, and specifies packet formats and protocols. From the perspective of the OSI model, VARAN specifies both the physical layer (OSI layer 1) and the data link layer (OSI layer 2). VARAN is a master-slave protocol. The VARAN BUS USER ORGANIZATION (VNO) is responsible for maintaining the protocol.
https://en.wikipedia.org/wiki/Telescope%20Array%20Project | The Telescope Array project is an international collaboration involving research and educational institutions in Japan, the United States, Russia, South Korea, and Belgium. The experiment is designed to observe air showers induced by ultra-high-energy cosmic rays using a combination of ground array and air-fluorescence techniques. It is located in the high desert in Millard County, Utah, United States, at about above sea level.
Overview
The Telescope Array observatory is a hybrid detector system consisting of both an array of 507 scintillation surface detectors (SD) which measure the distribution of charged particles at the Earth's surface, and three fluorescence stations which observe the night sky above the SD array. Each fluorescence station is also accompanied by a LIDAR system for atmospheric monitoring. The SD array is much like that of the AGASA group, but covers an area that is nine times larger. The hybrid setup of the Telescope Array project allows for simultaneous observation of both the longitudinal development and the lateral distribution of the air showers. When a cosmic ray passes through the earth's atmosphere and triggers an air shower, the fluorescence telescopes measure the scintillation light generated as the shower passes through the gas of the atmosphere, while the array of scintillator surface detectors samples the footprint of the shower when it reaches the Earth's surface.
At the center of the ground array is the Central Laser Facility which is used for atmospheric monitoring and calibrations.
Surface detector
The surface detectors that make up the ground array are activated when ionizing particles from an extensive air shower pass through them. When these particles pass through the plastic scintillator within the detector, they induce photoelectrons, which are then gathered by wavelength-shifting fibers and sent to a photomultiplier tube. The electronic components within the detectors then filter the results, giving the detectors co |
https://en.wikipedia.org/wiki/Faraday%27s%20ice%20pail%20experiment | Faraday's ice pail experiment is a simple electrostatics experiment performed in 1843 by British scientist Michael Faraday that demonstrates the effect of electrostatic induction on a conducting container. For a container, Faraday used a metal pail made to hold ice, which gave the experiment its name. The experiment shows that an electric charge enclosed inside a conducting shell induces an equal charge on the shell, and that in an electrically conducting body, the charge resides entirely on the surface. It also demonstrates the principles behind electromagnetic shielding such as employed in the Faraday cage. The ice pail experiment was the first precise quantitative experiment on electrostatic charge. It is still used today in lecture demonstrations and physics laboratory courses to teach the principles of electrostatics.
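The result that an enclosed charge induces an equal charge on the shell has a standard Gauss's-law reading (a sketch; the symbols here are modern notation, not Faraday's): taking a closed surface S entirely inside the conducting wall, where the electric field vanishes, the total enclosed charge must be zero, so a charge +Q inside the pail induces −Q on the inner surface and, by charge conservation, +Q on the outer surface, which is what the electrometer registers.

```latex
\oint_{S} \mathbf{E}\cdot d\mathbf{A} \;=\; \frac{Q_{\text{enc}}}{\varepsilon_0} \;=\; 0
\quad\Longrightarrow\quad
Q_{\text{inner}} = -Q, \qquad Q_{\text{outer}} = +Q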
Description of experiment
Faraday's description of the experiment, from a letter he wrote on February 4, 1843 to Richard Phillips, the editor of Philosophical Journal, and published in the March 1844 issue:
"Let A in the diagram represent an insulated pewter ice-pail...connected by a wire to a delicate gold-leaf electrometer E, and let C be a round brass ball insulated by a dry thread of white silk, three or four feet in length, so as to remove the influence of the hand holding it from the ice-pail below. Let A be perfectly discharged, and then let C be charged at a distance by a [electrostatic] machine or Leyden jar, and introduced into A.. If C be positive, E will also diverge positively; if C be taken away, E will collapse perfectly... As C enters the vessel A the divergence of E will increase until C is ... below the edge of the vessel, and will remain quite steady and unchanged for any greater depression. This shows that at that distance the inductive action of C is entirely exerted upon the interior of A, ... If C is made to touch the bottom of A, all of its charge is communicated to A, ... and C, upon being withdrawn, ... is found to |
https://en.wikipedia.org/wiki/Thermocline | A thermocline (also known as the thermal layer or the metalimnion in lakes) is
a distinct layer within a large body of fluid (e.g. water, as in an ocean or lake, or air, as in an atmosphere) in which temperature changes much more rapidly with depth than it does in the layers above or below. In the ocean, the thermocline divides the upper mixed layer from the calm deep water below.
Depending largely on season, latitude, and turbulent mixing by wind, thermoclines may be a semi-permanent feature of the body of water in which they occur, or they may form temporarily in response to phenomena such as the radiative heating/cooling of surface water during the day/night. Factors that affect the depth and thickness of a thermocline include seasonal weather variations, latitude, and local environmental conditions, such as tides and currents.
Oceans
Most of the heat energy of the sunlight that strikes the Earth is absorbed in the first few centimeters at the ocean's surface, which heats during the day and cools at night as heat energy is lost to space by radiation. Waves mix the water near the surface layer and distribute heat to deeper water, such that the temperature may be relatively uniform in the upper , depending on wave strength and the existence of surface turbulence caused by currents. Below this mixed layer, the temperature remains relatively stable over day/night cycles. The temperature of the deep ocean drops gradually with depth. As saline water does not freeze until it reaches (colder as depth and pressure increase), the temperature well below the surface is usually not far from zero degrees.
The thermocline varies in depth. It is semi-permanent in the tropics, variable in temperate regions and shallow to nonexistent in the polar regions, where the water column is cold from the surface to the bottom. A layer of sea ice will act as an insulation blanket. The first accurate global measurements were made during the oceanographic expedition of HMS Challenger.
In th |
https://en.wikipedia.org/wiki/Divar%20%28website%29 | Divar (; "wall") is an Iranian Persian classified ads and e-commerce mobile app and online platform for users in Iran, founded in Iran in 2012, with sections devoted to real estate, vehicles, goods, community services, industrial equipment, and jobs. According to its latest published annual report, Divar's users post more than 139.7 million new ads per year and over 53.1 million users open the app annually.
Ads
To use this app, users must register, have a valid National Iranian ID number, and they must pay for posting more than 3 ads.
Control and censor
The website periodically censors prices of cars, house rentals, and real estate.
The app has placed 3,500 words on its blacklist; if a user uses these words, the user's chat or post will be blocked. Masseuse/massage ads are censored and deleted.
Features
Night Mode
Screen reader support
Accessibility
Foreign investment
One Dutch company has invested $47 million in the company.
Service
The app sorts ads by Iranian cities, districts, and categories:
Real estate
Vehicles
Electronics
Home
Services
Personal
Entertainment and leisure
Social
Business
Job hiring and employment
Award
Iran web and mobile festival – best accessibility features
See also
Iran economy
Sheypoor (software) |
https://en.wikipedia.org/wiki/One-compartment%20kinetics | One-compartment kinetics for a chemical compound specifies that the uptake in the compartment is proportional to the concentration outside the compartment, and the elimination is proportional to the concentration inside the compartment. Both the compartment and the environment outside the compartment are considered to be homogeneous (well mixed). The compartment typically represents some organism (e.g. a fish or a daphnid).
This model is used in the simplest versions of the DEBtox method for the quantification of effects of toxicants. |
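The uptake-proportional-to-outside, elimination-proportional-to-inside rule is the linear ODE dC/dt = k_u·C_env − k_e·C. For a constant external concentration it has a closed-form solution, sketched below (function name, parameter names, and numeric values are illustrative, not from the source):

```python
import math

def internal_concentration(t, c_env, k_u, k_e, c0=0.0):
    """Closed-form solution of dC/dt = k_u*c_env - k_e*C for constant c_env.

    k_u: uptake rate constant; k_e: elimination rate constant; c0: C at t=0.
    """
    steady = k_u * c_env / k_e                      # steady-state concentration
    return steady + (c0 - steady) * math.exp(-k_e * t)

# Illustrative run: the concentration relaxes exponentially toward
# the steady state k_u*c_env/k_e (= 10.0 for these sample parameters).
c = internal_concentration(t=10.0, c_env=2.0, k_u=0.5, k_e=0.1)
print(round(c, 3))
```

The ratio k_u/k_e at steady state is what bioconcentration-factor estimates in such models rest on.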
https://en.wikipedia.org/wiki/Viscous%20stress%20tensor | The viscous stress tensor is a tensor used in continuum mechanics to model the part of the stress at a point within some material that can be attributed to the strain rate, the rate at which it is deforming around that point.
The viscous stress tensor is formally similar to the elastic stress tensor (Cauchy tensor) that describes internal forces in an elastic material due to its deformation. Both tensors map the normal vector of a surface element to the density and direction of the stress acting on that surface element. However, elastic stress is due to the amount of deformation (strain), while viscous stress is due to the rate of change of deformation over time (strain rate). In viscoelastic materials, whose behavior is intermediate between those of liquids and solids, the total stress tensor comprises both viscous and elastic ("static") components. For a completely fluid material, the elastic term reduces to the hydrostatic pressure.
In an arbitrary coordinate system, the viscous stress ε and the strain rate E at a specific point and time can be represented by 3 × 3 matrices of real numbers. In many situations there is an approximately linear relation between those matrices; that is, a fourth-order viscosity tensor μ such that ε = μE. The tensor μ has four indices and consists of 3 × 3 × 3 × 3 real numbers (of which only 21 are independent). In a Newtonian fluid, by definition, the relation between ε and E is perfectly linear, and the viscosity tensor μ is independent of the state of motion or stress in the fluid. If the fluid is isotropic as well as Newtonian, the viscosity tensor will have only three independent real parameters: a bulk viscosity coefficient, that defines the resistance of the medium to gradual uniform compression; a dynamic viscosity coefficient that expresses its resistance to gradual shearing; and a rotational viscosity coefficient, which results from a coupling between the fluid flow and the rotation of the individual particles. In the absence of such a |
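A minimal sketch of the linear stress/strain-rate relation in index notation, writing ε for the viscous stress tensor, E for the strain rate tensor, and μ for the fourth-order viscosity tensor (symbol names assumed here, chosen to match common usage rather than taken from the source):

```latex
\varepsilon_{ij} \;=\; \sum_{k=1}^{3}\sum_{l=1}^{3} \mu_{ijkl}\, E_{kl},
\qquad i,j \in \{1,2,3\}
```

Each of the four indices ranges over the three spatial directions, which is why μ comprises 3 × 3 × 3 × 3 entries before symmetry constraints reduce the independent ones.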
https://en.wikipedia.org/wiki/Phylogenetic%20inference%20using%20transcriptomic%20data | In molecular phylogenetics, relationships among individuals are determined using character traits, such as DNA, RNA or protein, which may be obtained using a variety of sequencing technologies. High-throughput next-generation sequencing has become a popular technique in transcriptomics, which represent a snapshot of gene expression. In eukaryotes, making phylogenetic inferences using RNA is complicated by alternative splicing, which produces multiple transcripts from a single gene. As such, a variety of approaches may be used to improve phylogenetic inference using transcriptomic data obtained from RNA-Seq and processed using computational phylogenetics.
Sequence acquisition
There have been several transcriptomics technologies used to gather sequence information on transcriptomes. However, the most widely used is RNA-Seq.
RNA-Seq
RNA reads may be obtained using a variety of RNA-seq methods.
Public databases
There are a number of public databases that contain freely available RNA-Seq data.
Assembly
Sequence assembly
RNA-Seq data may be directly assembled into transcripts using sequence assembly.
Two main categories of sequence assembly are often distinguished:
de novo transcriptome assembly - especially important when a reference genome is not available for a given species.
Genome-guided assembly (sometimes mapping or reference-guided assembly) - is capable of using a pre-existing reference to guide the assembly of transcripts
Both methods attempt to generate biologically representative isoform-level constructs from RNA-seq data and generally attempt to associate isoforms with a gene-level construct. However, proper identification of gene-level constructs may be complicated by recent duplications, paralogs, alternative splicing or gene fusions. These complications may also cause downstream issues during ortholog inference. When selecting or generating sequence data, it is also vital to consider the tissue type, developmental stage and environmental conditions |
https://en.wikipedia.org/wiki/McGurk%20effect | The McGurk effect is a perceptual phenomenon that demonstrates an interaction between hearing and vision in speech perception. The illusion occurs when the auditory component of one sound is paired with the visual component of another sound, leading to the perception of a third sound. The visual information a person gets from seeing a person speak changes the way they hear the sound. If a person is getting poor-quality auditory information but good-quality visual information, they may be more likely to experience the McGurk effect. Integration abilities for audio and visual information may also influence whether a person will experience the effect. People who are better at sensory integration have been shown to be more susceptible to the effect. Many people are affected differently by the McGurk effect based on many factors, including brain damage and other disorders.
Background
It was first described in 1976 in a paper by Harry McGurk and John MacDonald, titled "Hearing Lips and Seeing Voices" in Nature (23 December 1976). This effect was discovered by accident when McGurk and his research assistant, MacDonald, asked a technician to dub a video with a different phoneme from the one spoken while conducting a study on how infants perceive language at different developmental stages. When the video was played back, both researchers heard a third phoneme rather than the one spoken or mouthed in the video.
This effect may be experienced when a video of one phoneme's production is dubbed with a sound-recording of a different phoneme being spoken. Often, the perceived phoneme is a third, intermediate phoneme. As an example, the syllables /ba-ba/ are spoken over the lip movements of /ga-ga/, and the perception is of /da-da/. McGurk and MacDonald originally believed that this resulted from the common phonetic and visual properties of /b/ and /g/. Two types of illusion in response to incongruent audiovisual stimuli have been observed: fusions ('ba' auditory and 'ga' visual |
https://en.wikipedia.org/wiki/Sardinas%E2%80%93Patterson%20algorithm | In coding theory, the Sardinas–Patterson algorithm is a classical algorithm for determining in polynomial time whether a given variable-length code is uniquely decodable, named after August Albert Sardinas and George W. Patterson, who published it in 1953. The algorithm carries out a systematic search for a string which admits two different decompositions into codewords. As Knuth reports, the algorithm was rediscovered about ten years later in 1963 by Floyd, despite the fact that it was at the time already well known in coding theory.
Idea of the algorithm
Consider the code {a = 1, b = 011, c = 01110, d = 1110, e = 10011}. This code, which is based on an example by Berstel, is an example of a code which is not uniquely decodable, since the string
011101110011
can be interpreted as the sequence of codewords
01110 – 1110 – 011,
but also as the sequence of codewords
011 – 1 – 011 – 10011.
Two possible decodings of this encoded string are thus given by cdb and babe.
In general, such a string can be found by the following idea: In the first round, we choose two codewords x and y such that x is a prefix of y, that is,
y = xw for some "dangling suffix" w. If one tries first x = 011 and y = 01110, the dangling suffix is w = 10. If we manage to find two sequences u and v of codewords such that
wu = v, then we are finished: for then the string yu = xwu can alternatively be decomposed as xv, and we have found the desired string having at least two different decompositions into codewords.
In the second round, we try out two different approaches: the first trial is to look for a codeword that has w as a prefix. Then we obtain a new dangling suffix w', with which we can continue our search. If we eventually encounter a dangling suffix that is itself a codeword (or the empty word), then the search will terminate, as we know there exists a string with two decompositions. The second trial is to seek a codeword that is itself a prefix of w. In our example, we have w = 10, and the sequence 1 is a codeword. We can thus also continue with w' = 0 as the new dangling suffix.
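The round-by-round search for dangling suffixes can be sketched as a fixed-point computation over sets of suffixes (a minimal sketch of the Sardinas–Patterson test; function and variable names are my own, not from the source):

```python
def is_uniquely_decodable(code):
    """Sardinas-Patterson test: True iff no string has two codeword decompositions."""
    code = set(code)

    def dangling(a_set, b_set):
        # Non-empty suffixes w such that a + w == b for some a in a_set, b in b_set.
        return {b[len(a):] for a in a_set for b in b_set
                if b.startswith(a) and len(b) > len(a)}

    suffixes = dangling(code, code)          # first round: codeword vs. codeword
    seen = set()
    while suffixes:
        if suffixes & code:                  # a dangling suffix is itself a codeword
            return False
        seen |= suffixes
        # next round: match current suffixes against codewords in both directions
        suffixes = (dangling(code, suffixes) | dangling(suffixes, code)) - seen
    return True

# Berstel's example code from the text: not uniquely decodable.
print(is_uniquely_decodable({"1", "011", "01110", "1110", "10011"}))  # False
```

Since every dangling suffix is a suffix of some codeword, the set of candidates is finite, and the `seen` set guarantees termination; a prefix code such as {0, 10, 110} passes immediately because the first round produces no dangling suffixes at all.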
Preci |
https://en.wikipedia.org/wiki/Polygon%20experiment | The POLYGON experiment was a pioneer experiment in oceanography conducted in the middle of the Atlantic Ocean during the 1970s. The experiment, led by Leonid Brekhovskikh, was the first to establish the existence of so-called mesoscale eddies, eddies at the and 100-day scale, which triggered the "mesoscale revolution". The existence of mesoscale eddies was predicted by Henry Stommel in the 1960s, but there was no way to observe them with traditional sampling methods.
Setup and results
POLYGON was led by Leonid Brekhovskikh, from the Andreev Acoustics Institute, and involved six research vessels and an extensive network of current meters. The current meters were arranged in a cross, spanning a region of 113 by 113 nautical miles dubbed the "polygon". The experiment recorded temperature and flow, replacing the meters every 25 days, while taking care that the replacements would not create gaps in the data. The research vessels involved were the Akademik Kurchatov, the Dmitri Mendeleev, the Andrei Vil'kitskii, the Akademik Vernadskii, the Sergei Vavilov and the Pyotr Lebedev.
Of the results, Brekhovskikh wrote in the original breakthrough article: "Even with somewhat less sophisticated gear than was desirable, the results... exceeded all expectations in terms of ... the significance of the scientific results obtained. Undoubtedly the experience... will be very useful in the preparation for the forthcoming international campaign MODE... It looks as though some largescale eddy or wave disturbances were travelling across the POLYGON site from east to west. Their scales were close to those of the planetary baroclinic Rossby waves..."
Follow up
POLYGON was followed by the MODE experiment (Mid Ocean Dynamics Experiment) led by Henry Stommel, and the POLYMODE experiment by Andrei Monin. Walter Munk commented that the POLYGON experiment "ignited the mesoscale revolution [and that] MODE defined the new order" and that "oceanography has never been the same" since.
Notes |
https://en.wikipedia.org/wiki/Matthew%20N.O.%20Sadiku | Matthew Nojimu Olanipekun Sadiku from the Prairie View A&M University, Cypress, TX was named Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2013 "for contributions to computational electromagnetics and engineering education".
He is a co-author of the textbook Fundamentals of Electric Circuits with Charles K. Alexander.
https://en.wikipedia.org/wiki/List%20of%20IEEE%20awards | Through its awards program, the Institute of Electrical and Electronics Engineers recognizes contributions that advance the fields of interest to the IEEE. For nearly a century, the IEEE Awards Program has paid tribute to technical professionals whose exceptional achievements and outstanding contributions have made a lasting impact on technology, society and the engineering profession. The IEEE Medals and IEEE Technical Field Awards are institution-level awards. They are considered more prestigious than IEEE Society-level awards and are administered by the IEEE Awards Board. Each year, the IEEE Board of Directors approves the winners of these prestigious medals and awards at their annual board meeting. An IEEE Honors Ceremony is organized and held in New York each year to present the medals and awards to the recipients.
Funds for the awards program, other than those provided by corporate sponsors for some awards, are administered by the IEEE Foundation.
IEEE Medals
IEEE top medal and highest honor
IEEE Medal of Honor
Medals not specific to a technology field
IEEE Edison Medal (IEEE's principal medal for a meritorious career)
IEEE Founders Medal (for leadership, planning, and administration)
IEEE Mildred Dresselhaus Medal (first presentation scheduled for June 2021)
IEEE James H. Mulligan, Jr. Education Medal
Medals in specific technology areas
IEEE Frances E. Allen Medal (for computing, first presentation scheduled for June 2022)
IEEE Alexander Graham Bell Medal (for telecommunications engineering)
IEEE Medal for Environmental and Safety Technologies
IEEE Richard W. Hamming Medal (for information theory and coding)
IEEE Medal for Innovations in Healthcare Technology
IEEE Jack S. Kilby Signal Processing Medal (for signal processing)
IEEE/RSE James Clerk Maxwell Medal (for electronics and telecommunications)
IEEE Jun-ichi Nishizawa Medal (for materials and device sciences)
IEEE Robert N. Noyce Medal (for microelectronics)
IEEE Dennis J. Picard Medal |
https://en.wikipedia.org/wiki/Demidov%20Prize | The Demidov Prize () is a national scientific prize in Russia awarded annually to the members of the Russian Academy of Sciences. Originally awarded from 1832 to 1866 in the Russian Empire, it was revived by the government of Russia's Sverdlovsk Oblast in 1993. In its original incarnation it was one of the first annual scientific awards, and its traditions influenced other awards of this kind including the Nobel Prize.
History
In 1831 Count Pavel Nikolaievich Demidov, representative of the famous Demidov family, established a scientific prize in his name. The Saint Petersburg Academy of Sciences (now the Russian Academy of Sciences) was chosen as the awarding institution. In 1832 the president of the Petersburg Academy of Sciences, Sergei Uvarov, awarded the first prizes.
From 1832 to 1866 the Academy awarded 55 full prizes (5,000 rubles) and 220 part prizes. Among the winners were many prominent Russian scientists: the founder of field surgery and inventor of the plaster immobilisation method in treatment of fractures, Nikolai Pirogov; the seafarer and geographer Adam Johann von Krusenstern, who led the first Russian circumnavigation of the globe; Dmitri Mendeleev, the creator of the periodic table of elements; Boris Jacobi, pioneer of the first usable electric motors; and many others. One of the recipients was the founder's younger brother, Count Anatoly Nikolaievich Demidov, 1st Prince of San Donato, in 1847; Pavel had died in 1840, making Anatoly the Count Demidov (note that Russia did not recognize Anatoly's Italian title of prince).
From 1866, 25 years after Count Demidov's death and in accordance with the terms of his bequest, there were no more awards.
In 1993, on the initiative of the vice-president of the Russian Academy of Sciences Gennady Mesyats and the governor of the Sverdlovsk Oblast Eduard Rossel, the Demidov Prize traditions were restored. The prize is awarded for outstanding achievements in natural sciences and humanities. The winners are elect |
https://en.wikipedia.org/wiki/The%20End%20of%20Average | The End of Average: How We Succeed in a World That Values Sameness is a book by Todd Rose. It was published by HarperCollins in 2016, and talks about the importance of individuality rather than the concept of average human beings.
In this book, the author argues that no individual can be accurately labeled as average. He presents an alternative to understanding individuals solely based on averages. He introduces three principles of individuality - the jaggedness principle (talent is always uneven), the context principle (traits are not fixed), and the pathways principle (we often take unconventional routes).
According to The New York Times, “Readers will be moved to examine their own averagerian prejudices, most so ingrained as to be almost invisible, all worthy of review.” As per Kirkus Reviews, the book is “an intriguing view into the evolution and imperfections of our current system but lacks a clear path toward implementing the proposed principles of individuality.” |
https://en.wikipedia.org/wiki/Paytm%20Payments%20Bank | Paytm Payments Bank (PPBL) is an Indian payments bank, founded in 2017, headquartered in Noida, and part of the mobile payment company Paytm. In the same year, it received the license to run a payments bank from the Reserve Bank of India and was launched in November 2017. In 2021, the bank received scheduled bank status from the RBI.
Vijay Shekhar Sharma holds 51 per cent in the entity, with One97 Communications holding 49 per cent. Vijay Shekhar Sharma is the promoter of Paytm Payments Bank, and One97 Communications Limited is not categorized as one of its promoters.
History
In 2015, Paytm Payments Bank Limited had received in-principle approval from the Reserve Bank of India to set up a payments bank and was formally inaugurated on November 28, 2017.
In the financial year 2020, the bank facilitated more than 485 crore transactions worth ₹4.6 lakh crore. It processed over 778 million UPI transactions amounting to ₹89,388 crore in June 2022 and continues to be India’s biggest UPI beneficiary bank with over 1,370 million digital transactions in June 2022.
In March 2021, the bank received approval for its @Paytm UPI handle from the Securities and Exchange Board of India for issuing payment mandates for initial public offerings (IPOs) through the Unified Payments Interface (UPI).
Products and services
Paytm Payments Bank offers savings and current accounts with a debit card, facilitating fast and easy payments.
Paytm Payments Bank has issued seven million Visa debit cards through its platform in FY'21.
Partnership
In January 2018, Paytm Payments Bank partnered with IndusInd Bank to offer fixed deposits. It entered into a partnership with MasterCard for the issuance of virtual and physical debit cards in April 2020. In January 2021, it tied up with Suryoday Small Finance Bank to offer fixed deposit services to its account holders. Since June 2021, it has not offered new fixed deposit creation with Suryoday Bank.
Financials
Paytm Payments Bank repo |
https://en.wikipedia.org/wiki/PTC%20Therapeutics | PTC Therapeutics is a US pharmaceutical company focused on the development of orally administered small molecule drugs and gene therapy which regulate gene expression by targeting post-transcriptional control (PTC) mechanisms in orphan diseases.
In September 2009, PTC entered into an agreement with Roche for the development of orally bioavailable small molecules for central nervous system diseases. PTC acquired the Bio-e platform in 2019.
Products
In 2017, PTC acquired Emflaza (deflazacort) from Marathon Pharmaceuticals. PTC also owns Translarna (ataluren), marketed for nonsense mutation Duchenne muscular dystrophy. Together, the two products generated revenues of $174 million and $260 million in 2017 and 2018, respectively.
PTC has the commercialization rights for WAYLIVRA (volanesorsen) in Latin America.
Pipeline
In 2018, PTC acquired Agilis Biotherapeutics and its gene therapy candidate, GT-AADC, which had shown promising clinical data in treating aromatic L-amino acid decarboxylase (AADC) deficiency. AADC deficiency is a rare CNS disorder arising from reductions in the enzyme AADC that result from mutations in the dopa decarboxylase (DDC) gene.
In 2020, PTC acquired Censa Pharmaceuticals, Inc., a biopharmaceutical company focused on the development of CNSA-001 (sepiapterin), a clinical-stage investigational therapy for orphan metabolic diseases, including phenylketonuria (PKU) and other diseases associated with defects in the tetrahydrobiopterin (BH4) biochemical pathways diagnosed at birth.
In 2020, PTC announced the FDA approval of Evrysdi (risdiplam) for the treatment of spinal muscular atrophy (SMA) in adults and children 2 months and older. |
https://en.wikipedia.org/wiki/EOS%20%288-bit%20operating%20system%29 | EOS is the built-in operating system of the Coleco Adam. There are bindings in high-level programming languages like BASIC.
Overview
The functions are grouped into categories as follows.
Executive calls
eos_init
eos_hard_init
eos_hard_reset_net
eos_delay_after_hard_reset
eos_synchronize_clocks
eos_scan_for_devices
eos_relocate_pcb
eos_soft_init
eos_exit_to_smartwriter
eos_switch_memory_banks
Console Output
eos_console_init
eos_console_display_regular
eos_console_display_special
Printer Interface
eos_print_character
eos_print_buffer
eos_printer_status
eos_start_print_character
eos_end_print_character
Keyboard Interface
eos_keyboard_status
eos_read_keyboard
eos_start_read_keyboard
eos_end_read_keyboard
File Operations
eos_file_manager_init
eos_check_directory_for_file
eos_find_file_1
eos_find_file_2
eos_find_file_in_fcb
eos_check_file_mode
eos_make_file
eos_update_file_in_directory
eos_open_file
eos_close_file
eos_read_file
eos_write_file
eos_trim_file
eos_initialize_directory
eos_reset_file
eos_get_date
eos_put_date
eos_delete_file
eos_rename_file
Device Operations
eos_find_pcb
eos_find_dcb
eos_request_device_status
eos_get_device_status
eos_soft_reset_device
eos_soft_reset_keyboard
eos_soft_reset_printer
eos_read_block
eos_read_one_block
eos_start_read_one_block
eos_end_read_one_block
eos_write_block
eos_write_one_block
eos_start_write_one_block
eos_end_write_one_block
eos_start_read_character_device
eos_end_read_character_device
eos_read_character_device
eos_start_write_character_device
eos_end_write_character_device
eos_write_character_device
Video RAM Management
eos_set_vdp_ports
eos_set_vram_table_address
eos_load_ascii_in_vdp
eos_put_ascii_in_vdp
eos_write_vram
eos_read_vram
eos_put_vram
eos_get_vram
eos_write_vdp_register
eos_read_vdp_register
eos_fill_vram
eos_calculate_pattern_position
eos_point_to_pattern_position
eos_write_sprite_table
Game Controllers
eos_read_ |
https://en.wikipedia.org/wiki/CDC%20display%20code | Display code is the six-bit character code used by many computer systems manufactured by Control Data Corporation, notably the CDC 6000 series in 1964, the 7600 in 1967, and the following Cyber series in 1971. The CDC 6000 series and their successors had 60-bit words; as such, typical usage packed 10 characters per word. It is a six-bit extension of the four-bit BCD encoding, and was referred to as BCDIC (BCD interchange code).
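The 10-characters-per-word packing is simple to sketch. The snippet below is illustrative only: it manipulates raw 6-bit codes as integers and does not reproduce the actual display-code character table.

```python
def pack_word(codes):
    """Pack up to ten 6-bit display codes into one 60-bit CDC word.

    Characters are stored left to right, i.e. the first character
    occupies the most significant 6 bits of the word.
    """
    assert len(codes) <= 10 and all(0 <= c < 64 for c in codes)
    word = 0
    for c in codes:
        word = (word << 6) | c
    # Left-justify: pad short words on the right with zero codes.
    word <<= 6 * (10 - len(codes))
    return word

def unpack_word(word):
    """Recover the ten 6-bit codes from a 60-bit word."""
    return [(word >> (6 * (9 - i))) & 0o77 for i in range(10)]

w = pack_word([0o01, 0o02, 0o03])   # three illustrative codes
assert w < 2 ** 60                  # fits in one 60-bit word
assert unpack_word(w)[:3] == [0o01, 0o02, 0o03]
```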
There were several variations of display code, notably the 63-character character set and the 64-character character set. There were also 'CDC graphic' and 'ASCII graphic' variants of both the 63- and 64-character sets. The choice between the 63- or 64-character character set, and between CDC or ASCII graphic, was site-selectable. Generally, early CDC customers started out with the 63-character character set and CDC graphic print trains on their line printers. As time-sharing became prevalent, almost all sites used the ASCII variant, so that line printer output would match interactive usage. Later CDC customers were also more likely to use the 64-character character set.
A later variation, called 6/12 display code, was used in the Kronos and NOS timesharing systems in order to support full ASCII capabilities. In 6/12 mode, an escape character (the circumflex, octal 76) would indicate that the following letter was lower case. Thus, upper case and other characters were 6 bits in length, and lower case characters were 12 bits in length.
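The 6/12 escape mechanism can be sketched as follows. The three-entry code table here is a hypothetical stand-in for the real display-code table, but the escape logic (octal 76 marking the next code as lower case) follows the description above:

```python
ESCAPE = 0o76  # circumflex: the following code denotes a lower-case letter

# Illustrative mapping of a few 6-bit codes to letters; the real
# display-code table is much larger and partly site-dependent.
LETTERS = {0o01: 'A', 0o02: 'B', 0o03: 'C'}

def decode_6_12(codes):
    """Decode a 6/12 stream: plain codes are upper case, escaped ones lower."""
    out = []
    lower = False
    for c in codes:
        if c == ESCAPE:
            lower = True
            continue
        ch = LETTERS.get(c, '?')
        out.append(ch.lower() if lower else ch)
        lower = False
    return ''.join(out)

# 'A' costs 6 bits; the escaped 'b' costs 12 bits (escape + code).
assert decode_6_12([0o01, 0o76, 0o02]) == 'Ab'
```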
The PLATO system used a further variant of 6/12 display code. Noting that lower case letters were most common in typical PLATO usage, the roles were reversed. Lower case letters were the norm, and the escape character preceded upper case letters.
The typical text file format used a zero-byte terminator to signify the end of each record. The zero-byte terminator was indicated by, at least, the final twelve bits of a 60-bit word being set to zero. The terminator could actually be anywhere from 12- to |
https://en.wikipedia.org/wiki/HACS | High Angle Control System (HACS) was a British anti-aircraft fire-control system employed by the Royal Navy from 1931 and used widely during World War II. HACS calculated the necessary deflection required to place an explosive shell in the location of a target flying at a known height, bearing and speed.
Early history
The HACS was first proposed in the 1920s and began to appear on Royal Navy (RN) ships in January 1930, when HACS I first went to sea. HACS I did not have any stabilization or power assist for director training. HACS III, which appeared in 1935, had provision for stabilization, was hydraulically driven, featured much improved data transmission, and introduced the HACS III table. The HACS III table (computer) had numerous improvements, including raising maximum target speed to 350 knots, continuous automatic fuze prediction, improved geometry in the deflection screen, and provisions for gyro inputs to provide stabilization of data received from the director. The HACS was a control system, made possible by an effective data transmission network between an external gun director, a below-decks fire control computer, and the ship's medium-calibre anti-aircraft (AA) guns.
Development
Operation
The bearing and altitude of the target was measured directly on the UD4 Height Finder/Range Finder, a coincidence rangefinder located in the High Angle Director Tower (HADT). The direction of travel was measured by aligning a binocular graticule with the target aircraft fuselage. The early versions of HACS, Mk. I through IV, did not measure target speed directly, but estimated this value based on the target type. All of these values were sent via selsyn to the HACS in the High Angle Calculating Position (HACP) located below decks. The HACS used these values to calculate the range rate (often called rate along in RN parlance), which is the apparent target motion along the line of sight. This was also printed on a paper plot so that a range rate officer could ass |
https://en.wikipedia.org/wiki/Shark%20repellent | A shark repellent is any method of driving sharks away from an area. Shark repellents are a category of animal repellents. Shark repellent technologies include magnetic shark repellents, electropositive shark repellents, electrical repellents, and semiochemicals. Shark repellents can be used to protect people from sharks by driving the sharks away from areas where they are likely to kill human beings. In other applications, they can be used to keep sharks away from areas where they may be a danger to themselves due to human activity. In this case, the shark repellent serves as a shark conservation method. There are some naturally occurring shark repellents; modern artificial shark repellents date to at least the 1940s, with the United States Navy using them in the Pacific Ocean theater of World War II.
Natural repellents
It has traditionally been believed that sharks are repelled by the smell of a dead shark; however, modern research has had mixed results.
The Pardachirus marmoratus fish (finless sole, Red Sea Moses sole) repels sharks through its secretions. The best-understood factor is pardaxin, acting as an irritant to the sharks' gills, but other chemicals have been identified as contributing to the repellent effect.
In 2017, the US Navy announced that it was developing a synthetic analog of hagfish slime with potential application as a shark repellent.
History
Some of the earliest research on shark repellents took place during the Second World War, when military services sought to minimize the risk to stranded aviators and sailors in the water. Research has continued to the present, with notable researchers including the Americans Eugenie Clark and later Samuel H. Gruber, who has conducted tests at the Bimini Sharklab in Bimini, and the Japanese scientist Kazuo Tachibana. Future celebrity chef Julia Child developed shark repellent while working for the Office of Strategic Services.
Initial work, which was based on historical research and studies at the time, focused |
https://en.wikipedia.org/wiki/Tesla%20%28microarchitecture%29 | Tesla is the codename for a GPU microarchitecture developed by Nvidia and released in 2006, as the successor to the Curie microarchitecture. It was named after the pioneering electrical engineer Nikola Tesla. As Nvidia's first microarchitecture to implement unified shaders, it was used with the GeForce 8 Series, GeForce 9 Series, GeForce 100 Series, GeForce 200 Series, and GeForce 300 Series of GPUs, collectively manufactured in 90 nm, 80 nm, 65 nm, 55 nm, and 40 nm. It was also used in the GeForce 405 and in the Quadro FX, Quadro x000, Quadro NVS series, and Nvidia Tesla computing modules.
Tesla replaced the old fixed-pipeline microarchitectures, represented at the time of introduction by the GeForce 7 series. It competed directly with AMD's first unified shader microarchitecture named TeraScale, a development of ATI's work on the Xbox 360 which used a similar design. Tesla was followed by Fermi.
Overview
Tesla is Nvidia's first microarchitecture implementing the unified shader model. The driver supports Direct3D 10 Shader Model 4.0 and OpenGL 2.1 (later drivers add OpenGL 3.3 support). The design is a major shift for Nvidia in GPU functionality and capability, the most obvious change being the move from the separate functional units (pixel shaders, vertex shaders) of previous GPUs to a homogeneous collection of universal floating-point processors (called "stream processors") that can perform a more universal set of tasks.
GeForce 8's unified shader architecture consists of a number of stream processors (SPs). Unlike the vector processing approach taken with older shader units, each SP is scalar and thus can operate only on one component at a time. This makes them less complex to build while still being quite flexible and universal. Scalar shader units also have the advantage of being more efficient in a number of cases as compared to previous generation vector shader units that rely on ideal instruction mixture and ordering to reach peak throughput. The low |
https://en.wikipedia.org/wiki/Fortnite%20Battle%20Royale | Fortnite Battle Royale is a free-to-play battle royale video game developed and published by Epic Games. It is a companion game to Fortnite: Save the World, a cooperative survival game with construction elements. It was initially released in early access on September 26, 2017, for macOS, PlayStation 4, Windows, and Xbox One, followed the next year by ports for iOS, Nintendo Switch, and Android. Epic dropped the early access label for the game on June 29, 2020. Versions for the Xbox Series X/S and PlayStation 5 were released as launch titles in late 2020.
The concept of the game is similar to previous games of the genre: 100 players skydive onto an island and scavenge for gear to defend themselves from other players. Players can fight alone, or with up to four other players. As the match progresses, the playable area within the island gradually constricts, giving the players less and less room to work with; outside this safe zone is "the Storm", which inflicts damage on those caught inside it, with the amount of damage growing as the Storm itself does. The last player or team alive wins the match. The main distinction from others in the genre is the game's construction elements, letting players build walls, obstacles, and other structures from collected resources to take cover from incoming fire or give one a strategic view advantage. Battle Royale uses a seasonal approach with battle passes to introduce new character customization content in the game, as well as limited-time events, some of which correspond with changes to the game map. Since its initial release, several other game modes have been introduced, including "Battle Lab" and "Party Royale".
The idea for Battle Royale arose following the release of PlayerUnknown's Battlegrounds in 2017, a similar battle royale game that was highly successful but noted for its technical flaws. Originally released as part of the early access version of Save the World, Epic later transitioned the game to a free-to-pla |
https://en.wikipedia.org/wiki/720%20%28number%29 | 720 (seven hundred [and] twenty) is the natural number following 719 and preceding 721.
It is 6! (6 factorial) and a composite number with thirty divisors, more than any number below, making it a highly composite number. It is a Harshad number in every base from binary to decimal.
720 is expressible as the product of consecutive integers in two different ways: 720 = 8 × 9 × 10 and 720 = 2 × 3 × 4 × 5 × 6.
There are 49 solutions to the equation φ(x) = 720, more than for any integer below it, making 720 a highly totient number.
720 is a 241-gonal number.
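The numeric claims above can be checked with a short script; the 10000 search bound for the totient equation is a safe over-estimate, since every x with φ(x) = 720 is far smaller:

```python
def divisor_count(n):
    """Number of positive divisors of n (naive count)."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def digit_sum(n, base):
    """Sum of the digits of n written in the given base."""
    s = 0
    while n:
        s += n % base
        n //= base
    return s

def totient(n):
    """Euler's totient via trial-division factorization."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

# 6! = 720 has 30 divisors, more than any smaller number (highly composite).
assert divisor_count(720) == 30
assert all(divisor_count(m) < 30 for m in range(1, 720))
# Harshad in every base from binary (2) to decimal (10).
assert all(720 % digit_sum(720, b) == 0 for b in range(2, 11))
# Product of consecutive integers in two ways.
assert 8 * 9 * 10 == 720 == 2 * 3 * 4 * 5 * 6
# Highly totient: exactly 49 solutions of phi(x) = 720.
assert sum(1 for x in range(1, 10000) if totient(x) == 720) == 49
```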
In other fields
720 is:
A common vertical display resolution for HDTV (see 720p).
720° is two full rotations; the term "720" refers to a skateboarding trick.
720° is also the name of a skateboarding video game.
720 is a dual area code in the Denver Metro Area along with 303.
720° is the sum of all the defects of any polyhedron.
720 is a short form of saying Boeing 720, an airliner which is no longer in service.
For the year AD, see 720. |
https://en.wikipedia.org/wiki/SubOS | In computing, a SubOS may mean several related concepts:
A process-specific protection mechanism allowing potentially dangerous applications to run in a restricted environment. It worked by setting a sub-user id, which was the user id of the owner of the file rather than that of the person running the file.
A substitute-operating system, which simulated a full operating system. These were mainly developed by the GameMaker community.
An interface (graphical or terminal-based) that provides additional functions or commands for a specific target audience.
It can also make processes easier and add detail to the main operating system.
See also
Virtual machine
Sandbox (computer security)
Computing terminology
Operating system security
Virtualization software |
https://en.wikipedia.org/wiki/Revolutions%20in%20Mathematics | Revolutions in Mathematics is a 1992 collection of essays in the history and philosophy of mathematics.
Contents
Michael J. Crowe, Ten "laws" concerning patterns of change in the history of mathematics (1975) (15–20);
Herbert Mehrtens, T. S. Kuhn's theories and mathematics: a discussion paper on the "new historiography" of mathematics (1976) (21–41);
Herbert Mehrtens, Appendix (1992): revolutions reconsidered (42–48);
Joseph Dauben, Conceptual revolutions and the history of mathematics: two studies in the growth of knowledge (1984) (49–71);
Joseph Dauben, Appendix (1992): revolutions revisited (72–82);
Paolo Mancosu, Descartes's Géométrie and revolutions in mathematics (83–116);
Emily Grosholz, Was Leibniz a mathematical revolutionary? (117–133);
Giulio Giorello, The "fine structure" of mathematical revolutions: metaphysics, legitimacy, and rigour. The case of the calculus from Newton to Berkeley and Maclaurin (134–168);
Yu Xin Zheng, Non-Euclidean geometry and revolutions in mathematics (169–182);
Luciano Boi, The "revolution" in the geometrical vision of space in the nineteenth century, and the hermeneutical epistemology of mathematics (183–208);
Caroline Dunmore, Meta-level revolutions in mathematics (209–225);
Jeremy Gray, The nineteenth-century revolution in mathematical ontology (226–248);
Herbert Breger, A restoration that failed: Paul Finsler's theory of sets (249–264);
Donald A. Gillies, The Fregean revolution in logic (265–305);
Michael Crowe, Afterword (1992): a revolution in the historiography of mathematics? (306–316).
Reviews
The book was reviewed by Pierre Kerszberg for Mathematical Reviews and by Michael S. Mahoney for American Mathematical Monthly. Mahoney says "The title should have a question mark." He sets the context by referring to paradigm shifts that characterize scientific revolutions as described by Thomas Kuhn in his book The Structure of Scientific Revolutions. According to Michael Crowe in chapter one, revolutions never o |
https://en.wikipedia.org/wiki/Kelvin%27s%20circulation%20theorem | In fluid mechanics, Kelvin's circulation theorem (named after William Thomson, 1st Baron Kelvin, who published it in 1869) states: In a barotropic, ideal fluid with conservative body forces, the circulation around a closed curve (which encloses the same fluid elements) moving with the fluid remains constant with time.
Stated mathematically:

DΓ/Dt = 0,

where Γ is the circulation around a material contour C(t). Stated more simply, this theorem says that if one observes a closed contour at one instant, and follows the contour over time (by following the motion of all of its fluid elements), the circulation over the two locations of this contour is equal.
This theorem does not hold in cases with viscous stresses, nonconservative body forces (for example the Coriolis force) or non-barotropic pressure-density relations.
Mathematical proof
The circulation Γ around a closed material contour C(t) is defined by:

Γ = ∮_C u · ds,

where u is the velocity vector, and ds is an element along the closed contour.
The governing equation for an inviscid fluid with a conservative body force is

Du/Dt = −(1/ρ)∇p + ∇Φ,

where D/Dt is the convective derivative, ρ is the fluid density, p is the pressure and Φ is the potential for the body force (taken here so that the body force per unit mass is ∇Φ). These are the Euler equations with a body force.
The condition of barotropicity implies that the density is a function only of the pressure, i.e. ρ = ρ(p).
Taking the convective derivative of circulation gives

DΓ/Dt = ∮_C (Du/Dt) · ds + ∮_C u · D(ds)/Dt.

For the first term, we substitute from the governing equation, and then apply Stokes' theorem (A being a surface bounded by C and n its unit normal), thus:

∮_C (Du/Dt) · ds = ∮_C (−(1/ρ)∇p + ∇Φ) · ds = −∫_A [∇(1/ρ) × ∇p] · n dA = 0.

The final equality arises since ∇(1/ρ) × ∇p = 0 owing to barotropicity: when ρ = ρ(p), the gradient ∇(1/ρ) is parallel to ∇p. We have also made use of the fact that the curl of any gradient is necessarily 0, i.e. ∇ × ∇f = 0 for any function f.
For the second term, we note that the evolution of the material line element is given by

D(ds)/Dt = (ds · ∇)u.

Hence

∮_C u · (ds · ∇)u = ∮_C ds · ∇(|u|²/2) = 0.

The last equality is obtained by applying the gradient theorem: the integral of a gradient around a closed contour vanishes.
Since both terms are zero, we obtain the result

DΓ/Dt = 0.
Poincaré–Bjerknes circulation theorem
A similar principle which conserves a quantity can be obtained for the rotating frame also, known as th |
https://en.wikipedia.org/wiki/Thiomer | Thiolated polymers designated thiomers are functional polymers used in biotechnology product development with the intention to prolong mucosal drug residence time and to enhance absorption of drugs. The name thiomer was coined by Andreas Bernkop-Schnürch in 2000. Thiomers have thiol bearing side chains. Sulfhydryl ligands of low molecular mass are covalently bound to a polymeric backbone consisting of mainly biodegradable polymers, such as chitosan, hyaluronic acid, cellulose derivatives, pullulan, starch, gelatin, polyacrylates, cyclodextrins, or silicones.
Thiomers exhibit properties potentially useful for non-invasive drug delivery via oral, ocular, nasal, vesical, buccal and vaginal routes. Thiomers also show potential in the field of tissue engineering and regenerative medicine. Various thiomers such as thiolated chitosan and thiolated hyaluronic acid are commercially available as scaffold materials. Thiomers can be directly compressed to tablets or given as solutions. In 2012, a second generation of thiomers – called "preactivated" or "S-protected" thiomers – was introduced.
In contrast to thiomers of the first generation, preactivated thiomers are stable towards oxidation and display comparatively higher mucoadhesive and permeation enhancing properties. Approved thiomer products for human use are for example eyedrops for treatment of dry eye syndrome or adhesive gels for treatment of nickel allergy.
Properties and applications
Mucoadhesion
Thiomers are capable of forming disulfide bonds with cysteine substructures of the mucus gel layer covering mucosal membranes. Because of this property they exhibit up to 100-fold higher mucoadhesive properties in comparison to the corresponding unthiolated polymers. Because of their mucoadhesive properties, thiolated polymers are an effective tool in the treatment of diseases such as dry eye, dry mouth, and dry vagina syndrome where dry mucosal surfaces are involved.
In situ gelation
Various polymers such as polox |
https://en.wikipedia.org/wiki/Hub%20labels | In computer science, hub labels or the hub-labelling algorithm is a method for finding the shortest paths between nodes in a graph, which may represent, for example, road networks; it consumes far fewer resources than a full lookup table while still answering queries extremely quickly.
This method allows the shortest path between two vertices of a graph to be computed with at most two SELECT statements and the analysis of two strings.
For a directed graph such as a road graph, this technique requires the prior computation of two tables from structures constructed using the method of contraction hierarchies.
In the end, these two computed tables will have as many rows as nodes present within the graph. For each row (each node), a label will be calculated.
A label is a string containing the distance information between the current node (the node of the row) and all the other nodes that can be reached with an ascending search on the relative multi-level structure. The advantage of these distances is that they all represent the shortest paths.
So, for future queries, the search of a shortest path will start from the source on the first table and the destination on the second table, from which it will search within the labels for the common nodes with the associated distance information. Only the smallest sum of distances will be kept as the shortest path result. |
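The query phase described above can be sketched in a few lines. Labels are held here in plain Python dictionaries rather than database tables, and the node and hub names (s, t, h1, h2) and distances are made up for illustration:

```python
from math import inf

# Forward label of a source node: distances to its "hub" nodes, found by an
# upward search in the contraction hierarchy; backward label of a target
# node: distances from hubs down to it. Normally these are precomputed.
forward = {'s': {'s': 0, 'h1': 3, 'h2': 7}}
backward = {'t': {'t': 0, 'h1': 4, 'h2': 2}}

def hub_query(src, dst):
    """Shortest-path distance: minimum sum over hubs common to both labels."""
    ls, lt = forward[src], backward[dst]
    common = ls.keys() & lt.keys()
    return min((ls[h] + lt[h] for h in common), default=inf)

assert hub_query('s', 't') == 7  # via h1: 3 + 4 = 7 beats h2: 7 + 2 = 9
```

Because every label stores shortest distances to its hubs, the minimum over the common hubs is guaranteed to be the shortest s–t distance whenever the labels satisfy the cover property.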
https://en.wikipedia.org/wiki/Cannabinoidergic | Cannabinoidergic, or cannabinergic, means "working on the endocannabinoid neurotransmitters". As with terms such as dopaminergic and serotonergic, related proteins and cellular components involved in endocannabinoid signaling, such as the cannabinoid (CB1) receptor, as well as exogenous compounds, such as phytocannabinoids or other cannabinoids that modulate the activity of the endocannabinoid system, can be described as cannabinoidergic.
See also
Adenosinergic
Adrenergic
Cholinergic
Dopaminergic
GABAergic
Glycinergic
Histaminergic
Melatonergic
Monoaminergic
Opioidergic
Serotonergic |
https://en.wikipedia.org/wiki/Bambermycin | Bambermycin (flavomycin) is a complex of antibiotics obtained from Streptomyces bambergiensis and Streptomyces ghanaensis, used as a food additive for beef cattle, dairy cattle, poultry and swine. The complex consists mainly of moenomycins A and C.
Bambermycin is a performance-enhancing antibiotic intended and available solely for use in animal nutrition. Its mechanism of action is to inhibit the synthesis of the bacterial wall. Bambermycin is predominantly effective against Gram-positive pathogenic bacteria. However, it does not have significant action against Lactobacillus, Bifidobacterium, and other protective bacteria.
Bambermycin has no precautions or warnings to humans on its label pertaining to mixing and handling and is non-hazardous when used according to label directions. Bambermycin has no withdrawal requirement. Bambermycin is not absorbed by the intestine and no measurable residues are found in edible tissues even when fed at up to 50 times the normal recommended dosage.
Synonyms include flavomycin, flavofosfolipol, flavophospholipol, moenomycin, and the brand name Gainpro. |
https://en.wikipedia.org/wiki/Patrick%20du%20Val | Patrick du Val (March 26, 1903 – January 22, 1987) was a British mathematician, known for his work on algebraic geometry, differential geometry, and general relativity. The concept of Du Val singularity of an algebraic surface is named after him.
Early life
Du Val was born in Cheadle Hulme, Cheshire. He was the son of a cabinet maker, but his parents' marriage broke up. As a child, he suffered ill-health, in particular asthma, and was educated mostly by his mother. He was awarded a first class honours degree from the University of London External Programme in 1926, which he took by correspondence course.
He was a talented linguist, for example teaching himself Norwegian so that he might read Peer Gynt. He also had a strong interest in history but his love of mathematics led him to pursue that as a career. His earliest publications show a leaning towards applied mathematics.
His mother moved to a village near Cambridge and he became acquainted with Henry Baker, Lowndean Professor of Astronomy and Geometry. Baker turned his interest towards algebraic geometry, and he entered Trinity College, Cambridge in 1927.
Research in geometry
Du Val's early work before becoming a research student was on relativity, including a paper on the De Sitter model of the universe and Grassmann's tensor calculus. His doctorate was on algebraic geometry and in his thesis he generalised a result of Schoute. He worked on algebraic surfaces and later in his career became interested in elliptic functions.
He received his Ph.D. with a thesis entitled 'On Certain Configurations of Algebraic Geometry Having Groups of Self-Transformations Representable by Symmetry Groups of Certain Polygons' under Baker's supervision in 1930. While a research student he had many famous geometers including Hodge as fellow research students, and he formed a particular friendship with Coxeter and Semple. He was elected a fellow of Trinity in 1930 for four years. During that time he travelled extensively, visi |
https://en.wikipedia.org/wiki/Norman%20Packard | Norman Harry Packard (born 1954 in Billings, Montana) is a chaos theory physicist and one of the founders of the Prediction Company and ProtoLife. He is an alumnus of Reed College and the University of California, Santa Cruz. Packard is known for his contributions to chaos theory, complex systems, and artificial life. He coined the phrase "the edge of chaos".
Biography
Between 1976 and 1981, Packard formed the Dynamical Systems Collective at UC Santa Cruz with fellow physics graduate students Rob Shaw, Doyne Farmer, and James Crutchfield. The collective was best known for its work in probing chaotic systems for signs of order.
Around the same time, he worked with Doyne Farmer and other friends in Santa Cruz, California to form the Eudaemons collective, to develop a strategy for beating the roulette wheel using a toe-operated computer. The computer could, in theory, predict in what area a roulette ball would land on a wheel, giving the player a significant statistical advantage over the house. Although the project itself was a success, they ran into practical difficulty employing the technique on-site in Las Vegas casinos. The experiences of Packard, Doyne Farmer, and crew were later chronicled in the book The Eudaemonic Pie (1985) by Thomas Bass. Their experience was also chronicled on the History Channel television series "Breaking Vegas."
In 1982, Packard won a NATO post-doctoral fellowship to study at the Institut des Hautes Études Scientifiques in Bures-sur-Yvette, France. One year later, he joined the Princeton Institute for Advanced Study. At the IAS, he worked with colleagues Stephen Wolfram and Rob Shaw to explain complex systems and the tendency for matter to organize itself. Subsequently, Packard has made contributions to the field of Artificial Life, including the definition of Evolutionary Activity.
Professional work
Center for Complex Systems Research
In 1985 Packard moved with Wolfram to the physics department of the University of Illinois, |
https://en.wikipedia.org/wiki/Reciprocal%20determinism | Reciprocal determinism is the theory set forth by psychologist Albert Bandura which states that a person's behavior both influences and is influenced by personal factors and the social environment. Bandura accepts the possibility that an individual's behavior may be conditioned through the use of consequences. At the same time he asserts that a person's behavior (and personal factors, such as cognitive skills or attitudes) can impact the environment.
Bandura was able to show this with his Bobo doll experiment. As an example, Bandura's reciprocal determinism could occur when a child is acting out in school. The child doesn't like going to school; therefore, they act out in class. This results in teachers and administrators of the school disliking having the child around. When confronted by the situation, the child admits they hate school and that other peers don't like them. This results in the child acting inappropriately, and in the administrators, who dislike having the child around, creating a more restrictive environment for such children. Each behavioral and environmental factor feeds back on the child, and so forth, resulting in a continuous interaction on all three levels.
Reciprocal determinism is the idea that behavior is controlled or determined by the individual, through cognitive processes, and by the environment, through external social stimulus events. The basis of reciprocal determinism should transform individual behavior by allowing subjective thought processes transparency when contrasted with cognitive, environmental, and external social stimulus events.
Influence does not run in only one direction: behavior is affected by its repercussions, meaning one's behavior is complex and cannot be attributed to individual or environmental factors alone. Behavior consists of environmental and individual parts that interlink and function together. Many studies have shown reciprocal associations between people and their environments over time.
Research
Research cond |
https://en.wikipedia.org/wiki/Zebrawood | The name zebrawood is used to describe several tree species and the wood derived from them. Zebrawood is characterized by a striped figure that is reminiscent of a zebra. The name originally applied to the wood of Astronium graveolens, a large tree native to Central America. In the 20th century, the most important source of zebrawood was Microberlinia brazzavillensis, also called zebrano, a tree native to Central Africa. Other sources include Brazilian Astronium fraxinifolium, African Brachystegia spiciformis, Pacific Guettarda speciosa, and Asian Pistacia integerrima.
History
Zebrawood was first recorded in the British Customs returns for 1773, when 180 pieces of zebrawood were imported from the Mosquito Coast, a British colony (now Republic of Honduras and Nicaragua). In his History of Jamaica (1774), Edward Long relates, "The species of zebra wood at present in esteem among the cabinet-makers is brought to Jamaica from the Mosquito shore; it is of a most lovely tint, and richly veined..."
The Mosquito Coast thereafter exported zebrawood regularly until the Convention of London (1786) and the consequent expulsion of British settlers from this part of the Viceroyalty of New Spain.
An alternative name which occurs in 18th century British sources is palmaletto or palmalatta, from palo mulatto, which was the local name for the wood. At the beginning of the 19th century, another source of zebrawood was found in Brazil. This species, Astronium fraxinifolium, is native to northern South America, especially north-eastern Brazil. It is now traded as goncalo alves, a Portuguese name used in Brazil. On the European and American markets, however, it was still called zebrawood, and commonly used in British furniture-making between about 1810 and 1860.
For most of the 19th century, the botanical identity of zebrawood was unknown. For many years, it was thought to be the product of Omphalobium lambertii DC., later reclassified as Connarus guianensis Lamb ex DC., and |
https://en.wikipedia.org/wiki/Norm%20%28mathematics%29 | In mathematics, a norm is a function from a real or complex vector space to the non-negative real numbers that behaves in certain ways like the distance from the origin: it commutes with scaling, obeys a form of the triangle inequality, and is zero only at the origin. In particular, the Euclidean distance in a Euclidean space is defined by a norm on the associated Euclidean vector space, called the Euclidean norm, the 2-norm, or, sometimes, the magnitude of the vector. This norm can be defined as the square root of the inner product of a vector with itself.
A seminorm satisfies the first two properties of a norm, but may be zero for vectors other than the origin. A vector space with a specified norm is called a normed vector space. In a similar manner, a vector space with a seminorm is called a seminormed vector space.
The term pseudonorm has been used for several related meanings. It may be a synonym of "seminorm".
A pseudonorm may satisfy the same axioms as a norm, with the equality replaced by an inequality "≤" in the homogeneity axiom.
It can also refer to a norm that can take infinite values, or to certain functions parametrised by a directed set.
Definition
Given a vector space X over a subfield F of the complex numbers, a norm on X is a real-valued function p : X → ℝ with the following properties, where |s| denotes the usual absolute value of a scalar s:
Subadditivity/Triangle inequality: p(x + y) ≤ p(x) + p(y) for all x, y ∈ X.
Absolute homogeneity: p(sx) = |s| p(x) for all x ∈ X and all scalars s.
Positive definiteness/positiveness: for all x ∈ X, if p(x) = 0 then x = 0.
Because property (2.) implies p(0) = 0, some authors replace property (3.) with the equivalent condition: for every x ∈ X, p(x) = 0 if and only if x = 0.
A seminorm on X is a function p : X → ℝ that has properties (1.) and (2.), so that in particular, every norm is also a seminorm (and thus also a sublinear functional). However, there exist seminorms that are not norms. Properties (1.) and (2.) imply that if p is a norm (or more generally, a seminorm) then p(0) = 0 and that p also has the following property:
Non-neg |
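The norm axioms can be illustrated numerically for the familiar p-norms. A minimal sketch (assuming NumPy is available; the function name `p_norm` is our own, not from the article):

```python
import numpy as np

def p_norm(x, p=2):
    """p-norm on R^n: (sum |x_i|^p)^(1/p); a norm for every p >= 1."""
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

x = np.array([3.0, -4.0])
y = np.array([1.0, 2.0])
s = -2.5

# Euclidean (2-)norm of (3, -4) is 5.
print(p_norm(x))                                       # 5.0
# Subadditivity: ||x + y|| <= ||x|| + ||y||
print(p_norm(x + y) <= p_norm(x) + p_norm(y))          # True
# Absolute homogeneity: ||s x|| = |s| ||x||
print(np.isclose(p_norm(s * x), abs(s) * p_norm(x)))   # True
```

For p = 2 this reproduces the Euclidean norm described in the opening paragraph, i.e. the square root of the inner product of a vector with itself.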
https://en.wikipedia.org/wiki/NUTS%20statistical%20regions%20of%20Portugal | The Nomenclature of Territorial Units for Statistics (NUTS) is developed by Eurostat and employed in Portugal, as in the entire European Union, for statistical purposes. The NUTS hierarchy extends over NUTS 1, NUTS 2 and NUTS 3 regions of trans-national importance, with the complementary LAU (Local Administrative Units) sub-categorization used to differentiate local areas.
Developed by Eurostat and implemented in 1998, the Nomenclature of Territorial Units for Statistics (NUTS) regions, which comprise three levels of the Portuguese territory, are instrumental in the European Union's Structural Fund delivery mechanisms. The standard was developed by the European Union and is extensively used by national governments, Eurostat and other EU bodies for statistical and policy matters. Until 4 November 2002, the Sistema Estatístico Nacional (SEN) used a NUTS codification system that was distinct from the Eurostat system. With the enactment of Decree Law 244/2002 (5 November 2002), published in the Diário da República, this system was abandoned in order to harmonize the national system with that of Eurostat.
Subdivisions
The NUTS system subdivides the nation into three levels: NUTS I, NUTS II and NUTS III. In some European partners, as is the case with Portugal, a complementary hierarchy of LAU I and LAU II (previously referred to as NUTS IV and NUTS V) is employed. The LAU, or Local Administrative Units, in the Portuguese context pertain to the 308 municipalities (LAU I) and 3092 civil parishes (LAU II) respectively. In the broadest sense, the NUTS hierarchy, while it may follow some municipal or parish borders, diverges from them in its delineation.
Changes NUTS 2-3 (1986—2013)
NUTS I
The first and broadest subdivision of Portugal is between continental Portugal and the two autonomous regions of the Azores and Madeira.
NUTS II
Although the districts are still the most socially relevant subdivision, their function is being phased out in favour of locall
https://en.wikipedia.org/wiki/Orthornavirae | Orthornavirae is a kingdom of viruses that have genomes made of ribonucleic acid (RNA), including genes which encode an RNA-dependent RNA polymerase (RdRp). The RdRp is used to transcribe the viral RNA genome into messenger RNA (mRNA) and to replicate the genome. Viruses in this kingdom share a number of characteristics which promote rapid evolution, including high rates of genetic mutation, recombination, and reassortment.
Viruses in Orthornavirae belong to the realm Riboviria. They are descended from a common ancestor that may have been a non-viral molecule that encoded a reverse transcriptase instead of an RdRp for replication. The kingdom is subdivided into five phyla that separate member viruses based on their genome type, host range, and genetic similarity. Viruses with three genome types are included: positive-strand RNA viruses, negative-strand RNA viruses, and double-stranded RNA viruses.
Many of the most widely known viral diseases are caused by members of this kingdom, including coronaviruses, the Ebola virus, influenza viruses, the measles virus, and the rabies virus, as well as the first virus ever discovered, tobacco mosaic virus. In modern history, RdRp-encoding RNA viruses have caused numerous disease outbreaks, and they infect many economically important crops. Most eukaryotic viruses, including most human, animal, and plant viruses, are RdRp-encoding RNA viruses. In contrast, there are relatively few prokaryotic viruses in the kingdom.
Etymology
The first part of Orthornavirae comes from Greek ὀρθός [orthós], meaning straight, the middle part, rna, refers to RNA, and -virae is the suffix used for virus kingdoms.
Characteristics
Structure
RNA viruses in Orthornavirae typically do not encode many proteins, but most positive-sense, single-stranded (+ssRNA) viruses and some double-stranded RNA (dsRNA) viruses encode a major capsid protein that has a single jelly roll fold, so named because the folded structure of the protein contains a structur |
https://en.wikipedia.org/wiki/Michael%20Luby | Michael George Luby is a mathematician and computer scientist, CEO of BitRipple, senior research scientist at the International Computer Science Institute (ICSI), former VP Technology at Qualcomm, co-founder and former chief technology officer of Digital Fountain. In coding theory he is known for leading the invention of the Tornado codes and the LT codes. In cryptography he is known for his contributions showing that any one-way function can be used as the basis for private cryptography, and for his analysis, in collaboration with Charles Rackoff, of the Feistel cipher construction. His distributed algorithm to find a maximal independent set in a computer network has also been influential.
Luby received his B.Sc. in mathematics from the Massachusetts Institute of Technology in 1975. In 1983 he was awarded a Ph.D. in computer science from the University of California, Berkeley. In 1996–1997, while at the ICSI, he led the team that invented Tornado codes. These were the first LDPC codes based on an irregular degree design, an approach that proved crucial to all later good LDPC code designs; they provably achieve channel capacity for the erasure channel and have linear-time encoding and decoding algorithms. In 1998 Luby left ICSI to found the Digital Fountain company, and shortly thereafter in 1998 he invented the LT codes, the first practical fountain codes. Qualcomm acquired Digital Fountain in 2009.
Awards
Luby's publications have won the 2002 IEEE Information Theory Society Information Theory Paper Award for leading the design and analysis of the first irregular LDPC error-correcting codes,
the 2003 SIAM Outstanding Paper Prize for the seminal paper showing how to construct a cryptographically unbreakable pseudo-random generator from any one-way function,
and the 2009 ACM SIGCOMM Test of Time Award.
In 2016 he was awarded the ACM Edsger W. Dijkstra Prize in Distributed Computing; the prize is given "for outstanding papers on the principles of distributed computing, who |
https://en.wikipedia.org/wiki/Factorial%20prime | A factorial prime is a prime number that is one less or one more than a factorial (all factorials greater than 1 are even).
The first 10 factorial primes (for n = 1, 2, 3, 4, 6, 7, 11, 12, 14) are:
2 (0! + 1 or 1! + 1), 3 (2! + 1), 5 (3! − 1), 7 (3! + 1), 23 (4! − 1), 719 (6! − 1), 5039 (7! − 1), 39916801 (11! + 1), 479001599 (12! − 1), 87178291199 (14! − 1), ...
n! − 1 is prime for:
n = 3, 4, 6, 7, 12, 14, 30, 32, 33, 38, 94, 166, 324, 379, 469, 546, 974, 1963, 3507, 3610, 6917, 21480, 34790, 94550, 103040, 147855, 208003, ... (resulting in 27 factorial primes)
n! + 1 is prime for:
n = 0, 1, 2, 3, 11, 27, 37, 41, 73, 77, 116, 154, 320, 340, 399, 427, 872, 1477, 6380, 26951, 110059, 150209, 288465, 308084, 422429, ... (resulting in 24 factorial primes - the prime 2 is repeated)
No other factorial primes are known.
When both n! + 1 and n! − 1 are composite, there must be at least 2n + 1 consecutive composite numbers around n!, since besides n! ± 1 and n! itself, also, each number of form n! ± k is divisible by k for 2 ≤ k ≤ n. However, the necessary length of this gap is asymptotically smaller than the average composite run for integers of similar size (see prime gap).
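Both claims above — the small factorial primes listed earlier and the guaranteed run of composites around n! — can be verified directly with a short script (standard library only; the helper `is_prime` is our own trial-division routine, adequate for these magnitudes):

```python
import math

def is_prime(n):
    """Deterministic trial division; fine for the small values checked here."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

# n! - 1 is prime for n = 3, 4, 6, 7, 12, 14, ... and n! + 1 for n = 1, 2, 3, 11, ...
print([n for n in range(2, 15) if is_prime(math.factorial(n) - 1)])  # [3, 4, 6, 7, 12, 14]
print([n for n in range(1, 15) if is_prime(math.factorial(n) + 1)])  # [1, 2, 3, 11]

# When n! - 1 and n! + 1 are both composite, every number from n! - n to n! + n
# is composite, since n! +/- k is divisible by k for 2 <= k <= n.
n = 5  # 5! = 120; 119 = 7*17 and 121 = 11^2 are composite
assert all(not is_prime(math.factorial(n) + k) for k in range(-n, n + 1))
```

The final assertion exhibits a gap of 2n + 1 = 11 consecutive composites around 5! = 120, as described above.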
See also
Primorial prime
External links
The Top Twenty: Factorial primes from the Prime Pages
Factorial Prime Search from PrimeGrid |
https://en.wikipedia.org/wiki/Mathematics%20Magazine | Mathematics Magazine is a refereed bimonthly publication of the Mathematical Association of America. Its intended audience is teachers of collegiate mathematics, especially at the junior/senior level, and their students. It is explicitly a journal of mathematics rather than pedagogy. Rather than articles in the terse "theorem-proof" style of research journals, it seeks articles which provide a context for the mathematics they deliver, with examples, applications, illustrations, and historical background. Paid circulation in 2008 was 9,500 and total circulation was 10,000.
Mathematics Magazine is a continuation of Mathematics News Letter (1926–1934) and National Mathematics Magazine (1934–1945). Doris Schattschneider became the first female editor of Mathematics Magazine in 1981.
The MAA gives the Carl B. Allendoerfer Awards annually "for articles of expository excellence" published in Mathematics Magazine.
See also
American Mathematical Monthly
Carl B. Allendoerfer Award
Notes
Further reading
External links
Mathematics Magazine at JSTOR
Mathematics Magazine at Taylor & Francis Online
Mathematics education journals
Academic journals published by learned and professional societies of the United States
Mathematical Association of America |
https://en.wikipedia.org/wiki/ADAM22 | Disintegrin and metalloproteinase domain-containing protein 22 also known as ADAM22 is an enzyme that in humans is encoded by the ADAM22 gene.
Function
ADAM22 is a member of the ADAM (A Disintegrin And Metalloprotease domain) family. Members of this family are membrane-anchored proteins structurally related to snake venom disintegrins, and have been implicated in a variety of biological processes involving cell-cell and cell-matrix interactions, including fertilization, muscle development, and neurogenesis. This gene is highly expressed in the brain and may function as an integrin ligand in the brain. Alternative splicing results in several transcript variants.
Interactions
ADAM22 has been shown to interact with DLG4. |
https://en.wikipedia.org/wiki/FEniCS%20Project | The FEniCS Project is a collection of free and open-source software components with the common goal to enable automated solution of differential equations. The components provide scientific computing tools for working with computational meshes, finite-element variational formulations of ordinary and partial differential equations, and numerical linear algebra.
Design and components
The FEniCS Project is designed as an umbrella project for a collection of interoperable components. The core components are
UFL (unified form language), a domain-specific language embedded in Python for specifying finite element discretizations of differential equations in terms of finite element variational forms;
FIAT (finite element automatic tabulator), the finite element backend of FEniCS, a Python module for generation of arbitrary order finite element basis functions on simplices;
FFC (fenics form compiler), a compiler for finite element variational forms taking UFL code as input and generating UFC output;
UFC (unified form-assembly code), a C++ interface consisting of low-level functions for evaluating and assembling finite element variational forms;
Instant, a Python module for inlining C and C++ code in Python;
DOLFIN, a C++/Python library providing data structures and algorithms for finite element meshes, automated finite element assembly, and numerical linear algebra.
DOLFIN, the computational high-performance C++ backend of FEniCS, functions as the main problem-solving environment (in both C++ and Python) and user interface. Its functionality integrates the other FEniCS components and handles communication with external libraries such as PETSc, Trilinos and Eigen for numerical linear algebra, ParMETIS and SCOTCH for mesh partitioning, and MPI and OpenMP for distributed computing.
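As a sketch of how these components fit together, below is a minimal Poisson solve in the legacy DOLFIN Python interface. This is an illustrative sketch, not part of the article; it assumes a working FEniCS installation, and the mesh size and source term are arbitrary choices:

```python
# Sketch: -Δu = f on the unit square with homogeneous Dirichlet data,
# using the legacy DOLFIN Python API (requires a FEniCS installation).
from dolfin import (UnitSquareMesh, FunctionSpace, TrialFunction, TestFunction,
                    Constant, DirichletBC, Function, solve, inner, grad, dx)

mesh = UnitSquareMesh(8, 8)                 # DOLFIN mesh data structure
V = FunctionSpace(mesh, "Lagrange", 1)      # P1 element space (basis tabulated by FIAT)

u, v = TrialFunction(V), TestFunction(V)
f = Constant(1.0)
a = inner(grad(u), grad(v)) * dx            # bilinear form written in UFL, compiled by FFC
L = f * v * dx                              # linear form in UFL

bc = DirichletBC(V, Constant(0.0), "on_boundary")
u_h = Function(V)
solve(a == L, u_h, bc)                      # assembly and linear solve via a backend such as PETSc
```

The forms `a` and `L` are UFL expressions, the form compiler turns them into low-level assembly code, and DOLFIN handles the mesh, boundary conditions, and the call into the linear-algebra backend — mirroring the division of labour among the components listed above.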
History
The FEniCS Project was initiated in 2003 as a research collaboration between the University of Chicago and Chalmers University of Technology. The following institutions are cur |