| source | text |
|---|---|
https://en.wikipedia.org/wiki/Spectral%20index | In astronomy, the spectral index of a source is a measure of the dependence of radiative flux density (that is, radiative flux per unit of frequency) on frequency. Given frequency $\nu$ in Hz and radiative flux density $S_\nu$ in Jy, the spectral index $\alpha$ is given implicitly by $S_\nu \propto \nu^{\alpha}.$
Note that if flux does not follow a power law in frequency, the spectral index itself is a function of frequency. Rearranging the above, we see that the spectral index is given by $\alpha(\nu) = \frac{\partial \ln S_\nu(\nu)}{\partial \ln \nu}.$
Clearly the power law can only apply over a certain range of frequency because otherwise the integral over all frequencies would be infinite.
Spectral index is also sometimes defined in terms of wavelength $\lambda$. In this case, the spectral index is given implicitly by $S_\lambda \propto \lambda^{\alpha_\lambda},$
and at a given wavelength, the spectral index may be calculated by taking the derivative $\alpha_\lambda(\lambda) = \frac{\partial \ln S_\lambda(\lambda)}{\partial \ln \lambda}.$
The spectral index defined using $S_\lambda$, which we may call $\alpha_\lambda$, differs from the index $\alpha_\nu$ defined using $S_\nu$. The total flux between two frequencies or wavelengths is $S = \int S_\nu \, d\nu = \int S_\lambda \, d\lambda,$
which implies that $\alpha_\lambda = -\alpha_\nu - 2.$
The opposite sign convention is sometimes employed, in which the spectral index is given by $S_\nu \propto \nu^{-\alpha}.$
The spectral index of a source can hint at its properties. For example, using the positive sign convention, the spectral index of the emission from an optically thin thermal plasma is -0.1, whereas for an optically thick plasma it is 2. Therefore, a spectral index of -0.1 to 2 at radio frequencies often indicates thermal emission, while a steep negative spectral index typically indicates synchrotron emission. It is worth noting that the observed emission can be affected by several absorption processes that affect the low-frequency emission the most; the reduction in the observed emission at low frequencies might result in a positive spectral index even if the intrinsic emission has a negative index. Therefore, it is not straightforward to associate positive spectral indices with thermal emission.
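As an illustration of the definition above, a two-point estimate of the spectral index can be computed directly from two flux measurements. The sketch below is not from the article; the flux values are hypothetical, and it uses the $S_\nu \propto \nu^\alpha$ convention.
```python
import math

def spectral_index(s1_jy, s2_jy, nu1_hz, nu2_hz):
    """Two-point spectral index alpha under the convention S_nu ∝ nu^alpha."""
    return math.log(s1_jy / s2_jy) / math.log(nu1_hz / nu2_hz)

# Hypothetical source measured at 1.4 GHz (2.0 Jy) and 5.0 GHz (0.8 Jy):
alpha = spectral_index(2.0, 0.8, 1.4e9, 5.0e9)
print(f"alpha = {alpha:.2f}")  # alpha = -0.72, a steep negative index suggestive of synchrotron emission
```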
Spectral index of thermal emission
At radio frequencies (i.e. in the low-frequency, long-wavelength limit), where the Rayleigh–Jeans |
https://en.wikipedia.org/wiki/Empathy%20map | An empathy map is a widely used visualization tool within the fields of user experience design and human–computer interaction. In relation to empathetic design, the primary purpose of an empathy map is to bridge gaps in understanding of the end user. In application, the tool is used to build a shared understanding of the user's needs and to provide context for a user-centered solution.
Structure
The traditional empathy map begins with four categories: says, thinks, does, and feels. At the center of the map, a user or persona is displayed to remind practitioners and stakeholders what type of individual this research is centered around. Each category of the empathy map represents a snapshot of the user's thoughts and feelings without any chronological order.
The Says category contains what the user says out loud during research or testing. Ideally, each point is written down as close to the user's original words as possible.
The Thinks category contains what the user is thinking. While its content may overlap with the Says category, the Thinks category exists to capture thoughts users may not be willing to share aloud due to social factors, such as self-consciousness or politeness.
The Does category contains the user's actions and behaviors: what the user is physically doing and what actions they take.
The Feels category contains the user's emotional state in the context of their experience, typically captured as information or phrases about how they feel about the experience.
Over time, however, the empathy map has been updated to provide more context and information architecture within the industry.
Empathy maps can vary in form, but they share common core elements. Beyond the four traditional categories mentioned above, an empathy map may also include other categories. Two other commonly used categories are:
The See category contains information users observe visually. It could be what users see in the marketplace |
https://en.wikipedia.org/wiki/Apollo%20VP3 | Apollo VP3 (alias ETEQ 6628) is an x86-based Socket 7 chipset that was manufactured by VIA Technologies and launched in 1997.
For its time, the Apollo VP3 was a high-performance, cost-effective, and energy-efficient chipset. It offered AGP support for Socket 7 processors, which Intel, SiS, and ALi chipsets did not yet provide. In November 1997, FIC released the PA-2012 motherboard, which uses the Apollo VP3 and has an AGP bus; it was the first Socket 7 motherboard supporting AGP.
Description
Apollo VP3 supports 32-bit Socket 7 CPUs such as the Pentium, Pentium MMX, AMD K5, AMD K6, Cyrix 6x86, and WinChip C2 and C6. It uses the VT82C597 (or VT82C597AT for Baby AT and ATX motherboards) northbridge controller chip and the AC'97-compliant VT82C586B southbridge chip with an ACPI power management system.
VP3 has a 64-bit memory bus; a 32-bit, 33 MHz PCI interface; and a 32-bit, 66 MHz AGP 2X interface with sideband addressing, 133 MHz signalling, and up to 533 MB/s of transfer capability. It uses an integrated 10-bit tag comparator and supports up to 2 MB of pipelined-burst synchronous SRAM (cache memory) and up to 1 GB of cacheable ECC RAM. The memory controller supports up to 8 memory pages (banks) in interleaving mode, flexible row and column addressing, concurrent DRAM writeback, read-around-write capability, burst read and write operations, and more. Officially, the supported memory bus speeds are 50, 60, and 66 MHz, but many motherboards built on the VP3 also offer 75 and even 83 MHz bus speeds.
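The 533 MB/s figure quoted above follows directly from the bus parameters. A quick check, assuming the nominal 66.6 MHz AGP base clock and two transfers per clock for AGP 2X:
```python
clock_hz = 66.6e6        # AGP base clock (the "66 MHz" bus)
transfers_per_clock = 2  # AGP 2X moves data on both clock edges (133 MHz signalling)
bus_bytes = 4            # 32-bit data path

bandwidth_mb_s = clock_hz * transfers_per_clock * bus_bytes / 1e6
print(f"{bandwidth_mb_s:.0f} MB/s")  # ~533 MB/s
```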
The VT82C597 northbridge supports up to six memory banks of DRAMs or DIMMs, up to 1 GB in total size. The memory controller supports standard fast page mode (FPM) DRAM, EDO DRAM, synchronous DRAM (SDRAM), and also SDRAM-II with double data rate (DDR) in a flexible, mixed configuration. The synchronous DRAM interface allows zero-wait-state bursting between the DRAM and the data buffers at 66 MHz. The six memory banks of DRAM can be used in an arbitrary mixture of 1MB / 2MB / |
https://en.wikipedia.org/wiki/Percentage%20point | A percentage point or percent point is the unit for the arithmetic difference between two percentages. For example, moving up from 40 percent to 44 percent is an increase of 4 percentage points (although it is a 10-percent increase in the quantity being measured, if the total amount remains the same). In written text, the unit (the percentage point) is usually either written out, or abbreviated as pp, p.p., or %pt. to avoid confusion with percentage increase or decrease in the actual quantity. After the first occurrence, some writers abbreviate by using just "point" or "points".
Differences between percentages and percentage points
Consider the following hypothetical example: In 1980, 50 percent of the population smoked, and in 1990 only 40 percent of the population smoked. One can thus say that from 1980 to 1990, the prevalence of smoking decreased by 10 percentage points (or by 10 percent of the population) or by 20 percent when talking about smokers only – percentages indicate proportionate part of a total.
Percentage-point differences are one way to express a risk or probability. Consider a drug that cures a given disease in 70 percent of all cases, while without the drug, the disease heals spontaneously in only 50 percent of cases. The drug reduces absolute risk by 20 percentage points. Alternatives may be more meaningful to consumers of statistics, such as the reciprocal, also known as the number needed to treat (NNT). In this case, the reciprocal transform of the percentage-point difference would be 1/(20pp) = 1/0.20 = 5. Thus if 5 patients are treated with the drug, one could expect to cure one more patient than would have occurred in the absence of the drug.
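A minimal sketch of the arithmetic in the drug example above (the 70%/50% cure rates are the article's figures; the function name is ours):
```python
def number_needed_to_treat(p_with_drug, p_without_drug):
    """NNT is the reciprocal of the absolute risk reduction (expressed as a fraction)."""
    absolute_risk_reduction = p_with_drug - p_without_drug  # 20 percentage points = 0.20
    return 1.0 / absolute_risk_reduction

print(number_needed_to_treat(0.70, 0.50))  # 5.0 -> treat 5 patients to cure one extra
```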
For measurements with percentages as the unit, such as growth, yield, or ejection fraction, statistical deviations and related descriptive statistics, including the standard deviation and root-mean-square error, should be expressed in units of percentage points instead of percentage |
https://en.wikipedia.org/wiki/Eat-me%20signals | Eat-me signals are molecules exposed on the surface of a cell to induce phagocytes to phagocytose (eat) that cell. Currently known eat-me signals include: phosphatidylserine, oxidized phospholipids, sugar residues (such as galactose), deoxyribonucleic acid (DNA), calreticulin, annexin A1, histones and pentraxin-3 (PTX3).
The most well characterised eat-me signal is the phospholipid phosphatidylserine. Healthy cells do not expose phosphatidylserine on their surface, whereas dead, dying, infected, injured and some activated cells expose phosphatidylserine on their surface in order to induce phagocytes to phagocytose them. Most glycoproteins and glycolipids on the surface of our cells have short sugar chains that terminate in sialic acid residues, which inhibit phagocytosis, but removal of these residues reveals galactose residues (and subsequently N-acetylglucosamine and mannose residues) that can bind opsonins or directly activate phagocytic receptors. Calreticulin, annexin A1, histones, pentraxin-3 and DNA may be released by (and onto the surface of) dying cells to encourage phagocytes to eat these cells, thereby acting as self-opsonins. Eat-me signals, or the opsonins that bind them, are recognised by phagocytic receptors on phagocytes, inducing engulfment of the cell exposing the eat-me signal.
See also
Find-me signals |
https://en.wikipedia.org/wiki/Bego%C3%B1a%20Fern%C3%A1ndez%20%28mathematician%29 | María Asunción Begoña Fernández Fernández (published as Begoña Fernández) is a Mexican mathematician specializing in probability theory, stochastic processes, and mathematical finance. She is a professor of mathematics at the National Autonomous University of Mexico (UNAM).
Education
Fernández studied mathematics at UNAM, graduating in 1979. She earned a master's degree in statistics and operations research in 1986, and completed her doctorate at CINVESTAV in 1990.
Recognition
Fernández is a member of the Mexican Academy of Sciences. |
https://en.wikipedia.org/wiki/Pinwheel%20tiling | In geometry, pinwheel tilings are non-periodic tilings defined by Charles Radin and based on a construction due to John Conway.
They are the first known non-periodic tilings to each have the property that their tiles appear in infinitely many orientations.
Conway's tessellation
Let $T$ be the right triangle with side lengths $1$, $2$ and $\sqrt{5}$.
Conway noticed that $T$ can be divided into five isometric copies of its image by the dilation of factor $\frac{1}{\sqrt{5}}$.
By suitably rescaling and translating/rotating, this operation can be iterated to obtain an infinite increasing sequence of growing triangles, all made of isometric copies of $T$.
The union of all these triangles yields a tiling of the whole plane by isometric copies of $T$.
In this tiling, isometric copies of $T$ appear in infinitely many orientations (this is due to the angles $\arctan(\tfrac{1}{2})$ and $\arctan(2)$ of $T$ not being rational multiples of $\pi$).
Despite this, all the vertices have rational coordinates.
The pinwheel tilings
Radin relied on the above construction of Conway to define pinwheel tilings.
Formally, the pinwheel tilings are the tilings whose tiles are isometric copies of $T$, in which a tile may intersect another tile only either on a whole side or on half the length-$2$ side, and such that the following property holds.
Given any pinwheel tiling $P$, there is a pinwheel tiling $P'$ which, once each tile is divided into five following the Conway construction and the result is dilated by a factor $\sqrt{5}$, is equal to $P$.
In other words, the tiles of any pinwheel tiling can be grouped in sets of five into homothetic tiles, so that these homothetic tiles form (up to rescaling) a new pinwheel tiling.
The tiling constructed by Conway is a pinwheel tiling, but there are uncountably many other different pinwheel tilings.
They are all locally indistinguishable (i.e., they have the same finite patches).
They all share with the Conway tiling the property that tiles appear in infinitely many orientations (and vertices have rational coordinates).
The main result prove |
https://en.wikipedia.org/wiki/IMGT | IMGT, or the international ImMunoGeneTics information system, is a collection of databases and resources for immunoinformatics, particularly the V, D, J, and C gene sequences, as well as other tools and data related to the adaptive immune system. IMGT/LIGM-DB, the first and still largest database hosted as part of IMGT, contains reference nucleotide sequences for the T-cell receptor and immunoglobulin molecules of 360 species, as of 2023. These genes encode the proteins that are the foundation of adaptive immunity, which allows highly specific recognition and memory of pathogens.
History
IMGT was founded in June 1989 by Marie-Paule Lefranc, an immunologist working at the University of Montpellier. The project was presented at the 10th Human Genome Mapping Workshop, and resulted in the recognition of the V, D, J, and C regions as genes. The first resource created was IMGT/LIGM-DB, a reference for nucleotide sequences of the T-cell receptors and immunoglobulins of humans, and later of other vertebrate species. IMGT was created under the auspices of the Laboratoire d'ImmunoGénétique Moléculaire at the University of Montpellier as well as the French National Centre for Scientific Research (CNRS).
As both T-cell receptors and immunoglobulin molecules are built through a process of recombination of nucleotide sequences, the annotation of the building-block regions and their roles is unique within the genome. To standardize terminology and references, the IMGT-NC was created in 1992 and recognized by the International Union of Immunological Societies as a nomenclature subcommittee. Other tools include IMGT/Collier-de-Perles, a method for two-dimensional representation of receptor amino acid sequences, and IMGT/mAb-DB, a database of monoclonal antibodies. The IPD-IMGT/HLA Database, now maintained by the HLA Informatics Group and the primary reference for human HLA, originated in part with IMGT; it was merged with the Immuno Polymorphism Database in 2003 to form the current reference.
Since 2015, IMGT h |
https://en.wikipedia.org/wiki/Wolf%20effect | The Wolf effect (sometimes Wolf shift) is a frequency shift in the electromagnetic spectrum.
The phenomenon occurs in several closely related phenomena in radiation physics, with analogous effects occurring in the scattering of light. It was first predicted by Emil Wolf in 1987 and subsequently confirmed in the laboratory in acoustic sources by Mark F. Bocko, David H. Douglass, and Robert S. Knox, and a year later, in 1988, in optical sources by Dean Faklis and George Morris.
Theoretical description
In optics, two non-Lambertian sources that emit beamed energy can interact in a way that causes a shift in the spectral lines. It is analogous to a pair of tuning forks with similar frequencies (pitches), connected together mechanically with a sounding board; there is a strong coupling that results in the resonant frequencies getting "dragged down" in pitch. The Wolf effect requires that the waves from the sources are partially coherent, with the wavefronts being partially in phase. Laser light is coherent, while candlelight is incoherent, each photon having random phase. The effect can produce either redshifts or blueshifts, depending on the observer's point of view, but is redshifted when the observer is head-on.
For two sources interacting while separated by a vacuum, the Wolf effect cannot produce shifts greater than the linewidth of the source spectral line, since it is a position-dependent change in the distribution of the source spectrum, not a method by which new frequencies may be generated. However, when interacting with a medium, in combination with effects such as Brillouin scattering it may produce distorted shifts greater than the linewidth of the source.
Notes
Scattering
Spectroscopy |
https://en.wikipedia.org/wiki/Simulink | Simulink is a MATLAB-based graphical programming environment for modeling, simulating and analyzing multidomain dynamical systems. Its primary interface is a graphical block diagramming tool and a customizable set of block libraries. It offers tight integration with the rest of the MATLAB environment and can either drive MATLAB or be scripted from it. Simulink is widely used in automatic control and digital signal processing for multidomain simulation and model-based design.
Add-on products
MathWorks and other third-party hardware and software products can be used with Simulink. For example, Stateflow extends Simulink with a design environment for developing state machines and flow charts.
MathWorks claims that, coupled with another of their products, Simulink can automatically generate C source code for real-time implementation of systems. As the efficiency and flexibility of the generated code improve, Simulink is becoming more widely adopted for production systems, in addition to being a tool for embedded system design work, because of its flexibility and capacity for quick iteration. Embedded Coder creates code efficient enough for use in embedded systems.
Simulink Real-Time (formerly known as xPC Target), together with x86-based real-time systems, is an environment for simulating and testing Simulink and Stateflow models in real-time on the physical system. Another MathWorks product also supports specific embedded targets. When used with other generic products, Simulink and Stateflow can automatically generate synthesizable VHDL and Verilog.
Simulink Verification and Validation enables systematic verification and validation of models through modeling style checking, requirements traceability and model coverage analysis. Simulink Design Verifier uses formal methods to identify design errors like integer overflow, division by zero and dead logic, and generates test case scenarios for model checking within the Simulink environment.
SimEvents is used to add a library of |
https://en.wikipedia.org/wiki/Gossip%20protocol | A gossip protocol or epidemic protocol is a procedure or process of computer peer-to-peer communication that is based on the way epidemics spread. Some distributed systems use peer-to-peer gossip to ensure that data is disseminated to all members of a group. Some ad-hoc networks have no central registry and the only way to spread common data is to rely on each member to pass it along to their neighbors.
Communication
The concept of gossip communication can be illustrated by the analogy of office workers spreading rumors. Let's say each hour the office workers congregate around the water cooler. Each employee pairs off with another, chosen at random, and shares the latest gossip. At the start of the day, Dave starts a new rumor: he comments to Bob that he believes that Charlie dyes his mustache. At the next meeting, Bob tells Alice, while Dave repeats the idea to Eve. After each water cooler rendezvous, the number of individuals who have heard the rumor roughly doubles (though this doesn't account for gossiping twice to the same person; perhaps Dave tries to tell the story to Frank, only to find that Frank already heard it from Alice). Computer systems typically implement this type of protocol with a form of random "peer selection": with a given frequency, each machine picks another machine at random and shares any rumors.
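A toy simulation of the doubling behavior described above. This is a generic push-gossip sketch; the node count, single-seed start, and uniform peer selection are our assumptions, not a specific protocol.
```python
import random

def rounds_to_spread(n_nodes, seed=42):
    """Push gossip: every round, each informed node tells one peer chosen at random."""
    rng = random.Random(seed)
    informed = {0}  # node 0 starts the rumor
    rounds = 0
    while len(informed) < n_nodes:
        for _ in list(informed):
            informed.add(rng.randrange(n_nodes))  # the chosen peer may already know
        rounds += 1
    return rounds

print(rounds_to_spread(1000))  # roughly doubles per round early on: O(log n) rounds overall
```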
Variants and styles
There are probably hundreds of variants of specific gossip-like protocols because each use-scenario is likely to be customized to the organization's specific needs.
For example, a gossip protocol might employ some of these ideas:
The core of the protocol involves periodic, pairwise, inter-process interactions.
The information exchanged during these interactions is of bounded size.
When agents interact, the state of at least one agent changes to reflect the state of the other.
Reliable communication is not assumed.
The frequency of the interactions is low compared to typical message latencies so that the protocol cos |
https://en.wikipedia.org/wiki/Radiogenic%20nuclide | A radiogenic nuclide is a nuclide that is produced by a process of radioactive decay. It may itself be radioactive (a radionuclide) or stable (a stable nuclide).
Radiogenic nuclides (more commonly referred to as radiogenic isotopes) form some of the most important tools in geology. They are used in two principal ways:
In comparison with the quantity of the radioactive 'parent isotope' in a system, the quantity of the radiogenic 'daughter product' is used as a radiometric dating tool (e.g. uranium–lead geochronology; see the sketch after this list).
In comparison with the quantity of a non-radiogenic isotope of the same element, the quantity of the radiogenic isotope is used to define its isotopic signature (e.g. 206Pb/204Pb). This technique is discussed in more detail under the heading isotope geochemistry.
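A numerical sketch of the first use above (radiometric dating): for 238U → 206Pb, the standard decay relation D/P = e^(λt) − 1 can be inverted for the age t. The measured ratio below is hypothetical; the decay constant is the commonly used value.
```python
import math

LAMBDA_U238 = 1.55125e-10  # decay constant of 238U, per year

def u_pb_age_years(pb206_over_u238):
    """Invert D/P = exp(lambda * t) - 1 for t, with D = radiogenic 206Pb and P = 238U."""
    return math.log(1.0 + pb206_over_u238) / LAMBDA_U238

# A mineral with a measured radiogenic 206Pb/238U ratio of 0.5 (hypothetical):
print(f"{u_pb_age_years(0.5) / 1e9:.2f} billion years")  # 2.61 billion years
```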
Examples
Some naturally occurring isotopes are entirely radiogenic, but all those are radioactive isotopes, with half-lives too short to have occurred primordially and still exist today. Thus, they are only present as radiogenic daughters of either ongoing decay processes, or else cosmogenic (cosmic ray induced) processes that produce them in nature freshly. A few others are naturally produced by nucleogenic processes (natural nuclear reactions of other types, such as neutron absorption).
For radiogenic isotopes that decay slowly enough, or that are stable isotopes, a primordial fraction is always present, since all sufficiently long-lived and stable isotopes do in fact naturally occur primordially. An additional fraction of some of these isotopes may also occur radiogenically.
Lead is perhaps the best example of a partly radiogenic substance, as all four of its stable isotopes (204Pb, 206Pb, 207Pb, and 208Pb) are present primordially, in known and fixed ratios. However, 204Pb is only present primordially, while the other three isotopes may also occur as radiogenic decay products of uranium and thorium. Specifically, 206Pb is formed from 238U, 207Pb from 235U, and 208Pb from 232Th. In ro |
https://en.wikipedia.org/wiki/UgMicroSatdb | UgMicroSatdb (UniGene Microsatellites database) is a database of microsatellites present in UniGene.
See also
Microsatellites
UniGene |
https://en.wikipedia.org/wiki/Cimora | Cimora is a Peruvian term for a brew with hallucinogenic properties made from the "San Pedro" cactus (Trichocereus pachanoi) and other plants, such as chamico (Datura stramonium), used traditionally for shamanic purposes and healing in Peru and Bolivia. The name is also used to describe a number of both hallucinogenic and non-hallucinogenic plants in the region, some of which are used in traditional medicines. Which plants go by the name cimora is an ethnobotanical problem that has been debated at great length in many different journals. The term cimora is said to refer to algo malo, meaning something bad. San Pedro goes by many names, including pachanoi, aguacolla, el remedio, gigantón, and cactus of the four winds. The ritualistic use of the brew is similar to that of ayahuasca, a South American brew used as a traditional spirit medicine in Brazil, although the active ingredient in ayahuasca is DMT while the active ingredient in cimora is mescaline. The use of cimora and the rituals involved have evolved throughout history under the influence of those who controlled Peru at different stages, although it has almost always involved the use of the San Pedro cactus and its mescaline content.
Cimora (drink)
Plants and admixtures in the cimora brew
The main ingredient in the brew is the cactus Trichocereus pachanoi, also known as San Pedro, which contains mescaline, the compound responsible for the hallucinogenic effects of cimora. Other plants are commonly included in the mixture, such as Neoraimondia arequipensis (syn. N. macrostibas), Brugmansia arborea, Pedilanthus tithymaloides, Datura stramonium and Isotoma longiflora. Other ingredients, such as powdered bones, archaeological dust from sacred sites, or cemetery dust, are added if the illness is thought to be caused by black magic.
Effects of the brew
Trichocereus pachanoi is the main ingredient in cimora, which contains concentrations of mescaline. This ingredient causes a nu |
https://en.wikipedia.org/wiki/Dr.%20Neil%20Trivett%20Global%20Atmosphere%20Watch%20Observatory | The Dr. Neil Trivett Global Atmosphere Watch Observatory is an atmospheric baseline station operated by Environment and Climate Change Canada, located south-southwest of Alert, Nunavut, on the north-eastern tip of Ellesmere Island, south of the geographic North Pole.
The observatory is the northernmost of 31 global stations in an international network coordinated by the World Meteorological Organization (WMO) under its Global Atmosphere Watch (GAW) program to study the long-term effects of pollution on the atmospheric environment. Among these 31 stations, Alert is one of three greenhouse gas "intercomparison supersites", along with Mauna Loa in Hawaii and Cape Grim in Australia, which, due to their locations far from industrial activity, provide the international scientific community with a baseline record of atmospheric chemistry.
Geography
The observatory is located on a plateau south of Canadian Forces Station (CFS) Alert, which is itself located on the shore of the Lincoln Sea near the mouth of the Nares Strait. The region is characterized by recent glacial activity, with still-extant glaciers visible among the peaks of the United States Range to the west. The landscape immediately surrounding the observatory is undulating, marked by cliffs and crevasses and a number of small rivers which can become impassable during freshet.
To the south, the Winchester Hills are the dominant visible feature. A number of small freshwater lakes provide CFS Alert (and by extension, the observatory) with drinking water.
Due to its high latitude, the observatory experiences 24-hour daylight from the beginning of April to early September, while the sun remains below the horizon from mid-October to late February, when both civil polar night and nautical polar night occur. The intermediate periods are marked by a slight diurnal cycle. The dark season is responsible for much of the unique atmospheric chemistry that occurs during polar sunri |
https://en.wikipedia.org/wiki/Network%20topology | Network topology is the arrangement of the elements (links, nodes, etc.) of a communication network. Network topology can be used to define or describe the arrangement of various types of telecommunication networks, including command and control radio networks, industrial fieldbusses and computer networks.
Network topology is the topological structure of a network and may be depicted physically or logically. It is an application of graph theory wherein communicating devices are modeled as nodes and the connections between the devices are modeled as links or lines between the nodes. Physical topology is the placement of the various components of a network (e.g., device location and cable installation), while logical topology illustrates how data flows within a network. Distances between nodes, physical interconnections, transmission rates, or signal types may differ between two different networks, yet their logical topologies may be identical. A network’s physical topology is a particular concern of the physical layer of the OSI model.
Examples of network topologies are found in local area networks (LAN), a common computer network installation. Any given node in the LAN has one or more physical links to other devices in the network; graphically mapping these links results in a geometric shape that can be used to describe the physical topology of the network. A wide variety of physical topologies have been used in LANs, including ring, bus, mesh and star. Conversely, mapping the data flow between the components determines the logical topology of the network. In comparison, Controller Area Networks, common in vehicles, are primarily distributed control system networks of one or more controllers interconnected with sensors and actuators over, invariably, a physical bus topology.
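To make the graph-theory view above concrete, here is a minimal sketch (the node names and five-node size are arbitrary) representing two physical topologies as adjacency lists; the node degrees alone already distinguish a star's hub from a ring's uniform structure.
```python
# Five nodes A-E wired as a star (A is the hub) and as a ring.
star = {"A": ["B", "C", "D", "E"], "B": ["A"], "C": ["A"], "D": ["A"], "E": ["A"]}
ring = {"A": ["E", "B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "E"], "E": ["D", "A"]}

for name, links in (("star", star), ("ring", ring)):
    degrees = {node: len(neighbors) for node, neighbors in links.items()}
    print(name, degrees)  # star: hub has degree 4; ring: every node has degree 2
```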
Topologies
Two basic categories of network topologies exist, physical topologies and logical topologies.
The transmission medium layout used to link devices is the physical topology of t |
https://en.wikipedia.org/wiki/BioCyc%20database%20collection | The BioCyc database collection is an assortment of organism specific Pathway/Genome Databases (PGDBs) that provide reference to genome and metabolic pathway information for thousands of organisms. As of July 2023, there were over 20,040 databases within BioCyc. SRI International, based in Menlo Park, California, maintains the BioCyc database family.
Categories of Databases
Based on the degree of manual curation, the BioCyc database family is divided into three tiers:
Tier 1: Databases that have received at least one year of literature-based manual curation. Currently there are seven databases in Tier 1. Of the seven, MetaCyc is a major database containing almost 2500 metabolic pathways from many organisms. The other important Tier 1 database is HumanCyc, which contains around 300 metabolic pathways found in humans. The remaining five databases are EcoCyc (E. coli), AraCyc (Arabidopsis thaliana), YeastCyc (Saccharomyces cerevisiae), LeishCyc (Leishmania major Friedlin) and TrypanoCyc (Trypanosoma brucei).
Tier 2: Databases that were computationally predicted but have received moderate manual curation (most with 1–4 months of curation). Tier 2 databases are open to manual curation by scientists interested in a particular organism; the tier currently contains 43 organism databases.
Tier 3: Databases that were computationally predicted by PathoLogic and have received no manual curation. As with Tier 2, Tier 3 databases are also open to curation by interested scientists.
Software tools
The BioCyc website contains a variety of software tools for searching, visualizing, comparing, and analyzing genome and pathway information. It includes a genome browser, and browsers for metabolic and regulatory networks. The website also includes tools for painting large-scale ("omics") datasets onto metabolic and regulatory networks, and onto the genome.
Use in Research
Since the BioCyc database family comprises a long list of organism-specific dat |
https://en.wikipedia.org/wiki/Unipotent | In mathematics, a unipotent element r of a ring R is one such that r − 1 is a nilpotent element; in other words, (r − 1)n is zero for some n.
In particular, a square matrix M is a unipotent matrix if and only if its characteristic polynomial P(t) is a power of t − 1. Thus all the eigenvalues of a unipotent matrix are 1.
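A numerical sketch of the equivalent characterization (M is unipotent iff M − I is nilpotent); the use of numpy and the example matrix are ours:
```python
import numpy as np

def is_unipotent(m, tol=1e-9):
    """An n x n matrix M is unipotent iff (M - I)^n = 0, i.e. all eigenvalues equal 1."""
    n = m.shape[0]
    nilpotent_candidate = m - np.eye(n)
    return np.allclose(np.linalg.matrix_power(nilpotent_candidate, n), 0, atol=tol)

m = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [0.0, 0.0, 1.0]])  # upper triangular with 1's on the diagonal
print(is_unipotent(m))           # True: characteristic polynomial is (t - 1)^3
```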
The term quasi-unipotent means that some power is unipotent, for example for a diagonalizable matrix with eigenvalues that are all roots of unity.
In the theory of algebraic groups, a group element is unipotent if it acts unipotently in a certain natural group representation. A unipotent affine algebraic group is then a group with all elements unipotent.
Definition
Definition with matrices
Consider the group $\mathbb{U}_n$ of upper-triangular $n \times n$ matrices with $1$'s along the diagonal, so they are the group of matrices of the form
$$\begin{pmatrix} 1 & * & \cdots & * \\ 0 & 1 & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}.$$
Then, a unipotent group can be defined as a subgroup of some $\mathbb{U}_n$. Using scheme theory, $\mathbb{U}_n$ can be defined as a group scheme,
and an affine group scheme is unipotent if it is a closed subgroup scheme of such a group scheme.
Definition with ring theory
An element x of an affine algebraic group is unipotent when its associated right translation operator, rx, on the affine coordinate ring A[G] of G is locally unipotent as an element of the ring of linear endomorphism of A[G]. (Locally unipotent means that its restriction to any finite-dimensional stable subspace of A[G] is unipotent in the usual ring-theoretic sense.)
An affine algebraic group is called unipotent if all its elements are unipotent. Any unipotent algebraic group is isomorphic to a closed subgroup of the group of upper triangular matrices with diagonal entries 1, and conversely any such subgroup is unipotent. In particular any unipotent group is a nilpotent group, though the converse is not true (counterexample: the diagonal matrices of GLn(k)).
For example, the standard representation of $\mathbb{U}_n$ on $k^n$ with standard basis $e_1, \ldots, e_n$ has the fixed vector $e_1$.
Definition with representation theory
If a unipo |
https://en.wikipedia.org/wiki/Circle%20grid%20analysis | Circle grid analysis (CGA), also known as circle grid strain analysis, is a method of measuring the strain levels of sheet metal after a part is formed by stamping or drawing. The name itself is a fairly accurate description of the process. Literally, a grid of circles of known diameter is etched to the surface of the sheet metal to be formed. After the part is formed, the circles have been stretched into ellipses. By measuring the longest part of the ellipse (called the “major strain”) and the shortest part of the ellipse (called the “minor strain”), it is possible to determine how close any stamped part is to splitting or fracturing.
The goal of circle grid strain analysis is to predict potential problems before they become problems. Once a forming problem has appeared, circle grid analysis generally cannot help, unless the problem is intermittent enough that a "good" part can still be formed from time to time.
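A minimal sketch of the measurement arithmetic described above, using engineering strain; the circle diameter and measured axes are hypothetical:
```python
def engineering_strain_percent(initial_diameter, measured_axis):
    """Percent strain of one ellipse axis relative to the original etched circle."""
    return 100.0 * (measured_axis - initial_diameter) / initial_diameter

d0 = 2.5                           # etched circle diameter before forming, mm (hypothetical)
major_axis, minor_axis = 3.1, 2.4  # measured ellipse axes after forming, mm

print(engineering_strain_percent(d0, major_axis))  # major strain: 24.0 (%)
print(engineering_strain_percent(d0, minor_axis))  # minor strain: -4.0 (%)
```
Each (minor, major) strain pair can then be plotted against a forming limit diagram to judge how close the part is to splitting.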
See also
Forming limit diagram |
https://en.wikipedia.org/wiki/Rectangle%20packing | Rectangle packing is a packing problem where the objective is to determine whether a given set of small rectangles can be placed inside a given large polygon, such that no two small rectangles overlap. Several variants of this problem have been studied.
Packing identical rectangles in a rectangle
In this variant, there are multiple instances of a single rectangle of size (l,w), and a bigger rectangle of size (L,W). The goal is to pack as many small rectangles as possible into the big rectangle without overlap between any rectangles (small or large). Common constraints of the problem include limiting small rectangle rotation to 90° multiples and requiring that each small rectangle is orthogonal to the large rectangle.
This problem has some applications such as loading of boxes on pallets and, specifically, woodpulp stowage. As an example result: it is possible to pack 147 small rectangles of size (137,95) in a big rectangle of size (1600,1230).
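For comparison with the 147-rectangle result above, a naive single-orientation grid packing gives only a lower bound; the sketch below tries the two axis-aligned orientations:
```python
def grid_pack_count(L, W, l, w):
    """Lower bound: rectangles packed in a plain grid, trying both 90-degree orientations."""
    return max((L // l) * (W // w), (L // w) * (W // l))

print(grid_pack_count(1600, 1230, 137, 95))  # 132 -- mixed-orientation packings reach 147
```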
Packing identical squares in a rectilinear polygon
Given a rectilinear polygon (whose sides meet at right angles) R in the plane, a set S of points in R, and a set of identical squares, the goal is to find the largest number of non-overlapping squares that can be packed in points of S.
Suppose that, for each point p in S, we put a square centered at p. Let GS be the intersection graph of these squares. A square-packing is equivalent to an independent set in GS. Finding a largest square-packing is NP-hard; one may prove this by reducing from 3SAT.
Packing different rectangles in a given rectangle
In this variant, the small rectangles can have varying lengths and widths, and they should be packed in a given large rectangle. The decision problem of whether such a packing exists is NP-hard. This can be proved by a reduction from 3-partition. Given an instance of 3-partition with 3m positive integers a1, ..., a3m, with a total sum of mT, we construct 3m small rectangles, all with a width of 1, such that the length of r |
https://en.wikipedia.org/wiki/6000%20%28number%29 | 6000 (six thousand) is the natural number following 5999 and preceding 6001.
Selected numbers in the range 6001–6999
6001 to 6099
6025 – Rhythm guitarist of the Dead Kennedys from June 1978 to March 1979. Full name is Carlos Cadona.
6028 – centered heptagonal number
6037 – super-prime, prime of the form 2p-1
6042 – 6042 Cheshirecat is a Mars-crossing asteroid.
6047 – safe prime
6053 – Sophie Germain prime
6069 – nonagonal number
6073 – balanced prime
6079 – The serial number Winston Smith is referred to as in the George Orwell novel Nineteen Eighty-Four
6084 = 78², sum of the cubes of the first twelve integers
6089 – highly cototient number
6095 – magic constant of n × n normal magic square and n-Queens Problem for n = 23.
6100 to 6199
6101 – Sophie Germain prime
6105 – triangular number
6113 – Sophie Germain prime, super-prime
6121 – prime of the form 2p-1
6131 – Sophie Germain prime, twin prime with 6133
6133 – 800th prime number, twin prime with 6131
6143 – Thabit number
6144 – 3-smooth number (2^11 × 3)
6173 – Sophie Germain prime
6174 – Kaprekar's constant (see the code sketch below)
6181 – octahedral number
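The 6174 entry above (Kaprekar's constant) can be verified mechanically: repeatedly sort a four-digit number's digits in descending and ascending order and subtract; every four-digit number with at least two distinct digits reaches 6174. A small sketch (the starting value is arbitrary):
```python
def kaprekar_step(n):
    """One step of Kaprekar's routine on a 4-digit number (leading zeros preserved)."""
    digits = f"{n:04d}"
    return int("".join(sorted(digits, reverse=True))) - int("".join(sorted(digits)))

n, steps = 3524, 0  # any 4-digit number with at least two distinct digits
while n != 6174:
    n = kaprekar_step(n)
    steps += 1
print(steps)        # 3: 3524 -> 3087 -> 8352 -> 6174
```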
6200 to 6299
6200 – harmonic divisor number
6201 – square pyramidal number
6211 – cuban prime of the form x = y + 1
6216 – triangular number
6217 – super-prime, prime of the form 2p-1
6229 – super-prime
6232 – amicable number with 6368
6236 – Most widely accepted figure for the number of verses in the Qur'an
6241 = 79², centered octagonal number
6250 – Leyland number
6263 – Sophie Germain prime, balanced prime
6269 – Sophie Germain prime
6280 – decagonal number
6300 to 6399
6311 – super-prime
6317 – balanced prime
6322 – centered heptagonal number
6323 – Sophie Germain prime, balanced prime, super-prime
6328 – triangular number
6329 – Sophie Germain prime
6346 – Number of verses in the Qur'an according to the sect founded by Rashad Khalifa.
6348 – pentagonal pyramidal number
6361 – prime of the form 2p-1, twin prime
6364 – nonagonal number
63 |
https://en.wikipedia.org/wiki/Digital%20Ocean | Digital Ocean, Inc., founded in 1992 by Jeffery Alholm, was a maker of wireless products from 1992 until it was disbanded in 1998.
History
The company was founded in May 1992 by Jeffery Alholm and headquartered in Lenexa, Kansas. The company had several contracts with Apple Inc., AT&T, Aironet Wireless Communications (later acquired by Cisco as its wireless LAN division), Harris Semiconductor, and the United States Department of Defense. It was a co-developer of the IEEE 802.11 wireless standard and of the industry's first 802.11 chipset. It developed the Seahorse, arguably the world's first smartphone. In addition, by specializing in rapid, custom development, the company concluded multiple individual development contracts for application-specific wireless products in vertical markets. Digital Ocean was granted approximately 20 patents for its development of wireless technologies.
In 1998 it was sold with its assets to Harris Semiconductor to become part of their Intersil division; Intersil was then spun off from Harris one year later.
Products
Several products included:
Starfish
The Starfish product lines include the Starfish Wireless Access Point for LocalTalk and EtherTalk Macintosh, the Starfish with Microcellular Roaming Software which enables seamless roaming, and the Starfish II Ethernet Access Point which Business Wire Magazine explained, "The Starfish II connects to wired networks and acts as the access provider for Manta and Digital Ocean's other station products."
Manta
The Manta product lines include the Manta 500EN EtherTalk Wireless Station with AAUI Connection, and the Manta 10BaseT which allows Wireless network connections at full Ethernet speeds.
Grouper Line
The Grouper line of products were networking devices that used spread-spectrum radio waves to communicate. Groupers could be attached to any PowerBook or used freestanding with any desktop Mac. Placing one Grouper on a wired network would have it serve as a hub for up t |
https://en.wikipedia.org/wiki/Amplified%20musculoskeletal%20pain%20syndrome | Amplified musculoskeletal pain syndrome (AMPS) is a long-term condition characterized by amplified pain without an identifiable physical cause. Other common symptoms include changes in skin texture, color, and temperature; changes in hair and nail growth; skin sensitivity to touch, also known as allodynia; and dizziness, especially when moving to a standing position after sitting. In up to 80% of cases, symptoms are caused by psychological trauma or psychological stress. AMPS is also commonly caused by physical injury or illness. Other possible causes of AMPS include Ehlers–Danlos syndrome, myositis, arthritis, and other rheumatologic diseases. Due to the nature of the condition, AMPS is often not diagnosed when it first presents. AMPS is diagnosed through a review of a patient's medical history, as well as multiple tests to rule out other conditions, such as bone fracture.
AMPS is treated through various methods. As there is no cure for the condition, pain management and treatment of potential causes, such as psychological stress, are used. This can include psychotherapy, physical therapy, and other pain management treatments. The prognosis for the condition is very positive, with the majority of affected individuals able to recover completely, but management involves gradual improvement over time, which can leave many individuals feeling a lack of motivation or progress. Anyone can be affected by AMPS, though the most commonly affected individuals are children and adolescents, and up to 80% of affected individuals are women.
Signs and symptoms
Amplified musculoskeletal pain syndrome is characterized by various symptoms. This includes chronic pain, allodynia, abdominal pain, dysphagia, dizziness, fatigue, headache, joint pain, movement issues, such as stiffness, shakiness, or coordination difficulty, swelling, fast heart rate, skin texture, color, or temperature changes, paresthesia, and c |
https://en.wikipedia.org/wiki/Mutation%E2%80%93selection%20balance | Mutation–selection balance is an equilibrium in the number of deleterious alleles in a population that occurs when the rate at which deleterious alleles are created by mutation equals the rate at which deleterious alleles are eliminated by selection. The majority of genetic mutations are neutral or deleterious; beneficial mutations are relatively rare. The resulting influx of deleterious mutations into a population over time is counteracted by negative selection, which acts to purge deleterious mutations. Setting aside other factors (e.g., balancing selection, and genetic drift), the equilibrium number of deleterious alleles is then determined by a balance between the deleterious mutation rate and the rate at which selection purges those mutations.
Mutation–selection balance was originally proposed to explain how genetic variation is maintained in populations, although several other ways for deleterious mutations to persist are now recognized, notably balancing selection. Nevertheless, the concept is still widely used in evolutionary genetics, e.g. to explain the persistence of deleterious alleles as in the case of spinal muscular atrophy, or, in theoretical models, mutation-selection balance can appear in a variety of ways and has even been applied to beneficial mutations (i.e. balance between selective loss of variation and creation of variation by beneficial mutations).
Haploid population
As a simple example of mutation-selection balance, consider a single locus in a haploid population with two possible alleles: a normal allele A with frequency $1-q$, and a mutated deleterious allele B with frequency $q$, which has a small relative fitness disadvantage of $s$. Suppose that deleterious mutations from A to B occur at rate $\mu$, and the reverse beneficial mutation from B to A occurs rarely enough to be negligible (e.g. because the mutation rate is so low). Then, each generation selection eliminates deleterious mutants, reducing $q$ by an amount $sq(1-q)$, while mutation creat |
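A numerical sketch of this balance (the parameter values are arbitrary): iterating selection followed by mutation converges to the classical equilibrium $\hat{q} \approx \mu/s$.
```python
def equilibrium_frequency(mu=1e-5, s=0.01, generations=20_000):
    """Iterate one-locus haploid dynamics: selection against B, then A -> B mutation."""
    q = 0.0
    for _ in range(generations):
        q_after_selection = q * (1 - s) / (1 - s * q)  # mean fitness is 1 - s*q
        q = q_after_selection + mu * (1 - q_after_selection)
    return q

print(equilibrium_frequency())  # ~0.001
print(1e-5 / 0.01)              # classical approximation mu/s = 0.001
```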
https://en.wikipedia.org/wiki/The%20Limits%20to%20Growth | The Limits to Growth (LTG) is a 1972 report that discussed the possibility of exponential economic and population growth with finite supply of resources, studied by computer simulation. The study used the World3 computer model to simulate the consequence of interactions between the Earth and human systems. The model was based on the work of Jay Forrester of MIT, as described in his book World Dynamics.
Commissioned by the Club of Rome, the findings of the study were first presented at international gatherings in Moscow and Rio de Janeiro in the summer of 1971. The report's authors are Donella H. Meadows, Dennis L. Meadows, Jørgen Randers, and William W. Behrens III, representing a team of 17 researchers.
The report's findings suggest that, in the absence of significant alterations in resource utilization, it is highly likely that there would be an abrupt and unmanageable decrease in both population and industrial capacity. Although the report faced severe criticism and scrutiny upon its initial release, subsequent research aimed at verifying its predictions has consistently supported the conclusion that there have been inadequate changes since 1972 to substantially alter its essence.
Since its publication, some 30 million copies of the book in 30 languages have been purchased. It continues to generate debate and has been the subject of several subsequent publications.
Beyond the Limits and The Limits to Growth: The 30-Year Update were published in 1992 and 2004, respectively. In 2012, a 40-year forecast from Jørgen Randers, one of the book's original authors, was published as 2052: A Global Forecast for the Next Forty Years, and in 2022 two of the original Limits to Growth authors, Dennis Meadows and Jørgen Randers, joined 19 other contributors to produce Limits and Beyond.
Purpose
In commissioning the MIT team to undertake the project that resulted in LTG, the Club of Rome had three objectives:
Gain insights into the limits of our world system and the constraints it put |
https://en.wikipedia.org/wiki/Lemniscus%20%28anatomy%29 | A lemniscus (Greek for ribbon or band) is a bundle of secondary sensory fibers in the brainstem. The medial lemniscus and lateral lemniscus terminate in specific relay nuclei of the diencephalon. The trigeminal lemniscus is sometimes considered as the cephalic part of the medial lemniscus. The spinal lemniscus constitutes the spinothalamic tract. |
https://en.wikipedia.org/wiki/Cyprus%20lunar%20sample%20displays | The Cyprus lunar sample displays are part of two commemorative plaques consisting of tiny fragments of Moon specimens brought back with the Apollo 11 and Apollo 17 lunar missions. These plaques were given to the people of the Republic of Cyprus by United States President Richard Nixon as goodwill gifts.
Description
Apollo 11
Apollo 17
History
The international mystery of how the Cyprus goodwill Moon rock came to be offered for sale on the black market begins in 1960. During the 1974 coup, the Presidential Palace burned, and the Cyprus "Moon rock" plaque from Apollo 17 was considered lost. Subsequent information revealed that the display had never actually been given to the Cyprus government; rather, it was kept at the US embassy in Nicosia during the 1974 coup d'état and Turkish invasion, which delayed the presentation of the plaque. American diplomatic personnel then left the island and the display went missing, showing up on the black market years later in the hands of the son of a former US diplomat.
NASA reported in May 2010 that the Office of Inspector General had recovered the Apollo 17 plaque and was preparing to re-gift it. According to Robert Pearlman, the whereabouts of the Cyprus Apollo 11 goodwill lunar display are unknown.
See also
List of Apollo lunar sample displays |
https://en.wikipedia.org/wiki/Actuary | An actuary is a professional with advanced mathematical skills who deals with the measurement and management of risk and uncertainty. The name of the corresponding field is actuarial science which covers rigorous mathematical calculations such as the survival function and stochastic process. These risks can affect both sides of the balance sheet and require asset management, liability management, and valuation skills. Actuaries provide assessments of financial security systems, with a focus on their complexity, their mathematics, and their mechanisms.
While the concept of insurance dates to antiquity, the concepts needed to scientifically measure and mitigate risks have their origins in the 17th century studies of probability and annuities. Actuaries of the 21st century require analytical skills, business knowledge, and an understanding of human behavior and information systems to design and manage programs that control risk. The actual steps needed to become an actuary are usually country-specific; however, almost all processes share a rigorous schooling or examination structure and take many years to complete.
The profession has consistently been ranked as one of the most desirable. In various studies in the United States, being an actuary was ranked first or second multiple times since 2010, and in the top 20 for most of the past decade.
Responsibilities
Actuaries use skills primarily in mathematics, particularly calculus-based probability and mathematical statistics, but also economics, computer science, finance, and business. For this reason, actuaries are essential to the insurance and reinsurance industries, either as staff employees or as consultants; to other businesses, including sponsors of pension plans; and to government agencies such as the Government Actuary's Department in the United Kingdom or the Social Security Administration in the United States of America. Actuaries assemble and analyze data to estimate the probability and likely cost of the |
https://en.wikipedia.org/wiki/Hypoplasia | Hypoplasia (from Ancient Greek 'under' + 'formation'; adjective form hypoplastic) is underdevelopment or incomplete development of a tissue or organ. Although the term is not always used precisely, it properly refers to an inadequate or below-normal number of cells. Hypoplasia is similar to aplasia, but less severe. It is technically not the opposite of hyperplasia (too many cells). Hypoplasia is a congenital condition, while hyperplasia generally refers to excessive cell growth later in life. (Atrophy, the wasting away of already existing cells, is technically the direct opposite of both hyperplasia and hypertrophy.)
Hypoplasia can be present in any tissue or organ. It is descriptive of many medical conditions, including underdevelopment of organs such as:
Breasts during puberty
Testes in Klinefelter's syndrome
Ovaries in Fanconi anemia, gonadal dysgenesis, trisomy X
Thymus in DiGeorge syndrome
Labia majora in popliteal pterygium syndrome
Corpus callosum, connecting the two sides of the brain, in agenesis of the corpus callosum
Cerebellum caused by mutation in the reelin gene
Tooth caused by oral pathology, such as Turner's hypoplasia
Chambers of the heart in hypoplastic left heart syndrome and hypoplastic right heart syndrome
Optic nerve in optic nerve hypoplasia
Sacrum in sacral agenesis
Facial muscle in asymmetric crying facies
Thumb from birth
Lungs, often as a result of oligohydramnios during gestation or the existence of congenital diaphragmatic hernia
Small bowel in coeliac disease
Fingers and ears in harlequin-type ichthyosis
Mandible in congenital hypothyroidism
See also
Atrophy, when an existing part wastes away
List of biological development disorders |
https://en.wikipedia.org/wiki/Naum%20Akhiezer | Naum Ilyich Akhiezer (6 March 1901 – 3 June 1980) was a Soviet and Ukrainian mathematician of Jewish origin, known for his works in approximation theory and the theory of differential and integral operators. He is also known as the author of classical books on various subjects in analysis, and for his work on the history of mathematics. He is the brother of the theoretical physicist Aleksandr Akhiezer.
Biography
Naum Akhiezer was born in Cherykaw (now in Belarus). He studied at the Kyiv Institute of Public Education (now Taras Shevchenko National University of Kyiv). In 1928, he defended his PhD thesis "Aerodynamical Investigations" under the supervision of Dmitry Grave. From 1928 to 1933, he worked at Kyiv University and at the Kyiv Aviation Institute.
In 1933, Naum Akhiezer moved to Kharkiv. From 1933 to his death, except for the years of war and evacuation, he was a professor at Kharkov University and at other institutes in Kharkiv. From 1935 to 1940 and from 1947 to 1950 he was director of the Kharkov Institute of Mathematics and Mechanics. For many years he headed the Kharkov Mathematical Society.
Work
Akhiezer obtained important results in approximation theory (in particular, on extremal problems, constructive function theory, and the problem of moments), where he masterfully applied the methods of the geometric theory of functions of a complex variable (especially conformal mappings and the theory of Riemann surfaces) and of functional analysis. He found the fundamental connection between the inverse problem for important classes of differential and finite-difference operators of the second order with a finite number of gaps in the spectrum, and the Jacobi inversion problem for Abelian integrals. This connection led to explicit solutions of the inverse problem for the so-called finite-gap operators. |
https://en.wikipedia.org/wiki/List%20of%20video%20game%20console%20emulators | The following is a list of notable video game console emulators.
Arcade
Visual Pinball
Atari
Atari 2600
Stella
Nintendo
Home consoles
Nintendo Entertainment System
FCEUX
NESticle
Nestopia
Super NES
Snes9x
ZSNES
Nintendo 64
Mupen64Plus
Project64
Project Unreality
UltraHLE
GameCube/Wii
Dolphin
Wii U
Cemu
Handhelds
Game Boy
Wzonka-Lad
Game Boy Advance
VisualBoyAdvance (Also supports Game Boy and Game Boy Color)
VisualBoyAdvance-M
Nintendo 3DS
Citra
Hybrid
Nintendo Switch
Yuzu
SNK
Neo Geo CD
NeoCD
Sony
Home consoles
PlayStation
AdriPSX
bleem!
bleemcast!
Connectix Virtual Game Station
ePSXe
PCSX-Reloaded
PlayStation 2
PCSX2
PlayStation 3
RPCS3
PlayStation 4
PlayStation 4 emulators remain experimental. A website promoting a supposed PS4 emulator, "PCSX4", is a scam.
Handhelds
PlayStation Portable
PPSSPP
Frontends
RetroArch
Multi-system emulators
Multi-system emulators are capable of emulating the functionality of multiple systems.
higan
MAME (Multiple Arcade Machine Emulator)
Mednafen
MESS (Multi Emulator Super System), formerly a stand-alone application and now part of MAME
OpenEmu
See also
Emulator
List of computer system emulators
List of emulators
Video game console emulator |
https://en.wikipedia.org/wiki/Counterplan%20%28Soviet%20planning%29 | In the economy of the Soviet Union and other communist states of the Soviet Bloc, the counterplan was a plan put forth by workers of an enterprise (or its structural unit) to exceed the expectations of the state plan allocated for the enterprise/unit. It was an important part of the socialist competition.
According to the Great Soviet Encyclopedia, the idea of the counterplan was put forth by the workers of the Karl Marx Plant, Leningrad, in June 1930, during the first five-year plan.
Since the 1960s, counterplans, in the form of obligations undertaken as part of Socialist emulation to execute state plans (annual, quarterly, monthly) ahead of schedule, were common in the Soviet Union and other communist states. |
https://en.wikipedia.org/wiki/Pregnancy%20over%20age%2050 | Pregnancy over the age of 50 has become possible for more women due to advances in assisted reproductive technology, in particular egg donation. Typically, a woman's fecundity ends with menopause, which, by definition, is 12 consecutive months without having had any menstrual flow at all. During perimenopause, the menstrual cycle and the periods become irregular and eventually stop altogether. The female biological clock can vary greatly from woman to woman. A woman's individual level of fertility can be tested through a variety of methods.
In the United States, between 1997 and 1999, 539 births were reported among mothers over age 50 (four per 100,000 births), with 194 being over 55.
The oldest recorded mother to conceive was 73 years old. According to statistics from the Human Fertilisation and Embryology Authority, in the UK more than 20 babies are born to women over age 50 per year through in vitro fertilization with the use of donor oocytes (eggs).
Maria del Carmen Bousada de Lara formerly held the record of oldest verified mother; she was aged 66 years 358 days when she gave birth to twins; she was 130 days older than Adriana Iliescu, who gave birth in 2005 to a baby girl. In both cases, the children were conceived through IVF with donor eggs. The oldest verified mother to conceive naturally (listed currently in the Guinness Records) is Dawn Brooke (Guernsey); she conceived a son at the age of 59 in 1997.
Erramatti Mangamma currently holds the record for being the oldest living mother who gave birth at the age of 73 through in-vitro fertilisation via caesarean section in the city of Hyderabad, India. She delivered twin baby girls, making her also the oldest mother to give birth to twins.
The previous record for being the oldest living mother was held by Daljinder Kaur Gill from Amritsar, India who gave birth to a baby boy at age 72 through in-vitro fertilisation.
Age considerations
Menopause typically occurs between 44 and 58 years of age. DNA testi |
https://en.wikipedia.org/wiki/Server-based%20signatures | In cryptography, server-based signatures are digital signatures in which a publicly available server participates in the signature creation process. This is in contrast to conventional digital signatures, which are based on public-key cryptography and public-key infrastructure and assume that signers use their personal trusted computing bases for generating signatures without any communication with servers.
Four different classes of server based signatures have been proposed:
1. Lamport One-Time Signatures. Proposed in 1979 by Leslie Lamport. Lamport one-time signatures are based on cryptographic hash functions. For signing a message, the signer just sends a list of hash values (outputs of a hash function) to a publishing server and therefore the signature process is very fast, though the size of the signature is many times larger, compared to ordinary public-key signature schemes.
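To illustrate the mechanism, here is a minimal sketch of Lamport one-time signing in Python (SHA-256 and a 256-bit message digest are illustrative choices; the publishing-server step is omitted):

```python
import hashlib
import secrets

def keygen(bits=256):
    # Private key: two random preimages per message-digest bit.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(bits)]
    # Public key: the hash of every preimage (this is what would be published).
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def _digest_bits(message: bytes, bits=256):
    d = hashlib.sha256(message).digest()
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(bits)]

def sign(message: bytes, sk):
    # Reveal one preimage per digest bit; signing is just table lookups, hence fast.
    return [pair[bit] for pair, bit in zip(sk, _digest_bits(message))]

def verify(message: bytes, signature, pk) -> bool:
    return all(hashlib.sha256(s).digest() == pair[bit]
               for s, pair, bit in zip(signature, pk, _digest_bits(message)))

sk, pk = keygen()
sig = sign(b"hello", sk)          # a key pair must never be reused
print(verify(b"hello", sig, pk))  # True
```

Note that the signature consists of 256 values of 32 bytes each (8 KiB), which illustrates why Lamport signatures are many times larger than ordinary public-key signatures.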
2. On-line/off-line Digital Signatures. First proposed in 1989 by Even, Goldreich and Micali in order to speed up the signature creation procedure, which is usually much more time-consuming than verification. In the case of RSA, it may be one thousand times slower than verification. On-line/off-line digital signatures are created in two phases. The first phase is performed off-line, possibly even before the message to be signed is known. The second (message-dependent) phase is performed on-line and involves communication with a server. In the first (off-line) phase, the signer uses a conventional public-key digital signature scheme to sign a public key of the Lamport one-time signature scheme. In the second phase, a message is signed by using the Lamport signature scheme. Some later works have improved the efficiency of the original solution by Even et al.
3. Server-Supported Signatures (SSS). Proposed in 1996 by Asokan, Tsudik and Waidner in order to delegate the use of time-consuming operations of asymmetric cryptography from clients (ordinary users) to a server. For ordin |
https://en.wikipedia.org/wiki/Lichen%20sclerosus | Lichen sclerosus (LS) is a chronic, inflammatory skin disease of unknown cause which can affect any body part of any person but has a strong preference for the genitals (penis, vulva) and is also known as balanitis xerotica obliterans (BXO) when it affects the penis. Lichen sclerosus is not contagious. There is a well-documented increase of skin cancer risk in LS, potentially improvable with treatment. LS in adult women is normally incurable, but improvable with treatment, and often gets progressively worse if not treated properly. Most males with mild or intermediate disease restricted to the foreskin or glans can be cured by either medical or surgical treatment.
Signs and symptoms
LS can occur without symptoms. White patches in the affected area, itching, pain, dyspareunia (in genital LS), easier bruising, cracking, tearing and peeling, and hyperkeratosis are common symptoms in both men and women. In women, the condition most commonly occurs on the vulva and around the anus with ivory-white elevations that may be flat and glistening.
In males, the disease may take the form of whitish patches on the foreskin and its narrowing (preputial stenosis), forming an "indurated ring", which can make retraction more difficult or impossible (phimosis). In addition there can be lesions, white patches or reddening on the glans. In contrast to women, anal involvement is less frequent. Meatal stenosis, making it more difficult or even impossible to urinate, may also occur.
On the non-genital skin, the disease may manifest as porcelain-white spots with small visible plugs inside the orifices of hair follicles or sweat glands on the surface. Thinning of the skin may also occur.
Psychological effect
Distress due to the discomfort and pain of lichen sclerosus is normal, as are concerns with self-esteem and sex. Counseling can help.
According to the National Vulvodynia Association, which also supports women with lichen sclerosus, vulvo-vaginal conditions can cause feelings of iso |
https://en.wikipedia.org/wiki/Bevel | A bevelled edge (UK) or beveled edge (US) is an edge of a structure that is not perpendicular to the faces of the piece. The words bevel and chamfer overlap in usage; in general usage they are often interchanged, while in technical usage they may sometimes be differentiated as shown in the image at right. A bevel is typically used to soften the edge of a piece for the sake of safety, wear resistance, or aesthetics; or to facilitate mating with another piece.
Applications
Cutting tools
Most cutting tools have a bevelled edge which is apparent when one examines the grind.
Bevel angles can be duplicated using a sliding T bevel.
Graphic design
Typographic bevels are shading and artificial shadows that emulate the appearance of a 3-dimensional letter. The bevel is a relatively common effect in graphic editors such as Photoshop. As such, it is in widespread use in mainstream logos and other design elements.
Glass and mirrors
Bevelled edges are a common aesthetic nicety added to window panes and mirrors.
Geology
Geologists refer to any slope of land into a stratum of different elevation as a bevel.
Sports
In waterskiing, a bevel is the transition area between the side of the ski and the bottom of the ski. Beginners tend to prefer sharp bevels, which allow the ski to glide on the water surface.
In disc golf, the 'beveled edge' was patented in 1983 by Dave Dunipace, who founded Innova Champion Discs. This element transformed the Frisbee into the farther-flying golf discs the sport uses today.
Cards
With a deck of cards, the top portion can be slid back so that the back of the deck is at an angle, a technique used in card tricks.
Semiconductor wafers
In the semiconductor industry, wafers have two typical edge types: a slanted beveled shape or a rounded bullet shape. The edges on the beveled types are called the bevel region, and they are typically ground at a 22 degree angle. While it is not possible to create a complete and functional die with the bevel material |
https://en.wikipedia.org/wiki/Laplace%20operators%20in%20differential%20geometry | In differential geometry there are a number of second-order, linear, elliptic differential operators bearing the name Laplacian. This article provides an overview of some of them.
Connection Laplacian
The connection Laplacian, also known as the rough Laplacian, is a differential operator acting on the various tensor bundles of a manifold, defined in terms of a Riemannian or pseudo-Riemannian metric. When applied to functions (i.e. tensors of rank 0), the connection Laplacian is often called the Laplace–Beltrami operator. It is defined as the trace of the second covariant derivative:
$$\Delta T = \operatorname{tr}\,\nabla^2 T,$$
where $T$ is any tensor, $\nabla$ is the Levi-Civita connection associated to the metric, and the trace is taken with respect to the metric. Recall that the second covariant derivative of $T$ is defined as
$$\nabla^2_{X,Y} T = \nabla_X \nabla_Y T - \nabla_{\nabla_X Y} T.$$
Note that with this definition, the connection Laplacian has negative spectrum. On functions, it agrees with the operator given as the divergence of the gradient.
If the connection of interest is the Levi-Civita connection, one can find a convenient formula for the Laplacian of a scalar function in terms of partial derivatives with respect to a coordinate system:
$$\Delta \phi = \frac{1}{\sqrt{|g|}}\, \partial_\mu \left( \sqrt{|g|}\; g^{\mu\nu}\, \partial_\nu \phi \right),$$
where $\phi$ is a scalar function, $|g|$ is the absolute value of the determinant of the metric (the absolute value is necessary in the pseudo-Riemannian case, e.g. in General Relativity) and $g^{\mu\nu}$ denotes the inverse of the metric tensor.
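As a concrete check of this coordinate formula, the following sketch (assuming SymPy is available; the chart, metric, and test function are chosen purely for illustration) evaluates it for the round metric on the unit 2-sphere:

```python
import sympy as sp

theta, phi = sp.symbols("theta phi", positive=True)
coords = (theta, phi)
g = sp.Matrix([[1, 0], [0, sp.sin(theta) ** 2]])  # round metric on the unit 2-sphere
ginv, sqrtg = g.inv(), sp.sqrt(g.det())           # g^{mu nu} and sqrt|g|
f = sp.cos(theta)                                  # test function (the height z)

# Delta f = |g|^{-1/2} * d_mu( sqrt|g| * g^{mu nu} * d_nu f )
lap = sum(sp.diff(sqrtg * ginv[i, j] * sp.diff(f, coords[j]), coords[i])
          for i in range(2) for j in range(2)) / sqrtg
print(sp.simplify(lap))  # -> -2*cos(theta): an eigenfunction with eigenvalue -2
```

The negative eigenvalue is consistent with the negative-spectrum convention noted above.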
Hodge Laplacian
The Hodge Laplacian, also known as the Laplace–de Rham operator, is a differential operator acting on differential forms. (Abstractly, it is a second-order operator on each exterior power of the cotangent bundle.) This operator is defined on any manifold equipped with a Riemannian or pseudo-Riemannian metric:
$$\Delta = d\delta + \delta d = (d + \delta)^2,$$
where $d$ is the exterior derivative or differential and $\delta$ is the codifferential. The Hodge Laplacian on a compact manifold has nonnegative spectrum.
The connection Laplacian may also be taken to act on differential forms by restricting it to act on skew-symmetric tensors. The connection |
https://en.wikipedia.org/wiki/MAM%20domain | MAM domain is an evolutionary conserved protein domain. It is an extracellular domain found in many receptors.
A 170 amino acid domain, the so-called MAM (meprin, A-5 protein, and receptor protein-tyrosine phosphatase mu) domain, has been recognised in the extracellular region of functionally diverse proteins. These proteins have a modular, receptor-like architecture comprising a signal peptide, an N-terminal extracellular domain, a single transmembrane domain and an intracellular domain. Such proteins include meprin (a cell surface glycoprotein); A5 antigen (a developmentally-regulated cell surface protein; Xenopus nrp1); and receptor-like tyrosine protein phosphatase. The MAM domain is thought to have an adhesive function. It contains 4 conserved cysteine residues, which probably form disulphide bridges.
Human proteins containing this domain
ALK; EGFL6; MAMDC2; MAMDC4; MDGA1; MDGA2; MEP1A; MEP1B;
NPNT; NRP1; NRP2; PRSS7; PTPRK; PTPRM; PTPRO; PTPRT;
PTPRU; ZAN |
https://en.wikipedia.org/wiki/Intracortical%20encephalogram%20signal%20analysis | Intracortical encephalogram signal analysis (minedICE) is the learning and subsequent prediction of electrical activity inside the grey matter of the brain produced by the firing of neurons. The device was made by clinical researchers and medical doctors at Columbia University, the University of Colorado Anschutz Medical Campus, and the University of Colorado at Colorado Springs.
Intracortical encephalogram signal analysis has two components: 1) an intracortical EEG multicontact electrode (ICE) that is inserted through the patient's skull and deep into the grey matter, and 2) an artificial intelligence agent that is trained in neurological signal analysis.
Clinical use
The ICE component comprises a catheter with platinum sensors that, when inserted into the brain, allows for recording directly from the cerebral cortex of patients with acute brain injury. The artificial intelligence (see machine learning) and knowledge discovery component was first derived from the use of intracortical electrodes in rats (see brain–computer interface) to predict epileptic seizures in rats 6 seconds in advance (see seizure prediction). These models, developed for the clinical environs of noisy domains and incorporating spectral analysis, knowledge discovery in databases (KDD), discrete finite automata, and sequential and coincident power spectra, are built into the ICE component to read, learn and predict severe neurological disorders.
Clinical demand
The prediction, detection and interpretation of abnormal brain electrical activity is an area in which technological advancement is necessary: current state-of-the-art methods for electroencephalography (EEG) are retrospective, prone to subjectivity, and preclude the real-time data interpretation that is often necessary for timely and accurate therapeutic intervention by neurologists and neurosurgeons. Intracortical encephalogram signal analysis is done by neurosurgeon clinical researchers and those who cre
https://en.wikipedia.org/wiki/Chang%27an%20Flower | Chang'an Flower () is the mascot of the Xi'an China International Horticultural Exposition, which was held in Xi'an, China in 2011. The sculpture of Chang'an Flower was unveiled on October 10, 2010, is tall, and is constructed of fiberglass reinforced plastics.
Meaning
The Chang'an Flower is shaped like a pomegranate with a flower on top. The pomegranate flower was chosen as the city flower of Xi'an on August 13, 1986. Xi’an was also formerly known as Chang'an in ancient times, hence the “Chang’an” in the mascot's name. Chang'an translates to “forever peace” in English.
The mascot was created to show the harmonious relationship that human beings share with nature. It is red, with a yellow flower on top of its head. It has a cartoon facial expression, big eyes, and small body to emphasize its friendliness and greeting.
The mascot shares a name with the emblem of the Xi'an China International Horticultural Exposition, which was designed by Chen Shaohua and is printed in white on the mascot's red torso. The shared name is derived from the poem "After passing the civil service exam" by Tang Dynasty poet Meng Jiao, as it is the phrase "长安花" from the line "春风得意马蹄疾,一日看尽长安花" ("Riding on the crest of success, seeing all the flowers in Chang'an").
The emblem is shaped like a pomegranate flower, composed of concentric lobed rings that also resemble flowers. The innermost “flower” has three petals. Moving outward, the number of petals increases by one for every next ring, a design inspired by the claim in the Taoist text Classic of the Virtue of the Path and the Power that, “The Tao produced One; One produced Two; Two produced Three; and Three produced All things.”
The ring in the center has three petals and is oriented like a triangle, such that it resembles the Chinese character ”人”, which literally translates to people and is representative of civilization, responsibility, and reason. It also represents the seeds of nature. The next ring has four petals and represent |
https://en.wikipedia.org/wiki/Network%20Performance%20Monitoring%20Solution | Network Performance Monitor (NPM) in Operations Management Suite, a component of Microsoft Azure, monitors network performance between office sites, data centers, clouds and applications in near real-time. It helps a network administrator locate and troubleshoot bottlenecks like network delay, data loss and availability of any network link across on-premises networks, Microsoft Azure VNets, Amazon Web Services VPCs, hybrid networks, VPNs or even public internet links.
Network Performance Monitor
Network Performance Monitor (NPM) is a network monitoring service within the Operations Management Suite. NPM monitors the availability and quality of connectivity between multiple locations within and across campuses, private clouds, and public clouds. It uses synthetic transactions to test for reachability and can be used on any IP network irrespective of the make and model of the network routers or switches deployed.
Features
A dashboard is generated to display summarized information about the network, including network health events, unhealthy network links, and the subnetwork links with the most loss and latency. Custom dashboards can also be created to view the state of the network at a point in time in the past.
An interactive topology map is also generated to show the routes between nodes. Network administrators can use it to identify unhealthy paths and find the root cause of an issue.
Alerts can be configured to send e-mails to stakeholders when a threshold is reached.
Use cases
Two on-premises networks: Monitor connectivity between two office sites which could be connected using an MPLS WAN link or VPN
Multiple sites: Monitor connectivity to a central site from multiple sites. For example, scenarios where users from multiple office locations are accessing applications hosted at a central location
Hybrid Networks: Monitor connectivity between on-premises and Azure VNets that could be connected using S2S VPN or ExpressRoute
Multip |
https://en.wikipedia.org/wiki/Thermal%20effusivity | In thermodynamics, a material's thermal effusivity, also known as thermal responsivity, is a measure of its ability to exchange thermal energy with its surroundings. It is defined as the square root of the product of the material's thermal conductivity ($k$) and its volumetric heat capacity ($\rho c_p$), or as the ratio of thermal conductivity to the square root of thermal diffusivity ($\alpha$):
$$e = \sqrt{k \rho c_p} = \frac{k}{\sqrt{\alpha}}.$$
The SI units for thermal effusivity are $\mathrm{W{\cdot}s^{1/2}{\cdot}m^{-2}{\cdot}K^{-1}}$, or, equivalently, $\mathrm{J{\cdot}m^{-2}{\cdot}K^{-1}{\cdot}s^{-1/2}}$.
Thermal effusivity is a good approximation for the material's thermal inertia for a semi-infinite rigid body where heat transfer is dominated by the diffusive process of conduction only.
Thermal effusivity is a parameter that emerges upon applying solutions of the heat equation to heat flow through a thin surface-like region. It becomes particularly useful when the region is selected adjacent to a material's actual surface. Knowing the effusivity and equilibrium temperature of each of two material bodies then enables an estimate of their interface temperature when placed into thermal contact.
If $T_1$ and $T_2$ are the temperatures of the two bodies, then upon contact, the temperature of the contact interface (assumed to be a smooth surface) becomes
$$T_m = \frac{e_1 T_1 + e_2 T_2}{e_1 + e_2},$$
where $e_1$ and $e_2$ are the effusivities of the two bodies.
Specialty sensors have also been developed based on this relationship to measure effusivity.
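A minimal numeric sketch of this relationship follows (the material property values are rough illustrative figures, not reference data):

```python
from math import sqrt

def effusivity(k, rho, cp):
    """Thermal effusivity e = sqrt(k * rho * c_p)."""
    return sqrt(k * rho * cp)

def contact_temperature(e1, T1, e2, T2):
    """Interface temperature of two semi-infinite bodies in perfect contact."""
    return (e1 * T1 + e2 * T2) / (e1 + e2)

# Illustrative (approximate) property values: skin at 33 C touching steel at 20 C.
e_skin = effusivity(k=0.37, rho=1100.0, cp=3400.0)       # ~1.2e3 W*s^0.5/(m^2*K)
e_steel = effusivity(k=50.0, rho=7800.0, cp=490.0)       # ~1.4e4 W*s^0.5/(m^2*K)
print(contact_temperature(e_skin, 33.0, e_steel, 20.0))  # ~21 C: steel "feels" cold
```

Because the higher-effusivity body dominates the weighted average, the interface temperature lands close to that of the steel, which is why metals feel colder to the touch than wood at the same temperature.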
Thermal effusivity and thermal diffusivity are related quantities; respectively a product versus a ratio of a material's fundamental transport and storage properties. The diffusivity appears explicitly in the heat equation, which is an energy conservation equation, and measures the speed at which thermal equilibrium can be reached by a body. By contrast a body's effusivity (also sometimes called inertia, accumulation, responsiveness etc.) is its ability to resist a temperature change when subjected to a time-periodic, or similarly perturbative, forcing function.
Applications
Temperature at a contact surface
If two semi-infinite bodies initially at temperatures $T_1$ and $T_2$ are brought in perfect ther
https://en.wikipedia.org/wiki/Steitzviridae | Steitzviridae is a family of RNA viruses, which infect prokaryotes.
Taxonomy
Steitzviridae contains 117 genera:
Abakapovirus
Achlievirus
Adahmuvirus
Alehxovirus
Aphenovirus
Arawsmovirus
Arctuvirus
Arpirivirus
Ashcevirus
Bahnicevirus
Belbovirus
Berdovirus
Bicehmovirus
Bidhavirus
Brikhyavirus
Cahrlavirus
Cahtavirus
Catindovirus
Cebevirus
Chlurivirus
Chorovirus
Clitovirus
Cohrdavirus
Controvirus
Cunarovirus
Dohnjavirus
Endehruvirus
Eregrovirus
Erimutivirus
Fagihovirus
Fejonovirus
Ferahgovirus
Fluruvirus
Frobavirus
Fudhoevirus
Gahmegovirus
Garnievirus
Gehrmavirus
Gernuduvirus
Gihfavirus
Gredihovirus
Gulmivirus
Hahkesevirus
Henifovirus
Hohltdevirus
Hohrdovirus
Huhbevirus
Huohcivirus
Huylevirus
Hyjrovirus
Hylipavirus
Iwahcevirus
Jiforsuvirus
Kecijavirus
Kecuhnavirus
Kehruavirus
Kihsiravirus
Kinglevirus
Kyanivirus
Laimuvirus
Lazuovirus
Lehptavirus
Lihvevirus
Limaivirus
Lomnativirus
Loptevirus
Luloavirus
Lygehevirus
Lyndovirus
Mahdsavirus
Mahjnavirus
Metsavirus
Milihnovirus
Minusuvirus
Mocruvirus
Molucevirus
Nehumivirus
Nihlwovirus
Ociwvivirus
Pahspavirus
Patimovirus
Pepusduvirus
Phulihavirus
Pirifovirus
Podtsbuvirus
Pohlodivirus
Psiaduvirus
Psouhdivirus
Puduphavirus
Pujohnavirus
Rodtovirus
Rohsdrivirus
Sdenfavirus
Setohruvirus
Sidiruavirus
Snuwdevirus
Sperdavirus
Stehnavirus
Suhnsivirus
Surghavirus
Tamanovirus
Tehmuvirus
Tehnicivirus
Thehlovirus
Thyrsuvirus
Tikiyavirus
Timirovirus
Tsuhreavirus
Tuskovirus
Tuwendivirus
Vernevirus
Vesehyavirus
Vindevirus
Weheuvirus
Widsokivirus
Yeziwivirus
Zuysuivirus |
https://en.wikipedia.org/wiki/Eddy%20current%20brake | An eddy current brake, also known as an induction brake, Faraday brake, electric brake or electric retarder, is a device used to slow or stop a moving object by generating eddy currents and thus dissipating its kinetic energy as heat. Unlike friction brakes, where the drag force that stops the moving object is provided by friction between two surfaces pressed together, the drag force in an eddy current brake is an electromagnetic force between a magnet and a nearby conductive object in relative motion, due to eddy currents induced in the conductor through electromagnetic induction.
A conductive surface moving past a stationary magnet develops circular electric currents called eddy currents induced in it by the magnetic field, as described by Faraday's law of induction. By Lenz's law, the circulating currents create their own magnetic field that opposes the field of the magnet. Thus the moving conductor experiences a drag force from the magnet that opposes its motion, proportional to its velocity. The kinetic energy of the moving object is dissipated as heat generated by the current flowing through the electrical resistance of the conductor.
In an eddy current brake the magnetic field may be created by a permanent magnet or an electromagnet. With an electromagnet system, the braking force can be turned on and off (or varied) by varying the electric current in the electromagnet windings. Another advantage is that since the brake does not work by friction, there are no brake shoe surfaces to wear out, eliminating the periodic replacement required with friction brakes. A disadvantage is that since the braking force is proportional to the relative velocity of the brake, the brake has no holding force when the moving object is stationary, as is provided by static friction in a friction brake; hence in vehicles it must be supplemented by a friction brake.
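Because the drag force scales with velocity, a simple lumped model of an eddy current brake is F = -b*v, which gives exponential decay of speed. The sketch below (mass and drag coefficient are illustrative assumptions) shows why such a brake cannot hold a stationary load:

```python
import math

m = 200.0   # kg, moving mass (illustrative)
b = 40.0    # N*s/m, effective eddy-current drag coefficient in F = -b*v (illustrative)
v0 = 15.0   # m/s, initial speed

for t in (0, 5, 10, 30, 60):
    v = v0 * math.exp(-(b / m) * t)   # exponential decay with time constant m/b = 5 s
    print(f"t = {t:2d} s  v = {v:8.4f} m/s")
# The speed only approaches zero asymptotically and the force vanishes at rest,
# which is why vehicles pair an eddy current brake with a friction brake.
```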
In some cases, energy in the form of momentum stored within a motor or other machine is used to energize any electromagnets involved. The |
https://en.wikipedia.org/wiki/Internet%20art | Internet art (also known as net art) is a form of new media art distributed via the Internet. This form of art circumvents the traditional dominance of the physical gallery and museum system. In many cases, the viewer is drawn into some kind of interaction with the work of art. Artists working in this manner are sometimes referred to as net artists.
Net artists may use specific social or cultural internet traditions to produce their art outside of the technical structure of the internet. Internet art is often — but not always — interactive, participatory, and multimedia-based. Internet art can be used to spread a message, either political or social, using human interactions.
The term Internet art typically does not refer to art that has been simply digitized and uploaded to be viewable over the Internet, such as in an online gallery.
Rather, this genre relies intrinsically on the Internet to exist as a whole, taking advantage of such aspects as an interactive interface and connectivity to multiple social and economic cultures and micro-cultures, not only web-based works.
New media theorist and curator Jon Ippolito defined "Ten Myths of Internet Art" in 2002. He cites the above stipulations, as well as defining it as distinct from commercial web design, and touching on issues of permanence, archivability, and collecting in a fluid medium.
History and context
Internet art is rooted in disparate artistic traditions and movements, ranging from Dada to Situationism, conceptual art, Fluxus, video art, kinetic art, performance art, telematic art and happenings.
In 1974, Canadian artist Vera Frenkel worked with the Bell Canada Teleconferencing Studios to produce the work String Games: Improvisations for Inter-City Video, the first artwork in Canada to use telecommunications technologies.
An early telematic artwork was Roy Ascott's La Plissure du Texte, created collaboratively for an exhibition at the Musée d'Art Moderne de la Ville de Paris in 1983.
|
https://en.wikipedia.org/wiki/Simion%20Stoilow | Simion Stoilow or Stoilov ( – 4 April 1961) was a Romanian mathematician, creator of the Romanian school of complex analysis, and author of over 100 publications.
Biography
He was born in Bucharest, and grew up in Craiova. His father, Colonel Simion Stoilow, fought in the Romanian War of Independence. After studying at the Obedeanu elementary school and the Carol I High School, Stoilow went in 1907 to the University of Paris, where he earned a B.S. degree in 1910 and a Ph.D. in Mathematics in 1916. His doctoral dissertation was written under the direction of Émile Picard.
He returned to Romania in 1916 to fight in the Romanian Campaign of World War I, first in Dobrudja, then in Moldavia. After the war, he became professor of mathematics at the University of Iași (1919–1921) and the University of Cernăuți (1921–1939). He was an Invited Speaker of the International Congress of Mathematicians in 1920 at Strasbourg, in 1928 at Bologna, and in 1936 at Oslo. In 1928 he was awarded the Legion of Honour, Officer rank. In 1939 he moved to Bucharest, working first at the Polytechnic University of Bucharest, and from 1941 at the University of Bucharest, serving as rector from 1944 to 1946 and as dean of the Faculties of Mathematics and Physics from 1948 to 1951.
From 1946 to 1948, he served as Romanian ambassador to France. In 1946 he was a member of the Romanian delegation at the Paris Peace Conference, headed by Gheorghe Tătărescu. In July 1947 he organized at Club de Chaillot the exhibit "L'art français au secours des enfants roumains"; Constantin Brâncuși participated, Tristan Tzara and Jean Cassou wrote the preface to the catalogue. In 1946 he was awarded the Order of the Star of Romania, Grand Officer rank and in 1948, the Order of the Star of the Romanian People's Republic, Second class.
Stoilow was elected corresponding member of the Romanian Academy in 1936, and full member in 1945, and later became president of the Physics and Mathematics section of the A |
https://en.wikipedia.org/wiki/Hericium%20coralloides | Hericium coralloides is a saprotrophic fungus, commonly known as coral tooth fungus or comb coral mushroom. It grows on dead hardwood trees. The species is edible and good when young, but as it ages the branches and hanging spines become brittle and turn a light shade of yellowish brown.
One specimen, found on September 23, 1997 in Vilas County, Wisconsin near water, high in the wound of a living tree, is preserved in dried form at the UWSP Herbarium.
https://en.wikipedia.org/wiki/Arboreal%20theory | The arboreal theory claims that primates evolved from their ancestors by adapting to arboreal life. It was proposed by Grafton Elliot Smith (1912), a neuroanatomist who was chiefly concerned with the emergence of the primate brain.
Summary
Primates are thought to have developed several of their traits and habits initially while living in trees. One key component to this argument is that primates relied on sight over smell. They were able to develop a keen sense of depth perception, perhaps because of the constant leaping that was necessary to move about the trees. Primates also developed hands and feet that were capable of grasping. This was also a result of arboreal life, which required a great deal of crawling along branches, and reaching out for fruit and other food. These early primates were likely to have eaten foods found in trees, such as flowers, fruits, berries, gums, leaves, and insects. They are thought to have shifted their diets towards insects in the early Cenozoic era, when insects became more numerous. |
https://en.wikipedia.org/wiki/Government%20Operational%20Research%20Service | In the United Kingdom, the Government Operational Research Service (GORS) supports and champions Operational Research across government. GORS currently supports policy-making, strategy and operations in many different departments and agencies across the United Kingdom and employs over 1000 analysts, ranging from sandwich students to members of the Senior Civil Service. |
https://en.wikipedia.org/wiki/KIAA0232 | KIAA0232 is a nuclear phosphoserine protein which in humans is encoded by the KIAA0232 gene.
Gene
KIAA0232 is located at 4p16.1 neighboring TBC1 domain family member 14 and an uncharacterized locus. It has 10 exons which comprise its 4 known transcript variants.
Expression
KIAA0232 is expressed fairly ubiquitously, but particularly highly in the brain relative to other tissues according to GEO normal tissue expression profiling. Other notable areas of high expression identified by EST profiling include nerves, umbilical cord, and parathyroid.
Homology
Paralogs
There are no known paralogs of KIAA0232.
Orthologs
KIAA0232 is conserved in most animals, including mammals, reptiles, birds, amphibians, insects, and as far back as Trichinella spiralis, a species of nematode. It is not found in fungi, plants, or prokaryotes.
Protein
The KIAA0232 protein is 1395 amino acids in length with a molecular weight of 154.8 kDa. It has higher-than-average frequencies of serine and glutamic acid residues as well as several multi-serine runs that are evolutionarily conserved. It has an isoelectric point of 4.52.
Domains
KIAA0232 is largely composed of DUF4603.
Subcellular localization
KIAA0232 is predicted to have nuclear localization.
Post-translational modification
KIAA0232 is predicted by the tools at ExPASy to be highly phosphorylated, with 15 tyrosine-specific sites, 25 threonine-specific sites, and 103 serine-specific sites.
Interacting proteins
KIAA0232 is experimentally known to interact with YWHAZ, DYRK1A, and DYRK1B via two-hybrid interaction experiments. YWHAZ is an apoptotic pathway regulator known to bind to phosphoserine proteins. DYRK1A/B are dual-specificity tyrosine kinases. This corroborates the post-translational modification predictions.
Clinical significance |
https://en.wikipedia.org/wiki/Kalanchoe%20%C3%97%20poincarei | Kalanchoe × poincarei is a species of Kalanchoe native to southern Madagascar. Its scientific name is often misapplied to K. suarezensis and K. mortagei, but K. × poincarei is very different from them. The true K. × poincarei is a natural hybrid involving K. beauverdii, with similar sprawling stems up to 3 m in length, and not known in cultivation, whereas K. suarezensis and K. mortagei are erect, 30–60 cm tall and cultivated as ornamentals.
https://en.wikipedia.org/wiki/Terahertz%20nondestructive%20evaluation | Terahertz nondestructive evaluation pertains to devices, and techniques of analysis occurring in the terahertz domain of electromagnetic radiation. These devices and techniques evaluate the properties of a material, component or system without causing damage.
Terahertz imaging
Terahertz imaging is an emerging and significant nondestructive evaluation (NDE) technique used for dielectric (nonconducting, i.e., an insulator) materials analysis and quality control in the pharmaceutical, biomedical, security, materials characterization, and aerospace industries. It has proved to be effective in the inspection of layers in paints and coatings, in detecting structural defects in ceramic and composite materials, and in imaging the physical structure of paintings and manuscripts.
The use of THz waves for non-destructive evaluation enables inspection of multi-layered structures and can identify abnormalities from foreign material inclusions, disbond and delamination, mechanical impact damage, heat damage, and water or hydraulic fluid ingression.
This new method can play a significant role in a number of industries for materials characterization applications where precision thickness mapping (to assure product dimensional tolerances within a product and from product-to-product) and density mapping (to assure product quality within a product and from product-to-product) are required.
Nondestructive evaluation
Sensors and instruments are employed in the 0.1 to the 10 THz range for nondestructive evaluation, which includes detection.
Terahertz Density Thickness Imager
The Terahertz Density Thickness Imager is a nondestructive inspection method that employs terahertz energy for density and thickness mapping in dielectric, ceramic, and composite materials. This non-contact, single-sided terahertz electromagnetic measurement and imaging method characterizes micro-structure and thickness variation in dielectric (insulating) materials. This method was demonstrated for the Space Shutt |
https://en.wikipedia.org/wiki/Anthony%20Hill%20%28artist%29 | Anthony Hill (23 April 1930 – 13 October 2020) was an English artist, painter and relief-maker, originally a member of the post-World War II British art movement termed the Constructionist Group whose work was essentially in the international constructivist tradition.
Biography
His fellow members in this group were Victor Pasmore, Adrian Heath, John Ernest, Kenneth Martin, Mary Martin, Gillian Wise (artist) and Stephen Gilbert. He was born on 23 April 1930 in London, and studied at the St Martin's and the Central Schools of Art 1948–51. He began painting in the style of Dada and Surrealism in 1948 but quickly moved on to geometric abstract idioms. He made his first relief in 1954 and abandoned painting for relief-making in 1956. One feature of these reliefs has been the use of non-traditional materials such as industrial aluminium and Perspex. His first one-man show of reliefs was held at the Institute of Contemporary Arts in 1958. He has participated in exhibitions of abstract and constructivist art in the UK, Paris, Germany, Holland, Poland, Switzerland and the USA. In 1978 he exhibited in the Arts Council's exhibition, Constructive Context, alongside a number of artists such as Jeffrey Steele and Peter Lowe who had begun working in a systematised constructive mode in the mid to late 1960s and came together in the Systems Group in December 1969. Hill, however, along with the Martins, declined membership of this group. In 1983 the Hayward Gallery held a major retrospective exhibition of Anthony Hill's constructivist work.
Anthony Hill has had a lifelong fascination with mathematics, and there are many mathematicians among his circle of acquaintances. Together with his colleague John Ernest he made contributions to graph theory (crossing number) and in 1979, in recognition of a number of his mathematical papers, he was elected a member of the London Mathematical Society and made a visiting research associate in the Department of Mathematics at University College, |
https://en.wikipedia.org/wiki/Vanillic%20acid | Vanillic acid (4-hydroxy-3-methoxybenzoic acid) is a dihydroxybenzoic acid derivative used as a flavoring agent. It is an oxidized form of vanillin. It is also an intermediate in the production of vanillin from ferulic acid.
Occurrence in nature
The highest amount of vanillic acid in plants known so far is found in the root of Angelica sinensis, an herb indigenous to China, which is used in traditional Chinese medicine.
Occurrences in food
Açaí oil, obtained from the fruit of the açaí palm (Euterpe oleracea), is rich in vanillic acid. It is one of the main natural phenols in argan oil. It is also found in wine and vinegar.
Metabolism
Vanillic acid is one of the main catechins metabolites found in humans after consumption of green tea infusions.
Synthesis
Vanillic acid can be obtained from the oxidation of vanillin by various oxidizing agents. With Pd/C, NaBH4, and KOH as the oxidizing agent, the conversion was reported to occur in ~89% yield. |
https://en.wikipedia.org/wiki/Conditional%20short-circuit%20current | Conditional short-circuit current is the value of the alternating current component of a prospective current, which a switch without integral short-circuit protection, but protected by a suitable short circuit protective device (SCPD) in series, can withstand for the operating time of the current under specified test conditions. It may be understood to be the RMS value of the maximum permissible current over a specified time interval (t₀, t₁) and operating conditions.
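As a rough illustration of the RMS interpretation, the following sketch computes the RMS value of a sampled sinusoidal current over one cycle (the waveform, amplitude, and sampling are illustrative assumptions):

```python
import math

def rms(samples):
    """RMS of a sampled current waveform over an interval (t0, t1)."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

# One full cycle of a 50 Hz sinusoid with a 10 kA peak, sampled every millisecond:
samples = [10e3 * math.sin(2 * math.pi * 50 * (t / 1000)) for t in range(20)]
print(rms(samples))  # ~7071 A, i.e. peak / sqrt(2)
```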
The IEC definition has been criticized as being open to interpretation.
https://en.wikipedia.org/wiki/Variance%20swap | A variance swap is an over-the-counter financial derivative that allows one to speculate on or hedge risks associated with the magnitude of movement, i.e. volatility, of some underlying product, like an exchange rate, interest rate, or stock index.
One leg of the swap will pay an amount based upon the realized variance of the price changes of the underlying product. Conventionally, these price changes will be daily log returns, based upon the most commonly used closing price. The other leg of the swap will pay a fixed amount, which is the strike, quoted at the deal's inception. Thus the net payoff to the counterparties will be the difference between these two and will be settled in cash at the expiration of the deal, though some cash payments will likely be made along the way by one or the other counterparty to maintain agreed upon margin.
Structure and features
The features of a variance swap include:
the variance strike
the realized variance
the vega notional: Like other swaps, the payoff is determined based on a notional amount that is never exchanged. However, in the case of a variance swap, the notional amount is specified in terms of vega, to convert the payoff into dollar terms.
The payoff of a variance swap is given as follows:
$$N_{\mathrm{var}} \times \left( \sigma^2_{\mathrm{realised}} - K_{\mathrm{var}} \right),$$
where:
$N_{\mathrm{var}}$ = variance notional (a.k.a. variance units),
$\sigma^2_{\mathrm{realised}}$ = annualised realised variance, and
$K_{\mathrm{var}}$ = variance strike.
The annualised realised variance is calculated based on a prespecified set of sampling points over the period. It does not always coincide with the classic statistical definition of variance, as the contract terms may not subtract the mean. For example, suppose that there are $n+1$ observed prices $S_{t_0}, S_{t_1}, \ldots, S_{t_n}$, where $0 = t_0 < t_1 < \cdots < t_n = T$. Define the natural log returns $R_i = \ln\left(S_{t_i}/S_{t_{i-1}}\right)$ for $i = 1, \ldots, n$. Then the annualised realised variance is
$$\sigma^2_{\mathrm{realised}} = \frac{A}{n} \sum_{i=1}^{n} R_i^2,$$
where $A$ is an annualisation factor normally chosen to be approximately the number of sampling points in a year (commonly 252) and $n$ is the number of sampling points over the swap's contract life. It can be seen that subtracting the mean return will decrease the realised variance.
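Putting the pieces together, here is a minimal sketch of the settlement calculation (the price path, strike, and notional are hypothetical; the realised-variance convention follows the unsubtracted-mean definition above):

```python
import math

def realised_variance(prices, annualisation=252):
    """Annualised realised variance from closing prices; mean not subtracted."""
    log_returns = [math.log(prices[i] / prices[i - 1]) for i in range(1, len(prices))]
    return annualisation / len(log_returns) * sum(r * r for r in log_returns)

def variance_swap_payoff(prices, variance_strike, variance_notional):
    """Cash settlement to the long party at expiry."""
    return variance_notional * (realised_variance(prices) - variance_strike)

# Hypothetical 5-day path; a strike of 0.04 corresponds to 20% volatility:
prices = [100.0, 101.0, 99.5, 100.5, 102.0]
print(variance_swap_payoff(prices, variance_strike=0.04, variance_notional=10_000))
```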
https://en.wikipedia.org/wiki/Orthographic%20projection | Orthographic projection (also orthogonal projection and analemma) is a means of representing three-dimensional objects in two dimensions. Orthographic projection is a form of parallel projection in which all the projection lines are orthogonal to the projection plane, resulting in every plane of the scene appearing in affine transformation on the viewing surface. The obverse of an orthographic projection is an oblique projection, which is a parallel projection in which the projection lines are not orthogonal to the projection plane.
The term orthographic sometimes means a technique in multiview projection in which principal axes or the planes of the subject are also parallel with the projection plane to create the primary views. If the principal planes or axes of an object in an orthographic projection are not parallel with the projection plane, the depiction is called axonometric or an auxiliary views. (Axonometric projection is synonymous with parallel projection.) Sub-types of primary views include plans, elevations, and sections; sub-types of auxiliary views include isometric, dimetric, and trimetric projections.
A lens that provides an orthographic projection is an object-space telecentric lens.
Geometry
A simple orthographic projection onto the plane z = 0 can be defined by the following matrix:
$$P = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$
For each point v = (vx, vy, vz), the transformed point Pv would be
$$Pv = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} v_x \\ v_y \\ v_z \end{bmatrix} = \begin{bmatrix} v_x \\ v_y \\ 0 \end{bmatrix}.$$
Often, it is more useful to use homogeneous coordinates. The transformation above can be represented for homogeneous coordinates as
$$P = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$
For each homogeneous vector v = (vx, vy, vz, 1), the transformed vector Pv would be
$$Pv = \begin{bmatrix} v_x \\ v_y \\ 0 \\ 1 \end{bmatrix}.$$
In computer graphics, one of the most common matrices used for orthographic projection can be defined by a 6-tuple, (left, right, bottom, top, near, far), which defines the clipping planes. These planes form a box with the minimum corner at (left, bottom, -near) and the maximum corner at (right, top, -far).
The box is translated so that its center is at the origin, then it is |
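The matrix produced by this translate-then-scale construction can be written down directly; below is a minimal NumPy sketch of the standard OpenGL-style form (the example clipping planes and point are arbitrary):

```python
import numpy as np

def orthographic(left, right, bottom, top, near, far):
    """OpenGL-style orthographic matrix: translate the clipping box to the
    origin, then scale it to the canonical view volume [-1, 1]^3."""
    return np.array([
        [2.0 / (right - left), 0.0, 0.0, -(right + left) / (right - left)],
        [0.0, 2.0 / (top - bottom), 0.0, -(top + bottom) / (top - bottom)],
        [0.0, 0.0, -2.0 / (far - near), -(far + near) / (far - near)],
        [0.0, 0.0, 0.0, 1.0],
    ])

v = np.array([1.0, 2.0, -5.0, 1.0])           # homogeneous point
print(orthographic(-4, 4, -3, 3, 1, 10) @ v)  # coordinates now lie in [-1, 1]
```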
https://en.wikipedia.org/wiki/Inositol%20pentakisphosphate | Inositol pentakisphosphate (abbreviated IP5) is a molecule derived from inositol tetrakisphosphate by adding a phosphate group with the help of Inositol-polyphosphate multikinase (IPMK). It is believed to be one of the many second messengers in the inositol phosphate family. It "is implicated in a wide array of biological and pathophysiological responses, including tumorigenesis, invasion and metastasis, therefore specific inhibitors of the kinase may prove useful in cancer therapy."
IP5 also plays a role in defense signaling in plants. It potentiates the interaction of the plant hormone JA-Ile by its receptor. |
https://en.wikipedia.org/wiki/Isodesmosine | Isodesmosine is a lysine derivative found in elastin. Isodesmosine is an isomeric pyridinium-based amino acid resulting from the condensation of four lysine residues between elastin proteins by lysyl oxidase. Such cross-links are ideal biomarkers for monitoring elastin turnover because they are found only in mature elastin in mammals.
See also
Desmosine |
https://en.wikipedia.org/wiki/Lin%E2%80%93Kernighan%20heuristic | In combinatorial optimization, Lin–Kernighan is one of the best heuristics for solving the symmetric travelling salesman problem. It belongs to the class of local search algorithms, which take a tour (Hamiltonian cycle) as part of the input and attempt to improve it by searching in the neighbourhood of the given tour for one that is shorter, and upon finding one repeats the process from that new one, until encountering a local minimum. As in the case of the related 2-opt and 3-opt algorithms, the relevant measure of "distance" between two tours is the number of edges which are in one but not the other; new tours are built by reassembling pieces of the old tour in a different order, sometimes changing the direction in which a sub-tour is traversed. Lin–Kernighan is adaptive and has no fixed number of edges to replace at a step, but favours small numbers such as 2 or 3.
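For comparison with the fixed-size moves, here is a minimal sketch of a single 2-opt exchange and its gain (the tour and distance-matrix representations are illustrative; Lin–Kernighan chains such exchanges to variable depth rather than fixing the number of replaced edges):

```python
def two_opt_gain(tour, dist, i, j):
    """Gain of replacing edges (t[i], t[i+1]) and (t[j], t[j+1])
    by (t[i], t[j]) and (t[i+1], t[j+1]); positive gain = shorter tour."""
    n = len(tour)
    a, b = tour[i], tour[(i + 1) % n]
    c, d = tour[j], tour[(j + 1) % n]
    return (dist[a][b] + dist[c][d]) - (dist[a][c] + dist[b][d])

def apply_two_opt(tour, i, j):
    # Reconnecting requires reversing the sub-tour between the removed edges,
    # i.e. changing the direction in which that sub-tour is traversed.
    return tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]

dist = [[0, 1, 4, 1], [1, 0, 1, 4], [4, 1, 0, 1], [1, 4, 1, 0]]  # toy instance
tour = [0, 2, 1, 3]                     # crossing tour of length 4+1+4+1 = 10
if two_opt_gain(tour, dist, 0, 2) > 0:  # removing (0,2) and (1,3) gains 6
    tour = apply_two_opt(tour, 0, 2)
print(tour)                             # [0, 1, 2, 3], length 4
```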
Derivation
For a given instance of the travelling salesman problem, tours are uniquely determined by their sets of edges, so we may as well encode them as such. In the main loop of the local search, we have a current tour $T$ and are looking for a new tour $T'$ such that the symmetric difference $F = T \mathbin{\triangle} T'$ is not too large and the length of the new tour is less than the length of the current tour. Since $F$ is typically much smaller than $T$ and $T'$, it is convenient to consider the quantity
$$g(F) = \sum_{e \in T \cap F} \ell(e) - \sum_{e \in T' \cap F} \ell(e)$$
— the gain of using $T'$ when switching from $T$ — since $g(F) = \ell(T) - \ell(T')$: how much longer the current tour $T$ is than the new tour $T'$. Naively $k$-opt can be regarded as examining all $F$ with exactly $2k$ elements ($k$ in $T$ but not in $T'$, and another $k$ in $T'$ but not in $T$) such that $T \mathbin{\triangle} F$ is again a tour, looking for such a set which has $g(F) > 0$. It is however easier to do those tests in the opposite order: first search for plausible $F$ with positive gain, and only second check if $T \mathbin{\triangle} F$ is in fact a tour.
Define a trail in the graph to be alternating (with respect to $T$) if its edges are alternately in $T$ and not in $T$, respectively. Because the subgraphs $T$ and $T'$ are $2$-regular, the subgraph $T \mathbin{\triangle} T'$ will have vertices of d
https://en.wikipedia.org/wiki/Eutely | Eutelic organisms have a fixed number of somatic cells when they reach maturity, the exact number being relatively constant for any one species. This phenomenon is also referred to as cell constancy. Development proceeds by cell division until maturity; further growth occurs via cell enlargement only. This growth is known as auxetic growth. It is shown by members of phylum Aschelminthes. In some cases, individual organs show eutelic properties while the organism itself does not.
Background
In 1909, Eric Martini coined the term eutely to describe the idea of cell constancy and to give the literature a term for organisms with a fixed number and arrangement of cells and tissues. Since the introduction of eutely in the early 1900s, textbooks and theories of cytology and ontogeny have not used the term consistently. Advancements in the field of eutely have largely been made by morphologists.
Studying of eutelic organisms has proved challenging, as most eutelic organisms are microscopic. Additionally, there is potential for mistakes in cell counting (often completed via an automated cell counter) and observation when larger organisms have numerous cells. In organisms of small size, errors in the examination and explanation of units may entirely negate reconstructions and deductions. Therefore, investigation of most eutelic organisms is done with intense scrutiny and review.
There are two distinct classes of organisms which display eutely:
Eutelic organisms whose somatic cells show a fixed, or complete pattern of cell and tissue number and arrangement
Eutelic organisms whose somatic cells show a limited, or incomplete pattern of cell and tissue number and arrangement
Examples
Eutely has been confirmed to certain degrees in various forms of diversity and sections of the tree of life. Examples include rotifers, many species of nematodes (including ascaris and the organism Caenorhabditis elegans whose male individuals have 1,033 cell |
https://en.wikipedia.org/wiki/Ipswitch%2C%20Inc. | Ipswitch is an IT management software developer for small and medium sized businesses. The company was founded in 1991 and is headquartered in Burlington, Massachusetts and has operations in Atlanta (Alpharetta) and Augusta, Georgia, American Fork, Utah, Madison, Wisconsin and Galway, Ireland. Ipswitch sells its products directly, as well as through distributors, resellers and OEMs in the United States, Canada, Latin America, Europe and the Pacific Rim. Since 2019, Ipswitch is part of Progress Software.
History
Roger Greene founded Ipswitch in 1991 out of his apartment without any venture capital funding or bank financing. The company became profitable in its second year. The first product Ipswitch produced was a gateway that allowed the Novell Inc. IPX networking protocol to connect with the Internet Protocol. In 1994, Ipswitch launched the IMail server, the first product available on the e-commerce site, Open Market. In the mid-1990s, the company was a value-added reseller of the Spyglass Mosaic browser. In 1996, the company released the beta version of WhatsUp Gold, a commercial version of an earlier shareware product, WS_Ping, as well as purchasing the rights to WS_FTP.
In 2008, Ipswitch acquired the Wisconsin-based software producer Standard Networks Inc. and its product MOVEit. Shortly after the acquisition, Ipswitch split its operations into three divisions, secure file transfers, network management, and messaging and collaboration. The company acquired Salt Lake City-based Hourglass Technologies in 2009. The following year, the company acquired the compliance system and system log analysis software producer, Dorian Software. In December 2012, Ipswitch acquired the Waldorf, Germany-based performance testing company iOpus known for its product, iMacros, a web-browser extension.
In 2015, Ipswitch merged its network monitoring and file transfer divisions and created the PartnerSynergy program. Ipswitch opened a support and operations center in Galway, Irelan |
https://en.wikipedia.org/wiki/Superior%20tympanic%20artery | The superior tympanic artery is a small artery in the head. It is a branch of the middle meningeal artery. On entering the cranium it runs in the canal for the tensor tympani muscle and supplies this muscle and the lining membrane of the canal. |
https://en.wikipedia.org/wiki/Markov%20switching%20multifractal | In financial econometrics (the application of statistical methods to economic data), the Markov-switching multifractal (MSM) is a model of asset returns developed by Laurent E. Calvet and Adlai J. Fisher that incorporates stochastic volatility components of heterogeneous durations. MSM captures the outliers, long-memory-like volatility persistence and power variation of financial returns. In currency and equity series, MSM compares favorably with standard volatility models such as GARCH(1,1) and FIGARCH both in- and out-of-sample. MSM is used by practitioners in the financial industry to forecast volatility, compute value-at-risk, and price derivatives.
MSM specification
The MSM model can be specified in both discrete time and continuous time.
Discrete time
Let $P_t$ denote the price of a financial asset, and let $r_t = \ln(P_t / P_{t-1})$ denote the return over two consecutive periods. In MSM, returns are specified as
$$r_t = \mu + \bar{\sigma}\left(M_{1,t} M_{2,t} \cdots M_{\bar{k},t}\right)^{1/2} \varepsilon_t,$$
where $\mu$ and $\bar{\sigma}$ are constants and $\{\varepsilon_t\}$ are independent standard Gaussians. Volatility is driven by the first-order latent Markov state vector:
$$M_t = \left(M_{1,t}, M_{2,t}, \ldots, M_{\bar{k},t}\right).$$
Given the volatility state $M_t$, the next-period multiplier $M_{k,t+1}$ is drawn from a fixed distribution $M$ with probability $\gamma_k$, and is otherwise left unchanged:
$M_{k,t+1}$ drawn from distribution $M$ with probability $\gamma_k$
$M_{k,t+1} = M_{k,t}$ with probability $1 - \gamma_k$
The transition probabilities are specified by
$$\gamma_k = 1 - (1 - \gamma_1)^{b^{k-1}}.$$
The sequence $\gamma_k$ is approximately geometric at low frequency. The marginal distribution $M$ has a unit mean, has a positive support, and is independent of $k$.
Binomial MSM
In empirical applications, the distribution $M$ is often a discrete distribution that can take the values $m_0$ or $2 - m_0$ with equal probability. The return process is then specified by the parameters $m_0$, $\mu$, $\bar{\sigma}$, $b$, and $\gamma_1$. Note that the number of parameters is the same for all $\bar{k}$.
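A minimal simulation sketch of binomial MSM under the discrete-time specification above (all parameter values are illustrative, and the drift $\mu$ is set to zero):

```python
import numpy as np

rng = np.random.default_rng(0)
kbar, m0, sigma_bar, b, gamma1 = 5, 1.4, 0.01, 3.0, 0.1  # illustrative parameters

gamma = 1 - (1 - gamma1) ** (b ** np.arange(kbar))  # switching probabilities gamma_k
M = rng.choice([m0, 2 - m0], size=kbar)             # initial multipliers, unit mean

returns = []
for _ in range(1000):
    switch = rng.random(kbar) < gamma                # which components renew this period
    M[switch] = rng.choice([m0, 2 - m0], size=int(switch.sum()))
    sigma_t = sigma_bar * np.sqrt(np.prod(M))        # volatility from the product of multipliers
    returns.append(sigma_t * rng.standard_normal())  # r_t = sigma_t * eps_t
```

Low-frequency components (small $k$) switch rarely and generate persistence, while high-frequency components switch almost every period and generate outliers, which is how the model combines both features with few parameters.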
Continuous time
MSM is similarly defined in continuous time. The price process follows the diffusion:
$$\frac{dP_t}{P_t} = \mu\, dt + \sigma(M_t)\, dW_t,$$
where $\sigma(M_t) = \bar{\sigma}\left(M_{1,t} \cdots M_{\bar{k},t}\right)^{1/2}$, $W_t$ is a standard Brownian motion, and $\mu$ and $\bar{\sigma}$ are constants. Each component follows the dynamics:
$M_{k,t+dt}$ drawn from distribution $M$ wit
https://en.wikipedia.org/wiki/Vomeronasal%20receptor | Vomeronasal receptors are a class of olfactory receptors that putatively function as receptors for pheromones. Pheromones have evolved in all animal phyla, to signal sex and dominance status, and are responsible for stereotypical social and sexual behaviour among members of the same species. In mammals, these chemical signals are believed to be detected primarily by the vomeronasal organ (VNO), a chemosensory organ located at the base of the nasal septum.
The VNO is present in most amphibia, reptiles and non-primate mammals but is absent in birds, adult catarrhine monkeys and apes. An active role for the human VNO in the detection of pheromones is disputed; the VNO is clearly present in the fetus but appears to be atrophied or absent in adults. Two distinct families of vomeronasal receptors – which putatively function as pheromone receptors – have been identified in the vomeronasal organ (V1Rs and V2Rs). While all are G protein-coupled receptors (GPCRs), they are distantly related to the receptors of the main olfactory system, highlighting their different role.
The V1 receptors share between 50 and 90% sequence identity but have little similarity to other families of G protein-coupled receptors. They appear to be distantly related to the mammalian T2R bitter taste receptors and the rhodopsin-like GPCRs. In rat, the family comprises 30–40 genes. These are expressed in the apical regions of the VNO, in neurons expressing Gi2. Coupling of the receptors to this protein mediates inositol trisphosphate signaling. A number of human V1 receptor homologues have also been found. The majority of these human sequences are pseudogenes, but an apparently functional receptor has been identified that is expressed in the human olfactory system.
The V2 receptors are members of GPCR family 3 and have close similarity to the extracellular calcium-sensing receptors. Rodents appear to have around 100 functional V2 receptors and many pseudogenes. These receptors are expressed in the ba |
https://en.wikipedia.org/wiki/Cytochrome%20c%20assembly%20protein%20family | In molecular biology, the cytochrome c assembly protein family includes various proteins involved in cytochrome c assembly from mitochondria and bacteria. Members of this family include: CycK from Rhizobium leguminosarum, CcmC from Escherichia coli and Paracoccus denitrificans, and orf240 from Triticum aestivum (Wheat) mitochondria. The members of this family are probably integral membrane proteins with six predicted transmembrane helices that may comprise the membrane component of an ABC (ATP binding cassette) transporter complex. This transporter may be necessary for transport of some component needed for cytochrome c assembly. One member, R. leguminosarum CycK, contains a putative haem-binding motif. Wheat orf240 also contains a putative haem-binding motif and is a proposed ABC transporter with c-type haem as its proposed substrate. However it seems unlikely that all members of this family transport haem or c-type apocytochromes because P. denitrificans CcmC transports neither. |
https://en.wikipedia.org/wiki/Tree%20rearrangement | Tree rearrangements are deterministic algorithms devoted to search for optimal phylogenetic tree structure. They can be applied to any set of data that are naturally arranged into a tree, but have most applications in computational phylogenetics, especially in maximum parsimony and maximum likelihood searches of phylogenetic trees, which seek to identify one among many possible trees that best explains the evolutionary history of a particular gene or species.
Basic tree rearrangements
The simplest tree-rearrangement, known as nearest-neighbor interchange, exchanges the connectivity of four subtrees within the main tree. Because there are three possible ways of connecting four subtrees, and one is the original connectivity, each interchange creates two new trees. Exhaustively searching the possible nearest-neighbors for each possible set of subtrees is the slowest but most optimizing way of performing this search. An alternative, more wide-ranging search, subtree pruning and regrafting (SPR), selects and removes a subtree from the main tree and reinserts it elsewhere on the main tree to create a new node. Finally, tree bisection and reconnection (TBR) detaches a subtree from the main tree at an interior node and then attempts all possible connections between edges of the two trees thus created. The increasing complexity of the tree rearrangement technique correlates with increasing computational time required for the search, although not necessarily with their performance.
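As a toy illustration of the nearest-neighbor interchange, the sketch below enumerates the two alternative pairings of four subtrees around an internal edge (the nested-tuple tree representation is a simplifying assumption, not a standard library format):

```python
def nni(quartet):
    """The two nearest-neighbor interchanges of an unrooted quartet
    ((A, B), (C, D)): the original pairing plus these two exhaust the
    three ways of connecting four subtrees across an internal edge."""
    (a, b), (c, d) = quartet
    return [((a, c), (b, d)), ((a, d), (b, c))]

print(nni((("A", "B"), ("C", "D"))))
# [(('A', 'C'), ('B', 'D')), (('A', 'D'), ('B', 'C'))]
```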
SPR can be further divided into uSPR (unrooted SPR) and rSPR (rooted SPR). uSPR is applied to unrooted trees and proceeds as follows: break any edge, then join one end of the broken edge (selected arbitrarily) to any other edge in the tree. rSPR is applied to rooted trees* and proceeds as follows: break any edge except the edge leading to the root node, then take the end of the broken edge that is furthest from the root and attach it to any other edge of the tree.
* In this example the root of the tree is |
https://en.wikipedia.org/wiki/Snake-in-the-box | The snake-in-the-box problem in graph theory and computer science deals with finding a certain kind of path along the edges of a hypercube. This path starts at one corner and travels along the edges to as many corners as it can reach. After it gets to a new corner, the previous corner and all of its neighbors must be marked as unusable. The path should never travel to a corner which has been marked unusable.
In other words, a snake is a connected open path in the hypercube where each node in the path, with the exception of the head (start) and the tail (finish), has exactly two neighbors that are also in the snake. The head and the tail each have only one neighbor in the snake. The rule for generating a snake is that a node in the hypercube may be visited if it is connected to the current node and it is not a neighbor of any previously visited node in the snake, other than the current node.
In graph theory terminology, this is called finding the longest possible induced path in a hypercube; it can be viewed as a special case of the induced subgraph isomorphism problem. There is a similar problem of finding long induced cycles in hypercubes, called the coil-in-the-box problem.
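The definition above translates directly into a brute-force depth-first search; the sketch below (practical only for small dimensions) extends a snake one vertex at a time:

```python
def neighbors(v, n):
    # Vertices of the n-cube are bitstrings; neighbors differ in exactly one bit.
    return [v ^ (1 << i) for i in range(n)]

def longest_snake(n):
    best = []
    def extend(path):
        nonlocal best
        if len(path) > len(best):
            best = path[:]
        head = path[-1]
        for nxt in neighbors(head, n):
            # nxt is usable only if its sole neighbor in the path is the current head.
            if nxt not in path and all(
                    u == head or u not in path for u in neighbors(nxt, n)):
                path.append(nxt)
                extend(path)
                path.pop()
    extend([0])  # the hypercube is vertex-transitive, so starting at 0 loses nothing
    return best

print(len(longest_snake(4)) - 1)  # longest snake in the 4-cube has 7 edges
```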
The snake-in-the-box problem was first described by , motivated by the theory of error-correcting codes. The vertices of a solution to the snake or coil in the box problems can be used as a Gray code that can detect single-bit errors. Such codes have applications in electrical engineering, coding theory, and computer network topologies. In these applications, it is important to devise as long a code as is possible for a given dimension of hypercube. The longer the code, the more effective are its capabilities.
Finding the longest snake or coil becomes notoriously difficult as the dimension number increases and the search space suffers a serious combinatorial explosion. Some techniques for determining the upper and lower bounds for the snake-in-the-box problem include proofs usin |
https://en.wikipedia.org/wiki/Peter%20Lucas%20%28computer%20scientist%29 | Peter Lucas (13 January 1935 in Vienna, Austria – 2 February 2015 in California, United States) was an Austrian computer scientist and university professor.
Life
Peter Lucas graduated in 1953 and then studied telecommunications at the Vienna University of Technology. He completed his studies in 1959 with a diploma thesis on the topic of programming electronic calculating machines. Then he was a member of Heinz Zemanek's group and was responsible for the system programming of Mailüfterl, the first fully transistorized computer in continental Europe.
In 1961, he moved with the Mailüfterl Group from the Technical University to IBM, working at the IBM Laboratory Vienna on the formal description of programming languages. Together with Hans Bekić, Kurt Walk, and Heinz Zemanek, he was responsible for the formal definition of the IBM programming language PL/I using the Vienna Definition Language (VDL), an important part of the formal method VDM. In addition, he worked together with Hans Bekić on a compiler for ALGOL 60. During this time, he gave lectures at the Vienna University of Technology and the Johannes Kepler University Linz, covering theoretical foundations of programming and the formal definition of programming languages.
In 1978, he joined the Thomas J. Watson Research Center at Yorktown Heights, New York, United States, where he worked on experimental compiler projects. In 1979, he moved to IBM in San Jose, California, later the IBM Almaden Research Center. In 1988, he worked in John Backus' group on the definition and implementation of the functional programming language FL.
In October 1993, he was appointed as a full professor in software technology at the Graz University of Technology, retiring to an emeritus position in July 2001. From 1994, he was chairman of Formal Methods Europe (FME) and corresponding member of the Austrian Academy of Sciences.
Peter Lucas died on 2 February 2015 at the age of 80.
Awards
19 |
https://en.wikipedia.org/wiki/Facebook%20Stories | Facebook Stories are short user-generated photo or video collections that can be uploaded to the user's Facebook. Facebook Stories were introduced on March 28, 2017. They are considered a second news feed for the social media website. They are focused around Facebook's in-app camera, which allows users to add fun filters and Snapchat-like lenses to their content as well as add visual geolocation tags to their photos and videos. Content can be posted publicly on the Facebook app, where it remains visible for only 24 hours, or it can be sent as a direct message to a Facebook friend.
"As people mostly post photos and videos, Stories is the way they’re going to want to do it," says Facebook Camera product manager Connor Hayes, noting Facebook's shift away from text status updates after ten years as its primary sharing option. "Obviously we’ve seen this doing very well in other apps. Snapchat has really pioneered this," explained Hayes. Facebook has seen much success through other applications like Snapchat and Instagram, especially since Facebook bought Instagram for $1 billion in 2012.
History
After many failed attempts to incorporate Snapchat-like features into Facebook, the company decided to test run Messenger Day. In 2016, Facebook created this feature, which allowed users to post videos and pictures with filters for 24 hours only. It was initially launched only in Poland because of the unpopularity of Snapchat in that region. Users were able to add text and colorful graphics. However, this was only a test for Facebook, to be later turned into a feature on Facebook's main app.
Facebook's introduction of the Story function may have been in response to the wider success of Instagram Story advertising over the advertising on Facebook Wall; Instagram Story ads were found to be more successful than Facebook Wall advertising in all demographics aside from non-millennial men.
Popularity and criticism
Facebook Stories is much less popular among socia
https://en.wikipedia.org/wiki/Security%20descriptor | Security descriptors are data structures of security information for securable Windows objects, that is objects that can be identified by a unique name. Security descriptors can be associated with any named objects, including files, folders, shares, registry keys, processes, threads, named pipes, services, job objects and other resources.
Security descriptors contain discretionary access control lists (DACLs) that contain access control entries (ACEs) that grant and deny access to trustees such as users or groups. They also contain a system access control list (SACL) that controls auditing of object access. ACEs may be explicitly applied to an object or inherited from a parent object. The order of ACEs in an ACL is important, with access-denied ACEs appearing higher in the order than ACEs that grant access. Security descriptors also identify the object's owner.
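As a toy illustration of why that ordering matters, the following Python sketch models an access check over an ordered list of ACEs; the SIDs, the two-permission mask, and the simplified algorithm are invented for the example and do not reflect the actual Windows implementation or API.

```python
READ, WRITE = 0x1, 0x2

def access_check(dacl, trustee_sids, desired):
    """dacl: ordered list of (type, sid, mask), type 'deny' or 'allow'."""
    granted = 0
    for ace_type, sid, mask in dacl:
        if sid not in trustee_sids:
            continue                      # ACE does not apply to this trustee
        if ace_type == "deny" and mask & desired:
            return False                  # an earlier deny wins outright
        if ace_type == "allow":
            granted |= mask & desired
            if granted == desired:
                return True
    return False                          # nothing (or not enough) granted

# Because the deny ACE sits first, a member of both groups is refused WRITE.
dacl = [("deny", "S-1-5-21-1-guests", WRITE),
        ("allow", "S-1-5-21-1-users", READ | WRITE)]
print(access_check(dacl, {"S-1-5-21-1-users", "S-1-5-21-1-guests"}, WRITE))
```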
Mandatory Integrity Control is implemented through a new type of ACE on a security descriptor.
Files and folder permissions can be edited by various tools including Windows Explorer, WMI, command line tools like Cacls, XCacls, ICacls, SubInACL, the freeware Win32 console FILEACL, the free software utility SetACL, and other utilities. To edit a security descriptor, a user needs WRITE_DAC permissions to the object, a permission that is usually delegated by default to administrators and the object's owner.
Permissions in NTFS
The following table summarizes NTFS permissions and their roles (in individual rows). The table exposes the following information:
Permission code: Each access control entry (ACE) specifies its permission with a binary code. There are 14 codes (12 in older systems).
Meaning: Each permission code has a meaning, depending on whether it is applied to a file or a folder. For example, code 0x01 on file indicates the permission to read the file, while on a folder indicates the permission to list the content of the folder. Knowing the meaning alone, however, is useless. An ACE must also spec |
https://en.wikipedia.org/wiki/Prp8 | Prp8 refers to both the Prp8 protein and Prp8 gene. Prp8's name originates from its involvement in pre-mRNA processing. The Prp8 protein is a large, highly conserved, and unique protein that resides in the catalytic core of the spliceosome and has been found to have a central role in molecular rearrangements that occur there. Prp8 protein is a major central component of the catalytic core in the spliceosome, and the spliceosome is responsible for splicing of precursor mRNA that contains introns and exons. Unexpressed introns are removed by the spliceosome complex in order to create a more concise mRNA transcript. Splicing is just one of many different post-transcriptional modifications that mRNA must undergo before translation. Prp8 has also been hypothesized to be a cofactor in RNA catalysis.
History
The systematic name for the PRP8 protein is YHR165C. Prp8 protein is coded by a single gene in humans with 42 exons. The size of Prp8 ranges between 230 and 280 kDa depending on the organism. The sequence coding for the Prp8 protein is highly conserved between eukaryotic organisms, with a 61% identity match between humans and yeast in amino acid sequence. The Prp8 gene is located on chromosome VIII in yeast and chromosome 17 in humans.
Role in splicing
Pre-mRNA splicing involves two trans-esterification reactions and attacks by hydroxyl groups within the spliceosome. In these reactions, spliceosomal intron removal is catalyzed by the spliceosome using the same mechanism as Group II introns. There are five key small nuclear RNA-protein complexes (snRNP) involved in this process. All of the snRNPs together contribute about 50 proteins to the core spliceosome. The Prp8 gene encodes for a protein that is a central part of the U5 snRNP and the U5-U4/U6 tri-snRNP. The U5-U4/U6 tri-snRNP is involved with Complex B, the pre-catalytic spliceosome, where the U5 snRNP binds to exons at the 5’ end of the mRNA before shifting to introns. The U5 snRNP is involved with Complex |
https://en.wikipedia.org/wiki/F%CF%83%20set |
In mathematics, an Fσ set (read F-sigma set) is a countable union of closed sets. The notation originated in French, with F for fermé (French: closed) and σ for somme (French: sum, union).
The complement of an Fσ set is a Gδ set.
Fσ is the same as $\mathbf{\Sigma}^0_2$ in the Borel hierarchy.
Examples
Each closed set is an Fσ set.
The set of rationals $\mathbb{Q}$ is an Fσ set in $\mathbb{R}$ (see the LaTeX sketch after these examples). More generally, any countable set in a T1 space is an Fσ set, because every singleton is closed.
The set of irrationals is not an Fσ set; this follows from the Baire category theorem.
In metrizable spaces, every open set is an Fσ set.
The union of countably many Fσ sets is an Fσ set, and the intersection of finitely many Fσ sets is an Fσ set.
The set of all points $(x, y)$ in the Cartesian plane such that $x/y$ is rational is an Fσ set because it can be expressed as the union of all the lines passing through the origin with rational slope:
$$\bigcup_{r \in \mathbb{Q}} \{(ry, y) \mid y \in \mathbb{R}\},$$
where $\mathbb{Q}$, the set of rational numbers, is a countable set.
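Spelling out the first example above in LaTeX (a minimal sketch of the decomposition, using only facts stated in this article):

```latex
% Q is countable and each singleton {q} is closed in R, so
\mathbb{Q} \;=\; \bigcup_{q \in \mathbb{Q}} \{q\}
% exhibits Q as a countable union of closed sets, i.e. an F_sigma set.
```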
See also
Gδ set — the dual notion.
Borel hierarchy
P-space, any space having the property that every Fσ set is closed |
https://en.wikipedia.org/wiki/Tresorit | Tresorit is a cloud storage service with end-to-end encryption.
Founded in 2011, Tresorit closed an €11.5M Series B financing round in 2018 and was featured in the Financial Times' FT1000 list in 2020 as the fifth fastest-growing cybersecurity company in Europe.
The software is available for Windows, macOS, Android, iOS, and Linux, and the company has released plugins for Gmail and Outlook. As of 2022, Swiss Post owns a majority stake in the cloud storage service. Tresorit works as an independent entity under Swiss Post. The company has offices in Zurich, Switzerland; Munich, Germany; and Budapest, Hungary, employing around 100 people.
History
Tresorit was founded in 2011 by Istvan Lam (CEO), Szilveszter Szebeni (CIO) and Gyorgy Szilagyi (CPO). In the coming years, Andrea Skaliczki joined as CFO, Istvan Hartung as CTO and Arno van Züren as CSO. The company officially launched its client-side encrypted cloud storage service after emerging from its stealth beta version in April 2014.
In 2013 and 2014, Tresorit hosted a hacking contest offering $10,000 to anyone who hacked their data encryption methods to gain access to their servers. After some months, the reward was increased to $25,000 and later to $50,000, challenging experts from institutions like Harvard, Stanford or MIT. The contest ran for 468 days and according to the company, nobody was able to break the encryption.
In August 2015, Wuala (owned by LaCie and Seagate), a pioneer of secure cloud storage, announced it was closing its service after 7 years and recommended its users choose Tresorit as their secure cloud alternative.
In 2016, Tresorit launched a beta of the software development kit (SDK) ZeroKit. In January 2017, Apple's SDK project CareKit announced the option for mobile app developers using CareKit to integrate ZeroKit, enabling zero-knowledge user authentication and encryption for medical and health apps.
In 2017, Tresorit patented its shareable encryption technology in the US under no. US 9563783 |
https://en.wikipedia.org/wiki/List%20of%20mathematical%20properties%20of%20points | In mathematics, the following types of points appear:
Algebraic point
Associated point
Base point
Closed point
Divisor point
Embedded point
Extreme point
Fermat point
Fixed point
Focal point
Geometric point
Hyperbolic equilibrium point
Ideal point
Inflection point
Integral point
Isolated point
Generic point
Heegner point
Lattice hole, Lattice point
Lebesgue point
Midpoint
Napoleon points
Non-singular point
Normal point
Parshin point
Periodic point
Pinch point
Point (geometry)
Point source
Rational point
Recurrent point
Regular point, Regular singular point
Saddle point
Semistable point
Separable point
Simple point
Singular point of a curve
Singular point of an algebraic variety
Smooth point
Special point
Stable point
Torsion point
Vertex (curve)
Weierstrass point
Calculus
Critical point (aka stationary point), any value v in the domain of a differentiable function of any real or complex variable such that the derivative of the function at v is 0 or undefined
Geometry
Antipodal point, the point diametrically opposite to another point on a sphere, such that a line drawn between them passes through the centre of the sphere and forms a true diameter
Conjugate point, any point that can almost be joined to another by a 1-parameter family of geodesics (e.g., the antipodes of a sphere, which are linkable by any meridian)
Vertex (geometry), a point that describes a corner or intersection of a geometric shape
Apex (geometry), the vertex that is in some sense the highest of the figure to which it belongs
Topology
Adherent point, a point x in topological space X such that every open set containing x contains at least one point of a subset A
Condensation point, any point p of a subset S of a topological space, such that every open neighbourhood of p contains uncountably many points of S
Limit point of a set S in a topological space X, a point x (which is in X, but not necessarily in S) that can be approximated by points of S, since every neighbourhood o
https://en.wikipedia.org/wiki/Zhura | Zhura ( ) is a free, web-based screenwriting software application for writing and formatting screenplays to the film industry standard, as well as other formats. Zhura allows users to collaborate on scripts in public or private groups and uses Creative Commons Licensing for all work in the public workspace.
On March 29, 2010, Zhura announced its merger with Scripped, with Scripped's CEO, Sunil Rajaraman, becoming the merged company's chief executive officer. The Zhura CEO was Eric MacDonald, a former Cascade Communications engineer. Scripped later closed on April 1, 2015, after a catastrophic, irrecoverable data loss.
Script editor
Screenplay Template – The script editor provides a built-in screenplay template which formats the document to the standard for scripts recommended by the AMPAS. The screenplay document is composed of seven elements: scene, action, character, dialogue, parenthetical, transition and shot. Each element has a specific style to which the script editor conforms as the writer types.
Script Formats – Other major script formats for stage play, sitcom, audio drama and comic book are also supported as well as the ability to switch between them.
Auto-Complete – Characters, scene headings and custom transitions are “remembered” as they are written and “recalled” with tab-completion when a writer starts a new character, scene heading or transition, respectively.
Multiple Editors – With a collaborative editing model comparable to Google Docs, two or more users can edit the same script simultaneously, regardless of operating system or web browser.
Import/Export – A screenplay written in another program can be imported into the script editor and automatically conformed to the screenplay template. The closer the original script has adhered to the standard format, the better it will appear when imported. Supported import/export formats include Text (.txt) Word (.doc) Rich Text (.rtf) and OpenDocument (.odt). Scripts can also be ex |
https://en.wikipedia.org/wiki/Dushnik%E2%80%93Miller%20theorem | In mathematics, the Dushnik–Miller theorem is a result in order theory stating that every infinite linear order has a non-identity order embedding into itself. It is named for Ben Dushnik and E. W. Miller, who published this theorem for countable linear orders in 1940. More strongly, they showed that in the countable case there exists an order embedding into a proper subset of the given order; however, they provided examples showing that this strengthening does not always hold for uncountable orders.
In reverse mathematics, the Dushnik–Miller theorem for countable linear orders has the same strength as the arithmetical comprehension axiom (ACA0), one of the "big five" subsystems of second-order arithmetic. This result is closely related to the fact that (as Louise Hay and Joseph Rosenstein proved) there exist computable linear orders with no computable non-identity self-embedding.
See also
Cantor's isomorphism theorem
Laver's theorem |
https://en.wikipedia.org/wiki/How%20to%20Design%20Programs | How to Design Programs (HtDP) is a textbook by Matthias Felleisen, Robert Bruce Findler, Matthew Flatt, and Shriram Krishnamurthi on the systematic design of computer programs. MIT Press published the first edition in 2001, and the second edition in 2018, which is freely available online and in print. The book introduces the concept of a design recipe, a six-step process for creating programs from a problem statement. While the book was originally used along with the education project TeachScheme! (renamed ProgramByDesign), it has been adopted at many colleges and universities for teaching program design principles.
According to HtDP, the design process starts with a careful analysis of a problem statement with the goal of extracting a rigorous description of the kinds of data that the desired program consumes and produces. The structure of these data descriptions determines the organization of the program.
Then, the book carefully introduces data forms of progressively growing complexity. It starts with data of atomic forms and then progresses to compound forms, including data that can be arbitrarily large. For each kind of data definition, the book explains how to organize the program in principle, thus enabling a programmer who encounters a new form of data to still construct a program systematically.
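To illustrate how the shape of a data definition dictates the shape of a program (rendered here in Python rather than the book's Scheme-based teaching languages; the data definition and function names are invented for the example):

```python
# Data definition: a ListOfNumbers is either
#   - the empty list, or
#   - a number followed by a ListOfNumbers.
# The function's two cases mirror the two clauses of the data definition.

def sum_numbers(lon):
    """ListOfNumbers -> Number: add up every number in lon."""
    if not lon:                           # clause 1: the empty list
        return 0
    return lon[0] + sum_numbers(lon[1:])  # clause 2: head + recur on the rest

assert sum_numbers([]) == 0
assert sum_numbers([1, 2, 3]) == 6
```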
Like Structure and Interpretation of Computer Programs (SICP), HtDP relies on a variant of the programming language Scheme. It includes its own integrated development environment (IDE), named DrRacket, which provides a series of programming languages. The first language supports only functions, atomic data, and simple structures. Each language adds expressive power to the prior one. Except for the largest teaching language, all languages for HtDP are functional programming languages.
Pedagogical basis
In the 2004 paper, The Structure and Interpretation of the Computer Science Curriculum, the same authors compared and contrasted the pedagogical focus |
https://en.wikipedia.org/wiki/DC%20injection%20braking | DC injection braking is a method of slowing AC electric motors. Direct current is injected into the windings of the AC motor after the AC voltage is disconnected, providing braking force to the rotor.
Applications of DC injection braking
When power is disconnected from the motor, the rotor spins freely until friction slows it to a stop. Large rotors and loads with a high moment of inertia may take a significant amount of time to stop through internal friction alone. To reduce downtime, or possibly as an emergency safety feature, DC injection braking can be used to quickly stop the rotor.
A DC injection brake system can be used as an alternative to a friction brake system. DC injection brakes only require a small module located with the other motor switchgear and/or drivers, mounted in a remote and convenient location, whereas a friction brake must be mounted somewhere on the rotating system. Friction brakes eventually wear out with use and require replacement of braking components. DC brake modules do not have consumable parts and should not require maintenance. Friction brakes also require a method of actuation, requiring either a human operator or system controlled actuator, adding to the complexity of the system. A DC brake is easily integrated into the motor control circuitry.
Operation
Direct current is applied to the motor stator windings, creating a stationary magnetic field which applies a static torque to the rotor. This slows and eventually halts the rotor completely. As long as the DC voltage is applied to the windings, the rotor will be held in position and resistant to any attempt to spin it. The higher the voltage that is applied, the stronger the braking force and holding power. The current should only be applied for a few seconds or the motor will overheat.
In a thyristor-controlled injection braking unit, the voltage to be injected into the motor stator winding is obtained by rectifying the supply voltage. Two thyristors are connected as a ph |
https://en.wikipedia.org/wiki/TENEX%20%28operating%20system%29 | TENEX is an operating system developed in 1969 by BBN for the PDP-10, which later formed the basis for Digital Equipment Corporation's TOPS-20 operating system.
Background
In the 1960s, BBN was involved in a number of LISP-based artificial intelligence projects for DARPA, many of which had very large (for the era) memory requirements. One solution to this problem was to add paging software to the LISP language, allowing it to write out unused portions of memory to disk for later recall if needed. One such system had been developed for the PDP-1 at MIT by Daniel Murphy before he joined BBN. Early DEC machines were based on an 18-bit word, allowing addresses to encode for a 256 kiloword memory. The machines were based on expensive core memory and included nowhere near the required amount. The pager used the most significant bits of the address to index a table of blocks on a magnetic drum that acted as the pager's backing store. The software would fetch the pages if needed, and then resolve the address to the proper area of RAM.
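The paging scheme described above amounts to a table lookup on the high bits of each address; the following Python sketch is illustrative only (the field widths, table contents, and function names are assumptions, not the actual PDP-1 or BBN layout):

```python
PAGE_BITS = 9                  # assume 512-word pages, so an 18-bit address
PAGE_SIZE = 1 << PAGE_BITS     # splits into a 9-bit page and a 9-bit offset

def translate(addr, page_table, fetch_from_drum):
    """Resolve a virtual address to a physical RAM location."""
    page, offset = addr >> PAGE_BITS, addr & (PAGE_SIZE - 1)
    frame = page_table.get(page)
    if frame is None:                      # page fault: load block from drum
        frame = fetch_from_drum(page)
        page_table[page] = frame
    return frame * PAGE_SIZE + offset

table = {0: 3}                             # page 0 already resident in frame 3
drum = lambda page: 7                      # pretend the drum loads into frame 7
print(translate(0o1234, table, drum))      # address in page 1 faults, then resolves
```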
In 1964 DEC announced the PDP-6. DEC was still heavily involved with MIT's AI Lab, and many feature requests from the LISP hackers were moved into this machine. 36-bit computing was especially useful for LISP programming because with an 18-bit address space, a word of storage on these systems contained two addresses, a perfect match for the common LISP CAR and CDR operations. BBN became interested in buying one for their AI work when they became available, but wanted DEC to add a hardware version of Murphy's pager directly into the system. With such an addition, every program on the system would have paging support invisibly, making it much easier to do any sort of programming on the machine. DEC was initially interested, but soon (1966) announced they were in fact dropping the PDP-6 and concentrating solely on their smaller 18-bit and new 16-bit lines. The PDP-6 was expensive and complex, and had not sold well for these reasons.
It was n |
https://en.wikipedia.org/wiki/Antarctic%20Benthic%20Deep-Sea%20Biodiversity%20Project | The Antarctic Benthic Deep-Sea Biodiversity Project (ANDEEP) is an international project to investigate deep-water biology of the Scotia and Weddell seas. Benthic refers to "bottom-dwelling" organisms that are known to exhibit unusual characteristics not normally seen in shallow-dwelling creatures. ANDEEP has already made many notable discoveries, such as animals with gigantism and extraordinary longevity. |
https://en.wikipedia.org/wiki/Michael%20Jackson%27s%20Moonwalker | Michael Jackson's Moonwalker is the name of several video games based on the 1988 Michael Jackson film Moonwalker. Sega developed two beat 'em ups, released in 1990; one released in arcades and another released for the Sega Genesis and Master System consoles. U.S. Gold also published various games for home computers the same year. Each of the games' plots loosely follows the "Smooth Criminal" segment of the film, in which Jackson rescues kidnapped children from the evil Mr. Big, and incorporates synthesized versions of some of the musician's songs. Following Moonwalker, Jackson collaborated with Sega on several other video games.
Sega arcade version
Michael Jackson's Moonwalker is an arcade video game by Sega (programming) and Triumph International (audiovisuals), developed with Jackson's involvement and released on the Sega System 18 hardware. The arcade version has distinctively different gameplay from its computer and console counterparts, focusing more on beat 'em up gameplay elements rather than platform gameplay.
Gameplay
The game is essentially a beat 'em up drawn using isometric video game graphics, although Jackson attacks with magic powers instead of physical contact and has the ability to shoot short-ranged magic power at enemies. The magic power can be charged by holding the attack button to increase its range and damage. When close to enemies, Jackson executes a spinning melee attack using magic power.
If the cabinet supports it, up to three people can play simultaneously. All three players play as Jackson, dressed in his suit (white for player 1, red for player 2, black for player 3).
Jackson's special attack is termed "Dance Magic". There are three different dance routines that may be performed, and the player starts with one to three of these attacks per credit (depending on how the machine is set up).
Bubbles, Michael's real-life pet chimpanzee, appears in each level. Once collected or rescued, Bubbles transforms Michael into a robotic version, with th |
https://en.wikipedia.org/wiki/Keystone%20effect | The keystone effect is the apparent distortion of an image caused by projecting it onto an angled surface. It is the distortion of the image dimensions, such as making a square look like a trapezoid, the shape of an architectural keystone, hence the name of the feature. In the typical case of a projector sitting on a table, and looking upwards to the screen, the image is larger at the top than on the bottom. Some areas of the screen may not be focused correctly as the projector lens is focused at the average distance only.
In photography, the term is used to describe the apparent leaning of buildings towards the vertical centerline of the photo when shooting upwards, a common effect in architectural photography. Likewise, when taking photos looking down, e.g., from a skyscraper, buildings appear to get broader towards the top. The effect is usually corrected by either using special lenses in tilt–shift photography or in post-processing using modern image editing software.
Theory
The distortion suffered by the image depends on the angle of the projector to the screen, and the beam angle.
The distortion (on a two-dimensional model, and for small focus angles) is best approximated by:

$$\frac{\cos\left(\alpha - \frac{\varepsilon}{2}\right)}{\cos\left(\alpha + \frac{\varepsilon}{2}\right)}$$

where $\alpha$ is the angle between the screen axis and the central ray from the projector, and $\varepsilon$ is the width of the focus. From the formula, it is clear that there will be no distortion (the ratio equals 1) when $\alpha$ is zero, i.e. when the projector axis is perpendicular to the screen.
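A quick numerical check of this approximation (variable names follow the formula above; the sketch is purely illustrative):

```python
import math

def keystone_ratio(alpha_deg, eps_deg):
    """Ratio of far-edge width to near-edge width of the projected image."""
    a, e = math.radians(alpha_deg), math.radians(eps_deg)
    return math.cos(a - e / 2) / math.cos(a + e / 2)

print(keystone_ratio(0, 30))              # 1.0: perpendicular, no distortion
print(round(keystone_ratio(20, 30), 3))   # > 1: the image is wider at the top
```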
In stereo imaging
In stereoscopy, two lenses are used to view the same subject image, each from a slightly different perspective, allowing a three-dimensional view of the subject. If the two images are not exactly parallel, this causes a keystone effect. This is particularly noticeable when the lenses are close to the subject, as with a stereo microscope, but is also a common problem with many 3D stereo camera lenses.
Solving the problem
The problem arises for screen projectors that don't have the depth of focus necessary to keep all lines (from top to |
https://en.wikipedia.org/wiki/Michael%20McKubre | Michael Charles Harold McKubre is an electrochemist involved with cold fusion energy research. McKubre was the director of the Energy Research Center at SRI International in 1998. He is a native of New Zealand.
Education
McKubre completed two degrees at Victoria University of Wellington: a master's degree in 1972, with a thesis titled A Study of the Frequency Domain Induced Polarisation Effects Displayed by Clay and by Cation Exchange Resin, Model Soil Systems, followed by a PhD in 1976 on membrane polarisation effects in simulated rock systems.
Career
From 1989 to 2002, he researched cold fusion at SRI International. Unlike other researchers in the same field, he obtained mainstream funding during all his research: first from the Electric Power Research Institute, then from the Japanese government, and in 2002 he had funding from the U.S. government.
In January 1992 a cold fusion cell exploded in an SRI lab. One of McKubre's collaborators was killed and three people including McKubre were wounded. McKubre still has pieces of glass embedded in his side. Subsequent experiments were done behind bulletproof glass.
In 2004 he and other cold fusion researchers asked the United States Department of Energy (DOE) to give a new review to the field of cold fusion, and he co-authored a report with all the available experimental and theoretical evidence since the 1989 review. The 2004 review concluded that "while significant progress has been made in the sophistication of calorimeters since the review of this subject in 1989, the conclusions reached by the reviewers today are similar to those found in the 1989 review."
As of 2010, he was still conducting experiments with palladium cells at SRI International and collaborating with the ENEA laboratory, where the most reliable palladium was being produced. McKubre more recently took part as one of the 22 physicists of the Steorn "jury".
Selected publications
(manuscript) Paper listing the available experimental evidence of cold fusion. |
https://en.wikipedia.org/wiki/Ivanovism | Ivanovism () is a Rodnover (Slavic Neopagan) new religious movement and healing system in Eastern Europe based on the teachings of the Russian mystic Porfiry Korneyevich Ivanov (1898–1983), who elaborated his doctrines by drawing upon Russian folklore. The movement began to take institutional forms between the 1980s and the 1990s, and in the same period it had an influence on the other Rodnover movement of Peterburgian Vedism.
Ivanovite theology is a pantheism in which "Nature" (Pri-Roda in Slavic languages) is seen as a personified, universal and almighty force expressing herself primarily as the four elemental gods of Air, Earth, Fire and Water, while Porfiry Ivanov is deified as the Parshek, her messenger and intercessor. In some interpretations of the theology, Nature is the cloth of a more abstract God, and God comes to full consciousness in mankind. Central to the movement's practice are the Detka ("Child") health system based on twelve commandments and the singing of a Hymn to Life which contains a synthesis of the movement's beliefs.
Overview
Scholars of the study of religions have classified Ivanovism variously as a nature religion, a philosophy, and a system of health. The scholar Boris Knorre defined Ivanovism as an autochthonous Slavic new religious movement combining Slavic paganism with some Christian concepts, and a worldview in line with the theories of "energism" and "noospherology". The scholar S. I. Ivanenko ascribed Ivanovism to the "movements of spiritual and physical improvement", while other scholars have emphasised its similarities with Taoism, yogic traditions, and other Eastern religions, while recognising its isolated Russian origins. Some scholars including Alexey V. Gaidukov characterised Ivanovism as a movement of deep ecology, essentially an animism based on the deification of natural elements and on the attribution of healing properties to them, while O. V. Aseyev described it as a system of "health promotion, rituals and healing m |
https://en.wikipedia.org/wiki/Elliptic%20algebra | In algebra, an elliptic algebra is a certain regular algebra of a Gelfand–Kirillov dimension three (quantum polynomial ring in three variables) that corresponds to a cubic divisor in the projective space P2. If the cubic divisor happens to be an elliptic curve, then the algebra is called a Sklyanin algebra. The notion is studied in the context of noncommutative projective geometry. |
https://en.wikipedia.org/wiki/Electrophysiological%20techniques%20for%20clinical%20diagnosis | Clinical Electrophysiological Testing is based on techniques derived from electrophysiology used for the clinical diagnosis of patients. There are many processes that occur in the body which produce electrical signals that can be detected. Depending on the location and the source of these signals, distinct methods and techniques have been developed to properly target them.
Role of electrophysiology in clinical medicine
Electrophysiology has a very important role in ensuring accurate clinical diagnoses. The brain, the heart and skeletal muscles are prime sources of electric and magnetic fields that can be recorded and the resulting patterns can give insight on what ailments the subject may have.
While electrophysiological tests generally passively collect electrical data, it is sometimes necessary to apply an external stimulus to the desired target in order to produce transient evoked potentials that can provide further insight not obtained from solely passive recording methods.
Electroencephalography (EEG)
Electroencephalography is the measurement of brain activity through the surface of the scalp. Electroencephalography data can be viewed as a qualitative wave form, or it can be further processed through analytical procedures to produce quantitative electroencephalography (qEEG). If qEEG data is mapped from multiple parts of the brain then it is a topographic qEEG (also known as brain electrical activity mapping or BEAM).
If EEGs are recorded after intentionally stimulating the brain, then the resulting data is called an event-related potential. The firing of neurons throughout the brain has been known to have localized relationships to certain functions, processes and reactions to stimuli. With proper equipment it is possible to locate where in the brain neurons have been activated and measure their event-related potentials. Event-related potentials can be classified as either sensory, motor or cognitive.
EEGs can be used to diagnose and monitor brain |
https://en.wikipedia.org/wiki/Strange%20star | A strange star is a hypothetical astronomical object, a quark star made of strange quark matter.
Strange stars might exist without regard to the Bodmer–Witten assumption of stability at near-zero temperatures and pressures, as strange quark matter might form and remain stable at the core of neutron stars, in the same way as ordinary quark matter could. Such strange stars will naturally have a crust layer of neutron star material. The depth of the crust layer will depend on the physical conditions and circumstances of the entire star and on the properties of strange quark matter in general. Stars partially made up of quark matter (including strange quark matter) are also referred to as hybrid stars.
The collapse of the crust layer of strange stars is one of the proposed causes of fast radio bursts.
Theoretical description
Neutron stars are formed when the collapse of a star occurs with such intense force that gravity forces subatomic particles such as protons and electrons to merge into neutrally charged neutron particles, releasing a shower of neutrinos. If the resultant neutral core is able to maintain form and not collapse into a black hole, the end result is an incredibly dense celestial body composed entirely of neutral uncharged particles.
Protons and neutrons are composed of three quarks: a proton by two up quarks and one down quark, a neutron by two down quarks and one up quark. It is hypothesized that within neutron stars, the conditions are so extreme that a process known as deconfinement occurs: where subatomic particles dissolve and leave their constituent quarks behind as free particles. The temperature and pressure would then force these quarks to be squeezed together to such an extent that they would form a hypothetical phase of matter known as quark matter. If this occurs, the neutron star becomes a "quark star". If the pressure is great enough, the quarks could be affected even further and transform into strange quarks, which would then interac |
https://en.wikipedia.org/wiki/J%C3%BCrgen%20G%C3%A4rtner | Jürgen Gärtner (born 1950 in Reichenbach, Oberlausitz) is a German mathematician, specializing in probability theory and analysis.
Gärtner graduated in 1973 with a Diplom from TU Dresden. He received his Ph.D. in 1976 from Lomonosov University under the supervision of Mark Freidlin. At the Weierstrass Institute, Gärtner was a research associate from 1976 to 1985; he habilitated there in 1984 with Dissertation B: Zur Ausbreitung von Wellenfronten für Reaktions-Diffusions-Gleichungen (The propagation of wave fronts for reaction-diffusion equations). From 1985 to 1995 he was the head of the probability group at the Weierstrass Institute. He was a professor of the Academy of Sciences of the GDR from 1988 until its disbandment in late 1991. He was a professor at TU Berlin from 1992 to 2011, retiring as professor emeritus in 2011.
In 1977 he proved a general form of Cramér's Theorem in the theory of large deviations (LD); the theorem is known as the Gärtner-Ellis Large Deviations Principle (LDP). (Richard S. Ellis proved the theorem in 1984 with weaker premises.) In 1982 Gärtner wrote an important paper on the famous KPP equation (a semi-linear diffusion equation introduced in 1937). In 1987 Gärtner, with Donald A. Dawson, introduced the construction of a projective limit in the LDP. From 1987 to 1989 Gärtner and Dawson wrote a series of important papers on the McKean-Vlasov process. Their results were extended by other mathematicians in the 1990s to random mean-field interactions and to spin-glass mean-field interactions. In 1990 Gärtner and Molchanov wrote a seminal paper on intermittency in the parabolic Anderson model; the paper introduced a new approach to intermittency via the study of Lyapunov coefficients.
Gärtner was a member from 1984 to 1992 of the editorial board of Probability Theory and Related Fields and from 1990 to 2000 of the editorial board of Mathematische Nachrichten.
In 1992 Gärtner was an invited lecturer at the first European Congress of Mathemat |
https://en.wikipedia.org/wiki/Oscillator%20sync | Oscillator sync is a feature in some synthesizers with two or more VCOs, DCOs, or "virtual" oscillators. As one oscillator finishes a cycle, it resets the period of another oscillator, forcing the latter to have the same base frequency. This can produce a harmonically rich sound, the timbre of which can be altered by varying the synced oscillator's frequency. An oscillator that resets other oscillators is called the master, or leader; the oscillators it resets are called slaves, or followers. There are two common forms of oscillator sync which appear on synthesizers: Hard Sync and Soft Sync. According to Sound on Sound journalist Gordon Reid, oscillator sync is "one of the least understood facilities on any synthesizer".
Hard Sync
The leader oscillator's pitch is generated by user input (typically the synthesizer's keyboard), and is arbitrary. The follower oscillator's pitch may be tuned to (or detuned from) this frequency, or may remain constant. Every time the leader oscillator's cycle repeats, the follower is retriggered, regardless of its position. If the follower is tuned to a lower frequency than the leader it will be forced to repeat before it completes an entire cycle, and if it is tuned to a higher frequency it will be forced to repeat partway through a second or third cycle. This technique ensures that the oscillators are technically playing at the same frequency, but the irregular cycle of the follower oscillator often causes complex timbres and the impression of harmony. If the tuning of the follower oscillator is swept, one may discern a harmonic sequence.
This effect may be achieved by measuring the zero axis crossings of the leader oscillator and retriggering the follower oscillator after every other crossing.
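In a naive digital implementation this amounts to resetting the follower's phase accumulator whenever the leader's accumulator wraps. The Python sketch below is illustrative only (the sample rate and frequencies are arbitrary, and no anti-aliasing is attempted, which leads to the artifacts noted next):

```python
import math

def hard_sync(leader_hz, follower_hz, sample_rate=48000, n=200):
    """Render n samples of a sine 'follower' hard-synced to the leader."""
    out, lead_phase, fol_phase = [], 0.0, 0.0
    for _ in range(n):
        lead_phase += leader_hz / sample_rate
        fol_phase += follower_hz / sample_rate
        if lead_phase >= 1.0:      # the leader finished a cycle...
            lead_phase -= 1.0
            fol_phase = 0.0        # ...so retrigger the follower
        out.append(math.sin(2 * math.pi * fol_phase))
    return out

samples = hard_sync(220.0, 553.0)  # follower detuned above the leader
```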
This form of oscillator sync is more common than soft sync, but is prone to generating aliasing in naive digital implementations.
Soft Sync
There are several other kinds of sync which may also be called Soft Sync. In a Hard Sync setup, the follower os |
https://en.wikipedia.org/wiki/Conda%20%28package%20manager%29 | Conda is an open-source, cross-platform, language-agnostic package manager and environment management system. It was originally developed to solve difficult package management challenges faced by Python data scientists, and today is a popular package manager for Python and R.
At first part of the Anaconda Python distribution developed by Anaconda Inc., it ended up being useful on its own and for things other than Python, so it was spun out as a separate package and released under the BSD license. The Conda package and environment manager is included in all versions of Anaconda, Miniconda, and Anaconda Repository. Conda is a NumFOCUS affiliated project.
Features
Conda allows users to easily install different versions of binary software packages and any required libraries appropriate for their computing platform. Also, it allows users to switch between package versions and download and install updates from a software repository. Conda is written in the Python programming language, but can manage projects containing code written in any language (e.g., R), including multi-language projects. Conda can install Python itself, while similar Python-based cross-platform package managers (such as wheel or pip) cannot.
A popular Conda channel for bioinformatics software is Bioconda, which provides multiple software distributions for computational biology.
Comparison to pip
The big difference between Conda and the pip package manager is in how package dependencies are managed, which is a significant challenge for Python data science and the reason Conda was created. In versions of Pip released prior to version 20.3, Pip installs all Python package dependencies required, whether or not those conflict with other packages previously installed. So a working installation of, for example, Google TensorFlow will suddenly stop working when a user pip-installs a new package that needs a different version of the NumPy library. Everything might still appear to work but the user could get differen |
https://en.wikipedia.org/wiki/Evolution%20of%20seed%20size | The first seeded plants emerged in the late Devonian 370 million years ago. Selection pressures shaping seed size stem from physical and biological sources including drought, predation, seedling-seedling competition, optimal dormancy depth, and dispersal.
History
Since the evolution of the first seeded plants ~370 million years ago, the largest change in seed size was found to be at the divergence of gymnosperms and angiosperms ~325 million years ago, but overall, the divergence of seed size appears to take place relatively consistently through evolutionary time. Seed mass has been found to be phylogenetically conservative with most differences in mean seed mass within types of seed dispersal (dispersal modes) being phylogenetic. This type of information gives us clues about how seed size evolved. Dating fossilized seeds of various sizes and comparing them with the presence of possible animal dispersers and the environmental conditions of the time is another technique used to study the evolution of seed size. Environmental conditions appear to have had a larger influence on the evolution of seed size compared to the presence of animal dispersers. One example of seed size evolving to environmental conditions is thought to have been abundant, closed forest vegetation selecting for larger seed sizes during the Eocene epoch. A general increase or decrease in seed size through time has not been found, but instead a fluctuation in seed size following the environmental conditions of the Maastrichtian, Paleocene, Eocene, Oligocene, Miocene, and Pliocene epochs. Today we also see a pattern with seed size distribution and global environmental conditions where the largest mean seed size is found in tropical forests and a steep decrease in seed size takes place globally as vegetation type changes to non-forest.
Mechanism
Modern seed sizes range from 0.0001 mg in orchid seeds to in double coconuts. Larger seeds have larger quantities of metabolic reserves in their embryo an |
https://en.wikipedia.org/wiki/New%20digraph%20reconstruction%20conjecture | The reconstruction conjecture of Stanisław Ulam is one of the best-known open problems in graph theory. Using the terminology of Frank Harary it can be stated as follows: If G and H are two graphs on at least three vertices and ƒ is a bijection from V(G) to V(H) such that G\{v} and H\{ƒ(v)} are isomorphic for all vertices v in V(G), then G and H are isomorphic.
In 1964 Harary extended the reconstruction conjecture to directed graphs on at least five vertices as the so-called digraph reconstruction conjecture. Many results supporting the digraph reconstruction conjecture appeared between 1964 and 1976. However, this conjecture was proved to be false when P. K. Stockmeyer discovered several infinite families of counterexample pairs of digraphs (including tournaments) of arbitrarily large order. The falsity of the digraph reconstruction conjecture caused doubt about the reconstruction conjecture itself. Stockmeyer even observed that “perhaps the considerable effort being spent in attempts to prove the (reconstruction) conjecture should be balanced by more serious attempts to construct counterexamples.”
In 1979, Ramachandran revived the digraph reconstruction conjecture in a slightly weaker form called the new digraph reconstruction conjecture. In a digraph, the number of arcs incident from (respectively, to) a vertex v is called the outdegree (indegree) of v and is denoted by od(v) (respectively, id(v)). The new digraph reconstruction conjecture may be stated as follows: if D and E are any two digraphs and ƒ is a bijection from V(D) to V(E) such that D\{v} and E\{ƒ(v)} are isomorphic and (od(v), id(v)) = (od(ƒ(v)), id(ƒ(v))) for all v in V(D), then D and E are isomorphic.
The new digraph reconstruction conjecture reduces to the reconstruction conjecture in the undirected case, because if all the vertex-deleted subgraphs of two graphs are isomorphic, then the corresponding vertices must have the same degree. Thus, the new digraph reconstruction conjecture is stronger than the reconstruction conjecture, but weaker than the disproved digraph reconstruction conjecture. Several families of digraphs have been shown to satisfy the new digraph reconstruction conjecture and these include al |
https://en.wikipedia.org/wiki/Solar2D | Solar2D (formerly Corona SDK) is a free and open-source, cross-platform software development kit originally developed by Corona Labs Inc. and now maintained by Vlad Shcherban. Released in late 2009, it allows software programmers to build 2D mobile applications for iOS, Android, and Kindle, desktop applications for Windows, Linux and macOS, and connected TV applications for Apple TV, Fire TV and Android TV.
Solar2D uses integrated Lua layered on top of C++/OpenGL to build graphic applications. The software has two operational modes: the Solar2D Simulator and Solar2D Native. With the Solar2D Simulator, apps are built directly within the simulator. Solar2D Native allows the integration of Lua code and assets within an Xcode or Android Studio project to build apps and include native features.
History
Walter Luh and Carlos Icaza started Ansca Mobile, later renamed Corona Labs, after departing from Adobe in 2007. At Adobe, Luh was the lead architect working on the Flash Lite team and Icaza was the engineering manager responsible for mobile Flash authoring. In June 2009, Ansca released the first Corona SDK beta free for early adopters.
In December 2009, Ansca launched Corona SDK 1.0 for iPhone. The following February, the Corona SDK 1.1 was released with additional features.
In September 2010, Ansca released version 2.0 of Corona SDK and added Corona Game Edition. Version 2.0 added cross-platform support for iPad and Android, while Game Edition added a physics engine and other advanced features aimed specifically at game development.
In January 2011, Corona SDK was released for Windows XP and newer, giving developers the opportunity to build Android applications on PC.
In April 2012, co-founder and CEO Icaza left Ansca, and CTO Luh took the CEO role. Shortly after, in June 2012, Ansca changed its name to Corona Labs. In August 2012, Corona Labs announced Enterprise Edition, which added native bindings for Objective-C.
In March 2015, during GDC 2015 announceme |
https://en.wikipedia.org/wiki/Joint%20Committee%20for%20Guides%20in%20Metrology | The Joint Committee for Guides in Metrology (JCGM) is an organization in Sèvres that prepared the Guide to the Expression of Uncertainty in Measurement (GUM) and the International Vocabulary of Metrology (VIM). The JCGM assumed responsibility for these two documents from the ISO Technical Advisory Group 4 (TAG4).
Partner organizations
Partner organizations below send representatives into the JCGM:
International Bureau of Weights and Measures (BIPM)
International Electrotechnical Commission (IEC)
International Federation of Clinical Chemistry and Laboratory Medicine (IFCC)
International Organization for Standardization (ISO)
International Union of Pure and Applied Chemistry (IUPAC)
International Union of Pure and Applied Physics (IUPAP)
International Organization of Legal Metrology (OIML)
International Laboratory Accreditation Cooperation (ILAC)
Working groups
JCGM has two Working Groups. Working Group 1, "Expression of uncertainty in measurement", has the task to promote the use of the GUM and to prepare Supplements and other documents for its broad application. Working Group 2, "Working Group on International vocabulary of basic and general terms in metrology (VIM)", has the task to revise and promote the use of the VIM. For further information on the activity of the JCGM, see www.bipm.org.
Publications
GUM: Guide to the Expression of Uncertainty in Measurement
The Guide to the Expression of Uncertainty in Measurement (GUM) is a document published by the JCGM that establishes general rules for evaluating and expressing uncertainty in measurement.
The GUM provides a way to express the perceived quality of the result of a measurement. Rather than express the result by providing an estimate of the measurand along with information about systematic and random error values (in the form of an "error analysis"), the GUM approach is to express the result of a measurement as an estimate of the measurand along with an associated measurement uncertainty.
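For uncorrelated input quantities with unit sensitivity coefficients, the GUM's law of propagation of uncertainty reduces to combining standard uncertainties in quadrature. The Python sketch below (with invented component values and variable names) shows the resulting "estimate ± uncertainty" statement:

```python
import math

def combined_uncertainty(components):
    """Root-sum-of-squares of standard uncertainties u_i (same units),
    assuming uncorrelated inputs and sensitivity coefficients of 1."""
    return math.sqrt(sum(u * u for u in components))

estimate = 9.812                                    # hypothetical estimate, m/s^2
u_c = combined_uncertainty([0.004, 0.003, 0.002])   # hypothetical components
print(f"{estimate} +/- {u_c:.4f} m/s^2")
```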
One |
https://en.wikipedia.org/wiki/Jensen%20Huang | Jen-Hsun "Jensen" Huang (; born February 17, 1963) is an American businessman, electrical engineer, and the co-founder, president and CEO of Nvidia Corporation.
Early life
Huang was born in Tainan, Taiwan. His family first moved to Thailand when he was five years old, then emigrated to the United States around four to five years later, in 1973. When he was ten years old, he lived in the boys' dormitory with his brother at Oneida Baptist Institute while attending Oneida Elementary School in Oneida, Kentucky. His family later settled in Oregon, where he graduated from Aloha High School just outside Portland.
Huang received his undergraduate degree in electrical engineering from Oregon State University in 1984, and his master's degree in electrical engineering from Stanford University in 1992.
Career
After college, he was a director at LSI Logic and a microprocessor designer at Advanced Micro Devices, Inc. (AMD). In 1993, at age 30, Huang co-founded Nvidia, where he remains president and CEO.
He owns 3.6% of Nvidia's stock, which went public in 1999.
He earned as CEO in 2007, ranking him as the 61st highest paid U.S. CEO by Forbes.
As of June 19, 2023, Huang's net worth is according to the Bloomberg Billionaires Index.
Philanthropy
In 2022 Huang donated to his alma mater, Oregon State University, as a portion of a donation towards the creation of a supercomputing institute on campus.
Huang gave his other alma mater Stanford University to build the Jen-Hsun Huang School of Engineering Center. The building is the second of four that make up Stanford's Science and Engineering Quad. It was designed by Bora Architects of Portland, Oregon and completed in 2010. Huang gave his alma mater Oneida Baptist Institute to build Huang Hall, a new girls' dormitory and classroom building. It was designed by CMW Architects of Lexington, Kentucky.
In 2007, Huang was the recipient of the Silicon Valley Education Foundation's Pioneer Business Leader Award for his work in bot |
https://en.wikipedia.org/wiki/Capillary%20fringe | The capillary fringe is the subsurface layer in which groundwater seeps up from a water table by capillary action to fill pores. Pores at the base of the capillary fringe are filled with water due to tension saturation. This saturated portion of the capillary fringe is less than the total capillary rise because of the presence of a mix of pore sizes. If the pore size is small and relatively uniform, it is possible for soils to be completely saturated with water for several feet above the water table. Alternatively, when the pore size is large, the saturated portion will extend only a few inches above the water table. Capillary action supports a vadose zone above the saturated base, within which water content decreases with distance above the water table. In soils with a wide range of pore sizes, the unsaturated zone can be several times thicker than the saturated zone.
Some workers restrict their definition of the capillary fringe only to the tension-saturated base portion and exclude it wholly from the vadose zone.
This is more common among workers addressing solute transport and water flow. Others define the capillary fringe as including both the tension-saturated and unsaturated portions. This is the preferred definition among workers dealing with the remediation of salt affected soils as well as those dealing with the vapor phase of soil processes and bioremediation. It is not uncommon to see the capillary fringe treated as a boundary condition separating the water table from the unsaturated zone, without defining it as a significant part of either.
See also |
https://en.wikipedia.org/wiki/Lippulaulu | Lippulaulu ("Flag Song"), also known as Siniristilippumme ("Our Blue Cross Flag"), is a flag anthem written by Veikko Antero Koskenniemi and composed by Yrjö Kilpinen in 1927. The song is often sung during flag raisings on flag flying days in Finland, especially on Midsummer and Independence Day.
Koskenniemi wrote the song following the flag change to the Blue cross flag in May 1918. In 1917, Koskenniemi had written a poem about the previous Lion flag, which was used during the Finnish Civil War; this poem was named Leijonalippu ("Lion Flag").
The text of the song is deeply patriotic and even warlike, calling for the sacrifice of one's life for one's country.
Text
See also
Maamme
National anthem |
https://en.wikipedia.org/wiki/Anatol%20Odzijewicz | Anatol Odzijewicz (November 10, 1947 – April 18, 2022) was a Polish mathematician and physicist. The main areas of his research were the theory of Banach groupoids and algebroids related to the structure of W*-algebras, quantization of physical systems by means of the coherent state map, as well as quantum and classical integrable systems.
Anatol Odzijewicz was also a social activist. He founded a branch of the Society for the Preservation of Monuments in Białystok and chaired it for many years. He was the founder of the open-air museum in Białowieża.
Career
From 1975 to 1979, Odzijewicz was employed at Warsaw University. From 1979, he worked at the University of Białystok (until 1997 a branch of Warsaw University). From 1997 to 2005 he was the dean of the Faculty of Mathematics and Physics, and from 2008 to 2016 the dean of the Faculty of Mathematics and Computer Science, University of Białystok. He was also the director of the Institute of Mathematics (2005-2008 and 2016-2019). He founded and led a group of researchers working in the area of mathematical physics.
He was also a founder of the conference series Workshop on Geometric Methods in Physics.
Selected publications |
https://en.wikipedia.org/wiki/Sharp%20series | The sharp series is a series of spectral lines in the atomic emission spectrum caused when electrons descend from higher-energy s orbitals of an atom to the lowest available p orbital. The spectral lines include some in the visible light, and they extend into the ultraviolet. The lines get closer and closer together as the frequency increases, never exceeding the series limit. The sharp series was important in the development of the understanding of electron shells and subshells in atoms. The sharp series has given the letter s to the s atomic orbital or subshell.
The sharp series has a limit given by

$$\nu = \frac{R}{\left(2+p\right)^2} - \frac{R}{\left(m+s\right)^2}, \qquad m = 2, 3, 4, \ldots$$
The series is caused by transitions to the lowest P state from higher energy S orbitals.
One terminology used to identify the lines is 1P−mS. Note that 1P here simply means the lowest P state in an atom; the modern designation would start at 2P, and the number is larger for atoms with higher atomic number.
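A numeric sketch of the series formula above; the quantum-defect values p and s are illustrative placeholders (they vary by element and are only roughly those of sodium):

```python
R = 1.097e7   # Rydberg constant, 1/m

def sharp_series_wavenumber(m, p=0.88, s=1.35):
    """Wavenumber (1/m) of the sharp-series line with running integer m."""
    return R / (2 + p) ** 2 - R / (m + s) ** 2

limit = R / (2 + 0.88) ** 2          # m -> infinity gives the series limit
print(sharp_series_wavenumber(2))    # the first line of the series
print(limit)
```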
The terms can have different designations, mS for single line systems, mσ for doublets and ms for triplets.
Since the P state is not the lowest energy level for the alkali atom (the S is), the sharp series does not show up as absorption in a cool gas; however, it does show up as emission lines.
The Rydberg correction is largest for the S term as the electron penetrates the inner core of electrons more.
The limit for the series corresponds to electron emission, where the electron has so much energy it escapes the atom.
Even though the series is called sharp, the lines may not be sharp.
In alkali metals the P terms are split into $^2P_{1/2}$ and $^2P_{3/2}$. This causes the spectral lines to be doublets, with a constant spacing between the two parts of the double line.
Names
The sharp series used to be called the second subordinate series, with the diffuse series being the first subordinate, both being subordinate to the principal series.
Laws for alkali metals
The sharp series limit is the same as the diffuse series limit. In the late 1800s these two were termed supplementary series.
In 1896 Arthur Schu |