source | text |
|---|---|
https://en.wikipedia.org/wiki/Akira%20Yoshizawa | Akira Yoshizawa was a Japanese origamist, considered to be the grandmaster of origami. He is credited with raising origami from a craft to a living art. According to his own estimation made in 1989, he created more than 50,000 models, of which only a few hundred designs were presented as diagrams in his 18 books. Yoshizawa acted as an international cultural ambassador for Japan throughout his career. In 1983, Emperor Hirohito awarded him the Order of the Rising Sun, 5th class, one of the highest honors bestowed in Japan.
Life
Yoshizawa was born on 14 March 1911, in Kaminokawa, Japan, to the family of a dairy farmer. When he was a child, he took pleasure in teaching himself origami. He moved into a factory job in Tokyo when he was 13 years old. His passion for origami was rekindled in his early 20s, when he was promoted from factory worker to technical draftsman. His new job was to teach junior employees geometry. Yoshizawa used the traditional art of origami to understand and communicate geometrical problems.
In 1937, he left factory work to pursue origami full-time. During the next 20 years, he lived in total poverty, earning his living by door-to-door selling of tsukudani (a Japanese preserved condiment that is usually made of seaweed). During World War II, Yoshizawa served in the army medical corps in Hong Kong. He made origami models to cheer up the sick patients, but eventually fell ill himself and was sent back to Japan. His origami work was creative enough to be included in the 1944 book Origami Shuko. However, it was his work for the January 1952 issue of the magazine Asahi Graph, a set of twelve zodiac figures commissioned by the magazine, that launched his career.
In 1954, his first monograph, Atarashii Origami Geijutsu (New Origami Art) was published. In this work, he established the Yoshizawa–Randlett system of notation for origami folds (a system of symbols, arrows and diagrams), which has become the standard for most paperfolders. The publishing of this book helped |
https://en.wikipedia.org/wiki/Time%20consistency%20%28finance%29 | Time consistency in the context of finance is the property of not having mutually contradictory evaluations of risk at different points in time. This property implies that if investment A is considered riskier than B at some future time, then A will also be considered riskier than B at every prior time.
Time consistency and financial risk
Time consistency is a property in financial risk related to dynamic risk measures. The purpose of the time consistent property is to categorize the risk measures which satisfy the condition that if portfolio (A) is riskier than portfolio (B) at some time in the future, then it is guaranteed to be riskier at any time prior to that point. This is an important property, since if it were not to hold then there would be an event (with probability of occurring greater than 0) such that B is riskier than A at time $t$ although it is certain that A is riskier than B at time $t+1$. As the name suggests, a time inconsistent risk measure can lead to inconsistent behavior in financial risk management.
Mathematical definition
A dynamic risk measure $(\rho_t)_{t=0}^{T}$ on $L^0(\mathcal{F}_T)$ is time consistent if, for all portfolios $X, Y \in L^0(\mathcal{F}_T)$ and all times $0 \leq t < T$, $\rho_{t+1}(X) \geq \rho_{t+1}(Y)$ implies $\rho_t(X) \geq \rho_t(Y)$.
Equivalent definitions
Equality
For all $X \in L^0(\mathcal{F}_T)$ and times $t$: $\rho_t(X) = \rho_t(-\rho_{t+1}(X))$.
Recursive
For all $X \in L^0(\mathcal{F}_T)$ and times $t < s \leq T$: $\rho_t(X) = \rho_t(-\rho_s(X))$.
Acceptance Set
For all times $t$: $A_t = A_{t,t+1} + A_{t+1}$, where $A_t$ is the time-$t$ acceptance set and $A_{t,t+1} = A_t \cap L^p(\mathcal{F}_{t+1})$.
Cocycle condition (for convex risk measures)
For all times $t$: $\alpha_t(Q) = \alpha_{t,t+1}(Q) + \mathbb{E}^Q[\alpha_{t+1}(Q) \mid \mathcal{F}_t]$, where $\alpha_t(Q) = \operatorname*{ess\,sup}_{X \in A_t} \mathbb{E}^Q[-X \mid \mathcal{F}_t]$ is the minimal penalty function (where $A_t$ is an acceptance set and $\operatorname{ess\,sup}$ denotes the essential supremum) at time $t$ and $\alpha_{t,t+1}(Q) = \operatorname*{ess\,sup}_{X \in A_{t,t+1}} \mathbb{E}^Q[-X \mid \mathcal{F}_t]$.
Construction
Due to the recursive property it is simple to construct a time consistent risk measure. This is done by composing one-period measures over time. This would mean that:
$$\tilde{\rho}_{T-1} := \rho_{T-1}, \qquad \tilde{\rho}_t := \rho_t(-\tilde{\rho}_{t+1}) \quad \text{for } 0 \leq t < T-1.$$
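To make the construction concrete, here is a minimal numerical sketch (not from the article): it assumes a recombining binomial tree, a fixed up-move probability, and the entropic one-period measure $\rho(X) = \theta^{-1}\log\mathbb{E}[e^{-\theta X}]$; all names and values are illustrative.

```python
import numpy as np

# Sketch: a time consistent dynamic risk measure obtained by composing
# one-period entropic measures backward through a recombining binomial tree.
# THETA, P_UP and the function names are illustrative assumptions.

THETA = 1.0  # risk-aversion parameter of the entropic measure
P_UP = 0.5   # probability of an up-move at every node

def one_period_rho(x_up: float, x_down: float) -> float:
    """Entropic risk (1/theta) * log E[exp(-theta X)] of a one-period payoff."""
    expectation = P_UP * np.exp(-THETA * x_up) + (1.0 - P_UP) * np.exp(-THETA * x_down)
    return float(np.log(expectation)) / THETA

def composed_rho(terminal_payoffs):
    """Backward recursion rho~_t = rho_t(-rho~_{t+1}); returns the time-0 risk."""
    values = [-x for x in terminal_payoffs]                   # rho_T(X) = -X at maturity
    while len(values) > 1:
        values = [one_period_rho(-values[i + 1], -values[i])  # one-period payoff is -rho~_{t+1}
                  for i in range(len(values) - 1)]
    return values[0]

# Two-period payoff X on the terminal nodes (ordered down-down, mixed, up-up):
print(composed_rho([-1.0, 0.0, 2.0]))
```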
Examples
Value at risk and average value at risk
Neither dynamic value at risk nor dynamic average value at risk is a time consistent risk measure.
Time consistent alternative
The time consistent alternative to the dynamic average value at risk with parameter $\lambda$ at time $t$ is defined by
$$\rho_t(X) = \operatorname*{ess\,sup}_{Q \in \mathcal{Q}} \mathbb{E}^Q[-X \mid \mathcal{F}_t]$$
such that $\mathcal{Q} = \left\{ Q \ll P : \mathbb{E}\!\left[\tfrac{dQ}{dP} \,\middle|\, \mathcal{F}_{j+1}\right] \leq \lambda^{-1}\, \mathbb{E}\!\left[\tfrac{dQ}{dP} \,\middle|\, \mathcal{F}_j\right] \text{ for all } j \geq t \right\}$.
Dynamic superhedging price
The dynamic superhedging price is a time consisten |
https://en.wikipedia.org/wiki/Polarization%20in%20astronomy | Polarization is an important phenomenon in astronomy.
Stars
The polarization of starlight was first observed by the astronomers William Hiltner and John S. Hall in 1949. Subsequently, Jesse Greenstein and Leverett Davis, Jr. developed theories allowing the use of polarization data to trace interstellar magnetic fields.
Though the integrated thermal radiation of stars is not usually appreciably polarized at source, scattering by interstellar dust can impose polarization on starlight over long distances. Net polarization at the source can occur if the photosphere itself is asymmetric, due to limb polarization. Plane polarization of starlight generated at the star itself is observed for Ap stars (peculiar A type stars).
Sun
Both circular and linear polarization of sunlight have been measured. Circular polarization is mainly due to transmission and absorption effects in strongly magnetic regions of the Sun's surface. Another mechanism that gives rise to circular polarization is the so-called "alignment-to-orientation mechanism". Continuum light is linearly polarized at different locations across the face of the Sun (limb polarization) though taken as a whole, this polarization cancels. Linear polarization in spectral lines is usually created by anisotropic scattering of photons on atoms and ions which can themselves be polarized by this interaction. The linearly polarized spectrum of the Sun is often called the second solar spectrum. Atomic polarization can be modified in weak magnetic fields by the Hanle effect. As a result, polarization of the scattered photons is also modified, providing a diagnostic tool for understanding stellar magnetic fields.
Other sources
Polarization is also present in radiation from coherent astronomical sources due to the Zeeman effect (e.g. hydroxyl or methanol masers).
The large radio lobes in active galaxies and pulsar radio radiation (which may, it is speculated, sometimes be coherent) also show polarization.
Apart from providing in |
https://en.wikipedia.org/wiki/Canadian%20postal%20abbreviations%20for%20provinces%20and%20territories | Canadian provincial and territorial postal abbreviations are used by Canada Post in a code system consisting of two capital letters, to represent the 13 provinces and territories on addressed mail. These abbreviations allow automated sorting.
ISO 3166-2:CA identifiers' second elements are all the same as these; ISO adopted the existing Canada Post abbreviations.
These abbreviations are not the source of letters in Canadian postal codes, which are assigned by Canada Post on a different basis than these abbreviations. While postal codes are also used for sorting, they allow extensive regional sorting. In addition, several provinces have postal codes that begin with different letters.
The codes replaced the inconsistent traditional system used by Canadians until the 1990s. Apart from the postal abbreviations, there are no officially designated traditional (or standard) abbreviations for the provinces. Natural Resources Canada, however, maintains a list of such abbreviations which are recommended for "general purpose use" and are also used in other official contexts, such as the census conducted by Statistics Canada. Some of the French versions included a hyphen. Nunavut (created in 1999) does not have a designated traditional abbreviation because it did not exist when the traditional abbreviations were phased out, though some can be found in other official works.
Names and abbreviations
Choice of letters
The sources of the postal abbreviations vary. Some are from the initials of two of the words in the name of a province or territory, while others are from the first and final letter or from the first and some other letter in the name. All of these names are based on the English form of the name, though they also correspond to their French equivalents in various ways (for example, NT could be read for the first and last letters of Nord-Ouest, instead of Northwest Territories). For Quebec and New Brunswick, the two provinces with large numbers of French speakers, the initials in both languages |
https://en.wikipedia.org/wiki/Energy%20level%20splitting | In quantum physics, energy level splitting or a split in an energy level of a quantum system occurs when a perturbation changes the system. The perturbation changes the corresponding Hamiltonian and the outcome is a change in eigenvalues; several distinct energy levels emerge in place of the former degenerate (multi-state) level. This may occur because of external fields, quantum tunnelling between states, or other effects. The term is most commonly used in reference to the electron configuration in atoms or molecules.
The simplest case of level splitting is a quantum system with two states whose unperturbed Hamiltonian is a diagonal operator: $\hat{H}_0 = \varepsilon_0 I$, where $I$ is the 2×2 identity matrix. Eigenstates and eigenvalues (energy levels) of a perturbed Hamiltonian
$$\hat{H} = \begin{pmatrix} \varepsilon_0 + V & 0 \\ 0 & \varepsilon_0 - V \end{pmatrix}$$
will be:
$|{\uparrow}\rangle$: the $E_{\uparrow} = \varepsilon_0 + V$ level, and
$|{\downarrow}\rangle$: the $E_{\downarrow} = \varepsilon_0 - V$ level,
so this degenerate eigenvalue splits in two whenever $V \neq 0$. Though, if a perturbed Hamiltonian is not diagonal for this basis of quantum states $\{|{\uparrow}\rangle, |{\downarrow}\rangle\}$, then the Hamiltonian's eigenstates are linear combinations of these two states.
For a physical implementation such as a charged spin-½ particle in an external magnetic field, the z-axis of the coordinate system is required to be collinear with the magnetic field to obtain a Hamiltonian in the form above (the Pauli matrix $\sigma_3$ corresponds to the z-axis). These basis states, referred to as spin-up and spin-down, are hence eigenvectors of the perturbed Hamiltonian, so this level splitting is both easy to demonstrate mathematically and intuitively evident.
But in cases where the choice of state basis is not determined by a coordinate system, and the perturbed Hamiltonian is not diagonal, a level splitting may appear counter-intuitive, as in examples from chemistry below.
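A short numerical check of the two-state discussion above, here with a non-diagonal perturbation so that the eigenstates are combinations of the basis states (a sketch with arbitrary values, not from the original article):

```python
import numpy as np

# A doubly degenerate level eps0 splits once a perturbation couples the two
# basis states. The values of eps0 and V are arbitrary, for illustration only.

eps0, V = 1.0, 0.2
H0 = eps0 * np.eye(2)                  # unperturbed, degenerate Hamiltonian
perturbation = np.array([[0.0, V],
                         [V,   0.0]])  # off-diagonal coupling between the states
levels = np.linalg.eigvalsh(H0 + perturbation)
print(levels)                          # [eps0 - V, eps0 + V]: a split of width 2V
```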
Examples
In atomic physics:
The Zeeman effect – the splitting of electronic levels in an atom because of an external magnetic field.
The Stark effect – splitting because of an external electric field.
In physical chemistry:
The Jahn–Teller effect – splitting of electronic levels in a mol |
https://en.wikipedia.org/wiki/Edge%20and%20vertex%20spaces | In the mathematical discipline of graph theory, the edge space and vertex space of an undirected graph are vector spaces defined in terms of the edge and vertex sets, respectively. These vector spaces make it possible to use techniques of linear algebra in studying the graph.
Definition
Let $G = (V, E)$ be a finite undirected graph. The vertex space $\mathcal{V}(G)$ of $G$ is the vector space over the finite field of two elements $\mathbb{Z}/2\mathbb{Z} = \{0, 1\}$
of all functions $V \to \mathbb{Z}/2\mathbb{Z}$. Every element of $\mathcal{V}(G)$ naturally corresponds to the subset of $V$ consisting of the vertices to which it assigns a 1. Also every subset of $V$ is uniquely represented in $\mathcal{V}(G)$ by its characteristic function. The edge space $\mathcal{E}(G)$ is the $\mathbb{Z}/2\mathbb{Z}$-vector space freely generated by the edge set $E$. The dimension of the vertex space is thus the number of vertices of the graph, while the dimension of the edge space is the number of edges.
These definitions can be made more explicit. For example, we can describe the edge space as follows:
elements are subsets of $E$; that is, as a set $\mathcal{E}(G)$ is the power set of $E$
vector addition is defined as the symmetric difference: $X + Y := X \,\triangle\, Y = (X \cup Y) \setminus (X \cap Y)$
scalar multiplication is defined by: $0 \cdot X := \emptyset$ and $1 \cdot X := X$
The singleton subsets of $E$ form a basis for $\mathcal{E}(G)$.
One can also think of $\mathcal{V}(G)$ as the power set of $V$ made into a vector space with similar vector addition and scalar multiplication as defined for $\mathcal{E}(G)$.
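The GF(2) arithmetic is easy to see in miniature (an illustrative sketch; the four-edge graph and all names are made up): subsets of $E$ act as vectors, with symmetric difference as addition.

```python
# Edge-space arithmetic over GF(2): edge subsets as Python sets of frozensets,
# with symmetric difference (^) as vector addition. The graph is made up.

ab, bc, ca, cd = (frozenset(p) for p in [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")])

x = {ab, bc}   # one vector of the edge space
y = {bc, ca}   # another vector

print(x ^ y)   # {ab, ca}: symmetric difference implements addition over GF(2)
print(x ^ x)   # set(): every vector is its own additive inverse (characteristic 2)
```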
Properties
The incidence matrix $H$ for a graph $G$ defines one possible linear transformation
$$H : \mathcal{E}(G) \to \mathcal{V}(G)$$
between the edge space and the vertex space of $G$. The incidence matrix of $G$, as a linear transformation, maps each edge to its two incident vertices. Let $e = uv$ be the edge between $u$ and $v$; then $H(\{e\}) = \{u, v\}$.
The cycle space and the cut space are linear subspaces of the edge space. |
https://en.wikipedia.org/wiki/Pulse%20wave | A pulse wave or pulse train is a type of non-sinusoidal waveform that includes square waves (duty cycle of 50%) and similarly periodic but asymmetrical waves (duty cycles other than 50%). It is a term used in synthesizer programming, and is a typical waveform available on many synthesizers. The exact shape of the wave is determined by the duty cycle or pulse width of the oscillator output. In many synthesizers, the duty cycle can be modulated (pulse-width modulation) for a more dynamic timbre.
The pulse wave is also known as the rectangular wave, the periodic version of the rectangular function.
The average level of a rectangular wave is also given by the duty cycle; therefore, by varying the on and off periods and then averaging, it is possible to represent any value between the two limiting levels. This is the basis of pulse-width modulation.
Frequency-domain representation
The Fourier series expansion for a rectangular pulse wave with period $T$, amplitude $A$ and pulse length $\tau$ is
$$x(t) = A\frac{\tau}{T} + \frac{2A}{\pi}\sum_{n=1}^{\infty}\frac{1}{n}\sin\!\left(\pi n\frac{\tau}{T}\right)\cos\!\left(2\pi n f t\right)$$
where $f = \frac{1}{T}$.
Equivalently, if duty cycle $d = \frac{\tau}{T}$ is used, and $\omega = 2\pi f$:
$$x(t) = Ad + \frac{2A}{\pi}\sum_{n=1}^{\infty}\frac{1}{n}\sin\!\left(\pi n d\right)\cos\!\left(n\omega t\right)$$
Note that, for symmetry, the starting time ($t = 0$) in this expansion is halfway through the first pulse.
Alternatively, $x(t)$ can be written using the sinc function, using the definition $\operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$, as
$$x(t) = A\frac{\tau}{T}\left(1 + 2\sum_{n=1}^{\infty}\operatorname{sinc}\!\left(n\frac{\tau}{T}\right)\cos\!\left(2\pi n f t\right)\right)$$
or with $d = \frac{\tau}{T}$ as
$$x(t) = Ad\left(1 + 2\sum_{n=1}^{\infty}\operatorname{sinc}(nd)\cos(n\omega t)\right)$$
Generation
A pulse wave can be created by subtracting a sawtooth wave from a phase-shifted version of itself. If the sawtooth waves are bandlimited, the resulting pulse wave is bandlimited, too. A single ramp wave (sawtooth or triangle) applied to an input of a comparator produces a pulse wave that is not bandlimited. A voltage applied to the other input of the comparator determines the pulse width.
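The sawtooth-subtraction construction is straightforward to verify numerically (a sketch; the function names and the 25% duty value are illustrative, and the naive sawtooth used here is not bandlimited):

```python
import numpy as np

# Build a pulse wave as the difference of two sawtooths offset by `duty` cycles.
# This naive sawtooth is not bandlimited; bandlimited inputs would yield a
# bandlimited pulse, as noted above.

def sawtooth(phase):
    """Sawtooth in [-1, 1) as a function of phase measured in cycles."""
    return 2.0 * (phase % 1.0) - 1.0

def pulse(phase, duty=0.25):
    """Pulse wave: high for `duty` of each cycle, low for the remainder."""
    return sawtooth(phase) - sawtooth(phase + duty)

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
wave = pulse(5.0 * t, duty=0.25)   # five cycles of a 25%-duty pulse wave
```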
Applications
The harmonic spectrum of a pulse wave is determined by the duty cycle. Acoustically, the rectangular wave has been described variously as having a narrow/thin, nasal/buzzy/biting, clear, resonant, rich, round and bright sound. Pulse waves are used in many Steve Winwood songs, such as "While You See a Chance".
In d |
https://en.wikipedia.org/wiki/Libdvdcss | libdvdcss (or libdvdcss2 in some repositories) is a free and open-source software library for accessing and unscrambling DVDs encrypted with the Content Scramble System (CSS). libdvdcss is part of the VideoLAN project and is used by VLC media player and other DVD player software packages, such as Ogle, xine-based players, and MPlayer.
Comparison with DeCSS
libdvdcss is not to be confused with DeCSS. Whereas DeCSS uses a cracked DVD player key to perform authentication, libdvdcss uses a generated list of possible player keys. If none of them work (for instance, when the DVD drive enforces region coding), libdvdcss brute-forces the key, ignoring the DVD's region code (if any). The legal status of libdvdcss is controversial but there has been—unlike DeCSS—no known legal challenge to it as of June 2022.
Distribution
Many Linux distributions do not contain libdvdcss (for example, Debian, Ubuntu, Fedora and openSUSE) due to fears of running afoul of DMCA-style laws, but they often provide the tools to let the user install it themselves. For example, it used to be available in Ubuntu through Medibuntu, which is no longer available.
Distributions which come pre-installed with libdvdcss include BackTrack, CrunchBang Linux, LinuxMCE, Linux Mint, PCLinuxOS, Puppy Linux 4.2.1, Slax, Super OS, Pardus, and XBMC Live.
It is also in Arch Linux official package repositories.
Usage
Libdvdcss alone is only a library and cannot play DVDs. DVD player applications, such as VLC media player, use this library to decode DVDs. Libdvdcss is optional in many open-source DVD players, but without it, only non-encrypted discs will play.
Using HandBrake or VidCoder for DVD ripping requires that one install libdvdcss (by compilation, or via Homebrew on macOS).
See also
Advanced Access Content System
Blu-ray |
https://en.wikipedia.org/wiki/Medical%20history | The medical history, case history, or anamnesis (from Greek: ἀνά, aná, "again", and μνήσις, mnesis, "memory") of a patient is a set of information the physicians collect over medical interviews. It involves the patient, and sometimes people close to them, so as to collect reliable and objective information for managing the medical diagnosis and proposing efficient medical treatments. The medically relevant complaints reported by the patient or others familiar with the patient are referred to as symptoms, in contrast with clinical signs, which are ascertained by direct examination on the part of medical personnel. Most health encounters will result in some form of history being taken. Medical histories vary in their depth and focus. For example, an ambulance paramedic would typically limit their history to important details, such as name, history of presenting complaint, allergies, etc. In contrast, a psychiatric history is frequently lengthy and in depth, as many details about the patient's life are relevant to formulating a management plan for a psychiatric illness.
The information obtained in this way, together with the physical examination, enables the physician and other health professionals to form a diagnosis and treatment plan. If a diagnosis cannot be made, a provisional diagnosis may be formulated, and other possibilities (the differential diagnoses) may be added, listed in order of likelihood by convention. The treatment plan may then include further investigations to clarify the diagnosis.
The method by which doctors gather information about a patient's past and present medical condition in order to make informed clinical decisions is called the history and physical (the H&P). The history requires that a clinician be skilled in asking appropriate and relevant questions that can provide them with some insight as to what the patient may be experiencing. The standardized format for the history starts with the chief concern (why is the patient in the clinic or hos |
https://en.wikipedia.org/wiki/Neon-burning%20process | The neon-burning process is a set of nuclear fusion reactions that take place in evolved massive stars with at least 8 solar masses. Neon burning requires high temperatures and densities (around 1.2×10⁹ K or 100 keV and 4×10⁹ kg/m³).
At such high temperatures photodisintegration becomes a significant effect, so some neon nuclei decompose, absorbing 4.73 MeV and releasing an alpha particle. This free helium nucleus can then fuse with neon to produce magnesium, releasing 9.316 MeV.
²⁰Ne + γ → ¹⁶O + ⁴He
²⁰Ne + ⁴He → ²⁴Mg + γ
Alternatively:
²⁰Ne + n → ²¹Ne + γ
²¹Ne + ⁴He → ²⁴Mg + n
where the neutron consumed in the first step is regenerated in the second.
A secondary reaction causes helium to fuse with magnesium to produce silicon:
²⁴Mg + ⁴He → ²⁸Si + γ
Contraction of the core leads to an increase of temperature, allowing neon to fuse directly as follows:
²⁰Ne + ²⁰Ne → ¹⁶O + ²⁴Mg
Neon burning takes place after carbon burning has consumed all carbon in the core and built up a new oxygen–neon–sodium–magnesium core. The core ceases producing fusion energy and contracts. This contraction increases density and temperature up to the ignition point of neon burning. The increased temperature around the core allows carbon to burn in a shell, and there will be shells burning helium and hydrogen outside.
During neon burning, oxygen and magnesium accumulate in the central core while neon is consumed. After a few years the star consumes all its neon and the core ceases producing fusion energy and contracts. Again, gravitational pressure takes over and compresses the central core, increasing its density and temperature until the oxygen-burning process can start. |
https://en.wikipedia.org/wiki/Auxetics | Auxetics are structures or materials that have a negative Poisson's ratio. When stretched, they become thicker perpendicular to the applied force. This occurs due to their particular internal structure and the way this deforms when the sample is uniaxially loaded. Auxetics can be single molecules, crystals, or a particular structure of macroscopic matter.
Such materials and structures are expected to have mechanical properties such as high energy absorption and fracture resistance. Auxetics may be useful in applications such as body armor, packing material, knee and elbow pads, robust shock absorbing material, and sponge mops.
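For orientation, the defining quantity is the Poisson's ratio, stated here as a standard textbook relation rather than anything specific to this article:
$$\nu = -\frac{\varepsilon_{\text{trans}}}{\varepsilon_{\text{axial}}}$$
so an auxetic material with, say, $\nu = -0.5$ that is stretched axially by 1% grows about 0.5% wider in the transverse direction, where an ordinary ($\nu > 0$) material would contract.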
History
The term auxetic derives from the Greek word αὐξητικός (auxetikos) which means 'that which tends to increase' and has its root in the word αὔξησις (auxesis), meaning 'increase' (noun). This terminology was coined by Professor Ken Evans of the University of Exeter.
One of the first artificially produced auxetic materials, the RFS structure (diamond-fold structure), was invented in 1978 by the Berlin researcher K. Pietsch. Although he did not use the term auxetics, he described for the first time the underlying lever mechanism and its non-linear mechanical reaction, and he is therefore considered the inventor of the auxetic net.
The earliest published example of a material with negative Poisson's constant is due to A. G. Kolpakov in 1985, "Determination of the average characteristics of elastic frameworks"; the next synthetic auxetic material was described in a 1987 Science article entitled "Foam structures with a Negative Poisson's Ratio" by R.S. Lakes from the University of Wisconsin Madison. The use of the word auxetic to refer to this property probably began in 1991. Recently, cells were shown to display a biological version of auxeticity under certain conditions.
Designs of composites with inverted hexagonal periodicity cell (auxetic hexagon), possessing negative Poisson ratios, were published in 1985.
Properties
Typically, auxetic materials have low density, which |
https://en.wikipedia.org/wiki/International%20Council%20for%20Harmonisation%20of%20Technical%20Requirements%20for%20Pharmaceuticals%20for%20Human%20Use | The International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) is an initiative that brings together regulatory authorities and pharmaceutical industry to discuss scientific and technical aspects of pharmaceutical product development and registration. The mission of the ICH is to promote public health by achieving greater harmonisation through the development of technical Guidelines and requirements for pharmaceutical product registration.
Harmonisation leads to a more rational use of human, animal and other resources, and the elimination of unnecessary delay in the global development and availability of new medicines, while maintaining safeguards on quality, safety, efficacy, and regulatory obligations to protect public health. Junod notes in her 2005 treatise on Clinical Drug Trials that "Above all, the ICH has succeeded in aligning clinical trial requirements."
History
In the 1980s the European Union began harmonising regulatory requirements. In 1989, Europe, Japan, and the United States began creating plans for harmonisation. The International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) was created in April 1990 at a meeting in Brussels. ICH had the initial objective of coordinating the regulatory activities of the European, Japanese and United States regulatory bodies in consultation with the pharmaceutical trade associations from these regions, to discuss and agree the scientific aspects arising from product registration. Since the new millennium, ICH's attention has been directed towards extending the benefits of harmonisation beyond the founding ICH regions.
In 2015, ICH underwent several reforms and changed its name to the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use while becoming a legal entity in Switzerland as a non-profit association. The aim of these reforms was to transform ICH into a t |
https://en.wikipedia.org/wiki/Topological%20module | In mathematics, a topological module is a module over a topological ring such that scalar multiplication and addition are continuous.
Examples
A topological vector space is a topological module over a topological field.
An abelian topological group can be considered as a topological module over $\mathbb{Z}$, where $\mathbb{Z}$ is the ring of integers with the discrete topology.
A topological ring is a topological module over each of its subrings.
A more complicated example is the $I$-adic topology on a ring and its modules. Let $I$ be an ideal of a ring $R$. The sets of the form $x + I^n$ for all $x \in R$ and all positive integers $n$ form a base for a topology on $R$ that makes $R$ into a topological ring. Then for any left $R$-module $M$, the sets of the form $m + I^n M$ for all $m \in M$ and all positive integers $n$ form a base for a topology on $M$ that makes $M$ into a topological module over the topological ring $R$.
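As a concrete special case (a standard example, supplied here for illustration): taking $R = \mathbb{Z}$ and $I = (p)$ for a prime $p$ gives the $p$-adic topology, in which
$$\{\, m + p^n \mathbb{Z} : n \geq 1 \,\}$$
is a base of neighbourhoods of each $m \in \mathbb{Z}$, and every $\mathbb{Z}$-module $M$ becomes a topological module via the base sets $m + p^n M$.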
See also |
https://en.wikipedia.org/wiki/141%20%28number%29 | 141 (one hundred [and] forty-one) is the natural number following 140 and preceding 142.
In mathematics
141 is:
a centered pentagonal number.
the sum of the sums of the divisors of the first 13 positive integers.
the second n to give a prime Cullen number (of the form n·2ⁿ + 1).
an undulating number in base 10, with the previous being 131, and the next being 151.
the sixth hendecagonal (11-gonal) number.
a semiprime: a product of two prime numbers, namely 3 and 47. Since those prime factors are Gaussian primes, this means that 141 is a Blum integer.
a Hilbert prime.
In the military
The Lockheed C-141 Starlifter was a United States Air Force military strategic airlifter
K-141 Kursk was a Russian nuclear cruise missile submarine, which sank in the Barents Sea on 12 August 2000
was a United States Navy ship during World War II
was a United States Navy ship during World War II
was a United States Navy ship during World War II
was a United States Navy ship following World War I
was a United States Navy ship during World War II
In transportation
London Buses route 141 is a Transport for London contracted bus route in London
141 Nottingham–Sutton-in-Ashfield is a bus route in England
The 141 C Ouest was a 2-8-2 steam locomotive of the Chemin de fer de l'État
British Rail Class 141 was the first production model of the Pacer diesel multiple units
Union des Transports Africains de Guinée Flight 141, which crashed in the Bight of Benin on December 25, 2003
The Saipa 141 car produced by SAIPA
The Córas Iompair Éireann 141 class locomotive from General Motors Electro-Motive Division in 1962
In other fields
141 is also:
The year AD 141 or 141 BC
141 AH is a year in the Islamic calendar that corresponds to 759 – 760 CE
141 Lumen is a dark C-type, rocky asteroid orbiting in the asteroid belt
The atomic number of unquadunium, the temporary systematic name for an as-yet-undiscovered chemical element
The telephone dialing prefix for withholding one's Caller ID in the United Kingdom
Psalm 141
Sonnet 141 by William |
https://en.wikipedia.org/wiki/Publication%20of%20Darwin%27s%20theory | The publication of Darwin's theory brought into the open Charles Darwin's theory of evolution through natural selection, the culmination of more than twenty years of work.
Thoughts on the possibility of transmutation of species which he recorded in 1836 towards the end of his five-year voyage on the Beagle were followed on his return by findings and work which led him to conceive of his theory in September 1838. He gave priority to his career as a geologist whose observations and theories supported Charles Lyell's uniformitarian ideas, and to publication of the findings from the voyage as well as his journal of the voyage, but he discussed his evolutionary ideas with several naturalists and carried out extensive research on his "hobby" of evolutionary work.
He was writing up his theory in 1858 when he received an essay from Alfred Russel Wallace, who was in Borneo, describing Wallace's own theory of natural selection. This prompted immediate joint publication of extracts from Darwin's 1844 essay together with Wallace's paper as On the Tendency of Species to form Varieties; and on the Perpetuation of Varieties and Species by Natural Means of Selection in a presentation to the Linnean Society on 1 July 1858. This attracted little notice, but spurred Darwin to write an "abstract" of his work, which was published in 1859 as his book On the Origin of Species.
Background
Darwin's ideas developed rapidly from the return in 1836 of the Beagle survey expedition. By December 1838 he had developed the principles of his theory. At that time similar ideas brought others disgrace and association with the revolutionary mob. He was conscious of the need to answer all likely objections before publishing. While he continued with research as his "prime hobby", his priority was an immense amount of work on geology and analysing and publishing findings from the Beagle expedition. This was repeatedly delayed by illness.
Natural history at that time was dominated by clerical naturalists |
https://en.wikipedia.org/wiki/Simultanagnosia | Simultanagnosia (or simultagnosia) is a rare neurological disorder characterized by the inability of an individual to perceive more than a single object at a time. This type of visual attention problem is one of three major components (the others being optic ataxia and optic apraxia) of Bálint's syndrome, an uncommon and incompletely understood variety of severe neuropsychological impairments involving space representation (visuospatial processing). The term "simultanagnosia" was first coined in 1924 by Wolpert to describe a condition where the affected individual could see individual details of a complex scene but failed to grasp the overall meaning of the image.
Simultanagnosia can be divided into two different categories: dorsal and ventral. Ventral occipito-temporal lesions cause a mild form of the disorder, while dorsal occipito-parietal lesions cause a more severe form of the disorder.
Description
Patients with simultanagnosia, a component of Bálint's syndrome, have a restricted spatial window of visual attention and cannot see more than one object at a time in a scene that contains more than one object. For instance, if presented with an image of a table containing both food and various utensils, a patient will report seeing only one item, such as a spoon. If the patient's attention is redirected to another object in the scene, such as a glass, the patient will report that they see the glass but no longer see the spoon. As a result of this impairment, simultanagnosic patients often fail to comprehend the overall meaning of a scene.
In addition, patients note that one stationary object may spontaneously disappear from view as they become aware of another object in the scene.
Simultanagnosic patients often exhibit a phenomenon known as "local capture" where they only identify the local elements of stimuli containing local and global features. However, studies have demonstrated that implicit processing of the global structure can occur. With the appropriate |
https://en.wikipedia.org/wiki/Ratchet%20and%20Clank%20%28characters%29 | Ratchet and Clank are the protagonists of the Ratchet & Clank video game series developed by Insomniac Games, starting with the 2002 Ratchet & Clank. Ratchet is an anthropomorphic alien creature known as a Lombax, while Clank is an escaped robot (real name: XJ-0461 or Defect B5429671) who soon teams up with him.
Appearances
Ratchet
His first appearance was on Planet Veldin, but it is later revealed in the series that Ratchet was originally born on the Lombax home-world of Planet Fastoon in the Polaris Galaxy and later sent to Planet Veldin in the Solana Galaxy by his father Kaden to protect him from Emperor Tachyon. Growing up on Veldin, Ratchet longed to travel to new worlds and even built his own ship.
Shortly after completing his ship, Ratchet met a diminutive robot fugitive whom he dubbed Clank, who helped him to leave Veldin and fulfill his dream of traveling. From this point on, Ratchet and Clank traveled extensively through the Solana, Bogon and Polaris Galaxies, saving them on several occasions.
Clank
After meeting up with Ratchet, they travel to various planets trying to stop the plans of Chairman Drek, and looking for Captain Qwark to help them. Along the way, Ratchet keeps drifting from the goals that Clank wants to accomplish, causing Clank to get upset with Ratchet's selfishness. Since Clank is the only way Ratchet can pilot his ship, Ratchet makes up with him and gets back on track. In Ratchet & Clank: Going Commando, they are living the lives of heroes and get a call from the CEO of Megacorp, wanting them to help retrieve a dangerous prototype which was stolen. In Ratchet & Clank: Up Your Arsenal, Ratchet and Clank help Captain Qwark defeat his past nemesis, Dr. Nefarious. Meanwhile, Clank is shown to be a movie star, acting as Secret Agent Clank (a PlayStation Portable game was released under the same name, and focuses on the adventures of Clank under this role). A great deal of new information regarding Clank's real origins is shown in the Future |
https://en.wikipedia.org/wiki/Generic%20cell%20rate%20algorithm | The generic cell rate algorithm (GCRA) is a leaky bucket-type scheduling algorithm for the network scheduler that is used in Asynchronous Transfer Mode (ATM) networks. It is used to measure the timing of cells on virtual channels (VCs) and/or virtual paths (VPs) against bandwidth and jitter limits contained in a traffic contract for the VC or VP to which the cells belong. Cells that do not conform to the limits given by the traffic contract may then be re-timed (delayed) in traffic shaping, or may be dropped (discarded) or reduced in priority (demoted) in traffic policing. Nonconforming cells that are reduced in priority may then be dropped, in preference to higher priority cells, by downstream components in the network that are experiencing congestion. Alternatively they may reach their destination (VC or VP termination) if there is enough capacity for them, despite them being excess cells as far as the contract is concerned: see priority control.
The GCRA is given as the reference for checking the traffic on connections in the network, i.e. usage/network parameter control (UPC/NPC) at user–network interfaces (UNI) or inter-network interfaces or network-network interfaces (INI/NNI). It is also given as the reference for the timing of cells transmitted (ATM PDU Data_Requests) onto an ATM network by a network interface card (NIC) in a host, i.e. on the user side of the UNI. This ensures that cells are not then discarded by UPC/NPC in the network, i.e. on the network side of the UNI. However, as the GCRA is only given as a reference, the network providers and users may use any other algorithm that gives the same result.
Description of the GCRA
The GCRA is described by the ATM Forum in its User-Network Interface (UNI) specification and by the ITU-T in recommendation I.371 Traffic control and congestion control in B-ISDN. Both sources describe the GCRA in two equivalent ways: as a virtual scheduling algorithm and as a continuous state leaky bucket algorithm.
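A compact rendering of the virtual scheduling form (a sketch under the usual GCRA(T, τ) parametrization, where T is the emission interval and τ the jitter tolerance; the class and attribute names are illustrative, not from the standards):

```python
# Virtual-scheduling form of GCRA(T, tau). A cell arriving earlier than
# TAT - tau is non-conforming; otherwise the theoretical arrival time (TAT)
# is advanced by one emission interval. Names are illustrative.

class GCRA:
    def __init__(self, emission_interval: float, tolerance: float):
        self.T = emission_interval
        self.tau = tolerance
        self.tat = 0.0                 # theoretical arrival time of the next cell

    def conforms(self, t: float) -> bool:
        """Check a cell arriving at time t; update state only if it conforms."""
        if t < self.tat - self.tau:
            return False               # too early: non-conforming, TAT unchanged
        self.tat = max(t, self.tat) + self.T
        return True

gcra = GCRA(emission_interval=10.0, tolerance=2.0)
print([gcra.conforms(t) for t in (0.0, 8.0, 16.0, 30.0)])  # [True, True, False, True]
```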
Lea |
https://en.wikipedia.org/wiki/Advanced%20Simulation%20and%20Computing%20Program | The Advanced Simulation and Computing Program (or ASC) is a supercomputing program run by the National Nuclear Security Administration, in order to simulate, test, and maintain the United States nuclear stockpile. The program was created in 1995 in order to support the Stockpile Stewardship Program (or SSP). The goal of the initiative is to extend the lifetime of the current aging stockpile.
History
After the United States' 1992 moratorium on live nuclear testing, the Stockpile Stewardship Program was created in order to find a way to test, and maintain the nuclear stockpile. In response, the National Nuclear Security Administration began to simulate the nuclear warheads using supercomputers. As the stockpile ages, the simulations have become more complex, and the maintenance of the stockpile requires more computing power. Over the years, due to Moore's Law, the ASC program has created several different supercomputers with increasing power, in order to compute the simulations and mathematics.
In celebration of 25 years of ASC accomplishments, the Advanced Simulation and Computing Program published a report of its achievements.
Research
The majority of ASC's research is done on supercomputers in three different laboratories. The calculations are verified by human calculations.
Laboratories
The ASC program has three laboratories:
Sandia National Laboratories
Los Alamos National Laboratory
Lawrence Livermore National Laboratory
Computing
Current supercomputers
The ASC program currently houses numerous supercomputers on the TOP500 list for computing power. The list changes every six months; the latest list of NNSA machines can be found at https://top500.org/lists/top500/. Although these computers may be in separate laboratories, remote computing has been established between the three main laboratories.
Previous supercomputers
ASCI Purple
Red Storm
Blue Gene/L: World's fastest supercomputer, November 2004 – November 2007
Blue Gene/Q (aka Sequoia)
ASCI |
https://en.wikipedia.org/wiki/Adulterant | An adulterant is a substance secretly added to another substance in the practice of adulteration. Typical substances that are adulterated include but are not limited to food, cosmetics, pharmaceuticals, fuel, and other chemicals; the adulterant compromises the safety or effectiveness of the substance in question.
An adulterant will not normally be present in any specification or declared contents of the substance. The term contamination is usually used for the inclusion of unwanted substances due to accident or negligence rather than intent, and also for the introduction of unwanted substances after the product has been made. Adulteration, therefore, implies that the adulterant was introduced deliberately in the initial manufacturing process, or sometimes that it was present in the raw materials and should have been removed, but was not.
An adulterant is distinct from, for example, permitted food preservatives. There can be a fine line between adulterant and additive; chicory may be added to coffee to reduce the cost or achieve a desired flavor—this is adulteration if not declared, but may be stated on the label. Chalk was often added to bread flour; this reduces the cost and increases whiteness, but the calcium confers health benefits, and in modern bread, a little chalk may be included as an additive for this reason.
In wartime, adulterants have been added to make foodstuffs "go further" and prevent shortages. The German word ersatz is widely recognised for such practices during World War II. Such adulteration was sometimes deliberately hidden from the population, to prevent loss of morale and for propaganda reasons. Some goods considered luxurious in the Soviet Bloc, such as coffee, were adulterated to make them affordable to the general population.
In food and beverages
Past and present examples of adulterated food, some dangerous, include:
Apple jellies (jams), as substitutes for more expensive fruit jellies, with added colorant and sometimes even specks of wood that simulate raspberry or strawberry seeds
High fructose corn syrup or cane sugar, used to adulterate honey
Red |
https://en.wikipedia.org/wiki/Carbon%20chauvinism | Carbon chauvinism is a neologism meant to disparage the assumption that the chemical processes of hypothetical extraterrestrial life must be constructed primarily from carbon (organic compounds) because as far as is known, carbon's chemical and thermodynamic properties render it far superior to all other elements at forming molecules used in living organisms.
The expression "carbon chauvinism" is also used to criticize the idea that artificial intelligence can't in theory be sentient or truly intelligent because the underlying matter isn't biological.
Concept
The term was used as early as 1973, when scientist Carl Sagan described it and other human chauvinisms that limit imagination of possible extraterrestrial life. It suggests that human beings, as carbon-based life forms who have never encountered any life that has evolved outside the Earth's environment, may find it difficult to envision radically different biochemistries.
Carbon alternatives
Like carbon, silicon can form four stable bonds with itself and other elements, and long chemical chains known as silane polymers, which are very similar to the hydrocarbons essential to life on Earth. Silicon is more reactive than carbon, which could make it optimal for extremely cold environments. However, silanes spontaneously burn in the presence of oxygen at relatively low temperatures, so an oxygen atmosphere may be deadly to silicon-based life. On the other hand, it is worth considering that alkanes are as a rule quite flammable, but carbon-based life on Earth does not store energy directly as alkanes, but as sugars, lipids, alcohols, and other hydrocarbon compounds with very different properties. Water as a solvent would also react with silanes, but again, this only matters if for some reason silanes are used or mass-produced by such organisms.
Silicon lacks an important property of carbon: single, double, and triple carbon-carbon bonds are all relatively stable. Aromatic carbon structures underpin DNA, which c |
https://en.wikipedia.org/wiki/Alachlor | Alachlor is an herbicide from the chloroacetanilide family. It is an odorless, white solid. The greatest use of alachlor is for control of annual grasses and broadleaf weeds in crops. Use of alachlor is illegal in the European Union and no products containing alachlor are currently registered in the United States.
Its mode of action is elongase inhibition, and inhibition of geranylgeranyl pyrophosphate (GGPP) cyclisation enzymes, part of the gibberellin pathway. It is marketed under the trade names Alanex, Bronco, Cannon, Crop Star, Intrro, Lariat, Lasso, Micro-Tech and Partner.
Uses
The largest use of alachlor is as a herbicide for control of annual grasses and broadleaf weeds in crops, primarily on corn, sorghum, and soybeans.
Application details
Alachlor mixes well with other herbicides. It is marketed in mixed formulations with atrazine, glyphosate, trifluralin and imazaquin. It is a selective, systemic herbicide, absorbed by germinating shoots and by roots. It works by interfering with a plant's ability to produce protein and by interfering with root growth.
It is most commonly available as microgranules containing 15% active ingredient (AI), or as an emulsifiable concentrate containing 480 g/litre of AI. Approval (homologation) in Europe specifies a maximum dose of 2,400 g of AI per hectare, or 5 litres/hectare of emulsifiable concentrate, or 17 kg/ha of microgranules. The products are applied either pre-drilling, soil-incorporated, or pre-emergence.
Safety
The United States Environmental Protection Agency (EPA) classifies the herbicide as toxicity class III - slightly toxic. The Maximum Contaminant Level Goal (MCLG) for Alachlor is zero, to prevent long-term effects. The Maximum Contaminant Level (MCL) for drinking water is two parts per billion (2 ppb).
The EPA cited the following long-term ef |
https://en.wikipedia.org/wiki/Coefficient%20of%20haze | The coefficient of haze (also known as smoke shade) is a measurement of visibility interference in the atmosphere.
One way to measure this is to draw about 1000 cubic feet of air sample through an air filter and obtain the radiation intensity through the filter. The coefficient is then calculated based on the absorbance formula
$$\mathrm{COH} = 100 \times \log_{10}\!\left(\frac{I_0}{I}\right)$$
where $I$ is the radiation (400 nm light) intensity transmitted through the sampled filter, and $I_0$ is the radiation intensity transmitted through a clean (control) filter.
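As a worked instance (numbers chosen purely for illustration): a sampled filter that transmits 80% as much light as the control ($I/I_0 = 0.8$) gives
$$\mathrm{COH} = 100 \log_{10}(1/0.8) \approx 9.7.$$ |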
https://en.wikipedia.org/wiki/Coliform%20index | The coliform index is a rating of the purity of water based on a count of fecal bacteria. It is one of many tests done to assure sufficient water quality. Coliform bacteria are microorganisms that primarily originate in the intestines of warm-blooded animals. By testing for coliforms, especially the well known Escherichia coli (E. coli), which is a thermotolerant coliform, one can determine if the water has possibly been exposed to fecal contamination; that is, whether it has come in contact with human or animal feces. It is important to know this because many disease-causing organisms are transferred from human and animal feces to water, from where they can be ingested by people and infect them. Water that has been contaminated by feces usually contains pathogenic bacteria, which can cause disease. Some types of coliforms cause disease, but the coliform index is primarily used to judge if other types of pathogenic bacteria are likely to be present in the water.
The coliform index is used because it is difficult to test for pathogenic bacteria directly. There are many different types of disease-causing bacteria, and they are usually present in low numbers which do not always show up in tests. Thermotolerant coliforms are present in higher numbers than individual types of pathogenic bacteria and they can be tested relatively easily.
However, the coliform index is far from perfect. Thermotolerant coliforms can survive in water on their own, especially in tropical regions, so they do not always indicate fecal contamination. Furthermore, they do not give a good indication of how many pathogenic bacteria are present in the water, and they give no idea at all of whether there are pathogenic viruses or protozoa which also cause diseases and are rarely tested for. Therefore, it does not always give accurate or useful results regarding the purity of water.
See also
Indicator organism
Bacteriological water analysis |
https://en.wikipedia.org/wiki/Laws%20of%20motion | In physics, a number of noted theories of the motion of objects have been developed. Among the best known are:
Classical mechanics
Newton's laws of motion
Euler's laws of motion
Cauchy's equations of motion
Kepler's laws of planetary motion
General relativity
Special relativity
Quantum mechanics
Motion (physics) |
https://en.wikipedia.org/wiki/Half-mast | Half-mast or half-staff (American English) refers to a flag flying below the summit of a ship mast, a pole on land, or a pole on a building. In many countries this is seen as a symbol of respect, mourning, distress, or, in some cases, a salute.
The tradition of flying the flag at half-mast began in the 17th century. According to some sources, the flag is lowered to make room for an "invisible flag of death" flying above. However, there is disagreement about where on a flagpole a flag should be when it is at half-mast. It is often recommended that a flag at half-mast be lowered only as much as the hoist, or width, of the flag. British flag protocol is that a flag should be flown no less than two-thirds of the way up the flagpole, with at least the height of the flag between the top of the flag and the top of the pole. It is common for the phrase to be taken literally and for a flag to be flown only halfway up a flagpole, although some authorities deprecate that practice.
When hoisting a flag that is to be displayed at half-mast, it should be raised to the finial of the pole for an instant, then lowered to half-mast. Likewise, when the flag is lowered at the end of the day, it should be hoisted to the finial for an instant, and then lowered.
Australia
The flag of Australia is flown half-mast in Australia:
On the death of the sovereign – from the time of announcement of the death up to and including the funeral. On the day the accession of the new sovereign is proclaimed, it is customary to raise the flag to the peak from 11 a.m.;
On the death of a member of a royal family;
On the death of the governor-general or a former governor-general;
On the death of the head of state of another country with which Australia has diplomatic relations – the flag would be flown on the day of the funeral;
On ANZAC day the flag is flown half-mast until noon;
On Remembrance Day flags are flown at peak until 10:30 am, at half-mast from 10:30 am to 11:03 am, then at peak the remainder of t |
https://en.wikipedia.org/wiki/Bridgman%27s%20thermodynamic%20equations | In thermodynamics, Bridgman's thermodynamic equations are a basic set of thermodynamic equations, derived using a method of generating multiple thermodynamic identities involving a number of thermodynamic quantities. The equations are named after the American physicist Percy Williams Bridgman. (See also the exact differential article for general differential relationships).
The extensive variables of the system are fundamental. Only the entropy S, the volume V and the four most common thermodynamic potentials will be considered. The four most common thermodynamic potentials are:
Internal energy: U
Enthalpy: H
Helmholtz free energy: A
Gibbs free energy: G
The first derivatives of the internal energy with respect to its (extensive) natural variables S and V yield the intensive parameters of the system: the pressure P and the temperature T. For a simple system in which the particle numbers are constant, the second derivatives of the thermodynamic potentials can all be expressed in terms of only three material properties:
Heat capacity (constant pressure): $C_P$
Coefficient of thermal expansion: $\alpha$
Isothermal compressibility: $\beta_T$
Bridgman's equations are a series of relationships between all of the above quantities.
Introduction
Many thermodynamic equations are expressed in terms of partial derivatives. For example, the expression for the heat capacity at constant pressure is:
$$C_P = \left(\frac{\partial H}{\partial T}\right)_P$$
which is the partial derivative of the enthalpy with respect to temperature while holding pressure constant. We may write this equation as:
$$C_P = \frac{(\partial H)_P}{(\partial T)_P}$$
This method of rewriting the partial derivative was described by Bridgman (and also Lewis & Randall), and allows the use of the following collection of expressions to express many thermodynamic equations. For example from the equations below we have:
$$(\partial H)_P = C_P$$
and
$$(\partial T)_P = 1.$$
Dividing, we recover the proper expression for $C_P$.
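The same pattern yields other first-derivative identities. For instance (a sketch using two standard entries of Bridgman's table, $(\partial V)_P = \alpha V$ and $(\partial T)_P = 1$):
$$\left(\frac{\partial V}{\partial T}\right)_P = \frac{(\partial V)_P}{(\partial T)_P} = \alpha V,$$
which is just the definition of the (volumetric) coefficient of thermal expansion, rearranged.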
The following summary restates various partial |
https://en.wikipedia.org/wiki/Blending%20inheritance | Blending inheritance is an obsolete theory in biology from the 19th century. The theory is that the progeny inherits any characteristic as the average of the parents' values of that characteristic. As an example of this, a crossing of a red flower variety with a white variety of the same species would yield pink-flowered offspring.
Charles Darwin's theory of inheritance by pangenesis, with contributions to egg or sperm from every part of the body, implied blending inheritance. His reliance on this mechanism led Fleeming Jenkin to attack Darwin's theory of natural selection on the grounds that blending inheritance would average out any novel beneficial characteristic before selection had time to act.
Blending inheritance was discarded with the general acceptance of particulate inheritance during the development of modern genetics, after the rediscovery of Mendel's work around 1900.
History
Darwin's pangenesis
Charles Darwin developed his theory of evolution by natural selection on the basis of an understanding of uniform processes in geology, acting over very long periods of time on inheritable variation within populations. One of those processes was competition for resources, as Thomas Malthus had indicated, leading to a struggle to survive and to reproduce. Since some individuals would by chance have traits that allowed them to leave more offspring, those traits would tend to increase in the population. Darwin assembled many lines of evidence to show that variation occurred and that artificial selection by animal and plant breeding had caused change. All of this demanded a reliable mechanism of inheritance.
Pangenesis was Darwin's attempt to provide such a mechanism of inheritance. The idea was that each part of the parent's body emitted tiny particles called gemmules, which migrated through the body to contribute to that parent's gametes, their eggs or sperms. The theory had an intuitive appeal, as characteristics of all parts of the body, such as shape of nose, width of shoulders and length of legs a |
https://en.wikipedia.org/wiki/Booth%27s%20multiplication%20algorithm | Booth's multiplication algorithm is a multiplication algorithm that multiplies two signed binary numbers in two's complement notation. The algorithm was invented by Andrew Donald Booth in 1950 while doing research on crystallography at Birkbeck College in Bloomsbury, London. Booth's algorithm is of interest in the study of computer architecture.
The algorithm
Booth's algorithm examines adjacent pairs of bits of the N-bit multiplier Y in signed two's complement representation, including an implicit bit below the least significant bit, y₋₁ = 0. For each bit yᵢ, for i running from 0 to N − 1, the bits yᵢ and yᵢ₋₁ are considered. Where these two bits are equal, the product accumulator P is left unchanged. Where yᵢ = 0 and yᵢ₋₁ = 1, the multiplicand times 2ⁱ is added to P; and where yᵢ = 1 and yᵢ₋₁ = 0, the multiplicand times 2ⁱ is subtracted from P. The final value of P is the signed product.
The representations of the multiplicand and product are not specified; typically, these are both also in two's complement representation, like the multiplier, but any number system that supports addition and subtraction will work as well. As stated here, the order of the steps is not determined. Typically, it proceeds from LSB to MSB, starting at i = 0; the multiplication by 2ⁱ is then typically replaced by incremental shifting of the P accumulator to the right between steps; low bits can be shifted out, and subsequent additions and subtractions can then be done just on the highest N bits of P. There are many variations and optimizations on these details.
The algorithm is often described as converting strings of 1s in the multiplier to a high-order +1 and a low-order −1 at the ends of the string. When a string runs through the MSB, there is no high-order +1, and the net effect is interpretation as a negative of the appropriate value.
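A direct software rendering of the bit-pair rule above might look like this (a minimal sketch; the function name and the fixed 8-bit width are illustrative assumptions):

```python
# Booth's rule in software: scan bit pairs (y_i, y_{i-1}) of the multiplier,
# adding or subtracting the multiplicand times 2^i at 01/10 boundaries.
# The fixed width n_bits is an illustrative choice.

def booth_multiply(x: int, y: int, n_bits: int = 8) -> int:
    """Multiply signed integers that fit in n_bits using Booth's recoding."""
    y_bits = y & ((1 << n_bits) - 1)   # two's-complement bits of the multiplier
    product = 0
    prev_bit = 0                       # the implicit bit y_{-1} = 0
    for i in range(n_bits):
        bit = (y_bits >> i) & 1
        if bit == 0 and prev_bit == 1:
            product += x << i          # 0,1 pair: add multiplicand * 2^i
        elif bit == 1 and prev_bit == 0:
            product -= x << i          # 1,0 pair: subtract multiplicand * 2^i
        prev_bit = bit
    return product

assert booth_multiply(-7, 3) == -21
assert booth_multiply(6, -5) == -30
```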
A typical implementation
Booth's algorithm can be implemented by repeatedly adding (with ordinary unsigned binary addition) one of two |
https://en.wikipedia.org/wiki/Giuseppe%20Veronese | Giuseppe Veronese (7 May 1854 – 17 July 1917) was an Italian mathematician. He was born in Chioggia, near Venice.
Education
Veronese earned his laurea in mathematics from the Istituto Tecnico di Venezia in 1872.
Work
Although Veronese's work was severely criticised as unsound by Peano, he is now recognised as having priority on many ideas that have since become parts of transfinite numbers and model theory, and as one of the respected authorities of the time, his work served to focus Peano and others on the need for greater rigor.
He is particularly noted for his hypothesis of relative continuity which was the foundation for his development of the first non-Archimedean linear continuum.
Veronese produced several significant monographs. The most famous appeared in 1891, Fondamenti di geometria a più dimensioni e a più specie di unità rettilinee esposti in forma elementare, normally referred to as Fondamenti di geometria to distinguish it from Veronese's other works also styled Fondamenti. It was this work that was most severely criticised by both Peano and Cantor; however, Levi-Civita described it as masterful and Hilbert as profound.
See also
Veronese surface |
https://en.wikipedia.org/wiki/Pteridophyte | A pteridophyte is a vascular plant (with xylem and phloem) that disperses spores. Because pteridophytes produce neither flowers nor seeds, they are sometimes referred to as "cryptogams", meaning that their means of reproduction is hidden.
Ferns, horsetails (often treated as ferns), and lycophytes (clubmosses, spikemosses, and quillworts) are all pteridophytes. However, they do not form a monophyletic group because ferns (and horsetails) are more closely related to seed plants than to lycophytes. "Pteridophyta" is thus no longer a widely accepted taxon, but the term pteridophyte remains in common parlance, as do pteridology and pteridologist as a science and its practitioner, respectively.
Ferns and lycophytes share a life cycle and are often collectively treated or studied, for example by the International Association of Pteridologists and the Pteridophyte Phylogeny Group.
Description
Pteridophytes (ferns and lycophytes) are free-sporing vascular plants that have a life cycle with alternating, free-living gametophyte and sporophyte phases that are independent at maturity. The body of the sporophyte is well differentiated into roots, stem and leaves. The root system is always adventitious. The stem is either underground or aerial. The leaves may be microphylls or megaphylls. Their other common characteristics include vascular plant apomorphies (e.g., vascular tissue) and land plant plesiomorphies (e.g., spore dispersal and the absence of seeds).
Taxonomy
Phylogeny
Of the pteridophytes, ferns account for nearly 90% of the extant diversity. Smith et al. (2006), the first higher-level pteridophyte classification published in the molecular phylogenetic era, considered the ferns as monilophytes, as follows:
Division Tracheophyta (tracheophytes) - vascular plants
Subdivision Lycopodiophyta (lycophytes) - less than 1% of extant vascular plants
Subdivision Euphyllophytina (euphyllophytes)
Infradivision Moniliformopses (monilophytes)
Infradivision Spermatophyta - |
https://en.wikipedia.org/wiki/Oceanic%20basin | In hydrology, an oceanic basin (or ocean basin) is anywhere on Earth that is covered by seawater. Geologically, most of the ocean basins are large geologic basins that are below sea level.
Most commonly, the ocean is divided into basins following the distribution of the continents: the North and South Atlantic (together approximately 75 million km2/ 29 million mi2), North and South Pacific (together approximately 155 million km2/ 59 million mi2), Indian Ocean (68 million km2/ 26 million mi2) and Arctic Ocean (14 million km2/ 5.4 million mi2). Also recognized is the Southern Ocean (20 million km2/ 7 million mi2). All ocean basins collectively cover 71% of the Earth's surface, and together they contain almost 97% of all water on the planet. They have an average depth of almost 4 km (about 2.5 miles).
Definitions of boundaries
Boundaries based on continents
"Limits of Oceans and Seas", published by the International Hydrographic Office in 1953, is a document that defined the ocean's basins as they are largely known today. The main ocean basins are the ones named in the previous section. These main basins are divided into smaller parts. Some examples are: the Baltic Sea (with three subdivisions), the North Sea, the Greenland Sea, the Norwegian Sea, the Laptev Sea, the Gulf of Mexico, the South China Sea, and many more. The limits were set for convenience of compiling sailing directions but had no geographical or physical ground and to this day have no political significance. For instance, the line between the North and South Atlantic is set at the equator. The Antarctic or Southern Ocean, which reaches from 60° south to Antarctica had been omitted until 2000, but is now also recognized by the International Hydrographic Office. Nevertheless, and since ocean basins are interconnected, many oceanographers prefer to refer to one single ocean basin instead of multiple ones.
Older references (e.g., Littlehales 1930) consider the oceanic basins to be the complement to the conti |
https://en.wikipedia.org/wiki/Mercury-arc%20valve | A mercury-arc valve or mercury-vapor rectifier or (UK) mercury-arc rectifier is a type of electrical rectifier used for converting high-voltage or high-current alternating current (AC) into direct current (DC). It is a type of cold cathode gas-filled tube, but is unusual in that the cathode, instead of being solid, is made from a pool of liquid mercury and is therefore self-restoring. As a result, mercury-arc valves, when used as intended, are far more robust and durable and can carry much higher currents than most other types of gas discharge tube. Some examples have been in continuous service, rectifying 50-ampere currents, for decades.
Invented in 1902 by Peter Cooper Hewitt, mercury-arc rectifiers were used to provide power for industrial motors, electric railways, streetcars, and electric locomotives, as well as for radio transmitters and for high-voltage direct current (HVDC) power transmission. They were the primary method of high power rectification before the advent of semiconductor rectifiers, such as diodes, thyristors and gate turn-off thyristors (GTOs) in the 1970s. These solid state rectifiers have almost completely replaced mercury-arc rectifiers thanks to their higher reliability, lower cost, lower maintenance, and lower environmental risk.
History
In 1882 Jules Jamin and G. Maneuvrier observed the rectifying properties of a mercury arc. The mercury arc rectifier was invented by Peter Cooper Hewitt in 1902 and further developed throughout the 1920s and 1930s by researchers in both Europe and North America. Before its invention, the only way to convert AC current provided by utilities to DC was by using expensive, inefficient, and high-maintenance rotary converters or motor–generator sets. Mercury-arc rectifiers or "converters" were used for charging storage batteries, arc lighting systems, the DC traction motors for trolleybuses, trams, and subways, and electroplating equipment. The mercury rectifier was used well into the 1970s, when it was finall |
https://en.wikipedia.org/wiki/Quadrature%20%28geometry%29 | In mathematics, particularly in geometry, quadrature (also called squaring) is a historical process of drawing a square with the same area as a given plane figure or computing the numerical value of that area. A classical example is the quadrature of the circle (or squaring the circle).
Quadrature problems served as one of the main sources of problems in the development of calculus. They introduce important topics in mathematical analysis.
History
Antiquity
Greek mathematicians understood the determination of an area of a figure as the process of geometrically constructing a square having the same area (squaring), thus the name quadrature for this process. The Greek geometers were not always successful (see squaring the circle), but they did carry out quadratures of some figures whose sides were not simply line segments, such as the lune of Hippocrates and the parabola. By a certain Greek tradition, these constructions had to be performed using only a compass and straightedge, though not all Greek mathematicians adhered to this dictum.
For a quadrature of a rectangle with the sides a and b it is necessary to construct a square with the side $\sqrt{ab}$ (the geometric mean of a and b). For this purpose it is possible to use the following: if one draws the circle with diameter made from joining line segments of lengths a and b, then the height (BH in the diagram) of the line segment drawn perpendicular to the diameter, from the point of their connection to the point where it crosses the circle, equals the geometric mean of a and b. A similar geometrical construction solves the problems of quadrature of a parallelogram and of a triangle.
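Assuming the diagram referred to above is the standard semicircle construction, the relation is easy to verify numerically; the following Python sketch (illustrative names) checks that the perpendicular height equals the geometric mean:

```python
import math

def quadrature_side(a: float, b: float) -> float:
    """Side of the square with the same area as an a-by-b rectangle."""
    return math.sqrt(a * b)

# Numeric check of the circle construction: for a circle on the diameter
# a + b, the perpendicular raised at the join point of the two segments
# meets the circle at height sqrt(a*b), the geometric mean.
a, b = 9.0, 4.0
r = (a + b) / 2                      # radius of the circle on diameter a + b
h = math.sqrt(r**2 - (a - r)**2)     # height BH of the perpendicular chord
assert math.isclose(h, quadrature_side(a, b))   # both equal 6.0 here
```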
Problems of quadrature for curvilinear figures are much more difficult. The quadrature of the circle with compass and straightedge was proved in the 19th century to be impossible. Nevertheless, for some figures a quadrature can be performed. The quadratures of the surface of a sphere and a parabola segment discovered by Archimedes became the |
https://en.wikipedia.org/wiki/KOI-7 | KOI-7 (КОИ-7) is a 7-bit character encoding, designed to cover Russian, which uses the Cyrillic alphabet.
In Russian, KOI-7 stands for Kod Obmena Informatsiey, 7 bit (Код Обмена Информацией, 7 бит) which means "Code for Information Exchange, 7 bit".
It was first standardized in GOST 13052-67 (with the 2nd revision GOST 13052-74 / ST SEV 356-76) and GOST 27463-87 / ST SEV 356-86.
Shift Out (SO) and Shift In (SI) control characters are used in KOI-7, where SO starts printing Russian letters (KOI-7 N1), and SI starts printing Latin letters again (KOI-7 N0), or for lowercase and uppercase switching. This version is also known as KOI7-switched or csKOI7switched.
On ISO 2022-compatible computer terminals, KOI7-switched can be activated by the escape sequence ESC ( @ ESC ) N LS0.
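The SO/SI switching can be modelled as a tiny state machine. The sketch below is illustrative only: the decode function is invented for this example, and the Cyrillic mapping shows a hypothetical subset rather than the actual tables from GOST 13052-67:

```python
SO, SI = 0x0E, 0x0F   # Shift Out / Shift In control codes

# Illustrative fragment of an N1-style mapping (hypothetical subset only;
# the real assignments are defined by the GOST standards cited above).
CYRILLIC = {0x41: 'А', 0x42: 'Б', 0x47: 'Г', 0x57: 'В'}

def decode_koi7_switched(data: bytes) -> str:
    """Sketch of a KOI7-switched decoder: SO selects the Russian set (N1),
    SI switches back to the Latin set (N0)."""
    out, russian = [], False
    for byte in data:
        if byte == SO:
            russian = True
        elif byte == SI:
            russian = False
        elif russian:
            out.append(CYRILLIC.get(byte, '?'))   # fall back for unmapped codes
        else:
            out.append(chr(byte))                 # N0 is essentially ISO 646 IRV
    return ''.join(out)

print(decode_koi7_switched(b'ABC\x0eABG\x0fABC'))  # Latin, Cyrillic, Latin again
```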
KOI-7 was used on machines like the SM EVM (СМ ЭВМ) and DVK (ДВК); KOI-7 N2 was utilized in the machine language of the Elektronika D3-28 as a four-digit hexadecimal code, and on the BESM-6, where it was called ВКД (internal data code). The encodings were also used on RSX-11, RT-11 and similar systems.
KOI-7 N0
KOI-7 N0 (КОИ-7 Н0) is identical to the IRV set in ISO 646:1967. Compared to US-ASCII, the dollar sign ("$") at code point 24 (hex) was replaced by the universal currency sign "¤", but this was not maintained in all cases, in particular not after the fall of the Iron Curtain. Likewise, the IRV set in ISO/IEC 646:1991 also changed the character back to a dollar sign.
KOI-7 N1
KOI-7 N1 (КОИ-7 Н1) was first standardized in GOST 13052-67, and later also in ISO 5427. It is sometimes referred to as "koi-0" as well.
Compared to ASCII and ISO 646 uppercase and lowercase letters are swapped in order to make it easier to recognize Russian text when presented using ASCII.
To trim the alphabet into chunks of 32 characters the dotted Ё/ë was dropped. In order to avoid conflicts with ASCII's and ISO 646's definition as DEL and its usage as EOF marker (-1) in some systems, it dropped the "CAPITAL HARD |
https://en.wikipedia.org/wiki/Target%20fixation | Target fixation is an attentional phenomenon observed in humans in which an individual becomes so focused on an observed object (be it a target or hazard) that they inadvertently increase their risk of colliding with the object. It is associated with scenarios in which the operator is in control of a high-speed vehicle or other mode of transportation, such as fighter pilots, race-car drivers, paragliders, and motorcyclists. In such cases, the observer may fixate so intently on the target that they steer in the direction of their gaze, which is often the ultimate cause of a collision. The term target fixation was originally used in World War II fighter-bomber pilot training to describe pilots flying into targets during a strafing or bombing run.
Cause and effect
Target fixation is caused by becoming focused on one thing that is usually distracting, dangerous, or rewarding. Focus can be caused by "anticipated success", such as when trying to arrive at a destination in a certain amount of time while driving.
While experiencing target fixation, a person can be very susceptible to dangerous situations due to lack of awareness of one's surroundings.
Avoidance
To avoid this phenomenon, one can stay aware of, and in control of, one's gaze when in a panic or reward mode. A person should think about what they see and be aware of their environment before making any decisions.
See also
Tunnel vision |
https://en.wikipedia.org/wiki/Wilting | Wilting is the loss of rigidity of non-woody parts of plants. This occurs when the turgor pressure in non-lignified plant cells falls towards zero, as a result of diminished water in the cells. Wilting also serves to reduce water loss, as it makes the leaves expose less surface area. The rate of loss of water from the plant is greater than the absorption of water in the plant. The process of wilting modifies the leaf angle distribution of the plant (or canopy) towards more erectophile conditions.
Lower water availability may result from:
drought conditions, where the soil moisture drops below conditions most favorable for plant functioning;
low temperatures, at which the plant's vascular system cannot function;
high salinity, which causes water to diffuse from the plant cells and induce shrinkage;
saturated soil conditions, where roots are unable to obtain sufficient oxygen for cellular respiration, and so are unable to transport water into the plant; or
bacteria or fungi that clog the plant's vascular system.
Wilting diminishes the plant's ability to transpire and grow. Permanent wilting leads to plant death. Symptoms of wilting and blights resemble one another.
The plants may recover during the night, when evaporation is reduced as the stomata close.
In woody plants, reduced water availability leads to cavitation of the xylem.
Wilting occurs in plants such as balsam and holy basil. Wilting is an effect of the plant growth-inhibiting hormone, abscisic acid.
With cucurbits, wilting can be caused by the squash vine borer. |
https://en.wikipedia.org/wiki/Game%20of%20the%20Amazons | The Game of the Amazons (in Spanish, El Juego de las Amazonas; often called Amazons for short) is a two-player abstract strategy game invented in 1988 by Walter Zamkauskas of Argentina. The game is played by moving pieces and blocking the opponent's pieces from squares, and the last player able to move is the winner. It is a member of the territorial game family, a distant relative of Go and chess.
The Game of the Amazons is played on a 10×10 chessboard (or an international checkerboard). Some players prefer to use a monochromatic board. The two players are White and Black; each player has four amazons (not to be confused with the amazon fairy chess piece), which start on the board in the configuration shown at right. A supply of markers (checkers, poker chips, etc.) is also required.
Rules
White moves first, and the players alternate moves thereafter. Each move consists of two parts. First, one moves one of one's own amazons one or more empty squares in a straight line (orthogonally or diagonally), exactly as a queen moves in chess; it may not cross or enter a square occupied by an amazon of either color or an arrow. Second, after moving, the amazon shoots an arrow from its landing square to another square, using another queenlike move. This arrow may travel in any orthogonal or diagonal direction (even backwards along the same path the amazon just traveled, into or across the starting square if desired). An arrow, like an amazon, cannot cross or enter a square where another arrow has landed or an amazon of either color stands. The square where the arrow lands is marked to show that it can no longer be used. The last player to be able to make a move wins. Draws are impossible.
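A move generator follows directly from these rules. The following Python sketch uses an illustrative data layout (a dict-based 10×10 board with '.' for empty squares, 'W'/'B' for amazons, and 'x' for arrows) and enumerates move-and-shoot pairs for a single amazon:

```python
DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]

def queen_targets(board, square):
    """Empty squares reachable from `square` by a queen-like move."""
    row, col = square
    for dr, dc in DIRECTIONS:
        r, c = row + dr, col + dc
        while 0 <= r < 10 and 0 <= c < 10 and board[(r, c)] == '.':
            yield (r, c)
            r, c = r + dr, c + dc

def legal_moves(board, amazon):
    """Enumerate (destination, arrow) pairs for one amazon."""
    for to in list(queen_targets(board, amazon)):
        # The amazon vacates its square before shooting, so the arrow may
        # travel back into or across the starting square.
        board[amazon], board[to] = '.', board[amazon]
        for arrow in queen_targets(board, to):
            yield to, arrow
        board[to], board[amazon] = '.', board[to]   # undo the move

board = {(r, c): '.' for r in range(10) for c in range(10)}
board[(6, 3)] = 'W'
print(sum(1 for _ in legal_moves(board, (6, 3))))   # move-and-shoot count
```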
Territory and scoring
The strategy of the game is based on using arrows (as well as one's four amazons) to block the movement of the opponent's amazons and gradually wall off territory, trying to trap the opponents in smaller regions and gain larger areas for oneself. Each move reduces t |
https://en.wikipedia.org/wiki/Cabletron%20Systems | Cabletron Systems, Inc., was a manufacturer of networking computer equipment throughout the 1980s and 1990s primarily based in Rochester, New Hampshire, in the United States. They also had manufacturing facilities in Ironton, Ohio, and in Ireland.
History
Cabletron was founded in 1983 in a Massachusetts garage by Craig Benson (who later became New Hampshire's governor) and Robert Levine. As manufacturing and design operations expanded, Cabletron relocated to Rochester, New Hampshire, employing 6,600 people at its peak. In 1996 the company eclipsed US$1 billion in sales. Cabletron found its first success in the 10BASE5 Ethernet market, providing the ST-500, the first Ethernet transceiver that featured diagnostic LEDs, and the LAN-MD, the first commercially viable field-deployable 10BASE5 test set. The early products were critical in the history of Ethernet as 10BASE5 Ethernet was generally difficult to operate and maintain and cabling problems were especially difficult to diagnose. Following on this early success, Cabletron developed one of the first modular Ethernet hubs, the MMAC-8 (and its smaller siblings, the MMAC-5 and the MMAC-3) at the time that 10BASE-T was becoming standardized. By developing high-density 10BASE-T modules (24 or 48 ports per slot), Cabletron was able to reduce the price per port of these hubs to a very affordable level, and by introducing a custom Element Management System known as Prism, made the MMAC-8 easy to maintain.
As Cabletron expanded its reach in the networking business, it initially moved into Layer 3 routing by partnering with Cisco, co-developing a Cisco router that would fit into the MMAC-8 hub. Cabletron ultimately developed its own routing capability, but found it increasingly difficult to compete at the low end of the Ethernet market and continue to invest in high-end routing technology.
Recognizing this fact, Cabletron reorganized as a holding company in 2000, hoping to apply appropriate focus to the different parts |
https://en.wikipedia.org/wiki/World%20Wide%20Web%20Virtual%20Library | The World Wide Web Virtual Library (WWW VL) was the first index of content on the World Wide Web and still operates as a directory of e-texts and information sources on the web.
Overview
The Virtual Library was started by Tim Berners-Lee, creator of HTML and the World Wide Web itself, in 1991 at CERN in Geneva. Unlike commercial index sites, it is run by a loose confederation of volunteers, who compile pages of key links for particular areas in which they are experts. It is sometimes informally referred to as the "WWWVL", the "Virtual Library" or just "the VL".
The individual indexes, or virtual libraries, live on hundreds of different servers around the world. A set of index pages linking these individual libraries is maintained at vlib.org in Geneva, only a few kilometres from where the VL began life. A mirror of this index is kept at East Anglia in the United Kingdom.
History
The Virtual Library was first conceived and run by Tim Berners-Lee in 1991, and later expanded, organised and managed for several years by Arthur Secret as the "virtual librarian", before it became a formally established association with Gerard Manning as its Council's first chairman. The late Bertrand Ibrahim was a key contributor to the pre-association phase of the Virtual Library's development and then served as its Secretary until his untimely death in 2001 at the age of 46. A brief history, with links to archived pages and screenshots, is maintained on the Vlib website.
The Virtual Library grew over the years. For example, there is the WWW-VL History Central Catalogue, which was launched on 21 September 1993 by Lynn H. Nelson at Kansas University. From April 2004, it was relocated at the European University Institute, Florence, Italy, where a history of the catalogue is also available. The Virtual Library museums pages (VLmp) were added by Jonathan Bowen to the Virtual Library to cover museums in 1994.
In 2005, the central WWW Virtual Library website (vlib.org) was taken over by a ne |
https://en.wikipedia.org/wiki/Q-analog | In mathematics, a q-analog of a theorem, identity or expression is a generalization involving a new parameter q that returns the original theorem, identity or expression in the limit as $q \to 1$. Typically, mathematicians are interested in q-analogs that arise naturally, rather than in arbitrarily contriving q-analogs of known results. The earliest q-analog studied in detail is the basic hypergeometric series, which was introduced in the 19th century.
q-analogs are most frequently studied in the mathematical fields of combinatorics and special functions. In these settings, the limit $q \to 1$ is often formal, as $q$ is often discrete-valued (for example, it may represent a prime power).
q-analogs find applications in a number of areas, including the study of fractals and multi-fractal measures, and expressions for the entropy of chaotic dynamical systems. The relationship to fractals and dynamical systems results from the fact that many fractal patterns have the symmetries of Fuchsian groups in general (see, for example Indra's pearls and the Apollonian gasket) and the modular group in particular. The connection passes through hyperbolic geometry and ergodic theory, where the elliptic integrals and modular forms play a prominent role; the q-series themselves are closely related to elliptic integrals.
q-analogs also appear in the study of quantum groups and in q-deformed superalgebras. The connection here is similar, in that much of string theory is set in the language of Riemann surfaces, resulting in connections to elliptic curves, which in turn relate to q-series.
"Classical" q-theory
Classical q-theory begins with the q-analogs of the nonnegative integers. The equality
$\lim_{q \to 1} \frac{1 - q^n}{1 - q} = n$
suggests that we define the q-analog of n, also known as the q-bracket or q-number of n, to be
$[n]_q = \frac{1 - q^n}{1 - q} = 1 + q + q^2 + \cdots + q^{n-1}.$
By itself, the choice of this particular q-analog among the many possible options is unmotivated. However, it appears naturally in several contexts. For example, having decided to use [n]q as the q-analog of n, on |
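A quick numeric check of the q-bracket defined above, using exact rational arithmetic (an illustrative sketch):

```python
from fractions import Fraction

def q_bracket(n: int, q: Fraction) -> Fraction:
    """[n]_q = (1 - q^n) / (1 - q) = 1 + q + q^2 + ... + q^(n-1)."""
    return sum(q**k for k in range(n))

# As q -> 1 the q-analog returns the ordinary integer n:
for q in (Fraction(9, 10), Fraction(99, 100), Fraction(999, 1000)):
    print(float(q_bracket(5, q)))   # approaches 5
print(q_bracket(5, Fraction(2)))    # [5]_2 = 1 + 2 + 4 + 8 + 16 = 31
```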
https://en.wikipedia.org/wiki/Flow%20control%20%28data%29 | In data communications, flow control is the process of managing the rate of data transmission between two nodes to prevent a fast sender from overwhelming a slow receiver. Flow control should be distinguished from congestion control, which is used for controlling the flow of data when congestion has actually occurred. Flow control mechanisms can be classified by whether or not the receiving node sends feedback to the sending node.
Flow control is important because it is possible for a sending computer to transmit information at a faster rate than the destination computer can receive and process it. This can happen if the receiving computer has a heavy traffic load in comparison to the sending computer, or if the receiving computer has less processing power than the sending computer.
Stop-and-wait
Stop-and-wait flow control is the simplest form of flow control. In this method the message is broken into multiple frames, and the receiver indicates its readiness to receive a frame of data. The sender waits for a receipt acknowledgement (ACK) after every frame for a specified time (called a time out). The receiver sends the ACK to let the sender know that the frame of data was received correctly. The sender will then send the next frame only after the ACK.
Operations
1. Sender: transmits a single frame at a time.
2. Sender: waits to receive an ACK within the time out.
3. Receiver: transmits an acknowledgement (ACK) as it receives a frame.
4. Go to step 1 when the ACK is received, or when the time out is hit.
If a frame or ACK is lost during transmission then the frame is re-transmitted. This re-transmission process is known as ARQ (automatic repeat request).
The problem with Stop-and-wait is that only one frame can be transmitted at a time, and that often leads to inefficient transmission, because until the sender receives the ACK it cannot transmit any new frame. During this time, both the sender and the channel are unutilised.
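The cost of re-transmission is easy to see in a toy simulation. The following Python sketch is an illustrative model, not a real protocol implementation: it counts total transmissions when frames and ACKs can each be lost with some probability, with a timeout triggering a re-send:

```python
import random

def stop_and_wait(frames, loss_rate=0.2, max_tries=10):
    """Count transmissions needed to deliver `frames` one at a time,
    re-sending a frame whenever the frame or its ACK is lost."""
    transmissions = 0
    for frame in frames:
        for _ in range(max_tries):
            transmissions += 1
            frame_lost = random.random() < loss_rate
            ack_lost = random.random() < loss_rate
            if not frame_lost and not ack_lost:
                break              # ACK received: move on to the next frame
            # otherwise the timeout fires and the same frame is re-sent
        else:
            raise TimeoutError(f"frame {frame!r} undeliverable")
    return transmissions

random.seed(1)
print(stop_and_wait(range(100)))   # total sends, including re-transmissions
```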
Pros and cons of stop and wait
Pros
The only advantage of thi |
https://en.wikipedia.org/wiki/Data%20Matrix | A Data Matrix is a two-dimensional code consisting of black and white "cells" or dots arranged in either a square or rectangular pattern, also known as a matrix. The information to be encoded can be text or numeric data. Usual data size is from a few bytes up to 1556 bytes. The length of the encoded data depends on the number of cells in the matrix. Error correction codes are often used to increase reliability: even if one or more cells are damaged and unreadable, the message can still be read. A Data Matrix symbol can store up to 2,335 alphanumeric characters.
Data Matrix symbols are rectangular, usually square in shape and composed of square "cells" which represent bits. Depending on the coding used, a "light" cell represents a 0 and a "dark" cell is a 1, or vice versa. Every Data Matrix is composed of two solid adjacent borders in an "L" shape (called the "finder pattern") and two other borders consisting of alternating dark and light "cells" or modules (called the "timing pattern"). Within these borders are rows and columns of cells encoding information. The finder pattern is used to locate and orient the symbol while the timing pattern provides a count of the number of rows and columns in the symbol. As more data is encoded in the symbol, the number of cells (rows and columns) increases. Each code is unique. Symbol sizes vary from 10×10 to 144×144 in the new version ECC 200, and from 9×9 to 49×49 in the old version ECC 000 – 140.
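The border structure just described can be visualized in a few lines of code. The sketch below is illustrative only: it draws a solid "L" finder pattern and alternating timing patterns and leaves the data region blank, without following the exact module conventions of the ECC 200 specification:

```python
def datamatrix_border(n: int) -> None:
    """Print the border of an n-by-n symbol: a solid 'L' finder pattern
    (left and bottom) plus alternating timing patterns (top and right).
    Interior data cells are left blank; an illustrative sketch."""
    for row in range(n):
        line = []
        for col in range(n):
            if col == 0 or row == n - 1:
                cell = '#'                            # solid "L" finder pattern
            elif row == 0:
                cell = '#' if col % 2 == 0 else '.'   # top timing pattern
            elif col == n - 1:
                cell = '#' if row % 2 == 0 else '.'   # right timing pattern
            else:
                cell = ' '                            # data region
            line.append(cell)
        print(''.join(line))

datamatrix_border(10)   # smallest ECC 200 symbol size
```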
Applications
The most popular application for Data Matrix is marking small items, due to the code's ability to encode fifty characters in a symbol that is readable at and the fact that the code can be read with only a 20% contrast ratio.
A Data Matrix is scalable; commercial applications exist with images as small as (laser etched on a silicon device) and as large as a 1 metre (3 ft) square (painted on the roof of a boxcar). Fidelity of the marking and reading systems is the only limitation.
The US Electronic Industries A |
https://en.wikipedia.org/wiki/Maisto | Maisto is a brand of scale model vehicles introduced in 1990 and owned by May Cheong Group, a Chinese company founded in 1967 in Hong Kong by brothers P.Y. Ngan and Y.C. Ngan. Headquartered in Hong Kong, the brand has its offices in the United States, France and China. MCG also owns other model car brands such as former Italian brand Bburago (whose assets and rights to the brand name were acquired in 2006) and Polistil.
The company has also manufactured a number of Tonka products under license from Hasbro. Products under the Maisto brand include die-cast scale model cars.
Company history
The company was established in Hong Kong in 1967 as "May Cheong Toy Company" by brothers P.Y. Ngan and Y.C. Ngan. Products were initially commercialised under the "MC Toy" brand (using the initials of the company).
Some of the first MC Toys products were direct copies of Matchbox cars, although the firm had original designs as well. Soon after, European cars were added to its range of products. MC produced models of European cars which were not made by Matchbox or Hot Wheels, and the brand became popular for providing the same quality as its contemporaries at cheap prices.
In the mid-1980s, Intex Recreation introduced MC Toy products to the US market with some of the die-cast cars being labeled as Intex. In the late 1980s, MC Toys' vehicles increased their quality, becoming more realistic and accurate to real models, in contrast with other counterpart companies that produced toy-like cars. In 1990, the company introduced the "Maisto" brand of diecast cars. Through the 1990s, Maisto was considered the US division of Master Toy Co. Ltd. of Thailand with May Cheong being the Kowloon, Hong Kong, subsidiary.
The May Cheong Group products are made in China and Thailand. The factories in China and Thailand manufacture 1:12, 1:18, 1:24, 1:25, 1:27, 1:43, 1:31 and 1:64 scale replicas. Most models are officially licensed products, based on popular vehicles. Some models, however, are fanta |
https://en.wikipedia.org/wiki/Algebraic%20graph%20theory | Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants.
Branches of algebraic graph theory
Using linear algebra
The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Laplacian matrix of a graph (this part of algebraic graph theory is also called spectral graph theory). For the Petersen graph, for example, the spectrum of the adjacency matrix is (−2, −2, −2, −2, 1, 1, 1, 1, 1, 3). Several theorems relate properties of the spectrum to other graph properties. As a simple example, a connected graph with diameter D will have at least D+1 distinct values in its spectrum. Aspects of graph spectra have been used in analysing the synchronizability of networks.
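The quoted spectrum and the diameter bound can be checked numerically, assuming the networkx and numpy libraries are available:

```python
import networkx as nx
import numpy as np

G = nx.petersen_graph()
A = nx.to_numpy_array(G)               # adjacency matrix of the Petersen graph
eigenvalues = np.linalg.eigvalsh(A)    # real symmetric, returned ascending
print(np.round(eigenvalues, 6))
# [-2. -2. -2. -2.  1.  1.  1.  1.  1.  3.]

# A connected graph of diameter D has at least D + 1 distinct eigenvalues:
assert len(set(np.round(eigenvalues, 6))) >= nx.diameter(G) + 1
```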
Using group theory
The second branch of algebraic graph theory involves the study of graphs in connection to group theory, particularly automorphism groups and geometric group theory. The focus is placed on various families of graphs based on symmetry (such as symmetric graphs, vertex-transitive graphs, edge-transitive graphs, distance-transitive graphs, distance-regular graphs, and strongly regular graphs), and on the inclusion relationships between these families. Certain of such categories of graphs are sparse enough that lists of graphs can be drawn up. By Frucht's theorem, all groups can be represented as the automorphism group of a connected graph (indeed, of a cubic graph). Another connection with group theory is that, given any group, symmetrical graphs known as Cayley graphs can be generated, and these have properties related to the structure of the group.
This second branch of algebraic graph theory |
https://en.wikipedia.org/wiki/Trimethylamine | Trimethylamine (TMA) is an organic compound with the formula N(CH3)3. It is a trimethylated derivative of ammonia. TMA is widely used in industry: it is used in the synthesis of choline, tetramethylammonium hydroxide, plant growth regulators or herbicides, strongly basic anion exchange resins, dye leveling agents, and a number of basic dyes. At higher concentrations it has an ammonia-like odor, and can cause necrosis of mucous membranes on contact. At lower concentrations, it has a "fishy" odor, the odor associated with rotting fish.
Properties
TMA is a colorless, hygroscopic, and flammable tertiary amine. It is a gas at room temperature but is usually sold as a 40% solution in water. It is also sold in pressurized gas cylinders.
TMA is a nitrogenous base and can be readily protonated to give the trimethylammonium cation. Trimethylammonium chloride is a hygroscopic colorless solid prepared from trimethylamine and hydrochloric acid.
Reactivity
Trimethylamine is a good nucleophile, and this reactivity is the basis of most of its applications. Trimethylamine is a Lewis base that forms adducts with a variety of Lewis acids.
Production
Trimethylamine is prepared by the reaction of ammonia and methanol employing a catalyst:
3 CH3OH + NH3 → (CH3)3N + 3 H2O
This reaction coproduces the other methylamines, dimethylamine (CH3)2NH and methylamine CH3NH2.
Trimethylamine has also been prepared by a reaction of ammonium chloride and paraformaldehyde:
9 (CH2=O)n + 2n NH4Cl → 2n (CH3)3N•HCl + 3n H2O + 3n CO2↑
Applications
Trimethylamine is used in the synthesis of choline, tetramethylammonium hydroxide, plant growth regulators, herbicides, strongly basic anion exchange resins, dye leveling agents and a number of basic dyes. Gas sensors to test for fish freshness detect trimethylamine.
Importance in the history of psychoanalysis
The first dream of his own which Sigmund Freud tried to analyse in detail, when he was developing his theories about the interpretation of dreams, involved a patient |
https://en.wikipedia.org/wiki/Strain%20energy | In physics, the elastic potential energy gained by a wire during elongation with a tensile (stretching) or compressive (contractile) force is called strain energy. For linearly elastic materials, the strain energy is $U = \tfrac{1}{2} V \sigma \varepsilon = \tfrac{V \sigma^2}{2E}$, where $\sigma$ is stress, $\varepsilon$ is strain, $V$ is volume, and $E$ is Young's modulus.
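A worked example of the formula, with assumed illustrative values for a steel wire:

```python
# Strain energy stored in a stretched steel wire (illustrative values).
E = 200e9            # Young's modulus of steel, Pa (assumed value)
stress = 100e6       # applied stress, Pa
volume = 1e-6        # wire volume, m^3 (1 cm^3)

strain = stress / E                      # Hooke's law: epsilon = sigma / E
energy = 0.5 * stress * strain * volume  # U = (1/2) * sigma * epsilon * V
print(f"{energy:.4f} J")                 # 0.0250 J
```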
Molecular strain
In a molecule, strain energy is released when the constituent atoms are allowed to rearrange themselves in a chemical reaction. The external work done on an elastic member in causing it to distort from its unstressed state is transformed into strain energy which is a form of potential energy. The strain energy in the form of elastic deformation is mostly recoverable in the form of mechanical work.
For example, the heat of combustion per CH2 unit of cyclopropane (696 kJ/mol) is higher than that of propane (657 kJ/mol). Compounds with unusually large strain energy include tetrahedranes, propellanes, cubane-type clusters, fenestranes and cyclophanes. |
https://en.wikipedia.org/wiki/Platonic%20hydrocarbon | In organic chemistry, a Platonic hydrocarbon is a hydrocarbon (molecule) whose structure matches one of the five Platonic solids, with carbon atoms replacing its vertices, carbon–carbon bonds replacing its edges, and hydrogen atoms as needed.
Not all Platonic solids have molecular hydrocarbon counterparts; those that do are the tetrahedron (tetrahedrane), the cube (cubane), and the dodecahedron (dodecahedrane).
Tetrahedrane
Tetrahedrane (C4H4) is a hypothetical compound. It has not yet been synthesized without substituents, but it is predicted to be kinetically stable in spite of its angle strain. Some stable derivatives, including tetra(tert-butyl)tetrahedrane (a hydrocarbon) and tetra(trimethylsilyl)tetrahedrane, have been produced.
Cubane
Cubane (C8H8) has been synthesized. Although it has high angle strain, cubane is kinetically stable, due to a lack of readily available decomposition paths.
Octahedrane
Angle strain would make an octahedron highly unstable due to inverted tetrahedral geometry at each vertex. There would also be no hydrogen atoms because four edges meet at each corner; thus, the hypothetical octahedrane molecule would be an allotrope of elemental carbon, C6, and not a hydrocarbon. The existence of octahedrane cannot be ruled out completely, although calculations have shown that it is unlikely.
Dodecahedrane
Dodecahedrane (C20H20) was first synthesized in 1982, and has minimal angle strain; the tetrahedral angle is 109.5° and the dodecahedral angle is 108°, only a slight discrepancy.
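The quoted discrepancy follows from elementary geometry, since the ideal tetrahedral angle is arccos(−1/3) and a regular pentagon's interior angle is 108°:

```python
import math

# The ideal sp3 (tetrahedral) bond angle versus the dodecahedron's 108 degrees.
tetrahedral = math.degrees(math.acos(-1 / 3))
print(round(tetrahedral, 1))            # 109.5
print(round(tetrahedral - 108.0, 1))    # discrepancy of about 1.5 degrees
```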
Icosahedrane
The tetravalency (4-connectedness) of carbon excludes an icosahedron because 5 edges meet at each vertex. True pentavalent carbon is unlikely; methanium, nominally CH5+, usually exists as CH3(H2)+. The hypothetical icosahedral cluster lacks hydrogen, so it is not a hydrocarbon; it is also an ion.
Both icosahedral and octahedral structures have been observed in boron compounds such as the dodecaborate ion and some of the carbon-containing carboranes.
Other polyhedr |
https://en.wikipedia.org/wiki/Spiritual%20materialism | Spiritual materialism is a term coined by Chögyam Trungpa in his book Cutting Through Spiritual Materialism. The book is a compendium of his talks explaining Buddhism given while opening the Karma Dzong meditation center in Boulder, Colorado. He expands on the concept in later seminars that became books such as Work, Sex, Money. He uses the term to describe mistakes spiritual seekers commit which turn the pursuit of spirituality into an ego-building and confusion-creating endeavor, based on the idea that ego development is counter to spiritual progress.
Conventionally, it is used to describe capitalist and spiritual narcissism, commercial efforts such as "new age" bookstores and wealthy lecturers on spirituality; it might also mean the attempt to build up a list of credentials or accumulate teachings in order to present oneself as a more realized or holy person. Author Jorge Ferrer equates the terms "Spiritual materialism" and "Spiritual Narcissism", though others draw a distinction, that spiritual narcissism is believing that one deserves love and respect or is better than another because one has accumulated spiritual training instead of the belief that accumulating training will bring an end to suffering.
Lords of Materialism
In Trungpa's presentation, spiritual materialism can fall into three categories — what he calls the three "Lords of Materialism" (Tibetan: lalo literally "barbarian") — in which a form of materialism is misunderstood as bringing long-term happiness but instead brings only short-term entertainment followed by long-term suffering:
Physical materialism is the belief that possessions can bring release from suffering. In Trungpa's view, they may bring temporary happiness but then more suffering in the endless pursuit of creating one's environment to be just right. Or on another level it may cause a misunderstanding like, "I am rich because I have this or that" or "I am a teacher (or whatever) because I have a diploma (or whatever)."
Psycho |
https://en.wikipedia.org/wiki/Fieldbus | A fieldbus is a member of a family of industrial digital communication networks used for real-time distributed control. Fieldbus profiles are standardized by the International Electrotechnical Commission (IEC) as IEC 61784/61158.
A complex automated industrial system is typically structured in hierarchical levels as a distributed control system (DCS). In this hierarchy the upper levels for production managements are linked to the direct control level of programmable logic controllers (PLC) via a non-time-critical communications system (e.g. Ethernet). The fieldbus links the PLCs of the direct control level to the components in the plant of the field level such as sensors, actuators, electric motors, console lights, switches, valves and contactors and replaces the direct connections via current loops or digital I/O signals. The requirements for a fieldbus are therefore time-critical and cost-sensitive. Since the new millennium, a number of fieldbuses based on Real-time Ethernet have been established. These have the potential to replace traditional fieldbuses in the long term.
Description
A fieldbus is an industrial network system for real-time distributed control. It is a way to connect instruments in a manufacturing plant. A fieldbus works on a network structure which typically allows daisy-chain, star, ring, branch, and tree network topologies. Previously, computers were connected using RS-232 (serial connections) by which only two devices could communicate. This would be the equivalent of the currently used 4–20 mA communication scheme which requires that each device have its own communication point at the controller level, while the fieldbus is the equivalent of the current LAN-type connections, which require only one communication point at the controller level and allow multiple (hundreds) of analog and digital points to be connected at the same time. This reduces both the length of the cable required and the number of cables required. Furthermore, since devic |
https://en.wikipedia.org/wiki/Pythagorean%20hammers | According to legend, Pythagoras discovered the foundations of musical tuning by listening to the sounds of four blacksmith's hammers, which produced consonance and dissonance when they were struck simultaneously. According to Nicomachus in his 2nd-century CE Enchiridion harmonices, Pythagoras noticed that hammer A produced consonance with hammer B when they were struck together, and hammer C produced consonance with hammer A, but hammers B and C produced dissonance with each other. Hammer D produced such perfect consonance with hammer A that they seemed to be "singing" the same note. Pythagoras rushed into the blacksmith shop to discover why, and found that the explanation was in the weight ratios. The hammers weighed 12, 9, 8, and 6 pounds respectively. Hammers A and D were in a ratio of 2:1, which is the ratio of the octave. Hammers B and C weighed 9 and 8 pounds. Their ratios with hammer A were (12:8 = 3:2 = perfect fifth) and (12:9 = 4:3 = perfect fourth). The space between B and C is a ratio of 9:8, which is equal to the musical whole tone, or whole step interval.
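The arithmetic behind the legend is easy to reproduce:

```python
from fractions import Fraction

# Interval ratios implied by the legendary hammer weights.
weights = {'A': 12, 'B': 9, 'C': 8, 'D': 6}

print(Fraction(weights['A'], weights['D']))   # 2   -> octave (2:1)
print(Fraction(weights['A'], weights['C']))   # 3/2 -> perfect fifth
print(Fraction(weights['A'], weights['B']))   # 4/3 -> perfect fourth
print(Fraction(weights['B'], weights['C']))   # 9/8 -> whole tone
```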
The legend is, at least with respect to the hammers, demonstrably false. It is probably a Middle Eastern folk tale. These proportions are indeed relevant to string length (e.g. that of a monochord) — using these founding intervals, it is possible to construct the chromatic scale and the basic seven-tone diatonic scale used in modern music, and Pythagoras might well have been influential in the discovery of these proportions (hence, sometimes referred to as Pythagorean tuning) — but the proportions do not have the same relationship to hammer weight and the tones produced by them. However, hammer-driven chisels with equal cross-section show an exact proportion between length or weight and eigenfrequency.
Earlier sources mention Pythagoras' interest in harmony and ratio. Xenocrates (4th century BCE), while not, as far as we know, mentioning the blacksmith story, described Pythagoras' interest in gen |
https://en.wikipedia.org/wiki/Jacek%20Karpi%C5%84ski | Jacek Karpiński (9 April 1927 – 21 February 2010) was a Polish pioneer in computer engineering and computer science.
During World War II, he was a soldier in the Batalion Zośka of the Polish Home Army, and was awarded the Cross of Valour multiple times. He took part in Operation Kutschera (intelligence) and the Warsaw Uprising, where he was heavily wounded.
Later, he became a developer of one of the first machine learning algorithms, as well as techniques for character and image recognition.
After receiving a UNESCO award in 1960, he travelled for several years around the academic centres in the United States, including MIT, Harvard, Caltech, and many others.
In 1971, he designed one of the first minicomputers, the K-202. Because of the policy on computer development in the People's Republic of Poland, which belonged to Comecon at that time, the K-202 was never mass-produced.
Karpiński later became a pig farmer, and in 1981, after receiving a passport, emigrated to Switzerland.
He also founded the Laboratory for Artificial Intelligence of the Polish Academy of Sciences in the early 1960s.
Family and childhood
Jacek Karpiński was born on 9 April 1927 in Turin, Italy into a family of Polish intellectuals and alpinists. His father, Adam 'Akar' Karpiński, was a prominent aeronautic engineer (who co-constructed the SL-1 Akar, the first glider constructed entirely by the Poles) and inventor, credited with projects of innovative climbing equipment (crampons, 'Akar-Ramada' tent). His mother, Wanda Czarnocka-Karpińska, was a respected physician who went on to become Dean of the University of Physical Education in Warsaw. Both were pioneers of winter mountaineering in the Tatra Mountains (first successful winter attacks on Banówka, Nowy Wierch, Lodowy Szczyt and others). Adam Karpiński was also a member of a Polish expedition into the Andes, which was the first to climb the peak Mercedario (6720 m.). Karpiński himself was due to be born in the Vallot winter hut near Mont Blanc, but |
https://en.wikipedia.org/wiki/Robert%20Kowalski | Robert Anthony Kowalski (born 15 May 1941) is an American-British logician and computer scientist, whose research is concerned with developing both human-oriented models of computing and computational models of human thinking. He has spent most of his career in the United Kingdom.
Education
He was educated at the University of Chicago, University of Bridgeport (BA in mathematics, 1963), Stanford University (MSc in mathematics, 1966), University of Warsaw and the University of Edinburgh (PhD in computer science, 1970).
Career
He was a research fellow at the University of Edinburgh (1970–75) and has been at the Department of Computing, Imperial College London since 1975, attaining a chair in Computational logic in 1982 and becoming Emeritus Professor in 1999.
He began his research in the field of automated theorem proving, developing both SL-resolution with Donald Kuehner and the connection graph proof procedure. He developed SLD resolution and the procedural interpretation of Horn clauses, which underpin the operational semantics of backward reasoning in logic programming. With Maarten van Emden, he also developed the minimal model and the fixpoint semantics of Horn clauses, which underpin the logical semantics of logic programming.
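The procedural interpretation mentioned here reads each clause "head if body" as a procedure: to solve the head, solve the body goals in turn. A minimal propositional sketch (illustrative only; real SLD resolution also performs unification over terms with variables):

```python
# Each clause "head :- body" is stored as head -> list of bodies; a fact is
# a clause with an empty body. Propositional case only (no unification).
program = {
    'light_on': [['power_ok', 'switch_closed']],
    'power_ok': [[]],
    'switch_closed': [[]],
}

def solve(goal: str) -> bool:
    """Backward reasoning: reduce `goal` to sub-goals using the clauses."""
    return any(all(solve(sub) for sub in body)
               for body in program.get(goal, []))

print(solve('light_on'))   # True: both sub-goals reduce to facts
```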
With Marek Sergot, he developed both the event calculus and the application of logic programming to legal reasoning. With Fariba Sadri, he developed an agent model in which beliefs are represented by logic programs and goals are represented by integrity constraints.
Kowalski was one of the developers of abductive logic programming, in which logic programs are augmented with integrity constraints and with undefined, abducible predicates. This work led to the demonstration with Phan Minh Dung and Francesca Toni that most logics for default reasoning can be regarded as special cases of assumption-based argumentation.
In his 1979 book, Logic for Problem Solving, Kowalski argues that logical inference provides a simple and powerful mode |
https://en.wikipedia.org/wiki/Swarming%20%28honey%20bee%29 | Swarming is a honey bee colony's natural means of reproduction. In the process of swarming, a single colony splits into two or more distinct colonies.
Swarming is mainly a spring phenomenon, usually within a two- or three-week period depending on the locale, but occasional swarms can happen throughout the producing season. Secondary afterswarms, or cast swarms, may happen. Cast swarms are usually smaller and are accompanied by a virgin queen. Sometimes a beehive will swarm in succession until it is almost totally depleted of workers.
One species of honey bee that participates in such swarming behavior is Apis cerana. The reproduction swarms of this species settle away from the natal nest for a few days and will then depart for a new nest site after getting information from scout bees. Scout bees search for suitable cavities in which to construct the swarm's home. Successful scouts will then come back and report the location of suitable nesting sites to the other bees. Apis mellifera participates in a similar swarming process.
Preparation
Worker bees create queen cups throughout the year. When the hive is getting ready to swarm, the queen lays eggs into the queen cups. New queens are raised and the hive may swarm as soon as the queen cells are capped and before the new virgin queens emerge from their queen cells. A laying queen is too heavy to fly long distances. Therefore, the workers will stop feeding her before the anticipated swarm date and the queen will stop laying eggs. Swarming creates an interruption in the brood cycle of the original colony. During the swarm preparation, scout bees will simply find a nearby location for the swarm to cluster. When a honey bee swarm emerges from a hive they do not fly far at first. They may gather in a tree or on a branch only a few metres from the hive. There, they cluster about the queen and send 20–50 scout bees out to find suitable new nest locations. This intermediate stop is not for permanent habitation and they w |
https://en.wikipedia.org/wiki/Woodstock%20of%20physics | The Woodstock of physics was the popular name given by physicists to the marathon session of the American Physical Society’s meeting on March 18, 1987, which featured 51 presentations of recent discoveries in the science of high-temperature superconductors. Various presenters anticipated that these new materials would soon result in revolutionary technological applications, but in the three subsequent decades, this proved to be overly optimistic. The name is a reference to the 1969 Woodstock Music and Art Festival.
Leading up to the meeting
Before a series of breakthroughs in the mid-1980s, most scientists believed that the extremely low temperature requirements of superconductors rendered them impractical for everyday use. However, in June 1986, K. Alex Muller and Georg Bednorz, working at IBM Zurich, raised the record critical temperature for superconductivity to 35 K above absolute zero in lanthanum barium copper oxide (LBCO); the previous record of 23 K had stood for 17 years. Their discovery stimulated a great deal of additional research in high-temperature superconductivity.
By March 1987, a flurry of recent research on ceramic superconductors had succeeded in creating ever-higher superconducting temperatures, including the discovery by Maw-Kuen Wu and Jim Ashburn at the University of Alabama, who found a critical temperature of 93 K in yttrium barium copper oxide (YBCO). This result was followed by Paul C. W. Chu's announcement at the University of Houston of a superconductor that operated at a temperature that could be achieved by cooling with liquid nitrogen. The scientific community was abuzz with excitement.
Events
The discoveries were so recent that no papers on them had been submitted by the deadline. However, the Society added a last-minute session to their annual meeting to discuss the new research. The session was chaired by physicist M. Brian Maple, a superconductor researcher himself, who was one of the meeting's organizers. It was scheduled to start at 7:30 pm in |
https://en.wikipedia.org/wiki/Communications%20and%20networking%20riser | Communications and networking riser (CNR) is a slot found on certain PC motherboards and used for specialized networking, audio, and telephony equipment. A motherboard manufacturer can choose to provide audio, networking, or modem functionality in any combination on a CNR card. CNR slots were once commonly found on Pentium III-class motherboards, but have since been phased out in favor of on-board or embedded components.
Technology
Physically, a CNR slot has two rows of 30 pins, with two possible pin configurations: Type A and Type B, each with different pin assignments. CNR Type A uses an 8-pin PHY interface, while Type B uses a 17-pin media-independent interface (MII) bus LAN interface. Both types carry USB and AC'97 signals.
As with AMR, CNR offered cost savings to manufacturers by removing analog I/O components from the motherboard. This allowed the manufacturer to only certify with the US Federal Communications Commission (FCC) for the CNR card, and not the entire motherboard. This resulted in a quicker production-to-market time for new motherboards, and allowed mass-production of CNR cards to be used on multiple motherboards.
The ACR slot was a competing specification developed by a group of third-party vendors. Its principal advantage over CNR was the backwards-compatible slot layout which allowed it to use both AMR and ACR cards. The same group also developed a physically smaller version, the MDC.
History
Intel developed the CNR slot to replace its own audio/modem riser (AMR) technology, drawing on two distinct advantages over the AMR slot it replaced: CNR could be either software-based (CPU-controlled) or hardware-accelerated (dedicated ASIC), and it was plug-and-play compatible. On some motherboards, a CNR slot replaced the last PCI slot, but most motherboard manufacturers engineered boards which allow the CNR and last PCI slot to share the same space.
With the integration of components such as ethernet and audio into the mother |
https://en.wikipedia.org/wiki/Alex%20%28videotex%20service%29 | Alex was the name of an interactive videotex information service offered by Bell Canada, first in market research from 1988 to 1990 and then to the general public until 1994.
The Alextel terminal was based on the French Minitel terminals, built by Northern Telecom and leased to customers for $7.95/month. It consisted of a CRT display, attached keyboard, and a 1200 bit/s modem for use on regular phone lines. In 1991, proprietary software was released for IBM PCs that allowed computer users to access the network. Communications on the Alex network were via the DATAPAC X.25 protocol.
The system operated in the same fashion as Minitel, whereby users connected to various content providers over the X.25 network and thus access was normally through a local telephone number. The most popular (and most expensive) sites were chat rooms. Using the service could cost as much as per minute. Also offered was an electronic white pages and yellow pages directory. Many users terminated their subscription upon receiving their first invoice. One subscriber racked up a monthly fee of over C$2,000 spending most of his online time in chat.
History
The motivation to develop the Alex terminal and online service came from competitive pressure from France's Minitel, which had expanded into the Quebec market.
Bell Canada received approval from the CRTC to offer the online service as of November 1988.
The advent of the World Wide Web contributed to making this service obsolete. On April 29, 1994, Bell Canada sent a letter to its customers announcing that the service would be terminated on June 3, 1994. In that letter, Mr. T.E. Graham, then Director of Business Planning for Bell Advanced Communications, stated that "Quite simply, the ALEX network is not the right vehicle, nor the appropriate technology, at this time to deliver the information goods needed in our fast-paced society."
The Alextel terminal is reportedly usable as a dumb terminal for VT100 emulation.
https://en.wikipedia.org/wiki/Interior-point%20method | Interior-point methods (also referred to as barrier methods or IPMs) are a certain class of algorithms that solve linear and nonlinear convex optimization problems.
An interior point method was discovered by Soviet mathematician I. I. Dikin in 1967 and reinvented in the U.S. in the mid-1980s.
In 1984, Narendra Karmarkar developed a method for linear programming called Karmarkar's algorithm, which runs in provably polynomial time and is also very efficient in practice. It enabled solutions of linear programming problems that were beyond the capabilities of the simplex method. In contrast to the simplex method, it reaches a best solution by traversing the interior of the feasible region. The method can be generalized to convex programming based on a self-concordant barrier function used to encode the convex set.
Any convex optimization problem can be transformed into minimizing (or maximizing) a linear function over a convex set by converting to the epigraph form. The idea of encoding the feasible set using a barrier and designing barrier methods was studied by Anthony V. Fiacco, Garth P. McCormick, and others in the early 1960s. These ideas were mainly developed for general nonlinear programming, but they were later abandoned due to the presence of more competitive methods for this class of problems (e.g. sequential quadratic programming).
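The barrier idea can be seen in a one-variable toy problem. The sketch below is an illustrative example, not any specific published method: it minimizes x subject to x ≥ 1 by following the minimizers of the penalized objective t·x − log(x − 1), which approach the true solution x* = 1 from the interior of the feasible region as t grows:

```python
import math

def barrier_minimizer(t: float) -> float:
    # Minimize t*x - log(x - 1); setting the derivative t - 1/(x - 1) = 0
    # gives the central-path point x(t) = 1 + 1/t in closed form.
    return 1.0 + 1.0 / t

for t in (1, 10, 100, 1000):
    x = barrier_minimizer(t)
    assert math.isclose(t - 1.0 / (x - 1.0), 0.0, abs_tol=1e-9)  # stationarity
    print(t, x)   # x(t) -> 1 as t -> infinity
```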
Yurii Nesterov and Arkadi Nemirovski came up with a special class of such barriers that can be used to encode any convex set. They guarantee that the number of iterations of the algorithm is bounded by a polynomial in the dimension and accuracy of the solution.
Karmarkar's breakthrough revitalized the study of interior-point methods and barrier problems, showing that it was possible to create an algorithm for linear programming characterized by polynomial complexity and, moreover, that was competitive with the simplex method.
Khachiyan's ellipsoid method was already a polynomial-time algorithm; however, it was too slow to be o |
https://en.wikipedia.org/wiki/E-commerce%20payment%20system | An e-commerce payment system (or an electronic payment system) facilitates the acceptance of electronic payment for offline transfer. Also known as a subcomponent of electronic data interchange (EDI), e-commerce payment systems have become increasingly popular due to the widespread use of internet-based shopping and banking.
Credit cards remain the most common form of payment for e-commerce transactions. As of 2008, in North America, almost 90% of online retail transactions were made with this payment type. It is difficult for an online retailer to operate without supporting credit and debit cards due to their widespread use. Online merchants must comply with stringent rules stipulated by the credit and debit card issuers (e.g. Visa and Mastercard) in accordance with bank and financial regulations in the countries where the debit/credit service conducts business.
E-commerce payment systems often use a B2B mode. The security of customer information, business information, and payment information is a concern during the payment process of transactions under the conventional B2B e-commerce model.
For the vast majority of payment systems accessible on the public Internet, baseline authentication (of the financial institution on the receiving end), data integrity, and confidentiality of the electronic information exchanged over the public network involve obtaining a certificate from an authorized certificate authority (CA) who provides public-key infrastructure (PKI). Even with transport layer security (TLS) in place to safeguard the portion of the transaction conducted over public networks—especially with payment systems—the customer-facing website itself must be coded with great care, so as not to leak credentials and expose customers to subsequent identity theft.
Despite widespread use in North America, there are still many countries such as China and India that have some problems to overcome in regard to credit card security. Increased security measures i |
https://en.wikipedia.org/wiki/Theban%20alphabet | The Theban alphabet, also known as the witches' alphabet, is a writing system, specifically a substitution cipher of the Latin script, that was used by early modern occultists and is popular in the Wicca movement.
Publication history
It was first published in Johannes Trithemius's Polygraphia (1518) in which it was attributed to Honorius of Thebes "as Pietro d'Abano testifies in his greater fourth book". However, it is not known to be extant in any of the known writings attributed to D'Abano (1250–1316). Trithemius' student Heinrich Cornelius Agrippa (1486–1535) included it in his De Occulta Philosophia (Book III, chap. 29, 1533). It is also not known to be found in any manuscripts of the writings of Honorius of Thebes (e.g. Liber Iuratus Honorii, translated as The Sworn Book of Honorius), with the exception of the composite manuscript found in London, British Library Manuscript Sloane 3853, which however openly identifies Agrippa as its source.
Uses and correlations
It is also known as the Honorian alphabet or the Runes of Honorius after the legendary magus (though Theban is dissimilar to the Germanic runic alphabet), or the witches' alphabet due to its use in modern Wicca and other forms of witchcraft as one of many substitution ciphers to hide magical writings such as the contents of a Book of Shadows from prying eyes. The Theban alphabet has not been found in any publications prior to that of Trithemius, and bears little visual resemblance to most other alphabets.
There is a one-to-one correspondence between Theban and the letters of the old Latin alphabet. The modern characters J and U are not represented; they are often transliterated using the Theban characters for I and V, respectively. In the original chart by Trithemius, the letter W comes after Z, as it was a recent addition to the Latin alphabet and did not yet have a standard position. This caused it to be misinterpreted as an ampersand or end-of-sentence mark by later translators and copyists, such |
https://en.wikipedia.org/wiki/Xavier%20Leroy | Xavier Leroy (born 15 March 1968) is a French computer scientist and programmer. He is best known for his role as a primary developer of the OCaml system. He is
Professor of software science at Collège de France. Before his appointment at Collège de France in 2018, he was senior scientist (directeur de recherche) at the French government research institution Inria.
Leroy was admitted to the École normale supérieure in Paris in 1987, where he studied mathematics and computer science. From 1989 to 1992 he did his PhD in computer science under the supervision of Gérard Huet.
He is an internationally recognized expert on functional programming languages and compilers. In recent years, he has taken an interest in formal methods, formal proofs and certified compilation. He is the leader of the CompCert project, which develops an optimizing compiler for the C programming language, formally verified in Coq.
Leroy was also the original author of LinuxThreads, the most widely used threading package for Linux versions prior to 2.6. Linux 2.6 introduced NPTL, with much more extensive support from the kernel, to replace LinuxThreads.
In 2015 he was named a fellow of the Association for Computing Machinery "for contributions to safe, high-performance functional programming languages and compilers, and to compiler verification." He was awarded the 2016 Milner Award by the Royal Society, the 2021 ACM Software System Award, and the 2022 ACM SIGPLAN Programming Languages Achievement Award. |
https://en.wikipedia.org/wiki/Trimethylglycine | Trimethylglycine is an amino acid derivative that occurs in plants. Trimethylglycine was the first betaine discovered; originally it was simply called betaine because, in the 19th century, it was discovered in sugar beets (Beta vulgaris subsp. vulgaris).
Medical uses
Betaine, sold under the brand name Cystadane among others, is indicated for the adjunctive treatment of homocystinuria, involving deficiencies or defects in cystathionine beta-synthase (CBS), 5,10-methylene-tetrahydrofolate reductase (MTHFR), or cobalamin cofactor metabolism (cbl).
The most common side effect is elevated levels of methionine in the blood.
Structure and reactions
Trimethylglycine is an N-methylated amino acid. It is a zwitterion, as the molecule contains both a quaternary ammonium group and a carboxyl group. The carboxyl group will be partially protonated in aqueous solution below pH 4, that is, below approximately pKa + 2.
(CH₃)₃N⁺CH₂COOH (aq) ⇌ (CH₃)₃N⁺CH₂COO⁻ (aq) + H⁺ (aq)
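The degree of protonation at a given pH follows from the Henderson–Hasselbalch relation; a short sketch, assuming a carboxyl pKa near 1.8 (a commonly cited value for betaine, used here only for illustration):

```python
# Fraction of trimethylglycine with a protonated carboxyl group at a
# given pH, via the Henderson-Hasselbalch relation.
# pKa ~ 1.8 is an assumed illustrative value for the betaine carboxyl.
def protonated_fraction(ph: float, pka: float = 1.8) -> float:
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

for ph in (1.0, 1.8, 2.8, 3.8, 7.0):
    print(f"pH {ph}: {protonated_fraction(ph):.1%} protonated")
# At pH = pKa + 2 the protonated fraction is already down to about 1%,
# consistent with the "partially protonated below pH 4" statement above.
```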
Demethylation of trimethylglycine gives dimethylglycine.
Production and biochemical processes
Processing sucrose from sugar beets yields glycine betaine as a byproduct. The economic value of this trimethylglycine rivals that of the sugar content of the beets.
Biosynthesis
In most organisms, glycine betaine is biosynthesized by oxidation of choline in two steps. The intermediate, betaine aldehyde, is generated by the action of the enzyme mitochondrial choline oxidase (choline dehydrogenase, EC 1.1.99.1). In mice, betaine aldehyde is further oxidised in the mitochondria to betaine by the enzyme betaine-aldehyde dehydrogenase (EC 1.2.1.8). In humans, betaine aldehyde oxidation is performed by a nonspecific cytosolic aldehyde dehydrogenase (EC 1.2.1.3).
Biological function
Trimethylglycine is an organic osmolyte. Sugar beet was cultivated from sea beet, which requires osmolytes in order to survive in the salty soils of coastal areas. Trimethylglycine also occurs in high concentrations (~10 mM) in many marine invertebrates, su |
https://en.wikipedia.org/wiki/Stickland%20fermentation | Stickland fermentation, or the Stickland reaction, is a chemical reaction that involves the coupled oxidation and reduction of amino acids to organic acids. The electron donor amino acid is oxidised to a volatile carboxylic acid one carbon atom shorter than the original amino acid; for example, alanine, with a three-carbon chain, is converted to the two-carbon acetate. The electron acceptor amino acid is reduced to a volatile carboxylic acid the same length as the original amino acid; for example, the two-carbon glycine is converted to acetate.
In this way, amino acid fermenting microbes can avoid using hydrogen ions as electron acceptors to produce hydrogen gas. Amino acids can be Stickland acceptors, Stickland donors, or act as both donor and acceptor. Only histidine cannot be fermented by Stickland reactions, and is oxidised. With a typical amino acid mix, there is a 10% shortfall in Stickland acceptors, which results in hydrogen production. Under very low hydrogen partial pressures, increased uncoupled anaerobic oxidation has also been observed.
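The classic donor–acceptor couple pairs alanine (donor) with glycine (acceptor); its overall stoichiometry, as commonly given in textbooks (supplied here for illustration, not taken from this article), balances as:

```latex
% Classic Stickland couple: alanine oxidised (donor), two glycine reduced (acceptor).
\mathrm{CH_3CH(NH_2)COOH} + 2\,\mathrm{CH_2(NH_2)COOH} + 2\,\mathrm{H_2O}
  \;\longrightarrow\; 3\,\mathrm{CH_3COOH} + 3\,\mathrm{NH_3} + \mathrm{CO_2}
```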
It occurs in proteolytic clostridia such as:
C. perfringens,
C. difficile,
C. sporogenes,
and C. botulinum.
Additionally, sarcosine and betaine can act as electron acceptors. |
https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3%20Lov%C3%A1sz | László Lovász (; born March 9, 1948) is a Hungarian mathematician and professor emeritus at Eötvös Loránd University, best known for his work in combinatorics, for which he was awarded the 2021 Abel Prize jointly with Avi Wigderson. He was the president of the International Mathematical Union from 2007 to 2010 and the president of the Hungarian Academy of Sciences from 2014 to 2020.
In graph theory, Lovász's notable contributions include the proofs of Kneser's conjecture and the Lovász local lemma, as well as the formulation of the Erdős–Faber–Lovász conjecture. He is also one of the eponymous authors of the LLL lattice reduction algorithm.
Early life and education
Lovász was born on March 9, 1948, in Budapest, Hungary.
Lovász attended the Fazekas Mihály Gimnázium in Budapest. He won three gold medals (1964–1966) and one silver medal (1963) at the International Mathematical Olympiad. He also participated in a Hungarian game show about math prodigies. Paul Erdős helped introduce Lovász to graph theory at a young age.
Lovász received his Candidate of Sciences (C.Sc.) degree in 1970 at the Hungarian Academy of Sciences. His advisor was Tibor Gallai. He received his first doctorate (Dr.Rer.Nat.) degree from Eötvös Loránd University in 1971 and his second doctorate (Dr.Math.Sci.) from the Hungarian Academy of Sciences in 1977.
Career
From 1971 to 1975, Lovász worked at Eötvös Loránd University as a research associate. From 1975 to 1978, he was a docent at the University of Szeged, and then served as a professor and the Chair of Geometry there until 1982. He then returned to Eötvös Loránd University as a professor and the Chair of Computer Science until 1993.
Lovász was a professor at Yale University from 1993 to 1999, when he moved to the Microsoft Research Center where he worked as a senior researcher until 2006. He returned to Eötvös Loránd University where he was the director of the Mathematical Institute (2006–2011) and a professor in the Department of Compute |
https://en.wikipedia.org/wiki/Charm%20%28quantum%20number%29 | Charm (symbol C) is a flavour quantum number representing the difference between the number of charm quarks (c) and charm antiquarks (c̄) that are present in a particle: C = n(c) − n(c̄).
By convention, the sign of flavour quantum numbers agrees with the sign of the electric charge carried by the quarks of corresponding flavour. The charm quark, which carries an electric charge (Q) of +2/3, therefore carries a charm of +1. The charm antiquark has the opposite charge (Q = −2/3) and flavour quantum number (C = −1).
As with any flavour-related quantum number, charm is preserved under strong and electromagnetic interactions, but not under weak interactions (see CKM matrix). For first-order weak decays, that is, processes involving only one quark decay, charm can only vary by one (ΔC = ±1). Since first-order processes are more common than second-order processes (involving two quark decays), this can be used as an approximate "selection rule" for weak decays.
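A toy illustration of the definition — counting charm quarks and antiquarks in a hadron's valence-quark content (the particle compositions are standard, but the function and names are our own sketch):

```python
# Toy illustration: charm C = n(c) - n(c-bar), computed from a hadron's
# valence-quark content. "cbar" denotes a charm antiquark, "dbar" an
# anti-down quark.
def charm(quarks: list[str]) -> int:
    return quarks.count("c") - quarks.count("cbar")

print(charm(["c", "dbar"]))    # D+ meson (c, anti-d):       C = +1
print(charm(["c", "cbar"]))    # J/psi meson (c, anti-c):    C =  0
print(charm(["u", "d", "c"]))  # Lambda_c+ baryon (u, d, c): C = +1
```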
See also
Quantum number |
https://en.wikipedia.org/wiki/Pospiviroid | Pospiviroid is a genus of ssRNA viroids that infect plants, most commonly tubers. It belongs to the family Pospiviroidae. The first viroid discovered was a pospiviroid, the PSTVd species (potato spindle tuber viroid).
Taxonomy
Pospiviroid has 10 virus species |
https://en.wikipedia.org/wiki/1925%20serum%20run%20to%20Nome | The 1925 serum run to Nome, also known as the Great Race of Mercy and The Serum Run, was a transport of diphtheria antitoxin by dog sled relay across the U.S. territory of Alaska by 20 mushers and about 150 sled dogs across 674 miles (1,085 km) in five and a half days, saving the small town of Nome and the surrounding communities from a developing epidemic of diphtheria.
Both the mushers and their dogs were portrayed as heroes in the newly popular medium of radio and received headline coverage in newspapers across the United States. Balto, the lead sled dog on the final stretch into Nome, became the most famous canine celebrity of the era after Rin Tin Tin, and his statue is a popular tourist attraction in both New York City's Central Park and downtown Anchorage, Alaska. But it was Togo's team which covered much of the most dangerous parts of the route and ran the farthest: Togo's team covered 264 miles (425 km) while Balto's team ran 55 miles (89 km). The publicity also helped spur an inoculation campaign in the U.S. which dramatically reduced the threat of the disease.
Location and geography
Nome, Alaska, lies approximately two degrees south of the Arctic Circle, and while greatly diminished from its peak of 20,000 inhabitants during the gold rush at the turn of the 20th century, it was still the largest town in northern Alaska in 1925, with 455 Alaska Natives and 975 settlers of European descent.
From November to July, the port on the southern shore of the Seward Peninsula of the Bering Sea was icebound and inaccessible by steamship. The only link to the rest of the world during the winter was the Iditarod Trail, which ran from the port of Seward in the south, across several mountain ranges and the vast Alaska Interior, to the town of Nome. In Alaska and other subarctic regions, the primary source of mail and needed supplies in 1925 was the dog sled; however, within a decade, bush flying would become the dominant method of transportation during the winter months.
Outbreak and call for help
In the winter of 1924–1925, Curti |
https://en.wikipedia.org/wiki/BBN%20Butterfly | The BBN Butterfly was a massively parallel computer built by Bolt, Beranek and Newman in the 1980s. It was named for the "butterfly" multi-stage switching network around which it was built. Each machine had up to 512 CPUs, each with local memory, which could be connected to allow every CPU access to every other CPU's memory, although with a substantially greater latency (roughly 15:1) than for its own. The CPUs were commodity microprocessors. The memory address space was shared.
The first generation used Motorola 68000 processors, followed by a 68010 version.
The Butterfly interconnect was developed specifically for this computer. The second-generation GP-1000 models used Motorola 68020s and scaled to 256 CPUs; the later TC-2000 models used Motorola MC88100s and scaled to 512 CPUs.
The Butterfly was initially developed as the Voice Funnel, a router for the ST-II protocol intended for carrying voice and video over IP networks. The Butterfly hardware was later used for the Butterfly Satellite IMP (BSAT) packet switch of DARPA's Wideband Packet Satellite Network which operated at multiple sites around the US over a shared 3 Mbit/s broadcast satellite channel. In the late 1980s, this network became the Terrestrial Wideband Network, based on terrestrial T1 circuits instead of a shared broadcast satellite channel and the BSAT became the Wideband Packet Switch (WPS). Another DARPA sponsored project at BBN produced the Butterfly Multiprocessor Internet Gateway (Internet Router) to interconnect different types of networks at the IP layer. Like the BSAT, the Butterfly Gateway broke the contention of a shared bus minicomputer architecture that had been in use for Internet Gateways by combining the routing computations and I/O at the network interfaces and using the Butterfly's switch fabric to provide the network interconnections. This resulted in significantly higher link throughputs.
The Butterfly began with a proprietary operating system called Chrysalis, |
https://en.wikipedia.org/wiki/CAR%20T%20cell | In biology, chimeric antigen receptors (CARs)—also known as chimeric immunoreceptors, chimeric T cell receptors or artificial T cell receptors—are receptor proteins that have been engineered to give T cells the new ability to target a specific antigen. The receptors are chimeric in that they combine both antigen-binding and T cell activating functions into a single receptor.
CAR T cell therapy uses T cells engineered with CARs to treat cancer. The premise of CAR-T immunotherapy is to modify T cells to recognize cancer cells in order to more effectively target and destroy them. Scientists harvest T cells from people, genetically alter them, then infuse the resulting CAR T cells into patients to attack their tumors.
CAR T cells can be derived either from T cells in a patient's own blood (autologously) or from the T cells of another, healthy, donor (allogeneically). Once isolated from a person, these T cells are genetically engineered to express a specific CAR, using a vector derived from an engineered lentivirus such as HIV (see Lentiviral vector in gene therapy). The CAR programs the recipient's T cells to target an antigen that is present on the surface of tumors. For safety, CAR T cells are engineered to be specific to an antigen that is expressed on a tumor but is not expressed on healthy cells.
After CAR T cells are infused into a patient, they act as a "living drug" against cancer cells. When they come in contact with their targeted antigen on a cell's surface, CAR T cells bind to it and become activated, then proceed to proliferate and become cytotoxic. CAR T cells destroy cells through several mechanisms, including extensive stimulated cell proliferation, increasing the degree to which they are toxic to other living cells (cytotoxicity), and by causing the increased secretion of factors that can affect other cells such as cytokines, interleukins and growth factors.
The surface of CAR T cells can bear either of two types of co-receptors, CD4 and CD8. These |
https://en.wikipedia.org/wiki/Bottomness | In physics, bottomness (symbol B′, using a prime, as plain B is used already for baryon number) or beauty is a flavour quantum number reflecting the difference between the number of bottom antiquarks (n(b̄)) and the number of bottom quarks (n(b)) that are present in a particle: B′ = n(b̄) − n(b).
Bottom quarks have (by convention) a bottomness of −1, while bottom antiquarks have a bottomness of +1. The convention is that the flavour quantum number sign for the quark is the same as the sign of the electric charge (symbol Q) of that quark (in this case, Q = −1/3).
As with other flavour-related quantum numbers, bottomness is preserved under strong and electromagnetic interactions, but not under weak interactions. For first-order weak reactions, it holds that ΔB′ = ±1.
This term is rarely used. Most physicists simply refer to "the number of bottom quarks" and "the number of bottom antiquarks". |
https://en.wikipedia.org/wiki/Intermediate%20power%20amplifier | An intermediate power amplifier (IPA) is one stage of the amplification process in a radio transmitter, usually occurring prior to the final high-power amplification. The IPA provides the lower-power RF energy necessary to drive the final amplifier. In very high power transmitters, such as 10 kilowatts and above, multiple IPAs are combined to provide enough drive for the final.
An exciter, an even lower power transmitter, provides a similar service to the IPA by driving it, although an exciter usually encompasses other important functions as well, such as setting the frequency of the RF.
https://en.wikipedia.org/wiki/Topness | Topness (T, also called truth), a flavour quantum number, represents the difference between the number of top quarks (t) and the number of top antiquarks (t̄) that are present in a particle: T = n(t) − n(t̄).
By convention, top quarks have a topness of +1 and top antiquarks have a topness of −1. The term "topness" is rarely used; most physicists simply refer to "the number of top quarks" and "the number of top antiquarks".
Conservation
Like all flavour quantum numbers, topness is preserved under strong and electromagnetic interactions, but not under the weak interaction. However, the top quark is extremely unstable, with a half-life under 10⁻²³ s, less than the time required for the strong interaction to take place. For that reason the top quark does not hadronize, that is, it never forms any meson or baryon, so the topness of a meson or a baryon is always zero. By the time it can interact strongly it has already decayed to another flavour of quark (usually to a bottom quark). |
https://en.wikipedia.org/wiki/Voltage%20doubler | A voltage doubler is an electronic circuit which charges capacitors from the input voltage and switches these charges in such a way that, in the ideal case, exactly twice the voltage is produced at the output as at its input.
The simplest of these circuits is a form of rectifier which takes an AC voltage as input and outputs a doubled DC voltage. The switching elements are simple diodes, and they are driven to switch state merely by the alternating voltage of the input. DC-to-DC voltage doublers cannot switch in this way and require a driving circuit to control the switching. They frequently also require a switching element that can be controlled directly, such as a transistor, rather than relying on the voltage across the switch as in the simple AC-to-DC case.
Voltage doublers are a variety of voltage multiplier circuits. Many, but not all, voltage doubler circuits can be viewed as a single stage of a higher order multiplier: cascading identical stages together achieves a greater voltage multiplication.
Voltage doubling rectifiers
Villard circuit
The Villard circuit, conceived by Paul Ulrich Villard, consists simply of a capacitor and a diode. While it has the great benefit of simplicity, its output has very poor ripple characteristics. Essentially, the circuit is a diode clamp circuit. The capacitor is charged on the negative half cycles to the peak AC voltage (Vpk). The output is the superposition of the input AC waveform and the steady DC of the capacitor. The effect of the circuit is to shift the DC value of the waveform. The negative peaks of the AC waveform are "clamped" to 0 V (actually −VF, the small forward bias voltage of the diode) by the diode, therefore the positive peaks of the output waveform are 2Vpk. The peak-to-peak ripple is an enormous 2Vpk and cannot be smoothed unless the circuit is effectively turned into one of the more sophisticated forms. This is the circuit (with diode reversed) used to supply the negative high voltage for the |
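The clamping action described above is easy to verify numerically; a small sketch, assuming an idealized diode (no forward drop) and steady-state operation:

```python
import numpy as np

# Idealized Villard clamp in steady state: the capacitor charges to the
# peak AC voltage Vpk, so the output is the input shifted up by Vpk.
# The diode forward drop (-VF) is neglected for simplicity.
vpk = 10.0                            # peak input amplitude, volts
t = np.linspace(0.0, 2.0, 1000)       # two cycles of a 1 Hz test wave
v_in = vpk * np.sin(2 * np.pi * t)
v_out = v_in + vpk                    # DC level shifted by the charged capacitor

print(f"output min: {v_out.min():.2f} V (negative peaks clamped near 0)")
print(f"output max: {v_out.max():.2f} V (approx. 2*Vpk = {2 * vpk:.1f} V)")
```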
https://en.wikipedia.org/wiki/List%20of%20United%20States%20regional%20mathematics%20competitions | Many math competitions in the United States have regional restrictions. Of these, most are statewide.
The contests include:
Alabama
Alabama Statewide High School Mathematics Contest
Virgil Grissom High School Math Tournament
Vestavia Hills High School Math Tournament
Arizona
Great Plains Math League
AATM State High School Contest
California
Bay Area Math Olympiad
Lawrence Livermore National Laboratories Annual High School Math Challenge
Cal Poly Math Contest and Trimathlon
Polya Competition
Bay Area Math Meet
College of Creative Studies Math Competition
LA Math Cup
Math Day at the Beach hosted by CSULB
Math Field Day for San Diego Middle Schools
Mesa Day Math Contest at UC Berkeley
Santa Barbara County Math Superbowl
Pomona College Mathematical Talent Search
Redwood Empire Mathematics Tournament hosted by Humboldt State (middle and high school)
San Diego Math League and San Diego Math Olympiad hosted by the San Diego Math Circle
Santa Clara University High School Mathematics Contest
SC Mathematics Competition (SCMC) hosted by RSO@USC
Stanford Mathematics Tournament
UCSD/GSDMC High School Honors Mathematics Contest
Colorado
Colorado Mathematics Olympiad
District of Columbia
Moody's Mega Math
Florida
Florida-Stuyvesant Alumni Mathematics Competition
David Essner Mathematics Competition
James S. Rickards High School Fall Invitational
FAMAT Regional Competitions:
January Regional
February Regional
March Regional
FGCU Math Competition
Georgia
Central Math Meet (grades 9–12)
GA Council of Teachers of Mathematics State Varsity Math Tournament
STEM Olympiads Of America Math, Science & Cyber Olympiads (grades 3–8)
Valdosta State University Middle Grades Mathematics Competition
Illinois
ICTM math contest (grades 3–12)
Indiana
IUPUI High School Math Contest (grades 9–12)
Huntington University Math Competition (grades 6–12)
Indiana Math League
IASP Academic Super Bowl
Rose-Hulman H |
https://en.wikipedia.org/wiki/LGP-30 | The LGP-30, standing for Librascope General Purpose and then Librascope General Precision, is an early off-the-shelf computer. It was manufactured by the Librascope company of Glendale, California (a division of General Precision Inc.), and sold and serviced by the Royal Precision Electronic Computer Company, a joint venture with the Royal McBee division of the Royal Typewriter Company. The LGP-30 was first manufactured in 1956, at a retail price of $47,000.
The LGP-30 was commonly referred to as a desk computer. Its height, width, and depth, excluding the typewriter shelf, was . It weighed about 800 pounds (360 kg), and was mounted on sturdy casters which facilitated moving the unit.
Design
The primary design consultant for the Librascope computer was Stan Frankel, a Manhattan Project veteran and one of the first programmers of ENIAC. He designed a usable computer with a minimal amount of hardware. The single-address instruction set had only 16 commands. A magnetic drum held the main memory as well as the central processing unit (CPU) registers, timing information, and the master bit clock, each on a dedicated track. The number of vacuum tubes was minimized by using solid-state diode logic, a bit-serial architecture, and multiple use of each of the 15 flip-flops.
It was a binary, 31-bit word computer with a 4096-word drum memory. Standard inputs were the Flexowriter keyboard and paper tape (ten six-bit characters/second). The standard output was the Flexowriter printer (typewriter, working at 10 characters/second). An optional higher-speed paper tape reader and punch was available as a separate peripheral.
The computer contained 113 electronic tubes and 1450 diodes. The tubes were mounted on 34 etched-circuit pluggable cards, which also contained associated components. The 34 cards were of only 12 different types. Card extenders were available to permit dynamic testing of all machine functions. 680 of the 1450 diodes were mounted on one pluggable logic board.
|
https://en.wikipedia.org/wiki/Time-lapse%20photography | Time-lapse photography is a technique in which the frequency at which film frames are captured (the frame rate) is much lower than the frequency used to view the sequence. When played at normal speed, time appears to be moving faster and thus lapsing. For example, an image of a scene may be captured at 1 frame per second but then played back at 30 frames per second; the result is an apparent 30 times speed increase. Similarly, film can also be played at a much lower rate than at which it was captured, which slows down an otherwise fast action, as in slow motion or high-speed photography.
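The apparent speed-up is simply the ratio of playback frame rate to capture frame rate; a one-function sketch:

```python
# Apparent time-lapse speed-up: playback fps divided by capture fps.
def speedup(capture_fps: float, playback_fps: float = 30.0) -> float:
    return playback_fps / capture_fps

print(speedup(1.0))     # 1 frame/s  played at 30 frame/s -> 30x faster
print(speedup(1 / 60))  # 1 frame/min played at 30 frame/s -> 1800x faster
```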
Processes that would normally appear subtle and slow to the human eye, such as the motion of the sun and stars in the sky or the growth of a plant, become very pronounced. Time-lapse is the extreme version of the cinematography technique of undercranking. Stop motion animation is a comparable technique; a subject that does not actually move, such as a puppet, can repeatedly be moved manually by a small distance and photographed. Then, the photographs can be played back as a film at a speed that shows the subject appearing to move.
History
Some classic subjects of time-lapse photography include:
Landscapes and celestial motion
Plants and flowers growing
Fruit rotting
Evolution of a construction project
People in the city
The technique has been used to photograph crowds, traffic, and even television. The effect of photographing a subject that changes imperceptibly slowly, creates a smooth impression of motion. A subject that changes quickly is transformed into an onslaught of activity.
The inception of time-lapse photography occurred in 1872, when Leland Stanford hired Eadweard Muybridge to prove whether or not a racehorse's hooves are ever all simultaneously off the ground when running. The experiments progressed for six years until 1878, when Muybridge set up a series of cameras every few feet along a track, with tripwires the horses triggered as they ran. The photos taken from the |
https://en.wikipedia.org/wiki/Weak%20hypercharge | In the Standard Model of electroweak interactions of particle physics, the weak hypercharge is a quantum number relating the electric charge and the third component of weak isospin. It is frequently denoted and corresponds to the gauge symmetry U(1).
It is conserved (only terms that are overall weak-hypercharge neutral are allowed in the Lagrangian). However, one of the interactions is with the Higgs field. Since the Higgs field vacuum expectation value is nonzero, particles interact with this field all the time even in vacuum. This changes their weak hypercharge (and weak isospin T3). Only a specific combination of them, Q = T3 + YW/2 (electric charge), is conserved.
Mathematically, weak hypercharge appears similar to the Gell-Mann–Nishijima formula for the hypercharge of strong interactions (which is not conserved in weak interactions and is zero for leptons).
In the electroweak theory SU(2) transformations commute with U(1) transformations by definition and therefore U(1) charges for the elements of the SU(2) doublet (for example lefthanded up and down quarks) have to be equal. This is why U(1) cannot be identified with U(1)em and weak hypercharge has to be introduced.
Weak hypercharge was first introduced by Sheldon Glashow in 1961.
Definition
Weak hypercharge is the generator of the U(1) component of the electroweak gauge group, and its associated quantum field B mixes with the W3 electroweak quantum field to produce the observed Z gauge boson and the photon of quantum electrodynamics.
The weak hypercharge satisfies the relation YW = 2(Q − T3),
where Q is the electric charge (in elementary charge units) and T3 is the third component of weak isospin (the SU(2) component).
Rearranging, the weak hypercharge can be explicitly defined as YW = 2(Q − T3), so that right-handed fermions, which have T3 = 0, carry YW = 2Q,
where "left"- and "right"-handed here are left and right chirality, respectively (distinct from helicity).
The weak hypercharge for an anti-fermion is the opposite of that of the corresponding fermion because the electric charge and the third component of th |
https://en.wikipedia.org/wiki/Telephone%20number%20pooling | Telephone number pooling, thousands-block number pooling, or just number pooling, is a method of allocating telephony numbering space of the North American Numbering Plan in the United States. The method allocates telephone numbers in blocks of one thousand consecutive numbers of a given central office code to telephony service providers. In the United States it replaced the practice of allocating all 10,000 numbers of a central office prefix at a time. Under number pooling, the entire prefix is assigned to a rate center, to be shared among all providers delivering services in that rate center. Number pooling reduced the quantity of unused telephone numbers in markets which have been fragmented between multiple service providers, avoided central office prefix exhaustion in high growth areas, and extended the lifetime of the North American telephone numbering plan without structure changes of telephone numbers. Telephone number pooling was first tested for area code 847 in Illinois in June 1998, and became national policy in a series of Federal Communications Commission (FCC) orders from 2000 to 2003.
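The arithmetic behind pooling is simple; a sketch contrasting the two allocation granularities (the per-carrier demand figures are hypothetical, chosen only for illustration):

```python
# Contrast of allocation granularity: full-prefix (10,000 numbers)
# versus thousands-block pooling (1,000 numbers). The carrier demand
# figures are hypothetical, for illustration only.
PREFIX_SIZE = 10_000
BLOCK_SIZE = 1_000

carrier_demand = [1200, 800, 2500, 300]  # numbers needed per carrier

full_prefixes = len(carrier_demand) * PREFIX_SIZE  # one prefix per carrier
pooled_blocks = sum(-(-d // BLOCK_SIZE) for d in carrier_demand)  # ceiling division
pooled_numbers = pooled_blocks * BLOCK_SIZE

print(f"full-prefix allocation: {full_prefixes:,} numbers tied up")
print(f"pooled allocation:      {pooled_numbers:,} numbers tied up")
# 40,000 vs 7,000 for the same demand -- the unused remainder stays
# in the shared pool for other carriers in the rate center.
```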
History
The North American Numbering Plan is a closed numbering plan, meaning that it assigns telephone numbers to individual endpoints based on a fixed-length telephone number. The national telephone number consists of a three-digit area code, a three-digit central office code, and a four-digit line number. Thus, each central office provides a resource of 10,000 telephone lines with a unique number each. While often enough for small communities, most cities require multiple central offices to service the community.
In the North American Numbering Plan, mobile telephones do not use distinct area codes from wireline services, but many central offices provide only wireless services, or just wireline services.
After the breakup of the Bell System on January 1, 1984, most telephone service areas in the United States were dominated by one carrier which held a monopoly on |
https://en.wikipedia.org/wiki/Weak%20isospin | In particle physics, weak isospin is a quantum number relating to the electrically charged part of the weak interaction: particles with half-integer weak isospin can interact with the W± bosons; particles with zero weak isospin do not.
Weak isospin is a construct parallel to the idea of isospin under the strong interaction. Weak isospin is usually given the symbol T or I, with the third component written as T3 or I3.
It can be understood as the eigenvalue of a charge operator.
T3 is more important than T; typically "weak isospin" is used as a short form of the proper term "third component of weak isospin".
The weak isospin conservation law states that weak interactions conserve T3. It is also conserved by the electromagnetic and strong interactions. However, interaction with the Higgs field does not conserve T3, as directly seen by propagation of fermions, which mix chiralities by dint of their mass terms resulting from their Higgs couplings. Since the Higgs field vacuum expectation value is nonzero, particles interact with this field all the time even in vacuum. Interaction with the Higgs field changes particles' weak isospin (and weak hypercharge). Only a specific combination of them, Q = T3 + YW/2 (electric charge), is conserved.
Relation with chirality
Fermions with negative chirality (also called "left-handed" fermions) have T = 1/2 and can be grouped into doublets with T3 = ±1/2 that behave the same way under the weak interaction. By convention, electrically charged fermions are assigned T3 with the same sign as their electric charge.
For example, up-type quarks (u, c, t) have T3 = +1/2 and always transform into down-type quarks (d, s, b), which have T3 = −1/2, and vice versa. On the other hand, a quark never decays weakly into a quark of the same T3. Something similar happens with left-handed leptons, which exist as doublets containing a charged lepton (e, μ, τ) with T3 = −1/2 and a neutrino (νe, νμ, ντ) with T3 = +1/2. In all cases, the corresponding anti-fermion has reversed chirality ("right-handed" antifermion) and reversed sign of T3.
Fe |
https://en.wikipedia.org/wiki/Area%20code%20split | In telecommunications, an area code split is the practice of introducing a new telephone area code by geographically dividing an existing numbering plan area (NPA), and assigning area codes to the resulting divisions, but retaining the existing area code only for one of the divisions. The purpose of this practice is to provide more central office prefixes, and therefore more telephone numbers, in an area with high demand for telecommunication services, and prevent a shortage of telephone numbers.
An increasing demand for telephone numbers has existed since the development of automatic telephony in the early 20th century, but was spurred especially since the 1990s, with the proliferation of fax machines, pager systems, mobile telephones, computer modems and, eventually, smart phones.
The implementation of an area code split typically involves the establishment of a Class-4 toll switching center for each division of the existing numbering plan area that receive a new area code. The local seven-digit telephone numbers in any of the areas are typically not changed. The existing central office prefixes are maintained and only the central offices of the new divisions are reassigned to a new area code. The impact of a split on the general public involves the printing of new stationery, advertisements, and signage for many customers, and the dissemination of the new area code to family, friends, and customers. Computer systems, and telephone equipment may require updates in address books, speed dialing directories, and other automated equipment.
Since area code splits have substantial impact in the involved communities, and involve substantial cost in telephone plant and exchange equipment, they are planned carefully well ahead of implementation with the intent that an area is not again affected by a subsequent realignment for at least a decade.
The new boundaries of the numbering plan areas are drawn in a manner that minimizes splitting communities and should coincide |
https://en.wikipedia.org/wiki/Novum%20Organum | The Novum Organum, fully Novum Organum, sive Indicia Vera de Interpretatione Naturae ("New organon, or true directions concerning the interpretation of nature") or Instaurationis Magnae, Pars II ("Part II of The Great Instauration"), is a philosophical work by Francis Bacon, written in Latin and published in 1620. The title is a reference to Aristotle's work Organon, which was his treatise on logic and syllogism. In Novum Organum, Bacon details a new system of logic he believes to be superior to the old ways of syllogism. This is now known as the Baconian method.
For Bacon, finding the essence of a thing was a simple process of reduction, and the use of inductive reasoning. In finding the cause of a 'phenomenal nature' such as heat, one must list all of the situations where heat is found. Then another list should be drawn up, listing situations that are similar to those of the first list except for the lack of heat. A third table lists situations where heat can vary. The 'form nature', or cause, of heat must be that which is common to all instances in the first table, is lacking from all instances of the second table and varies by degree in instances of the third table.
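Bacon's three tables amount to a simple elimination procedure; a playful sketch in code, where the instances and their properties are invented purely to illustrate the method:

```python
# A toy rendering of Bacon's method of tables: the candidate "form"
# of heat is whatever property is present in every instance of heat
# and absent from every similar instance lacking heat.
# The instances and properties here are invented for illustration.
presence = [  # situations where heat is found
    {"motion", "light", "flame"},
    {"motion", "friction"},
    {"motion", "sunlight", "light"},
]
absence = [   # similar situations lacking heat
    {"light", "moonlight"},
    {"stillness", "light"},
]

candidates = set.intersection(*presence)  # common to all positive instances
for props in absence:
    candidates -= props                   # eliminate anything found without heat

print(candidates)  # {'motion'} -- echoing Bacon's own conclusion about heat
```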
The title page of Novum Organum depicts a galleon passing between the mythical Pillars of Hercules that stand either side of the Strait of Gibraltar, marking the exit from the well-charted waters of the Mediterranean into the Atlantic Ocean. The Pillars, as the boundary of the Mediterranean, have been smashed through by Iberian sailors, opening a new world for exploration. Bacon hopes that empirical investigation will, similarly, smash the old scientific ideas and lead to greater understanding of the world and heavens. This title page was liberally copied from Andrés García de Céspedes's Regimiento de Navegación, published in 1606.
The Latin tag across the bottom – Multi pertransibunt & augebitur scientia – is taken from the Old Testament (Daniel 12:4). It means: "Many will travel and knowledge wi |
https://en.wikipedia.org/wiki/IC%20power-supply%20pin | IC power-supply pins denote the voltage- and current-supply terminals in electric and electronics engineering, and in integrated circuit design. Integrated circuits (ICs) have at least two pins that connect to the power rails of the circuit in which they are installed. These are known as the power-supply pins. However, the labeling of the pins varies by IC family and manufacturer. The double-subscript notation usually corresponds to the first letter of the terminal name in the transistor family underlying a given IC (e.g. a VDD supply corresponds to the drain terminal in FETs).
The simplest labels are V+ and V−, but internal design and historical traditions have led to a variety of other labels being used. V+ and V− may also refer to the non-inverting (+) and inverting (−) voltage inputs of ICs like op amps.
For power supplies, sometimes one of the supply rails is referred to as ground (abbreviated "GND"), and positive and negative voltages are expressed relative to it. In digital electronics, negative voltages are seldom present, and the ground nearly always is the lowest voltage level. In analog electronics (e.g. an audio power amplifier) the ground can be a voltage level between the most positive and most negative voltage levels.
While double subscript notation, where subscripted letters denote the difference between two points, uses similar-looking placeholders with subscripts, the double-letter supply voltage subscript notation is not directly linked (though it may have been an influencing factor).
BJTs
ICs using bipolar junction transistors have VCC (+, positive) and VEE (−, negative) power-supply pins, though VCC is often used for CMOS devices as well.
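As a quick reference, the conventional pairings can be tabulated; a compact sketch summarizing the BJT convention above together with the common FET/CMOS equivalents (VDD/VSS, following drain and source):

```python
# Conventional supply-pin names by underlying transistor family.
# VCC/VEE follow the BJT collector/emitter; VDD/VSS follow the FET
# drain/source. Many CMOS parts borrow VCC in practice, as noted above.
SUPPLY_PINS = {
    "BJT": {"positive": "VCC", "negative": "VEE"},
    "FET": {"positive": "VDD", "negative": "VSS"},
}

for family, pins in SUPPLY_PINS.items():
    print(f"{family}: +rail {pins['positive']}, -rail {pins['negative']}")
```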
In circuit diagrams and circuit analysis, there are long-standing conventions regarding the naming of voltages, currents, and some components. In the analysis of a bipolar junction transistor, for example, in a common-emitter configuration, the DC voltage at the collector, emitter, and base (with respect to ground) may be written |
https://en.wikipedia.org/wiki/Modeling%20and%20Simulation%20Coordination%20Office | The Modeling and Simulation Coordination Office (M&SCO) is an organization within the United States Department of Defense that provides modeling and simulation technology. The M&SCO was named the Defense Modeling and Simulation Office (DMSO) when it was created by Congress in 2006. It was renamed the Modeling and Simulation Coordination Office in late 2007.
The M&SCO leads DoD modeling and simulation standardization efforts. It is the DoD point of contact for coordinating modeling and simulation activities with NATO and Partnership for Peace (PfP) organizations, and provides support to the DoD modeling and simulation management system. |
https://en.wikipedia.org/wiki/Van%20Arkel%E2%80%93de%20Boer%20process | The van Arkel–de Boer process, also known as the iodide process or crystal-bar process, was the first industrial process for the commercial production of pure ductile titanium, zirconium and some other metals. It was developed by Anton Eduard van Arkel and Jan Hendrik de Boer in 1925. Now it is used in the production of small quantities of ultrapure titanium and zirconium. It primarily involves the formation of the metal iodides and their subsequent decomposition to yield pure metal.
This process was superseded commercially by the Kroll process.
Process
In this process, impure titanium, zirconium, hafnium, vanadium, thorium or protactinium is heated in an evacuated vessel with a halogen at 50–250 °C. The patent specifically involved the intermediacy of TiI4 and ZrI4, which were volatilized (leaving impurities as solid). At atmospheric pressure TiI4 melts at 150 °C and boils at 377 °C, while ZrI4 melts at 499 °C and boils at 600 °C. The boiling points are lower at reduced pressure. The gaseous metal tetraiodide is decomposed on a white-hot tungsten filament (1400 °C). As more metal is deposited, the filament conducts better and thus a greater electric current is required to maintain the temperature of the filament. The process can be performed in the span of several hours or several weeks, depending on the particular setup.
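For titanium the transport cycle can be written as two steps — a standard textbook formulation of the iodide cycle, using the temperatures given above:

```latex
% Iodide transport cycle for titanium: formation of the volatile iodide
% at low temperature, then decomposition on the hot filament to deposit
% pure metal and regenerate the iodine.
\mathrm{Ti_{(impure)}} + 2\,\mathrm{I_2}
  \;\xrightarrow{\;50\text{--}250\,^\circ\mathrm{C}\;}\; \mathrm{TiI_4}
\qquad
\mathrm{TiI_4}
  \;\xrightarrow{\;\approx 1400\,^\circ\mathrm{C}\;}\; \mathrm{Ti_{(pure)}} + 2\,\mathrm{I_2}
```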
Generally, the crystal bar process can be performed using any number of metals using whichever halogen or combination of halogens is most appropriate for that sort of transport mechanism, based on the reactivities involved. The only metals it has been used to purify on an industrial scale are titanium, zirconium and hafnium, and in fact is still in use today on a much smaller scale for special purity needs. |
https://en.wikipedia.org/wiki/List%20of%20transmission%20sites | In the following there are lists of sites of notable radio transmitters. During the early history of radio, many countries had only a few high-power radio stations, operated either by the government or large corporations, which broadcast to the population or to other countries. Because of the large number of transmission sites, this list is not complete. Outside Europe, the emphasis is on transmitters and repeater stations of international services.
Legend
Europe
Austria
Belarus
Molodecno (VLF)
Belgium
Schoten (FM, TV)
Wavre (MW, SW dismantled; FM, DAB, TV)
Overijse (MW closed)
Bosnia and Herzegovina
Mostre transmitter (MW)
Bulgaria
Kaliakra (MW, dismantled)
Vakarel (LW, MW, dismantled)
Croatia
Grbre transmitter (MW)
Deanovec transmitter (MW, SW)
Czech Republic
Liblice (dismantled)
Liblice (MW, on air with low power again)
Topolná (LW dismantled)
Mělník-Chloumek (MW closed)
Dobrochov (MW closed)
Jested (FM)
Denmark
Kalundborg (LW, MW (dismantled))
Finland
Lahti (LW, SW, shut down)
Pori transmitter (LW, SW shut down)
Pasilan linkkitorni (DVB-T)
Anjalankoski Radio and TV-Mast (FM, DVB-T)
Eurajoki TV Mast (DVB-T)
FM- and TV-mast Helsinki-Espoo (FM, DVB-T)
Haapavesi TV Mast (DVB-T)
Hollola TV Mast (DVB-T)
Kuopio Radio and TV Mast (FM, DVB-T)
Lapua Radio and TV-Mast (FM, DVB-T)
Oulu TV Mast (DVB-T)
Pihtipudas TV Mast (DVB-T)
Smedsböle Radio Mast (FM)
Teisko TV-mast (DVB-T)
Tervola Radio and TV-Mast (FM, DVB-T)
Turku radio and television station (FM, DVB-T)
Jyväskylä TV-mast (DVB-T)
France
Allouis (LW, SW)
Le Blanc (VLF)
Issoudun (SW)
Paris-Eiffel Tower (FM, TV)
Lyon-Metallic tower of Fourvière (FM, TV)
Sélestat (MW shut down)
HWU transmitter (LW, SW)
Col de la Madone transmitter (LW, SW)
Lafayette transmitter (VLF)
Limeux transmitting station (FM, TV)
TV Mast Niort-Maisonnay (TV)
Transmitter Le Mans-Mayet (FM, TV)
Realtor transmitter (MW, partially dismantled)
Sud Radio Transmitter Pic Blanc (MW, partially dismantled)
Pic de Nore transmitter (FM, TV)
|
https://en.wikipedia.org/wiki/Herbert%20Callen | Herbert Bernard Callen (July 1, 1919 – May 22, 1993) was an American physicist specializing in thermodynamics and statistical mechanics. He is considered one of the founders of the modern theory of irreversible thermodynamics, and is the author of the classic textbook Thermodynamics and an Introduction to Thermostatistics, published in two editions. During World War II, his services were invoked in the theoretical division of the Manhattan Project.
Life and work
A native of Philadelphia, Herbert Callen received his Bachelor of Science degree from Temple University. His graduate studies were interrupted by the Manhattan Project. He also worked on a U.S. Navy project concerning guided missiles (Project Bumblebee) at Princeton University in 1945. Callen subsequently completed his PhD in physics at the Massachusetts Institute of Technology (MIT) in 1947. He was supervised by the physicist László Tisza. His doctoral dissertation concerns the Kelvin thermoelectric and thermomagnetic relations, and Onsager's reciprocal relations; it was titled On the Theory of Irreversible Processes. Upon receiving his degree, Callen spent a year at the MIT Laboratory for Insulation Research and developed his theory of electrical breakdown for insulators.
In 1948, Callen joined the faculty of the department of physics at the University of Pennsylvania and became a professor in 1956. Specialists consider his most lasting contribution to physics to be the paper co-written with Theodore A. Welton presenting a proof of the fluctuation-dissipation theorem, an extremely general result describing how a system's response to perturbations relates to its behavior at equilibrium. This crucial result became the basis for the statistical theory of irreversible processes and explains how fluctuations dissipate energy into heat in general and the phenomenon of Nyquist noise in particular. Callen then pioneered the thermodynamic Green's functions for magnetism. With his students, he studied many-body pr |
https://en.wikipedia.org/wiki/Myofascial%20trigger%20point | Myofascial trigger points (MTrPs), also known as trigger points, are described as hyperirritable spots in the skeletal muscle. They are associated with palpable nodules in taut bands of muscle fibers. They are a topic of ongoing controversy, as there is limited data to inform a scientific understanding of the phenomenon. Accordingly, a formal acceptance of myofascial "knots" as an identifiable source of pain is more common among bodyworkers, physical therapists, chiropractors, and osteopathic practitioners. Nonetheless, the concept of trigger points provides a framework which may be used to help address certain musculoskeletal pain.
The trigger point model states that unexplained pain frequently radiates from these points of local tenderness to broader areas, sometimes distant from the trigger point itself. Practitioners claim to have identified reliable referred pain patterns which associate pain in one location with trigger points elsewhere. There is variation in the methodology for diagnosis of trigger points and a dearth of theory to explain how they arise and why they produce specific patterns of referred pain.
Compression of a trigger point may elicit local tenderness, referred pain, or a local twitch response. The local twitch response is not the same as a muscle spasm: a muscle spasm is a contraction of the entire muscle, whereas the local twitch response is a brief, small twitch of the taut band within the muscle, not a full contraction.
Among physicians, various specialists might use trigger point therapy. These include physiatrists (physicians specializing in physical medicine and rehabilitation), family medicine, and orthopedics. Osteopathic as well as chiropractic schools also include trigger points in their training. Other health professionals, such as athletic trainers, occupational therapists, physiotherapists, acupuncturists, massage therapists and structural integrators are also aware of these ideas and many of them make use |
https://en.wikipedia.org/wiki/MSAV | Microsoft Anti-Virus (MSAV) is an antivirus program introduced by Microsoft for its MS-DOS operating system. The program first appeared in MS-DOS version 6.0 (1993)
and last appeared in MS-DOS 6.22. The first version of the antivirus program was basic, had no inbuilt update facility (updates had to be obtained from a BBS and manually installed by the user) and could scan for 1,234 different viruses. Microsoft Anti-Virus for Windows (MWAV), included as part of the package, was a front end that allowed MSAV to run properly on Windows 3.1x.
In 2009, Microsoft launched an in-house antivirus solution named Microsoft Security Essentials, which later was phased out in favor of Microsoft Defender.
History
Microsoft Anti-Virus was supplied by Central Point Software Inc. (later acquired by Symantec in 1994 and integrated into Symantec's Norton AntiVirus product) and was a stripped-down version of the Central Point Anti-Virus (CPAV) product which Central Point Software Inc., had licensed from Carmel Software Engineering in Haifa, Israel. Carmel Software sold the product as Turbo Anti-Virus both domestically and abroad.
Microsoft Anti-Virus for Windows was also provided by Central Point Software.
Features
MSAV featured the "Detect and Clean" strategy and the detection of boot sector and Trojan horse-type viruses (which were typical virus problems at that time).
The program also had an anti-stealth and checksum feature that could be used to detect any changes in normal files. This technology was intended to make up for the unavailability of regular update packages. The final update of MSAV was released in June 1996 by Symantec. The update added the ability to detect polymorphic viruses, and the virus definitions were updated to scan for a total of 2,371 viruses.
VSafe TSR
VSafe is a terminate and stay resident component of MSAV that provided real-time virus protection.
By default, VSafe does the following:
Checks executable files for viruses (on execution).
Checks a |
https://en.wikipedia.org/wiki/CAP%20computer | The Cambridge CAP computer was the first successful experimental computer that demonstrated the use of security capabilities, both in hardware and software. It was developed at the University of Cambridge Computer Laboratory in the 1970s. Unlike most research machines of the time, it was also a useful service machine.
The sign currently on the front of the machine reads:
The CAP project on memory protection ran from 1970 to 1977. It was based on capabilities implemented in hardware, under M. Wilkes and R. Needham with D. Wheeler responsible for the implementation. R. Needham was awarded a BCS Technical Award in 1978 for the CAP (Capability Protection) Project.
Design
The CAP was designed such that any access to a memory segment or hardware required that the current process held the necessary capabilities.
The 32-bit processor featured microprogrammed control, two 256-entry caches, a 32-entry write buffer, and the capability unit itself, which had 64 registers for holding evaluated capabilities. Floating-point operations were available using a single 72-bit accumulator. The instruction set featured over 200 instructions, ranging from basic ALU and memory operations to capability- and process-control instructions.
Instead of the programmer-visible registers used in Chicago and Plessey System 250 designs, the CAP would load internal registers silently when a program defined a capability. The memory was divided into segments of up to 64K 32-bit words. Each segment could contain data or capabilities, but not both. Hardware was accessed via an associated minicomputer.
All procedures constituting the operating system were written in ALGOL 68C, although a number of other closely associated protected procedures, such as a paginator, were written in BCPL.
Operation
The CAP first became operational in 1976. A fully functional computer, it featured a complete operating system, file system, compilers, and so on. The OS used a process tree structure, with an initial process |
https://en.wikipedia.org/wiki/Reactions%20to%20On%20the%20Origin%20of%20Species | The immediate reactions, from November 1859 to April 1861, to On the Origin of Species, the book in which Charles Darwin described evolution by natural selection, included international debate, though the heat of controversy was less than that over earlier works such as Vestiges of Creation. Darwin monitored the debate closely, cheering on Thomas Henry Huxley's battles with Richard Owen to remove clerical domination of the scientific establishment. While Darwin's illness kept him away from the public debates, he read eagerly about them and mustered support through correspondence.
Religious views were mixed, with the Church of England's scientific establishment reacting against the book, while liberal Anglicans strongly supported Darwin's natural selection as an instrument of God's design. Religious controversy was soon diverted by the publication of Essays and Reviews and debate over the higher criticism.
The most famous confrontation took place at the public 1860 Oxford evolution debate during a meeting of the British Association for the Advancement of Science, when the Bishop of Oxford Samuel Wilberforce argued against Darwin's explanation. In the ensuing debate Joseph Hooker argued strongly in favor of Darwinian evolution. Thomas Huxley's support of evolution was so intense that the media and public nicknamed him "Darwin's bulldog". Huxley became the fiercest defender of the evolutionary theory on the Victorian stage. Both sides came away feeling victorious, but Huxley went on to depict the debate as pivotal in a struggle between religion and science and used Darwinism to campaign against the authority of the clergy in education, as well as daringly advocating the "Ape Origin of Man".
Background
Darwin's ideas developed rapidly after returning from the Voyage of the Beagle in 1836. By December 1838, he had developed the basic principles of his theory. At that time, ideas about the transmutation of species were associated with radical political ideas of the A |
https://en.wikipedia.org/wiki/Fuzzy%20measure%20theory | In mathematics, fuzzy measure theory considers generalized measures in which the additive property is replaced by the weaker property of monotonicity. The central concept of fuzzy measure theory is the fuzzy measure (also called a capacity), which was introduced by Choquet in 1953 and independently defined by Sugeno in 1974 in the context of fuzzy integrals. There exist a number of different classes of fuzzy measures, including plausibility/belief measures, possibility/necessity measures, and probability measures, which are a subset of classical measures.
Definitions
Let X be a universe of discourse, 𝒞 be a class of subsets of X, and E, F ∈ 𝒞. A function g : 𝒞 → [0, +∞) where
(1) g(∅) = 0 whenever ∅ ∈ 𝒞, and (2) E ⊆ F implies g(E) ≤ g(F), is called a fuzzy measure.
A fuzzy measure is called normalized or regular if g(X) = 1.
Properties of fuzzy measures
A fuzzy measure is:
additive if for any E, F ∈ 𝒞 such that E ∩ F = ∅, we have g(E ∪ F) = g(E) + g(F);
supermodular if for any E, F ∈ 𝒞, we have g(E ∪ F) + g(E ∩ F) ≥ g(E) + g(F);
submodular if for any E, F ∈ 𝒞, we have g(E ∪ F) + g(E ∩ F) ≤ g(E) + g(F);
superadditive if for any E, F ∈ 𝒞 such that E ∩ F = ∅, we have g(E ∪ F) ≥ g(E) + g(F);
subadditive if for any E, F ∈ 𝒞 such that E ∩ F = ∅, we have g(E ∪ F) ≤ g(E) + g(F);
symmetric if for any E, F ∈ 𝒞, |E| = |F| implies g(E) = g(F);
Boolean if for any E ∈ 𝒞, we have g(E) = 0 or g(E) = 1.
Understanding the properties of fuzzy measures is useful in application. When a fuzzy measure is used to define a function such as the Sugeno integral or Choquet integral, these properties will be crucial in understanding the function's behavior. For instance, the Choquet integral with respect to an additive fuzzy measure reduces to the Lebesgue integral. In discrete cases, a symmetric fuzzy measure will result in the ordered weighted averaging (OWA) operator. Submodular fuzzy measures result in convex functions, while supermodular fuzzy measures result in concave functions when used to define a Choquet integral.
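As an illustration of how a fuzzy measure feeds into the Choquet integral, here is a minimal discrete implementation — our own sketch; the capacity used in the example is an arbitrary normalized one on a two-element set:

```python
# Minimal discrete Choquet integral of f with respect to a capacity g.
# g maps frozensets of elements to [0, 1], with g(empty set) = 0 and
# g monotone under set inclusion.
def choquet(f: dict, g: dict) -> float:
    xs = sorted(f, key=f.get)          # order elements by f ascending
    total, prev = 0.0, 0.0
    for i, x in enumerate(xs):
        level_set = frozenset(xs[i:])  # elements with f at least f(x)
        total += (f[x] - prev) * g[level_set]
        prev = f[x]
    return total

# Example: an arbitrary normalized capacity on {a, b}.
g = {frozenset(): 0.0, frozenset("a"): 0.3,
     frozenset("b"): 0.5, frozenset("ab"): 1.0}
print(choquet({"a": 0.2, "b": 0.7}, g))  # 0.2*1.0 + 0.5*0.5 = 0.45
```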
Möbius representation
Let g be a fuzzy measure. The Möbius representation of g is given by the set function M, where for every E ⊆ X,
M(E) = ∑_{F ⊆ E} (−1)^{|E ∖ F|} g(F).
The equivalent axioms in Möbius representation are:
M(∅) = 0;
∑_{F ⊆ E : x ∈ F} M(F) ≥ 0, for all E ⊆ X and all x ∈ E.
A fuzzy measure in Möbius representation M is called normalized if ∑_{E ⊆ X} M(E) = 1.
|
https://en.wikipedia.org/wiki/Intel%208253 | The Intel 8253 and 8254 are programmable interval timers (PITs), which perform timing and counting functions using three 16-bit counters.
The 825x family was primarily designed for the Intel 8080/8085-processors, but were later used in x86 compatible systems. The 825x chips, or an equivalent circuit embedded in a larger chip, are found in all IBM PC compatibles and Soviet computers like the Vector-06C.
In PC compatibles, Timer Channel 0 is assigned to IRQ-0 (the highest priority hardware interrupt). Timer Channel 1 is assigned to DRAM refresh (at least in early models before the 80386). Timer Channel 2 is assigned to the PC speaker.
The Intel 82c54 (c for CMOS logic) variant handles up to 10 MHz clock signals.
History
The 8253 is described in the 1980 Intel "Component Data Catalog" publication. The 8254, described as a superset of the 8253 with higher clock speed ratings, has a "preliminary" data sheet in the 1982 Intel "Component Data Catalog".
The 8254 is implemented in HMOS and has a "Read Back" command not available on the 8253, and permits reading and writing of the same counter to be interleaved.
Modern PC compatibles, whether using SoC CPUs or a southbridge, typically implement full 8254 compatibility for backward compatibility and interoperability, the Read Back command being a vital I/O feature for interoperability with multicore CPUs and GPUs.
Variants
There is a military version, the Intel M8253, with a temperature range of −55 °C to +125 °C and a ±10% tolerance on the 5 V supply. The 82C53 CMOS version was outsourced to Oki Electric Industry Co., Ltd. The Intel 82C54 was also available in a 28-pin PLCC package, sampling in the first quarter of 1986.
Features
The timer has three counters, numbered 0 to 2. Each channel can be programmed to operate in one of six modes. Once programmed, the channels operate independently.
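On PC compatibles the channels are driven from a 1,193,182 Hz input clock, and a channel is programmed by writing a control word followed by a 16-bit divisor. A sketch of the usual square-wave setup for the PC-speaker channel; the port numbers and control-word value are the conventional PC-AT ones, and the outb calls are shown only as comments since raw port I/O requires kernel privileges:

```python
# Sketch: compute the 8253/8254 divisor for a desired output frequency.
# PC compatibles feed the PIT a 1,193,182 Hz clock; channel 2 drives the
# speaker. Control word 0xB6 = channel 2, lobyte/hibyte, mode 3 (square wave).
PIT_CLOCK_HZ = 1_193_182

def pit_divisor(freq_hz: float) -> int:
    return round(PIT_CLOCK_HZ / freq_hz) & 0xFFFF

divisor = pit_divisor(440.0)            # 440 Hz tone -> divisor 2712
print(hex(divisor))
# The conventional programming sequence (requires port I/O privileges):
#   outb(0x43, 0xB6)                    # control word: channel 2, mode 3
#   outb(0x42, divisor & 0xFF)          # low byte of divisor
#   outb(0x42, (divisor >> 8) & 0xFF)   # high byte of divisor
```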
Each counter has two input pins – "CLK" (clock input) and "GATE" – and one pin, "OUT", for data output. The |
https://en.wikipedia.org/wiki/Financial%20econometrics | Financial econometrics is the application of statistical methods to financial market data. Financial econometrics is a branch of financial economics, in the field of economics. Areas of study include capital markets, financial institutions, corporate finance and corporate governance. Topics often revolve around asset valuation of individual stocks, bonds, derivatives, currencies and other financial instruments.
It differs from other forms of econometrics because the emphasis is usually on analyzing the prices of financial assets traded at competitive, liquid markets.
People working in the finance industry or researching the finance sector often use econometric techniques in a range of activities – for example, in support of portfolio management and in the valuation of securities. Financial econometrics is essential for risk management when it is important to know how often 'bad' investment outcomes are expected to occur over future days, weeks, months and years.
Topics
Topics that financial econometricians are typically familiar with include:
analysis of high-frequency price observations
arbitrage pricing theory
asset price dynamics
optimal asset allocation
cointegration
event study
nonlinear financial models such as autoregressive conditional heteroskedasticity
realized variance
fund performance analysis such as returns-based style analysis
tests of the random walk hypothesis
the capital asset pricing model
the term structure of interest rates (the yield curve)
value at risk
volatility estimation techniques such as exponential smoothing models and RiskMetrics (see the sketch after this list)
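A minimal sketch, with a hypothetical function name, of RiskMetrics-style exponentially weighted volatility estimation; the decay factor 0.94 is the value RiskMetrics popularized for daily returns.

def ewma_volatility(returns, lam=0.94):
    """Recursive EWMA variance: sigma2_t = lam*sigma2_{t-1} + (1-lam)*r_{t-1}^2.
    Returns the volatility (standard deviation) forecast for each period."""
    sigma2 = returns[0] ** 2  # seed with the first squared return
    history = []
    for r in returns:
        history.append(sigma2 ** 0.5)  # forecast made before observing r
        sigma2 = lam * sigma2 + (1 - lam) * r ** 2
    return history

daily_returns = [0.01, -0.02, 0.015, -0.005]
print(ewma_volatility(daily_returns))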
Research community
The Society for Financial Econometrics (SoFiE) is a global network of academics and practitioners dedicated to sharing research and ideas in the fast-growing field of financial econometrics. It is an independent non-profit membership organization, committed to promoting and expanding research and education by organizing and sponsoring conferences, programs |
https://en.wikipedia.org/wiki/Disposable%20email%20address | Disposable email addressing, also known as DEA, dark mail, or "masked" email, refers to an approach in which a unique email address is used for every contact or entity, or for a limited number of times or uses. The benefit is that if anyone compromises the address or utilizes it in connection with email abuse, the address owner can easily cancel (or "dispose" of) it without affecting any of their other contacts.
Uses
Disposable email addressing sets up a different, unique email address for every sender/recipient combination. It operates most usefully in scenarios where someone may sell or release an email address to spam lists or to other unscrupulous entities. The most common situations of this type involve online registration for sites offering discussion groups, bulletin boards, chat rooms, online shopping, and file hosting services. At a time when email spam has become an everyday nuisance, and when identity theft threatens, DEAs can serve as a convenient tool for protecting Internet users.
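One hedged illustration of the scheme: a sketch that derives a per-contact address by tagging a base mailbox. "Plus addressing" of the form user+tag@example.com is widely but not universally supported by mail providers, and the function name here is hypothetical.

import hashlib

def disposable_address(base_user, domain, contact):
    """Derive a deterministic per-contact address such as
    alice+<8 hex chars>@example.com, so the tag reveals who leaked it."""
    tag = hashlib.sha256(contact.encode()).hexdigest()[:8]
    return f"{base_user}+{tag}@{domain}"

print(disposable_address("alice", "example.com", "shop.example.org"))
# Cancel just this tagged address if it starts drawing spam.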
Disposable email addresses can be cancelled if someone starts to use the address in a manner that was not intended by the creator. Examples are the accidental release of an email to a spam list, or if the address was procured by spammers. Alternatively, the user may simply decide not to receive further correspondence from the sender. Whatever the cause, DEA allows the address owner to take unilateral action by simply cancelling the address in question. Later, the owner can determine whether to update the recipient or not.
Disposable email addresses typically forward to one or more real email mailboxes in which the owner receives and reads messages. The contact with whom a DEA is shared never learns the real email address of the user. If a database manages the DEA, it can also quickly identify the expected sender of each message by retrieving the associated contact name of each unique DEA. Used properly, DEA can also help identify which recipients handle email addr |
https://en.wikipedia.org/wiki/Intel%20QuickPath%20Interconnect | The Intel QuickPath Interconnect (QPI) is a point-to-point processor interconnect developed by Intel which replaced the front-side bus (FSB) in Xeon, Itanium, and certain desktop platforms starting in 2008. It increased the scalability and available bandwidth. Prior to the name's announcement, Intel referred to it as Common System Interface (CSI). Earlier incarnations were known as Yet Another Protocol (YAP) and YAP+.
QPI 1.1 is a significantly revamped version introduced with Sandy Bridge-EP (Romley platform).
QPI was replaced by Intel Ultra Path Interconnect (UPI) in Skylake-SP Xeon processors based on LGA 3647 socket.
Background
Although sometimes called a "bus", QPI is a point-to-point interconnect. It was designed to compete with HyperTransport, which had been used by Advanced Micro Devices (AMD) since around 2003. Intel developed QPI at its Massachusetts Microprocessor Design Center (MMDC) by members of what had been the Alpha Development Group, which Intel had acquired from Compaq and HP and which in turn originally came from Digital Equipment Corporation (DEC).
Its development had been reported as early as 2004.
Intel first delivered it for desktop processors in November 2008 on the Intel Core i7-9xx and X58 chipset.
It was released in Xeon processors code-named Nehalem in March 2009 and Itanium processors in February 2010 (code named Tukwila).
It was supplanted by the Intel Ultra Path Interconnect starting in 2017 on the Xeon Skylake-SP platforms.
Implementation
The QPI is an element of a system architecture that Intel calls the QuickPath architecture that implements what Intel calls QuickPath technology. In its simplest form on a single-processor motherboard, a single QPI is used to connect the processor to the IO Hub (e.g., to connect an Intel Core i7 to an X58). In more complex instances of the architecture, separate QPI link pairs connect one or more processors and one or more IO hubs or routing hubs in a network on the motherboard, allowing all of the |
https://en.wikipedia.org/wiki/Covad | Covad Communications Company, also known as Covad Communications Group, was an American provider of broadband voice and data communications. By 2006, the company had 530,000 subscribers, and ranked as the 16th largest ISP in the United States. Covad was acquired by U.S. Venture Partners, who in 2010 announced a three-way merger of MegaPath, Covad, and Speakeasy, creating a single Managed Services Local Exchange Carrier (MSLEC), providing voice and internet services; the new company was named MegaPath.
In January 2015, telecommunications service provider Global Capacity acquired MegaPath's wholesale and direct access business, which included assets acquired from Covad.
The name Covad was derived from acronyms which have varied over time, including COmbined Voice And Data, Copper Over Voice And Data, and in its earliest form, COpper Value ADded.
History
Covad was the first service provider to offer a national DSL broadband service. In addition, it offered Voice over IP, T1, Web hosting, managed security, IP and dial-up, and bundled voice and data services to businesses, both directly through Covad's network and through Internet Service Providers, value-added resellers, telecommunications carriers and affinity groups. Covad broadband services were available in 44 states, including 235 Metropolitan Statistical Areas (MSAs), a service area reaching over 50 percent of all businesses. The company was founded in San Jose, CA.
By 2008, Covad added the Samsung Acemap DSLAM to their portfolio on top of their pre-existing Nokia D50 DSLAMs, to allow for ADSL2+ technology, which can reach DSL speeds up to 15 Mbit/s.
The AceMAP launch was primarily intended to provide combined POTS and ADSL2+ service to Earthlink end users, and the units were deployed mostly in residential areas rather than in business-centric markets.
Covad was acquired by a private equity firm, Platinum Equity, in April 2008. In 2010, it was sold to U.S. Venture Partners, which merged Covad, and
https://en.wikipedia.org/wiki/Javelin%20argument | The javelin argument, credited to Lucretius, is an ancient logical argument that the universe, or cosmological space, must be infinite. The javelin argument was used to support the Epicurean thesis about the universe. It was also constructed to counter the Aristotelian view that the universe is finite.
Overview
Lucretius introduced the concept of the javelin argument in his discourse of space and how it can be bound. He explained:
For whatever bounds it, that thing must itself be bounded likewise; and to this bounding thing there must be a bound again, and so on for ever and ever throughout all immensity. Suppose, however, for a moment, all existing space to be bounded, and that a man runs forward to the uttermost borders, and stands upon the last verge of things, and then hurls forward a winged javelin,— suppose you that the dart, when hurled by the vivid force, shall take its way to the point the darter aimed at, or that something will take its stand in the path of its flight, and arrest it? For one or other of these things must happen. There is a dilemma here that you never can escape from.
The javelin argument has two implications. If the hurled javelin flew onwards unhindered, it meant that the man running was not at the edge of the universe because there is something beyond the edge where the weapon flew. On the other hand, if it did not, the man was still not at the edge because there must be an obstruction beyond that stopped the javelin. However, the argument assumes incorrectly that a finite universe must necessarily have a "limit" or edge. The argument fails in the case that the universe might be shaped like the surface of a hypersphere or torus. (Consider a similar fallacious argument that the Earth's surface must be infinite in area: because otherwise one could go to the Earth's edge and throw a javelin, proving that the Earth's surface continued wherever the javelin hit the ground.) |
https://en.wikipedia.org/wiki/Institute%20of%20Mathematics%2C%20Physics%2C%20and%20Mechanics | The Institute of Mathematics, Physics, and Mechanics (IMFM) is the leading research institution in the areas of mathematics and theoretical computer science in Slovenia. It includes researchers from the University of Ljubljana, the University of Maribor, and the University of Primorska. It was founded in 1960.
The IMFM is composed of the following departments:
Department of Mathematics
Department of Physics
Department of Theoretical Computer Science
The director is Jernej Kozak. |
https://en.wikipedia.org/wiki/Baker%27s%20map | In dynamical systems theory, the baker's map is a chaotic map from the unit square into itself. It is named after a kneading operation that bakers apply to dough: the dough is cut in half, and the two halves are stacked on one another, and compressed.
The baker's map can be understood as the bilateral shift operator of a bi-infinite two-state lattice model. The baker's map is topologically conjugate to the horseshoe map. In physics, a chain of coupled baker's maps can be used to model deterministic diffusion.
As with many deterministic dynamical systems, the baker's map is studied by its action on the space of functions defined on the unit square. The baker's map defines an operator on the space of functions, known as the transfer operator of the map. The baker's map is an exactly solvable model of deterministic chaos, in that the eigenfunctions and eigenvalues of the transfer operator can be explicitly determined.
Formal definition
There are two alternative definitions of the baker's map which are in common use. One definition folds over or rotates one of the sliced halves before joining it (similar to the horseshoe map) and the other does not.
The folded baker's map acts on the unit square as $S_{\text{folded}}(x,y) = (2x,\, y/2)$ for $0 \leq x < \tfrac{1}{2}$, and $S_{\text{folded}}(x,y) = (2 - 2x,\, 1 - y/2)$ for $\tfrac{1}{2} \leq x < 1$.
When the upper section is not folded over, the map may be written as $S_{\text{unfolded}}(x,y) = (2x,\, y/2)$ for $0 \leq x < \tfrac{1}{2}$, and $S_{\text{unfolded}}(x,y) = (2x - 1,\, (y+1)/2)$ for $\tfrac{1}{2} \leq x < 1$.
The folded baker's map is a two-dimensional analog of the tent map
while the unfolded map is analogous to the Bernoulli map. Both maps are topologically conjugate. The Bernoulli map can be understood as the map that progressively lops digits off the dyadic expansion of x. Unlike the tent map, the baker's map is invertible.
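A minimal sketch, with a hypothetical function name, of iterating the unfolded map; it shows the leading binary digit of x being transferred to y at each step, mirroring the bilateral shift described above.

def baker_unfolded(x, y):
    """One iteration of the unfolded baker's map on the unit square."""
    if x < 0.5:
        return 2 * x, y / 2
    return 2 * x - 1, (y + 1) / 2

# Starting from x = 0.625 (binary 0.101), each step moves one binary
# digit of x into y, like shifting a two-sided binary sequence.
x, y = 0.625, 0.0
for _ in range(3):
    x, y = baker_unfolded(x, y)
    print(round(x, 6), round(y, 6))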
Properties
The baker's map preserves the two-dimensional Lebesgue measure.
The map is strong mixing and it is topologically mixing.
The transfer operator maps functions on the unit square to other functions on the unit square; since the baker's map is invertible and area-preserving, it is given by composition with the inverse, $[\mathcal{L}f](x,y) = f\left(S^{-1}(x,y)\right)$.
The transfer operator is unitary on the Hilbert space of square-integrable functions on the unit square. The spectrum is continuous, a |
https://en.wikipedia.org/wiki/Glucose%20meter | A glucose meter, also referred to as a "glucometer", is a medical device for determining the approximate concentration of glucose in the blood. It can also be a strip of glucose paper that is dipped into a substance and measured against a glucose chart. It is a key element of glucose testing, including home blood glucose monitoring (HBGM) performed by people with diabetes mellitus or hypoglycemia. A small drop of blood, obtained from slightly piercing a fingertip with a lancet, is placed on a disposable test strip that the meter reads and uses to calculate the blood glucose level. The meter then displays the level in units of mg/dL or mmol/L.
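A small illustrative sketch of converting between the two display units; the constant and function names are hypothetical, and the factor follows from glucose's molar mass of about 180.16 g/mol, so 1 mmol/L ≈ 18 mg/dL.

GLUCOSE_MG_PER_MMOL = 18.016  # 180.16 g/mol scaled to mg per dL per mmol/L

def mmol_to_mgdl(mmol_per_l):
    return mmol_per_l * GLUCOSE_MG_PER_MMOL

def mgdl_to_mmol(mg_per_dl):
    return mg_per_dl / GLUCOSE_MG_PER_MMOL

print(mmol_to_mgdl(5.5))    # ~99 mg/dL, a typical fasting value
print(mgdl_to_mmol(126.0))  # ~7.0 mmol/L, a common diabetes threshold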
Since approximately 1980, a primary goal of the management of type 1 diabetes and type 2 diabetes mellitus has been achieving closer-to-normal levels of glucose in the blood for as much of the time as possible, guided by HBGM several times a day. The benefits include a reduction in the occurrence rate and severity of long-term complications from hyperglycemia as well as a reduction in the short-term, potentially life-threatening complications of hypoglycemia.
History
Leland Clark presented his first paper about the oxygen electrode, later named the Clark electrode, on 15 April 1956, at a meeting of the American Society for Artificial Organs during the annual meetings of the Federated Societies for Experimental Biology.
In 1962, Clark and Ann Lyons from the Cincinnati Children's Hospital developed the first glucose enzyme electrode. This biosensor was based on a thin layer of glucose oxidase (GOx) on an oxygen electrode. Thus, the readout was the amount of oxygen consumed by GOx during the enzymatic reaction with the substrate glucose. This publication became one of the most often cited papers in life sciences. Due to this work he is considered the “father of biosensors,” especially with respect to the glucose sensing for diabetes patients.
Another early glucose meter was the Ames Reflectance Meter by Anton H. Clemens. It was used i |