id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
54,414,033 | https://en.wikipedia.org/wiki/Bilby%20tower | A Bilby tower is a type of survey tower made from steel and used by the United States Coast and Geodetic Survey (USC&GS) from 1927 to 1984. It is named after Jasper S. Bilby who designed it in 1926. In 1927, Herbert Hoover, then the Secretary of Commerce, commended Bilby's tower "for its cost and time efficiency" and cited the surveyor's service as "essential to the United States government".
History
Jasper S. Bilby (1864–1949) was a surveyor employed by the USC&GS from 1884 to 1937. He served as Chief Signalman of the USC&GS from 1930 to 1937. Born in Rush County, Indiana, he later moved to a homestead near Osgood.
Design of the Bilby tower
Bilby began designing the first version of the Bilby tower in 1926 and worked with the Aermotor Windmill Company to develop the first prototypes. The tower was designed to elevate surveyors high enough to look over obstructions and to account for the curvature of the Earth in their calculations. The tower was tested with positive results and Bilby received a commendation from Secretary of Commerce Herbert Hoover for the invention.
The Bilby tower was a massive success, saving money compared with previous options and increasing the efficiency of USC&GS surveyors. The towers could be constructed and deconstructed in less than a third of the time of previous towers, were lighter, and were easier to move. In 1928 alone, use of the Bilby tower cut costs by up to 35%, and over its first ten years of use it saved the government an estimated $3,000,000. Its use also spread outside the United States, reaching as far as Australia and Denmark. The towers were credited by The New York Times as "one of the greatest aids to geodetic work."
Prior to the introduction of Bilby towers, surveyors would build towers only to the minimum functional height, to minimize the resources expended in erecting them. Bilby towers, with their low cost and ease of assembly, made this less of a concern. The last Bilby tower erected by the National Geodetic Survey was placed near Hartford, Connecticut, in 1984.
Bilby's legacy
In 1930, Bilby was promoted to the newly created position of "Chief Signalman" of the USC&GS. In 1932 the federal retirement age was waived to allow him to continue serving. He retired in 1937. Over the course of his 53-year career, Bilby traveled over 500,000 miles across the United States. Bilby died on July 18, 1949, in Batesville, Indiana. The last remaining tower, at St. Charles Parish in Louisiana, was dismantled by the Surveyors Historical Society in 2012 and re-erected in 2013 at the Osgood Trails Park in Osgood, Indiana, the home town of Bilby.
Features
The Bilby tower was designed to be used for triangulation. The towers have two unconnected parts—an internal tower for mounting surveying instruments and an external tower for surveyors. This separation allowed for isolating the instruments from the vibrations induced by people, which increased the precision of measurements. It was portable, reusable and quick to assemble and dismantle. Its quick erection made it possible to conduct surveying rapidly—a team of five men could assemble a steel Bilby tower in only five hours.
See also
Triangulation (surveying)
Triangulation station
References
Towers
Surveying instruments
Surveying of the United States | Bilby tower | [
"Engineering"
] | 723 | [
"Structural engineering",
"Towers"
] |
54,414,184 | https://en.wikipedia.org/wiki/Muller%E2%80%93Schupp%20theorem | In mathematics, the Muller–Schupp theorem states that a finitely generated group G has context-free word problem if and only if G is virtually free. The theorem was proved by David Muller and Paul Schupp in 1983.
Word problem for groups
Let G be a finitely generated group with a finite marked generating set X, that is, a set X together with a map π: X → G such that the subset π(X) generates G. Let Σ = X ⊔ X⁻¹ be the group alphabet and let Σ* be the free monoid on Σ; that is, Σ* is the set of all words (including the empty word) over the alphabet Σ.
The map π extends to a surjective monoid homomorphism, still denoted by π, from Σ* onto G.
The word problem W(G, X) of G with respect to X is defined as
W(G, X) = { w ∈ Σ* : π(w) = e },
where e is the identity element of G.
That is, if G is given by a presentation with X finite, then W(G, X) consists of all words over the alphabet Σ that are equal to e in G.
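As a concrete illustration (a sketch, with all names my own): for the infinite cyclic group Z = ⟨a⟩ with group alphabet {a, A}, where A stands for the inverse of a, the word problem consists of exactly the words containing equally many occurrences of a and A. This is a context-free language, accepted by a one-counter pushdown automaton, simulated below with a plain integer counter; note that Z is free of rank 1, hence virtually free, consistent with the theorem.

```python
# Membership test for the word problem of Z = <a> over {a, A},
# where A denotes the inverse generator a^-1. A word maps to the
# identity of Z iff its exponent sum is zero, so an integer counter
# (standing in for a pushdown stack) decides membership.

def in_word_problem_of_Z(word: str) -> bool:
    """Accept iff the word represents the identity element of Z."""
    counter = 0
    for letter in word:
        if letter == "a":
            counter += 1
        elif letter == "A":
            counter -= 1
        else:
            raise ValueError(f"unexpected letter {letter!r}")
    return counter == 0

assert in_word_problem_of_Z("aAAa")      # a a^-1 a^-1 a = 1 in Z
assert not in_word_problem_of_Z("aaA")   # represents a, not 1
```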
Virtually free groups
A group G is said to be virtually free if there exists a subgroup of finite index H in G such that H is isomorphic to a free group. If G is a finitely generated virtually free group and H is a free subgroup of finite index in G then H itself is finitely generated, so that H is free of finite rank.
The trivial group is viewed as the free group of rank 0, and thus all finite groups are virtually free.
A basic result in Bass–Serre theory says that a finitely generated group G is virtually free if and only if G splits as the fundamental group of a finite graph of finite groups.
Precise statement of the Muller–Schupp theorem
The modern formulation of the Muller–Schupp theorem is as follows:
Let G be a finitely generated group with a finite marked generating set X. Then G is virtually free if and only if the word problem of G with respect to X is a context-free language.
Sketch of the proof
The exposition in this section follows the original 1983 proof of Muller and Schupp.
Suppose G is a finitely generated group with a finite generating set X such that the word problem of G with respect to X is a context-free language. One first observes that every finitely generated subgroup H of G is finitely presentable and that for every finite marked generating set Y of H the word problem of H with respect to Y is also context-free. In particular, for a finitely generated group the property of having context-free word problem does not depend on the choice of a finite marked generating set for the group, and such a group is finitely presentable.
Muller and Schupp then show, using the context-free grammar for the language of the word problem, that the Cayley graph of G with respect to X is K-triangulable for some integer K > 0. This means that every closed path in the Cayley graph can be, by adding several "diagonals", decomposed into triangles in such a way that the label of every triangle is a relation in G of length at most K over X.
They then use this triangulability property of the Cayley graph to show that either G is a finite group, or G has more than one end. Hence, by a theorem of Stallings, either G is finite or G splits nontrivially as an amalgamated free product A ∗C B or an HNN-extension A ∗C, where C is a finite group. Then the factors A and B are again finitely generated groups with context-free word problem, and one can apply the entire preceding argument to them.
Since G is finitely presentable and therefore accessible, the process of iterating this argument eventually terminates with finite groups, and produces a decomposition of G as the fundamental group of a finite graph-of-groups with finite vertex and edge groups. By a basic result of Bass–Serre theory it then follows that G is virtually free.
The converse direction of the Muller–Schupp theorem is more straightforward. If G is a finitely generated virtually free group, then G admits a finite index normal subgroup N such that N is a finite rank free group. Muller and Schupp use this fact to directly verify that G has context-free word problem.
Remarks and further developments
The Muller–Schupp theorem is a far-reaching generalization of a 1971 theorem of Anisimov which states that for a finitely generated group G with a finite marked generating set X the word problem is a regular language if and only if the group G is finite.
At the time the 1983 paper of Muller and Schupp was published, accessibility of finitely presented groups had not yet been established. Therefore, the original formulation of the Muller–Schupp theorem said that a finitely generated group is virtually free if and only if this group is accessible and has context-free word problem. A 1985 paper of Dunwoody proved that all finitely presented groups are accessible. Since finitely generated groups with context-free word problem are finitely presentable, Dunwoody's result together with the original Muller–Schupp theorem imply that a finitely generated group is virtually free if and only if it has context-free word problem (which is the modern formulation of the Muller–Schupp theorem).
A 1983 paper of Linnell established accessibility of finitely generated groups where the orders of finite subgroups are bounded. It was later observed (see ) that Linnell's result together with the original Muller–Schupp theorem were sufficient to derive the modern statement of the Muller–Schupp theorem, without having to use Dunwoody's result.
In the case of torsion-free groups, the situation is simplified, as the accessibility results are not needed and one instead uses the Grushko theorem about the rank of a free product. In this setting, as noted in the original Muller and Schupp paper, the Muller–Schupp theorem says that a finitely generated torsion-free group has context-free word problem if and only if it is free.
In a subsequent related paper, Muller and Schupp proved that a "finitely generated" graph Γ has finitely many end isomorphism types if and only if Γ is the transition graph of a push-down automaton. As a consequence, they showed that the monadic theory of a "context-free" graph (such as the Cayley graph of a virtually free group) is decidable, generalizing a classic result of Rabin for binary trees. Later, Kuske and Lohrey proved that virtually free groups are the only finitely generated groups whose Cayley graphs have decidable monadic theory.
Bridson and Gilman applied the Muller–Schupp theorem to show that a finitely generated group admits a "broom-like" combing if and only if that group is virtually free.
Sénizergues used the Muller–Schupp theorem to show that the isomorphism problem for finitely generated virtually free groups is primitive recursive.
Gilman, Hermiller, Holt and Rees used the Muller–Schupp theorem to prove that a finitely generated group G is virtually free if and only if there exists a finite generating set X for G and a finite set of length-reducing rewrite rules over X whose application reduces any word to a geodesic word.
Ceccherini-Silberstein and Woess consider the setting of a finitely generated group G with a finite generating set X, and a subgroup K of G such that the set of all words over the group alphabet representing elements of K is a context-free language.
Generalizing the setting of the Muller–Schupp theorem, Brough studied groups with poly-context-free word problem, that is, groups whose word problem is an intersection of finitely many context-free languages. Poly-context-free groups include all finitely generated groups commensurable with groups embeddable in a direct product of finitely many free groups, and Brough conjectured that every poly-context-free group arises in this way. Ceccherini-Silberstein, Coornaert, Fiorenzi, Schupp, and Touikan introduced the notion of a multipass automaton, a kind of nondeterministic automaton accepting precisely the finite intersections of context-free languages. They also obtained results providing significant evidence in favor of the above conjecture of Brough.
Nyberg-Brodda generalised the Muller–Schupp theorem from groups to "special monoids", a class of semigroups containing, but strictly larger than, the class of groups, characterising the special monoids with context-free word problem as precisely those with a virtually free maximal subgroup.
Subsequent to the 1983 paper of Muller and Schupp, several authors obtained alternate or simplified proofs of the Muller–Schupp theorem.
See also
Infinite tree automaton
Word problem (mathematics)
Formal language
References
External links
Context-free groups and their structure trees, expository talk by Armin Weiß
Geometric group theory
Formal languages | Muller–Schupp theorem | [
"Physics",
"Mathematics"
] | 1,829 | [
"Geometric group theory",
"Group actions",
"Formal languages",
"Mathematical logic",
"Symmetry"
] |
54,414,446 | https://en.wikipedia.org/wiki/Graph%20matching | Graph matching is the problem of finding a similarity between graphs.
Graphs are commonly used to encode structural information in many fields, including computer vision and pattern recognition, and graph matching is an important tool in these areas, where it is commonly assumed that the comparison is between a data graph and a model graph.
The case of exact graph matching is known as the graph isomorphism problem. The problem of exact matching of a graph to a part of another graph is called the subgraph isomorphism problem.
Inexact graph matching refers to matching problems in which exact matching is impossible, e.g., when the numbers of vertices in the two graphs differ. In this case one seeks the best possible match. For example, in image recognition applications, image segmentation typically produces data graphs with many more vertices than the model graphs they are expected to match against. In the case of attributed graphs, even if the numbers of vertices and edges are the same, the matching may still be only inexact.
Two categories of search methods are those based on identifying possible and impossible pairings of vertices between the two graphs, and those that formulate graph matching as an optimization problem. Graph edit distance is one of the similarity measures suggested for graph matching. This class of algorithms is called error-tolerant graph matching.
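For exact matching on small graphs, a brute-force isomorphism test follows directly from the definition. This sketch (function and variable names are my own) tries every bijection between the vertex sets; practical matchers use far better algorithms such as VF2.

```python
# Brute-force exact graph matching (graph isomorphism) for small
# undirected graphs given as adjacency dicts: vertex -> set of neighbours.
from itertools import permutations

def are_isomorphic(adj1, adj2):
    """Test whether two small undirected graphs are isomorphic."""
    v1, v2 = sorted(adj1), sorted(adj2)
    if len(v1) != len(v2):
        return False
    for perm in permutations(v2):
        mapping = dict(zip(v1, perm))
        # the bijection is an isomorphism iff it maps each
        # neighbourhood onto the neighbourhood of the image vertex
        if all({mapping[w] for w in adj1[v]} == adj2[mapping[v]] for v in v1):
            return True
    return False

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
relabelled = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
path = {0: {1}, 1: {0, 2}, 2: {1}}
assert are_isomorphic(triangle, relabelled)   # same graph up to relabelling
assert not are_isomorphic(triangle, path)     # different edge structure
```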
See also
String matching
Pattern matching
References
Computational problems in graph theory | Graph matching | [
"Mathematics",
"Technology"
] | 287 | [
"Computational problems in graph theory",
"Computational mathematics",
"Graph theory",
"Computational problems",
"Computer science stubs",
"Computer science",
"Mathematical relations",
"Computing stubs",
"Mathematical problems"
] |
54,414,569 | https://en.wikipedia.org/wiki/Elliott%20503 | The Elliott 503 was a transistorized computer introduced by Elliott Brothers in 1963. It was software-compatible with the earlier Elliott 803 but was about 70 times faster and a more powerful machine. About 32 units were sold. The basic configuration had 8192 words of 39 bits each for main memory, and operated at a system clock speed of 6.7 megahertz. It weighed more than .
See also
List of transistorized computers
Cluff–Foster–Idelson code
References
0803
Early British computers
Transistorized computers
Computer-related introductions in 1963 | Elliott 503 | [
"Technology"
] | 119 | [
"Computing stubs",
"Computer hardware stubs"
] |
54,416,271 | https://en.wikipedia.org/wiki/Pyongyang%20Touch | The Pyongyang Touch () is a line of smartphone, with its first version launched in North Korea in 2014 and likely produced by the Chinese company Uniscope. It is named after the capital of the country, Pyongyang. Not much is known about the technical data, but the phone is believed to run a modified version of Android. Externally, it resembles the iPhone 3G and is available in white, pink, and blue. Since access to the Internet is denied to a large part of the population in North Korea, there is only access to the Intranet Kwangmyong. It is particularly popular among the younger population, according to Choson Sinbo.
Models
Pyongyang 2413
Pyongyang 2418
The device has a "panorama" function and a flashlight app installed by default. The user can use Photoshop to edit photos taken with the phone's camera. The phone also offers medical self-diagnosis services and scientific agricultural apps.
Pyongyang 2421
The device included a security feature that did not work properly.
Pyongyang 2423
The device is able to create Word documents, Excel spreadsheets, and PowerPoint presentations. According to a report by The Stimson Center, the device (along with the Pyongyang 2413) has been jailbroken by North Korean citizens to circumvent surveillance measures and media restrictions on the devices.
Pyongyang 2425
The device includes wireless charging, an advanced camera, and facial recognition.
Pyongyang 2428
The device uses Android 8.1 as its operating system.
References
External links
Pyongyang 2423 review by Hankyoreh
Pyongyang 2425 review (in Korean) by The Daily NK
Smartphones
Science and technology in North Korea
Mobile phones introduced in 2014
Mobile phones | Pyongyang Touch | [
"Technology"
] | 369 | [
"Mobile computer stubs",
"Mobile technology stubs"
] |
54,416,793 | https://en.wikipedia.org/wiki/Fiber%20network%20mechanics | Fiber network mechanics is a subject within physics and mechanics that deals with the deformation of networks made by the connection of slender fibers. Fiber networks are used to model the mechanics of fibrous materials such as biopolymer networks and paper products. Depending on the mechanical behavior of individual filaments, the networks may be composed of mechanical elements such as Hookean springs, Euler-Bernoulli beams, and worm-like chains. The field of fiber network mechanics is closely related to the mechanical analysis of frame structures, granular materials, critical phenomena, and lattice dynamics.
References
Biophysics
Solid mechanics | Fiber network mechanics | [
"Physics",
"Biology"
] | 123 | [
"Solid mechanics",
"Applied and interdisciplinary physics",
"Biophysics",
"Mechanics"
] |
54,417,247 | https://en.wikipedia.org/wiki/IBM%20System/360%20Model%20195 | The IBM System/360 Model 195 is a discontinued IBM computer introduced on August 20, 1969. The Model 195 was a reimplementation of the IBM System/360 Model 91 design using monolithic integrated circuits. It offers "an internal processing speed about twice as fast as the Model 85, the next most powerful System/360". The Model 195 was discontinued on February 9, 1977, the same date as the System/370 Model 195.
About 20 Model 195 systems were produced.
Technical specifications
The basic CPU cycle time is 54 nanoseconds (ns). The system has a high degree of parallelism and can process up to seven operations at a time.
The system can be configured with 1, 2, or 4 MB of magnetic core memory (models 195J, 195K, and 195L) with a cycle time of 756 ns. A 32 KB cache, called a buffer memory in the IBM announcement, is standard. Memory blocks are brought into cache in units of 64 bytes.
The normal operating system for the Model 195 is OS/360 Multiprogramming with a Variable Number of Tasks (MVT).
Legacy
The Model 195 was later updated as the IBM System/370 Model 195 with the new System/370 instructions and the 370 time-of-day clock and control registers, but without the virtual memory hardware.
References
External links
IBM System/360 Model 195 Functional Characteristics
Computing platforms
System/360 Model 195 | IBM System/360 Model 195 | [
"Technology"
] | 291 | [
"Computing platforms"
] |
54,417,893 | https://en.wikipedia.org/wiki/NGC%207046 | NGC 7046 is a barred spiral galaxy located 193 million light-years away in the constellation of Equuleus. With a high radial velocity of 4,130 km/s, the galaxy is drifting away from the Milky Way. NGC 7046 has an apparent size of 0.990 arcmin, and at its current distance, it has an estimate diameter of 192,639 light years. NGC 7046 has a morphological type of "SBc", which indicates that it is a barred spiral galaxy with a definite bulge.
As of 2021, there have been no observed supernovae in the galaxy, but it has been discovered to be in a small galaxy group.
See also
NGC 7025
References
External links
Barred spiral galaxies
LINER galaxies
Equuleus
7046
11708
66407
Astronomical objects discovered in 1790 | NGC 7046 | [
"Astronomy"
] | 169 | [
"Equuleus",
"Constellations"
] |
54,421,490 | https://en.wikipedia.org/wiki/Radiative%20levitation | Radiative levitation is the name given to a phenomenon that causes the spectroscopically-derived abundance of heavy elements in the photospheres of hot stars to be very much higher than solar abundance or than the expected bulk abundance; for example, the spectrum of the star Feige 86 has gold and platinum abundances three to ten thousand times higher than solar norms.
The mechanism is that heavier elements have large photon absorption cross-sections when partially ionized (see opacity), so efficiently absorb photons from the radiation coming from the core of the star, and some of the energy of the photons gets converted to outward momentum, effectively 'kicking' the heavy atom towards the photosphere. The effect is strong enough that very hot white dwarfs are significantly less bright in the EUV and X-ray bands than would be expected from a black-body model.
The countervailing process is gravitational settling, where, in very high gravitational fields, the effects of diffusion even in a hot atmosphere are cancelled out to the point that the heavier elements will sink unobservably to the bottom and lighter elements settle on the top.
See also
Chemically peculiar star
References
Stellar phenomena | Radiative levitation | [
"Physics"
] | 242 | [
"Physical phenomena",
"Stellar phenomena"
] |
54,423,048 | https://en.wikipedia.org/wiki/Tree%20transducer | In theoretical computer science and formal language theory, a tree transducer (TT) is an abstract machine taking as input a tree, and generating output – generally other trees, but models producing words or other structures exist. Roughly speaking, tree transducers extend tree automata in the same way that word transducers extend word automata.
Manipulating tree structures instead of words enables TT to model syntax-directed transformations of formal or natural languages. However, TT are not as well-behaved as their word counterparts in terms of algorithmic complexity, closure properties, et cetera. In particular, most of the main classes are not closed under composition.
The main classes of tree transducers are:
Top-Down Tree Transducers (TOP)
A TOP T is a tuple (Q, Σ, Γ, I, δ) such that:
Q is a finite set, the set of states;
Σ is a finite ranked alphabet, called the input alphabet;
Γ is a finite ranked alphabet, called the output alphabet;
I is a subset of Q, the set of initial states; and
δ is a set of rules of the form q(f(x₁, …, xₙ)) → u, where f is a symbol of Σ, n is the arity of f, q is a state, and u is a tree over Γ and Q × {x₁, …, xₙ}, such pairs being nullary.
Examples of rules and intuitions on semantics
For instance,
q(f(x₁, x₂, x₃)) → g(q′(x₁), q″(x₃))
is a rule – one customarily writes ℓ → r instead of the pair (ℓ, r) – and its intuitive semantics is that, under the action of q, a tree with f at the root and three children is transformed into
g(q′(x₁), q″(x₃)),
where, recursively, q′(x₁) and q″(x₃) are replaced, respectively, with the application of q′ on the first child and
with the application of q″ on the third.
Semantics as term rewriting
The semantics of each state of the transducer T, and of T itself, is a binary relation between input trees (on Σ) and output trees (on Γ).
A way of defining the semantics formally is to see δ as a term rewriting system, provided that in the right-hand sides the calls are written in the form q(xᵢ), where states q are seen as unary symbols. Then the semantics of a state q is given by
⟦q⟧ = { (s, t) : q(s) rewrites to t, t a tree on Γ }.
The semantics of T is then defined as the union of the semantics of its initial states:
⟦T⟧ = ⋃ { ⟦q⟧ : q ∈ I }.
Determinism and domain
As with tree automata, a TOP is said to be deterministic (abbreviated DTOP) if no two rules of δ share the same left-hand side, and there is at most one initial state. In that case, the semantics of the DTOP is a partial function from input trees (on Σ) to output trees (on Γ), as are the semantics of each of the DTOP's states.
The domain of a transducer is the domain of its semantics. Likewise, the image of a transducer is the image of its semantics.
Properties of DTOP
DTOP are not closed under union: this is already the case for deterministic word transducers.
The domain of a DTOP is a regular tree language. Furthermore, the domain is recognisable by a deterministic top-down tree automaton (DTTA) of size at most exponential in that of the initial DTOP.
That the domain is DTTA-recognizable is not surprising, considering that the left-hand sides of DTOP rules are the same as for DTTA. As for the reason for the exponential explosion in the worst case (which does not exist in the word case), consider the rule q(f(x₁, x₂)) → g(q₁(x₁), q₂(x₁), q₃(x₂)). In order for the computation to succeed, it must succeed for both children. That means that the right child must be in the domain of q₃. As for the left child, it must be in the domain of both q₁ and q₂. Generally, since subtrees can be copied, a single subtree can be evaluated by multiple states during a run, despite the determinism, and unlike DTTA. Thus the construction of the DTTA recognising the domain of a DTOP must account for sets of states and compute the intersections of their domains, hence the exponential. In the special case of linear DTOP, that is to say DTOP where each variable xᵢ appears at most once in the right-hand side of each rule, the construction is linear in time and space.
The image of a DTOP is not a regular tree language.
Consider the transducer coding the transformation f(x₁) ↦ g(x₁, x₁); that is, duplicate the child of the input. This is easily done by a rule q(f(x₁)) → g(p(x₁), p(x₁)), where p encodes the identity. Then, absent any restrictions on the first child of the input, the image, consisting of the trees g(t, t), is a classical non-regular tree language.
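The duplicating transformation just described can be sketched directly in code (the tuple encoding of trees and the function names are my own): trees are nested tuples whose first component is the symbol, state p copies its input, and the initial state q applies the single duplicating rule at the root.

```python
# Sketch of a deterministic top-down tree transducer (DTOP) computing
# f(x1) -> g(x1, x1). Trees are tuples: (symbol, child, child, ...).

def p(tree):
    """Identity state: reproduce the input subtree unchanged."""
    sym, *children = tree
    return (sym, *[p(c) for c in children])

def q(tree):
    """Initial state, one rule: q(f(x1)) -> g(p(x1), p(x1))."""
    sym, *children = tree
    if sym == "f" and len(children) == 1:
        return ("g", p(children[0]), p(children[0]))
    raise ValueError("no applicable rule")

# the child subtree is copied twice, so the image consists of trees
# g(t, t), which form a non-regular tree language
assert q(("f", ("a",))) == ("g", ("a",), ("a",))
assert q(("f", ("h", ("a",)))) == ("g", ("h", ("a",)), ("h", ("a",)))
```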
However, the domain of a DTOP cannot be restricted to a regular tree language. That is to say, given a DTOP T and a regular tree language L, one cannot in general build a DTOP T′ such that the semantics of T′ is that of T restricted to L.
This property is linked to the reason deterministic top-down tree automata are less expressive than bottom-up automata: once you go down a given path, information from other paths is inaccessible. Consider the transducer coding the transformation f(x₁, x₂) ↦ x₂; that is, output the right child of the input. This is easily done by a rule q(f(x₁, x₂)) → p(x₂), where p encodes the identity. Now say we want to restrict this transducer to a finite (and thus, in particular, regular) domain such as {f(c, a), f(c, b)}. We must use the rules q(f(x₁, x₂)) → p(x₂), p(a) → a, and p(b) → b. But in the first rule, x₁ does not appear at all in the right-hand side, since nothing is produced from the left child. Thus, it is not possible to test that the left child is c. In contrast, since we produce from the right child, we can test that it is a or b. In general, the criterion is that DTOP cannot test properties of subtrees from which they do not produce output.
DTOP are not closed under composition. However, this problem can be solved by the addition of a lookahead: a tree automaton, coupled to the transducer, that can perform tests on the domain which the transducer itself is incapable of.
This follows from the point about domain restriction: composing a DTOP encoding the identity on a regular tree language L with a DTOP T would have to yield a transducer whose semantics is that of T restricted to L, which, as seen above, is not in general expressible by a DTOP.
The typechecking problem—testing whether the image of a regular tree language is included in another regular tree language—is decidable.
The equivalence problem—testing whether two DTOP define the same functions—is decidable.
Bottom-Up Tree Transducers (BOT)
As in the simpler case of tree automata, bottom-up tree transducers are defined similarly to their top-down counterparts, but proceed from the leaves of the tree to the root, instead of from the root to the leaves. Thus the main difference is in the form of the rules, which are of the form f(q₁(x₁), …, qₙ(xₙ)) → q(u), where u is a tree over Γ and the variables x₁, …, xₙ.
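A minimal bottom-up example, under the same tuple encoding of trees used conventionally for such sketches (all names are my own): a one-state BOT that relabels every symbol a as b. Children are processed before the root, matching the standard bottom-up rule shape f(q(x₁), …, q(xₙ)) → q(f′(x₁, …, xₙ)).

```python
# Sketch of a one-state bottom-up tree transducer (BOT) that relabels
# every "a" as "b". Trees are tuples: (symbol, child, child, ...).

def bot_relabel(tree):
    sym, *children = tree
    processed = [bot_relabel(c) for c in children]  # leaves first
    out = "b" if sym == "a" else sym                # rule applied at this node
    return (out, *processed)

assert bot_relabel(("f", ("a",), ("g", ("a",)))) == ("f", ("b",), ("g", ("b",)))
```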
References
Trees (data structures)
Automata (computation)
Finite automata
Formal languages
Theoretical computer science | Tree transducer | [
"Mathematics"
] | 1,409 | [
"Theoretical computer science",
"Formal languages",
"Mathematical logic",
"Applied mathematics"
] |
54,423,179 | https://en.wikipedia.org/wiki/Trait%C3%A9%20de%20m%C3%A9canique%20c%C3%A9leste | Traité de mécanique céleste () is a five-volume treatise on celestial mechanics written by Pierre-Simon Laplace and published from 1798 to 1825 with a second edition in 1829. In 1842, the government of Louis Philippe gave a grant of 40,000 francs for a 7-volume national edition of the Oeuvres de Laplace (1843–1847); the Traité de mécanique céleste with its four supplements occupies the first 5 volumes.
Tome I. (1798)
Livre I. Des lois générales de l'équilibre et du mouvement
Chap. I. De l'équilibre et de la composition des forces qui agissent sur un point matériel
Chap. II. Du mouvement d'un point matériel
Chap. III. De l'équilibre d'un système de corps
Chap. IV. De l'équilibre des fluides
Chap. V. Principes généraux du mouvement d'un système de corps
Chap. VI. Des lois du mouvement d'un système de corps, dans toutes les relations mathématiquement possibles entre la force et la vitesse
Chap. VII. Des mouvemens d'un corps solide de figure quelconque
Chap. VIII. Du mouvement des fluides
Livre II. De la loi pesanteur universelle, et du mouvement des centres de gravité des corps célestes
Tome II. (1798)
Livre III. De la figure des corps céleste
Livre IV. Des oscillations de la mer et de l'atmosphère
Livre V. Des mouvemens des corps célestes, autour de leurs propre centres de gravité
Tome III. (1802)
Livre VI. Théorie particulières des mouvemens célestes
Livre VII. Théorie de la lune
Tome IV. (1805)
Livre VIII. Théorie des satellites de Jupiter, de Saturne et d'Uranus
Livre IX. Théorie des comètes
Livre X. Sur différens points relatifs au système du monde
This book contains a discussion of continued fractions and a computation of the complementary error function in terms of what came to be called the Laplace continued fraction, 1/(1 + q/(1 + 2q/(1 + 3q/(⋯)))).
Tome V. (1825)
Livre XI. De la figure et de la rotation de la terre
Livre XII. De l'attraction et de la répulsion des sphères, et des lois de l'equilibre et du mouvement des fluides élastiques
Livre XIII. Des oscillations des fluides qui recouvrent les planètes
Livre XIV. Des mouvemens des corps célestes autour de leurs centres de gravité
Livre XV. Du mouvement des planètes et des comètes
Livre XVI. Du mouvement des satellites
English translations
During the early nineteenth century at least five English translations of Mécanique Céleste were published. In 1814 the Reverend John Toplis prepared a translation of Book 1 entitled The Mechanics of Laplace. Translated with Notes and Additions. In 1821 Thomas Young anonymously published a further translation into English of the first book; beyond just translating from French to English, he claimed in the preface to have translated the style of mathematics:
"The translator flatters himself, however, that he has not expressed the author's meaning in English words alone, but that he has rendered it perfectly intelligible to any person, who is conversant with the English mathematicians of the old school only, and that his book will serve as a connecting link between the geometrical and algebraical modes of representation."
The Reverend Henry Harte, a fellow at Trinity College, Dublin, translated the entire first volume of Mécanique Céleste, with Book 1 published in 1822 and Book 2 published separately in 1827. Similarly to Bowditch (see below), Harte felt that Laplace's exposition was too brief, making his work difficult to understand:
"... it may be safely asserted, that the chief obstacle to a more general knowledge of the work, arises from the summary manner in which the Author passes over the intermediate steps in several of his most interesting investigations."
Bowditch's translation
The famous American mathematician Nathaniel Bowditch translated the first four volumes of the Traité de mécanique céleste but not the fifth volume; however, Bowditch did make use of relevant portions of the fifth volume in his extensive commentaries for the first four volumes.
Somerville's translation
In 1826, it was still felt by Henry Brougham, president of the Society for the Diffusion of Useful Knowledge, that the British reader was lacking a readable translation of Mécanique Céleste. He thus approached Mary Somerville, who began to prepare a translation which would "explain to the unlearned the sort of thing it is - the plan, the vast merit, the wonderful truths unfolded or methodized - and the calculus by which all this is accomplished". In 1830, John Herschel wrote to Somerville and enclosed a copy of Bowditch's 1828 translation of Volume 1 which Herschel had just received. Undeterred, Somerville decided to continue with the preparation of her own work as she felt the two translations differed in their aims; whereas Bowditch's contained an overwhelming number of footnotes to explain each mathematical step, Somerville instead wished to state and demonstrate the results as clearly as possible.
A year later, in 1831, Somerville's translation was published under the title Mechanism of the Heavens. It received great critical acclaim, with complimentary reviews appearing in the Quarterly Review, the Edinburgh Review, and the Monthly Notices of the Royal Astronomical Society.
References
External links
Translation by Nathaniel Bowditch
Volume I, 1829
Volume II, 1832
Volume III, 1834
Volume IV, 1839 with a memoir of the translator by his son
Historical physics publications
Physics books
Mathematics books
1798 non-fiction books
French books
Celestial mechanics | Traité de mécanique céleste | [
"Physics"
] | 1,262 | [
"Celestial mechanics",
"Classical mechanics",
"Astrophysics"
] |
54,423,208 | https://en.wikipedia.org/wiki/Ranked%20alphabet | In theoretical computer science and formal language theory, a ranked alphabet is a pair of an ordinary alphabet F and a function Arity: F → ℕ. Each letter in F has its arity, so it can be used to build terms. Nullary elements (of arity zero) are also called constants. Terms built with unary symbols and constants can be considered as strings. Higher arities lead to proper trees.
For instance, in the term f(a, g(b), c), the symbols a, b, c are constants, g is unary, and f is ternary. Contrariwise, f(f(a), b) cannot be a valid term, as the symbol f appears once as binary and once as unary, which is illicit, as Arity must be a function.
References
Trees (data structures)
Automata (computation)
Formal languages
Theoretical computer science | Ranked alphabet | [
"Mathematics"
] | 166 | [
"Theoretical computer science",
"Formal languages",
"Mathematical logic",
"Applied mathematics"
] |
54,423,833 | https://en.wikipedia.org/wiki/NGC%205861 | NGC 5861 is an intermediate spiral galaxy in constellation Libra. It is located at a distance of about 85 million light years from Earth, which, given its apparent dimensions, means that NGC 5861 is about 80,000 light years across.
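The quoted physical diameter follows from the small-angle approximation: diameter ≈ distance × angular size in radians. A minimal sketch, where the 3.0-arcminute apparent size is an illustrative assumption rather than a value stated here:

```python
import math

def physical_size(distance_ly, angular_size_arcmin):
    """Small-angle approximation: physical size = distance * angle (in radians)."""
    theta = math.radians(angular_size_arcmin / 60.0)
    return distance_ly * theta

# At 85 million light years, an apparent size of ~3 arcmin corresponds
# to a diameter on the order of 75,000 light years.
diameter = physical_size(85e6, 3.0)
```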
The galaxy features two long spiral arms that dominate the optical disk. One arm can be traced from its beginning at the center for nearly one and a half revolutions without branching, whereas the other starts to form fragments after one revolution, forming a moderately chaotic pattern. The galaxy hosts a hydroxyl megamaser.
NGC 5861 is the foremost member of a small galaxy group that also includes NGC 5858, which lies 9.6 arcmin to the north, forming a non-interacting pair. It is located within the same galaxy cloud as NGC 5878.
Supernovae
Two supernovae have been observed in NGC 5861:
SN 1971D (type unknown, mag. 15.5) was discovered by Glenn Jolly and Justus R. Dunlap on 24 February 1971. Observations by the Hubble Space Telescope indicate that there may be a light echo created by SN 1971D.
SN 2017erp (type Ia, mag. 16.8) was discovered by Kōichi Itagaki on 13 June 2017.
References
External links
Barred spiral galaxies
Libra (constellation)
5861
54097 | NGC 5861 | [
"Astronomy"
] | 280 | [
"Libra (constellation)",
"Constellations"
] |
54,423,957 | https://en.wikipedia.org/wiki/Load%20path%20analysis | Load path analysis is a technique of mechanical and structural engineering used to determine the path of maximum stress in a non-uniform load-bearing member in response to an applied load. Load path analysis can be used to minimize the material needed in the load-bearing member to support the design load.
Load path analysis may be performed using the concept of a load transfer index, U*. In a structure, the main portion of the load is transferred through the stiffest route. The U* index represents the internal stiffness of every point within the structure. Consequently, the line connecting the highest U* values is the main load path. In other words, the main load path is the ridge line of the U* distribution (contour). This method of analysis has been verified in physical experimentation.
Load path calculation using U* index
The U* index theory has been validated through two different physical experiments.
Since the U* index predicts the load paths based on structural stiffness, it is not affected by stress concentration problems. Load transfer analysis using the U* index is a new design paradigm for vehicle structural design. It has been applied in design analysis and optimization by automotive manufacturers such as Honda and Nissan.
In the image to the right, a structural member with a central hole is placed under load. Figure (a) shows the U* distribution and the resultant load paths, while figure (b) shows the von Mises stress distribution. As can be seen from figure (b), higher stresses are observed in the vicinity of the hole. However, it is unreasonable to conclude that the main load passes through that area of stress concentration, because the hole (which has no material) is not important for carrying the load. Stress concentration caused by structural singularities such as a hole or a notch makes stress-based load transfer analysis more difficult.
References
Mechanical engineering | Load path analysis | [
"Physics",
"Engineering"
] | 446 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
54,424,157 | https://en.wikipedia.org/wiki/Biotron%20%28Wisconsin%29 | The Biotron is a research facility located at the University of Wisconsin-Madison that "provides controlled environments and climate-controlled greenhouses to support plant, animal, and materials research for university, non-profit, and commercial clients."
History
An evolution of the phytotron, the development of the facility had its roots in the late 1950s as a campaign established by the Botanical Society of America in search of a national phytotron. With additional funding and support by the National Science Foundation the Biotron was eventually envisioned as a combination facility that would allow both plant and animal tests to be conducted.
Plant physiologist Folke Skoog would be instrumental in bringing the Biotron to the University of Wisconsin-Madison. A colleague of Frits Went at Caltech, Skoog oversaw the two proposals the university submitted to the Botanical Society in 1959. The interdisciplinary nature and scope of the project quickly led it to becoming Madison's most expensive facility at around $4.2 million. It was up to Harold Senn, appointed director of the Biotron to concentrate on assembling funding, which he was able to accomplish by January 1963, with the Ford Foundation and National Institute of Health contributing to the project.
The Biotron was officially dedicated on September 18, 1970, with many experiments under precisely controlled environmental conditions already under study, such as a lizard in the Palm Springs desert or black-eyed peas growing in hot and humid Nigeria. When finally completed in 1971, the Biotron contained over fifty rooms, many able to vary temperature from as low as −25 °C to as high as 50 °C with humidity adjustable anywhere from 1% to 100%. Data from the various tests and sensors would then be fed and logged into a PDP-8/E-AA computer. Senn supposedly spent an additional $3,000 of the Biotron budget on increasing the computer's memory by an extra 4 kilobytes.
In 1977, the International Crane Foundation brought endangered Siberian crane eggs from the U.S.S.R. to the Biotron for incubation and initial feeding. A hyperbaric chamber was added and used for experiments on the effects of diving on pregnancy. A hypobaric chamber was used for high altitude tests of devices administering doses of vaccines and drugs.
Although the Biotron went through a period of decline in the 1980s, unlike other facilities it never closed. In 1986, the first experiments using LEDs to grow plants were developed with NASA, and tests were performed for the Galileo probe before its 1989 launch. Much of its work in the 1990s was in partnership with NASA on the Controlled Ecological Life Support System (CELSS) and its goal of researching the viability of certain vegetables for space travel, in particular the potato. Studies of animal hibernation were done for the ESA in connection with human space exploration.
Rooms
The Biotron has 45 rooms that are capable of simulating a range of environmental variables with precision and control. These rooms feature separate air handling and control over the temperature, humidity, and lighting. Some rooms specialize in isolation from sound and vibration, electromagnetic radiation, or pressure.
See also
Phytotron
References
External links
BIOTRON - History of the building.
The Biotron at World of Trons
University of Wisconsin–Madison
Buildings and structures in Wisconsin
Greenhouses in the United States
Atmospheric chemistry
Controlled ecological life support systems
1971 establishments in Wisconsin | Biotron (Wisconsin) | [
"Chemistry"
] | 683 | [
"nan"
] |
54,424,499 | https://en.wikipedia.org/wiki/NGC%207047 | NGC 7047 is an intermediate spiral galaxy located about 270 million light-years away in the constellation of Aquarius. NGC 7047 is also classified as a LINER-type galaxy. NGC 7047 has an estimated diameter of 127,350 light years. It was discovered by French astronomer Édouard Stephan on August 20, 1873.
One supernova has been observed in NGC 7047: PTF09cjq (type II, mag. unknown) was discovered on 22 October 2009 by the Palomar Transient Factory.
See also
NGC 7038
List of NGC objects (7001–7840)
References
External links
Intermediate spiral galaxies
LINER galaxies
Aquarius (constellation)
7047
11712
066461
+00-54-010
Astronomical objects discovered in 1873
Discoveries by Édouard Stephan | NGC 7047 | [
"Astronomy"
] | 157 | [
"Constellations",
"Aquarius (constellation)"
] |
54,424,534 | https://en.wikipedia.org/wiki/NGC%203191 | NGC 3191 (also known as NGC 3192) is a barred spiral galaxy in the constellation Ursa Major. It was discovered on 5 February 1788 by William Herschel. It is located at a distance of about 400 million light years from Earth, which, given its apparent dimensions, means that NGC 3191 is about 115,000 light years across.
The galaxy has been distorted and interacts with a companion about 0.5 arcminutes to the west, a galaxy identified as KUG 1015+467. An extremely blue tidal bridge lies between them.
Supernovae
Three supernovae have been observed in NGC 3191:
SN 1988B (type Ia, mag. 15.5) was discovered by Paul Wild on 18 January 1988, 10" north of the galaxy's center.
PTF10bgl (type II-P, mag. unknown) was discovered by the Palomar Transient Factory on 6 February 2010.
SN 2017egm (type SLSN-I, mag. 16.72) was discovered by Gaia on 23 May 2017 and identified as a Type I superluminous supernova. It is the closest supernova of this type observed and also the first to be found in a massive spiral galaxy.
See also
List of NGC objects (3001–4000)
References
External links
Barred spiral galaxies
Ursa Major
3191
05565
030136
Astronomical objects discovered in 1788
Discoveries by William Herschel
+08-19-018 | NGC 3191 | [
"Astronomy"
] | 306 | [
"Ursa Major",
"Constellations"
] |
54,424,691 | https://en.wikipedia.org/wiki/No-hiding%20theorem | The no-hiding theorem states that if information is lost from a system via decoherence, then it moves to the subspace of the environment and it cannot remain in the correlation between the system and the environment. This is a fundamental consequence of the linearity and unitarity of quantum mechanics. Thus, information is never lost. This has implications in the black hole information paradox and in fact any process that tends to lose information completely. The no-hiding theorem is robust to imperfection in the physical process that seemingly destroys the original information.
This was proved by Samuel L. Braunstein and Arun K. Pati in 2007. In 2011, the no-hiding theorem was experimentally tested using nuclear magnetic resonance devices in which a single qubit undergoes complete randomization; i.e., a pure state transforms into a random mixed state. Subsequently, the lost information was recovered from the ancilla qubits using a suitable local unitary transformation acting only in the environment Hilbert space, in accordance with the no-hiding theorem. This experiment demonstrated the conservation of quantum information for the first time.
Formal statement
Let $|\psi\rangle$ be an arbitrary quantum state in some Hilbert space and let there be a physical process that transforms $|\psi\rangle\langle\psi| \rightarrow \rho$ with $\rho = \sum_k p_k |k\rangle\langle k|$. If $\rho$ is independent of the input state $|\psi\rangle$, then in the enlarged Hilbert space the mapping is of the form

$$|\psi\rangle |A\rangle \rightarrow \sum_k \sqrt{p_k}\, |k\rangle |A_k\rangle = \sum_k \sqrt{p_k}\, |k\rangle \otimes \bigl(|q_k\rangle \otimes (|\psi\rangle \oplus 0)\bigr),$$

where $|A\rangle$ is the initial state of the environment, the $|A_k\rangle$'s are orthonormal states of the environment Hilbert space, and $\oplus\, 0$ denotes the fact that one may augment the unused dimensions of the environment Hilbert space by zero vectors.
The proof of the no-hiding theorem is based on the linearity and the unitarity of quantum mechanics. The original information which is missing from the final state simply remains in the subspace of the environmental Hilbert space. Also, note that the original information is not in the correlation between the system and the environment. This is the essence of the no-hiding theorem. One can, in principle, recover the lost information from the environment by local unitary transformations acting only on the environment Hilbert space. The no-hiding theorem provides new insights into the nature of quantum information. For example, if classical information is lost from one system, it may either move to another system or be hidden in the correlation between a pair of bit strings. However, quantum information cannot be completely hidden in correlations between a pair of subsystems. Quantum mechanics allows only one way to completely hide an arbitrary quantum state from one of its subsystems: if it is lost from one subsystem, then it moves to other subsystems.
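As a toy numerical illustration (not the NMR protocol above): unitarily swap a system qubit into one half of a maximally entangled environment pair. The system's reduced state comes out maximally mixed regardless of the input, while the original state is found intact in the environment subspace rather than in system-environment correlations:

```python
import numpy as np

psi = np.array([0.6, 0.8j])                 # arbitrary normalized pure state
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # environment qubits 1 and 2
state = np.kron(psi, bell)                  # qubit ordering: 0, 1, 2

# Randomizing process: swap the system (qubit 0) with environment qubit 1.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])
U = np.kron(SWAP, np.eye(2))
out = U @ state

rho = np.outer(out, out.conj()).reshape([2] * 6)  # indices: i0 i1 i2 j0 j1 j2
rho_sys = np.einsum('iabjab->ij', rho)  # reduced state of the system qubit
rho_env = np.einsum('aibajb->ij', rho)  # reduced state of environment qubit 1

# rho_sys equals I/2 for every input psi, while rho_env equals |psi><psi|:
# the "lost" information sits entirely in the environment subspace.
```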
Conservation of quantum information
In physics, conservation laws play important roles. For example, the law of conservation of energy states that the energy of a closed system must remain constant. It can neither increase nor decrease without coming in contact with an external system. If we consider the whole universe as a closed system, the total amount of energy always remains the same. However, the form of energy keeps changing. One may wonder if there is any such law for the conservation of information. In the classical world, information can be copied and deleted perfectly. In the quantum world, however, the conservation of quantum information should mean that information cannot be created nor destroyed. This concept stems from two fundamental theorems of quantum mechanics: the no-cloning theorem and the no-deleting theorem. But the no-hiding theorem is a more general proof of conservation of quantum information which originates from the proof of conservation of wave function in quantum theory.
It may be noted that the conservation of entropy holds for a quantum system undergoing unitary time evolution, and if entropy represents information in quantum theory, then information should somehow be conserved. For example, one can prove that pure states remain pure states and probabilistic combinations of pure states (called mixed states) remain mixed states under unitary evolution. However, it was never proved that if the probability amplitude disappears from one system, it will reappear in another system. Now, using the no-hiding theorem, one can make a precise statement. One may say that, as energy keeps changing its form, the wave function keeps moving from one Hilbert space to another Hilbert space. Since the wave function contains all the relevant information about a physical system, the conservation of the wave function is tantamount to the conservation of quantum information.
References
Theorems in quantum mechanics
Quantum information theory
No-go theorems | No-hiding theorem | [
"Physics",
"Mathematics"
] | 881 | [
"Theorems in quantum mechanics",
"No-go theorems",
"Equations of physics",
"Quantum mechanics",
"Theorems in mathematical physics",
"Physics theorems"
] |
44,377,886 | https://en.wikipedia.org/wiki/C11H15ClO2 | {{DISPLAYTITLE:C11H15ClO2}}
The molecular formula C11H15ClO2 (molar mass: 214.69 g/mol, exact mass: 214.0761 u) may refer to:
Metaglycodol
Phenaglycodol
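The quoted molar mass can be reproduced by summing standard atomic weights:

```python
# Molar mass of C11H15ClO2 from IUPAC standard atomic weights (g/mol).
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "Cl": 35.45, "O": 15.999}
FORMULA = {"C": 11, "H": 15, "Cl": 1, "O": 2}

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in FORMULA.items())
# molar_mass is about 214.69 g/mol, matching the value quoted above
```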
Molecular formulas | C11H15ClO2 | [
"Physics",
"Chemistry"
] | 67 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
44,379,249 | https://en.wikipedia.org/wiki/Ruselectronics | JSC Ruselectronics (, also AO Roselektronika), is a Russian state-owned holding company founded in 1997. It is fully owned by Rostec.
Ruselectronics was reputed to be responsible for the production of approximately 80 percent of all Russian electronics components as of 2015.
History
Ruselectronics integrates the electronics sector companies focused on designing and producing electronic materials, equipment, semiconductor products and microwave technologies. The Holding company was established in the beginning of 2009 on the basis of the holding that was created in 1997.
At the end of 2012 the supervisory board of the Corporation decided to integrate Sirius and Orion groups of companies into the Ruselectronics Holding.
In December 2012, Rostec’s supervisory board transferred the assets of Sirius and Orion to Russian Electronics.
Orion was founded in 2009 as a special-purpose research and industrial association to develop communication systems, subsystems and equipment for defence, special and double purposes. Its companies were located in six federal regions. Orion employed 11,000 people.
It controlled 17 organizations, including JSC Omsk Research Institute for Instrument Engineering, JSC Barnaul Special Vostok Construction Bureau, and JSC Integral Research Institute for Special Communication Systems.
Sirius was established in 2009. Its key products include customized and replicated software of various uses, television equipment and devices for television reception, in particular, in extreme conditions (space environment, ultrahigh temperature environment and other hostile environments), automated control system elements, hardware and software for automated special-purpose systems, comprehensive security systems for critical facilities, territories and passenger transportation systems and telecommunication equipment. The company included over 20 enterprises, such as Internavigation Research Center for Advanced Navigation Technologies; JSC Radiozavod; FGUP Solid-State Engineering Construction Bureau; JSC Solnechnogorsk Instrument Plant; JSC Kristall Research Center; JSC Novosibirsk State Design Institute; JSC Novosibirsk Institute for Software Systems; JSC Popov Broadcast Reception and Acoustics Institute; JSC Television Research Institute; JSC Rastr Research Institute for Industrial Television.
As a result of enlargement, restructuring and liquidation, about 70 organizations were to be established between 2014 and 2016 on the basis of more than 120 organizations of Ruselectronics. According to the deputy director of Ruselectronics, the basic scenario of the holding company's development strategy envisioned revenue growth from 42.7 billion roubles ($1.2 billion) in 2012 to 130.7 billion roubles ($3.7 billion) in 2020. The number of Ruselectronics subsidiaries was to be reduced from 123 to 70.
In 2015, Igor Kozlov was appointed Chairman of the Board of Directors of Ruselectronics while also serving as an adviser to the Minister of Telecom and Mass Communications of the Russian Federation. In 2016 Igor Kozlov became the CEO of Ruselectronics. In 2017 Alexey Belinskiy was appointed director general of JSC "Ruselectronics".
In 2018, Alexey Belinsky left the post of interim CEO of the Ruselectronics holding. Alexander Borisov, head of the A. I. Shokin NPP Istok, became acting interim CEO of Ruselectronics. In April 2019, Alexander Borisov was appointed CEO of the Ruselectronics holding.
State corporation Rostec later announced plans to sell 75% of Ruselectronics shares in an IPO.
Ruselectronics is a sanctioned entity appearing on the US Sanctions List following the 2022 invasion of Ukraine by Russia.
Products
Ruselectronics provides semiconductor devices, photo detectors and light emitting elements, displays, emitters, microwave devices and vacuum tubes, electronic materials and structures, and electronic equipment and systems. Its products include diodes; AC and DC plasma display panels, and bar and digital displays, as well as plasma monitors for industrial applications; and co-based and nano alloys, and IR LED chips. The company's products also include millimeter-waverband waveguide isolators and circulators, and ferrite phase shifters; broad and narrow bandwidth, and cryogenic coaxial isolators and circulators; and high-power coaxial circulators.
Ruselectronics subsidiaries are also specializing in the development of software-defined radio systems, including SDR systems for naval surface ships. The company supplies the "Serp" line of anti-drone EW systems since 2023.
Innovation Program
In 2014, during the annual Open Innovations exhibition, Ruselectronics presented a solution for rescuing people from high-rise buildings. Ruselectronics subsidiary JSC 'Spetsmagnit' designed a power-independent group escape system (EKSS) for rescuing people from high floors of high-rises based on magnetic eddy-current braking systems: two escape and rescue pods equipped with magnetic systems which interact with an electric strap control bus housed in a separate fireproof shaft placed on the outside of the building. As an escape pod full of people starts moving down, gravity induces eddy currents in the bus, which generate braking force by interacting with the system's magnetic field. One escape pod can lower up to 25 people at a time from a height of up to 100 meters (330 ft) in less than a minute. In the meantime, the second (empty) escape pod, connected to the first one, rises to the top, ready to evacuate the next group of people.
At Interpolitex 2014, the 18th International Exhibition of Technologies and Methods to Ensure National Security, Ruselectronics presented a new mobile command post with thermal imaging cameras for remote observation and surveillance during security operations in areas affected by natural disasters, emergencies caused by technology, and in mass gatherings and potentially volatile crowds:
“The new vehicle-mounted thermal imaging system can be used by Emergency Response during forest fires and by the police in zero visibility. We really want to supply the latest state-of-the-art technologies to people who save other people’s lives,” Ruselectronics CEO Andrey Zverev said.
Another innovative solution at Open Innovations-2014, a technology to apply a protective nanocoating on medical instruments, was presented by another Ruselectronics subsidiary, JSC 'S.A. Vekshinsky Scientific Research Institute for Vacuum Machinery'.
Overall Ruselectronics planned to invest more than $5.8bn in its innovation development between 2014 and 2020. The new innovation development program will help us boost sales revenues, take our products to the global markets, and make sure Russia gets a foothold in new market segments and ultimately takes a leadership role in a number of technology areas.
Andrey Zverev, Ruselectronics CEO
Ruselectronics has plans to invest more than US$3bn in technical modernization of its assets across Russia and then inject about US$2.3bn in R&D, with the rest going to infrastructure improvements, staff training, and international economic collaboration.
Joint-ventures
Alcatel-Lucent
In November 2009 Ruselectronics and French Alcatel-Lucent RT (Alcatel-Lucent) entered into a joint venture agreement. The JV was created for development and production of high-tech telecommunication equipment and its promotion in Russian and CIS markets. In 2012 the parties signed long-term collaboration memorandum on research, development and implementation of LTE technologies. According to the memorandum, under Alcatel-Lucent management a new integrated R&D centre will be created in Moscow using Ruselectronics JSC 'Pulsar' facilities as a base.
Sumitomo Wiring Systems
The joint venture with the Japanese company Sumitomo Wiring Systems opened in 2014 in Yekaterinburg on a site of Ruselectronics subsidiary JSC 'Radio Equipment Plant'. The enterprise employs 290 people, a number that was expected to increase to 650 by the end of 2014, and will produce automotive parts for VAZ, Renault and Nissan.
Rohde & Schwarz/ Funkwerk AG
In 2011 Ruselectronics created a joint-venture with German company Rohde & Schwarz. Production of Rohde & Schwarz designed base stations of TETRA standard is located on JSC 'Omsky Scientific-Research Institute of semiconductors' facility. Same agreement signed with Funkwerk AG.
Tata Power SED
In 2014, the Orion subbranch of Ruselectronics signed a memorandum on cooperation with the Indian defence company Tata Power SED.
The parties have agreed to cooperate on the development and production of high-tech products in the area of transport and security infrastructure systems for the Indian civil and military aviation market.
CETC International
According to Rostec CEO Sergey Chemezov, Ruselectronics is going to build an LED production plant with the Chinese corporation CETC within a joint-venture project in a special economic zone in Tomsk.
China Aerospace Science and Technology Corporation (CASC)
In 2014 Rostec and China Aerospace Science and Technology Corporation (CASC) signed an Agreement for a Strategic Cooperation aimed at facilitating cooperation in R&D and production of electronic components, information technology, communications, automation systems and new materials. Ruselectronics will participate in this joint venture as Rostec Corporation's electronic equipment subsidiary.
ZTE
Ruselectronics signed the agreement with ZTE Corporation during Mobile World Congress 2015 in Barcelona.
Ruselectronics is seeking to enlarge its activities in the fields of innovative technologies and solutions for the "Smart City", "Smart Transit System", "Intelligent Transportation System", and "Intelligent Antenna System" programs. ZTE plans to jointly develop versatile solutions based on GoTa (Global Open Trunking Architecture) technology and digital trunking products. This technology was first implemented to ensure security at the National Games of the People's Republic of China in Jiangsu Province. The system provided secure communications to tens of thousands of subscribers, including games organizers, medical personnel, security officers, and other employees.
Ruselectronics has also overseen the successful implementation of the "Safe City" system in a number of Russian cities. For example, in March 2013, Russia's first large-scale information system created on the basis of domestic hardware and software was introduced in Krasnoyarsk. The system facilitates effective action by operative teams, as well as the prediction and prevention of various incidents and crimes.
The turnover between Ruselectronics and ZTE is expected to reach 1.2 billion yuan.
Subsidiaries
Ruselectronics owns more than 120 companies, including the following entities:
Machinery Plants
Giricond, JSC, Saint-Petersburg
Plazma JSC, Ryazan Region
SRIEEM JSC Kaluga Region
Telegraph equipment plant JSC Kaluga Region
RMPCIP JSC, Ryazan Region
Radiozavod JSC, Perm Region
Ferrite-Domen Company, JSC Saint-Petersburg
CIME, JSC Saratov Region
Cyclone Co, JSC, Moscow
Optron, JSC Moscow
Logic JSC, Moscow
Svetlana JSC, Saint-Petersburg
Svetlana-Rost CJSC, Saint-Petersburg
Angstrem, PJSC, Moscow
Angstrem-M, PJSC, Moscow
Russian Telecom Equipment Company CJSC, Moscow
Alagir Resistance Plant JSC, North Ossetia-Alania
Razryad JSC, North Ossetia-Alania
GERMANIUM, JSC Krasnoyarsk Region
GRAN JSC, North Ossetia-Alania
Omega JSC, Tomsk Region
Oxid Novosibirsk Radio Component Plant, Novosibirsk Region
DZRD JSC, Tula Region
MARS Factory, JSC Tver Region
Semi-conductor device plant JSC, Mary El
IPTD, JSC, Moscow
Nalchik semi-conductor device plant JSC, Kabardino-Balkaria
Smolensk radio-component plant JSC, Smolensk Region
Topaz JSC, North Ossetia-Alania
Lithium-Element JSC, Saratov
Nyima Progress JSC, Moscow
Pulsar State Plant JSC, Moscow
JSC SPE Salyut, Nizhny Novgorod Region
Oktava Plant, JSC Tula
Specmagnit JSC, Moscow
Meteor plant JSC, Volgograd Region
Electron-Optronic JSC, Saint-Petersburg
Scientific Production Companies
Inject JSC, Saratov Region
Thorium FSUE, Moscow
Cyclone-Test JSC, Moscow Region
Almaz JSC, Saratov Region
Vostok JSC, Novosibirsk Region
Kontakt JSC, Saratov Region
Istok JSC, Moscow Region
Pulsar JSC, Moscow
Rigel JSC, Saint-Petersburg
Binom JSC, North Ossetia-Alania
TFP OSTER SPB, Saint-Petersburg
Research institutes
Research Institute of Electronic and Mechanical Devices, JSC Penza Region
Research Institute Electron JSC, Saint-Petersburg
Research Institute of Technology of Production, JSC, Nizhny Novgorod Region
Machinery Research Institute, JSC, Smolensk Region
S.A. Vekshinsky Scientific Research Institute for Vacuum Machinery, JSC, Moscow
Scientific-Research Institute Electronics JSC, Moscow
Scientific-Research Institute of EM JSC, North Ossetia-Alania
Scientific-Research Institute Platan with Plant, JSC, Moscow Region
Central Scientific-Research Institute Cyclone, JSC, Moscow Region
Scientific-Research Institute of Electrical Carbon Products, OJSC, Saratov Region
Scientific-Research Institute of semi-conductor plant, JSC, Tomsk Region
Scientific-Research Institute Volga JSC, Saratov Region
Omsky Scientific-Research Institute of semiconductors, JSC, Omsk
Russian Research Institute Electronstandart, JSC Saint-Petersburg
Design Bureaus
Novosibirsk semi-conductor plant and experimental Design Bureau, Novosibirsk Region
Ikar Design Bureau JSC, Nizhny Novgorod Region
Special design bureau of relay equipment JSC, Nizhny Novgorod Region
Central Design Bureau Deiton JSC, Moscow
MELZ Design Bureau JSC, Moscow
Other type of entities
Rosel Trading House, CJSC, Moscow
Electronintoring Foreign Trade Association JSC, Moscow
Radioexport Foreign Trade Association, Moscow
Fryazino special construction and erection department JSC, Moscow Region
New Light Technologies, CJSC, Moscow
Svyazdorinvest, JSC, Moscow
MosElectronProject, JSC, Moscow
Saratovelectronproject JSC, Saratov Region
Electron Construction General Management, Moscow
References
External links
Official website of JSC Ruselectronics
Ruselectronics Presents New Mobile Command Post
Catalogue of Ruselectronics products
Computer companies of Russia
Computer hardware companies
Technology companies of Russia
Companies based in Moscow
Mechanical engineering companies of Russia
Russian brands
Networking hardware companies
Nanoelectronics
Rostec | Ruselectronics | [
"Materials_science",
"Technology"
] | 3,080 | [
"Nanotechnology",
"Computer hardware companies",
"Computers",
"Nanoelectronics"
] |
44,379,567 | https://en.wikipedia.org/wiki/Dialectical%20materialism | Dialectical materialism is a materialist theory based upon the writings of Karl Marx and Friedrich Engels that has found widespread applications in a variety of philosophical disciplines ranging from philosophy of history to philosophy of science. As a materialist philosophy, Marxist dialectics emphasizes the importance of real-world conditions and the presence of functional contradictions within and among social relations, which derive from, but are not limited to, the contradictions that occur in social class, labour economics, and socioeconomic interactions. Within Marxism, a contradiction is a relationship in which two forces oppose each other, leading to mutual development.
In contrast with the idealist perspective of Hegelian dialectics, the materialist perspective of Marxist dialectics emphasizes that contradictions in material phenomena could be resolved with dialectical analysis, from which is synthesized the solution that resolves the contradiction, whilst retaining the essence of the phenomena. Marx proposed that the most effective solution to the problems caused by contradiction was to address the contradiction and then rearrange the systems of social organization that are the root of the problem.
Dialectical materialism recognises the evolution of the natural world, and thus the emergence of new qualities of being human and of human existence. Engels used the metaphysical insight that the higher level of human existence emerges from and is rooted in the lower level; that the higher level of being is a new order with irreducible laws; and that evolution is governed by laws of development, which reflect the basic properties of matter in motion.
In the 1930s, in the Soviet Union, the book Dialectical and Historical Materialism (1938), by Joseph Stalin, set forth the Soviet formulation of dialectical materialism and of historical materialism, which were taught in the Soviet system of education. In the People's Republic of China, an analogous text was the essay On Contradiction (1937), by Mao Zedong, which was a foundational document of Maoism.
The term
The term dialectical materialism was coined in 1887 by Joseph Dietzgen, a socialist who corresponded with Marx, during and after the failed 1848 German Revolution. Casual mention of the term "dialectical materialism" is also found in the biography Frederick Engels, by philosopher Karl Kautsky, written in 1899. Marx himself had talked about the "materialist conception of history", which was later referred to as "historical materialism" by Engels. Engels "substantially developed materialist dialectics" in his incomplete 1883 work Dialectics of Nature. Georgi Plekhanov, the father of Russian Marxism, first used the term "dialectical materialism" in 1891 in his writings on Georg Wilhelm Friedrich Hegel and Marx. Stalin further delineated and defined dialectical and historical materialism as the world outlook of Marxism–Leninism, and as a method to study society and its history.
Historical background
Marx and Engels each began their adulthood as Young Hegelians, one of several groups of intellectuals inspired by the philosopher Hegel. Marx's doctoral thesis, The Difference Between the Democritean and Epicurean Philosophy of Nature, was concerned with the atomism of Epicurus and Democritus, which is considered the foundation of materialist philosophy. Marx was also familiar with Lucretius's theory of clinamen.
Marx and Engels both concluded that Hegelian philosophy, at least as interpreted by their former colleagues, was too abstract and was being misapplied in attempts to explain the social injustice in recently industrializing countries such as Germany, France, and the United Kingdom, which was a growing concern in the early 1840s, as exemplified by Dickensian inequity.
In contrast to the conventional Hegelian dialectic of the day, which emphasized the idealist observation that human experience is dependent on the mind's perceptions, Marx developed Marxist dialectics, which emphasized the materialist view that the world of the concrete shapes socioeconomic interactions and that those in turn determine sociopolitical reality.
Whereas some Hegelians blamed religious alienation (estrangement from the traditional comforts of religion) for societal ills, Marx and Engels concluded that alienation from economic and political autonomy, coupled with exploitation and poverty, was the real culprit.
In keeping with dialectical ideas, Marx and Engels thus created an alternative theory, not only of why the world is the way it is but also of which actions people should take to make it the way it ought to be. In Theses on Feuerbach (1845), Marx famously wrote, "The philosophers have only interpreted the world, in various ways. The point, however, is to change it." Dialectical materialism is thus closely related to Marx's and Engels's historical materialism (and has sometimes been viewed as synonymous with it). Marx rejected Fichte's language of "thesis, antithesis, synthesis".
Dialectical materialism is an aspect of the broader subject of materialism, which asserts the primacy of the material world: in short, matter precedes thought. Materialism is a realist philosophy of science, which holds that the world is material; that all phenomena in the universe consist of "matter in motion," wherein all things are interdependent and interconnected and develop according to natural law; that the world exists outside consciousness and independently of people's perception of it; that thought is a reflection of the material world in the brain, and that the world is in principle knowable.
Marx criticized classical materialism as another idealist philosophy—idealist because of its transhistorical understanding of material contexts. The Young Hegelian Ludwig Feuerbach had rejected Hegel's idealistic philosophy and advocated materialism. Despite being strongly influenced by Feuerbach, Marx rejected Feuerbach's version of materialism (anthropological materialism) as inconsistent. The writings of Engels, especially Anti-Dühring (1878) and Dialectics of Nature (1875–82), were the source of the main doctrines of dialectical materialism.
Marx's dialectics
The concept of dialectical materialism emerges from statements by Marx in the second edition postface to his magnum opus, Das Kapital. There Marx says he intends to use Hegelian dialectics but in revised form. He defends Hegel against those who view him as a "dead dog" and then says, "I openly avowed myself as the pupil of that mighty thinker Hegel". Marx credits Hegel with "being the first to present [dialectic's] form of working in a comprehensive and conscious manner". But he then criticizes Hegel for turning dialectics upside down: "With him it is standing on its head. It must be turned right side up again, if you would discover the rational kernel within the mystical shell."
Marx's criticism of Hegel asserts that Hegel's dialectics go astray by dealing with ideas, with the human mind. Hegel's dialectic, Marx says, inappropriately concerns "the process of the human brain"; it focuses on ideas. Hegel's thought is in fact sometimes called dialectical idealism, and Hegel himself is counted among a number of other philosophers known as the German idealists. Marx, on the contrary, believed that dialectics should deal not with the mental world of ideas but with "the material world", the world of production and other economic activity. For Marx, a contradiction can be solved by a desperate struggle to change the social world. This was a very important transformation because it allowed him to move dialectics out of the contextual subject of philosophy and into the study of social relations based on the material world.
For Marx, human history cannot be fitted into any neat a priori schema. He explicitly rejects the idea of Hegel's followers that history can be understood as "a person apart, a metaphysical subject of which real human individuals are but the bearers". To interpret history as though previous social formations have somehow been aiming themselves toward the present state of affairs is "to misunderstand the historical movement by which the successive generations transformed the results acquired by the generations that preceded them". Marx's rejection of this sort of teleology was one reason for his enthusiastic (though not entirely uncritical) reception of Charles Darwin's theory of natural selection.
For Marx, dialectics is not a formula for generating predetermined outcomes but is a method for the empirical study of social processes in terms of interrelations, development, and transformation. In his introduction to the Penguin edition of Marx's Capital, Ernest Mandel writes, "When the dialectical method is applied to the study of economic problems, economic phenomena are not viewed separately from each other, by bits and pieces, but in their inner connection as an integrated totality, structured around, and by, a basic predominant mode of production."
Marx's own writings are almost exclusively concerned with understanding human history in terms of systemic processes, based on modes of production (broadly speaking, the ways in which societies are organized to employ their technological powers to interact with their material surroundings). This is called historical materialism. More narrowly, within the framework of this general theory of history, most of Marx's writing is devoted to an analysis of the specific structure and development of the capitalist economy.
For his part, Engels applies a "dialectical" approach to the natural world in general, arguing that contemporary science is increasingly recognizing the necessity of viewing natural processes in terms of interconnectedness, development, and transformation. Some scholars have doubted that Engels' "dialectics of nature" is a legitimate extension of Marx's approach to social processes. Other scholars have argued that despite Marx's insistence that humans are natural beings in an evolving, mutual relationship with the rest of nature, Marx's own writings pay inadequate attention to the ways in which human agency is constrained by such factors as biology, geography, and ecology.
Engels's dialectics
Engels postulated three laws of dialectics from his reading of Hegel's Science of Logic. Engels elucidated these laws as the materialist dialectic in his work Dialectics of Nature:
The law of the unity and conflict of opposites
The law of the passage of quantitative changes into qualitative changes
The law of the negation of the negation
The first law, the unity and conflict of opposites, originates with the ancient Ionian philosopher Heraclitus and was seen by both Hegel and Vladimir Lenin as the central feature of a dialectical understanding.
The second law Hegel took from Ancient Greek philosophers, notably the paradox of the heap and its explanation by Aristotle, and it is equated with what scientists call phase transitions. It may be traced to the ancient Ionian philosophers, particularly Anaximenes, from whom Aristotle, Hegel, and Engels inherited the concept. For all these authors, one of the main illustrations is the phase transitions of water. There has also been an effort to apply this mechanism to social phenomena, whereby population increases result in changes in social structure. The law of the passage of quantitative changes into qualitative changes can also be applied to the process of social change and class conflict.
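The "quantity into quality" mechanism described above, in which gradual quantitative change crosses a threshold and produces a discrete qualitative change, can be sketched numerically. The following toy illustration is our own, not from the source; it uses the familiar melting and boiling points of water at 1 atm as the thresholds:

```python
# Toy illustration of "quantity into quality": gradually increasing
# temperature (a quantitative change) produces discrete phase jumps
# (qualitative changes) at fixed thresholds, here the standard
# melting and boiling points of water at 1 atm.

def phase_of_water(temp_c: float) -> str:
    """Return the phase of water at the given temperature (deg C, 1 atm)."""
    if temp_c < 0:
        return "solid"
    if temp_c < 100:
        return "liquid"
    return "gas"

# Sweep temperature in small quantitative steps and record where
# the qualitative state changes.
transitions = []
prev = phase_of_water(-20)
for t in range(-20, 121):
    cur = phase_of_water(t)
    if cur != prev:
        transitions.append((t, prev, cur))
        prev = cur

print(transitions)  # → [(0, 'solid', 'liquid'), (100, 'liquid', 'gas')]
```

Sweeping the temperature one degree at a time, the only qualitative changes appear exactly at the 0 °C and 100 °C thresholds, mirroring the phase-transition example favored by Hegel and Engels.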
The third law, "negation of the negation", originated with Hegel. Although Hegel coined the term "negation of the negation", it gained its fame from Marx's using it in Capital. There Marx wrote this: "The [death] knell of capitalist private property sounds. The expropriators are expropriated. The capitalist mode of appropriation, the result of the capitalist mode of production, produces capitalist private property. This is the first negation of individual private property ... But capitalist production begets, with the inexorability of a law of Nature, its own negation. It [this new negation] is the negation of negation."
Z. A. Jordan notes, "Engels made constant use of the metaphysical insight that the higher level of existence emerges from and has its roots in the lower; that the higher level constitutes a new order of being with its irreducible laws; and that this process of evolutionary advance is governed by laws of development which reflect basic properties of 'matter in motion as a whole'."
Lenin's contributions
After reading Hegel's Science of Logic in 1914, Lenin made some brief notes outlining three "elements" of logic. They are:
The determination of the concept out of itself [the thing itself must be considered in its relations and in its development];
The contradictory nature of the thing itself (the other of itself), the contradictory forces and tendencies in each phenomenon;
The union of analysis and synthesis.
Lenin develops these in a further series of notes, and appears to argue that "the transition of quantity into quality and vice versa" is an example of the unity and opposition of opposites expressed tentatively as "not only the unity of opposites but the transitions of every determination, quality, feature, side, property into every other [into its opposite?]."
In his essay "On the Question of Dialectics", Lenin stated, "Development is the 'struggle' of opposites."
He stated, "The unity (coincidence, identity, equal action) of opposites is conditional, temporary, transitory, relative. The struggle of mutually exclusive opposites is absolute, just as development and motion are absolute."
In Materialism and Empiriocriticism (1908), Lenin explained dialectical materialism along three axes: (i) the materialist inversion of Hegelian dialectics, (ii) the historicity of ethical principles ordered to class struggle, and (iii) the convergence of "laws of evolution" in physics (Helmholtz), biology (Darwin), and political economy (Marx). Hence, Lenin was philosophically positioned between historicist Marxism (Labriola) and determinist Marxism, a political position close to "social Darwinism" (Kautsky). Moreover, late-century discoveries in physics (x-rays, electrons) and the beginning of quantum mechanics philosophically challenged previous conceptions of matter and materialism; matter itself seemed to be disappearing. Lenin disagreed:
'Matter disappears' means that the limit within which we have hitherto known matter disappears, and that our knowledge is penetrating deeper; properties of matter are disappearing that formerly seemed absolute, immutable, and primary, and which are now revealed to be relative and characteristic only of certain states of matter. For the sole 'property' of matter, with whose recognition philosophical materialism is bound up, is the property of being an objective reality, of existing outside of the mind.
Lenin was developing the work of Engels, who said that "with each epoch-making discovery, even in the sphere of natural science, materialism has to change its form". One of Lenin's challenges was distancing materialism, as a viable philosophical outlook, from the "vulgar materialism" expressed in the statement "the brain secretes thought in the same way as the liver secretes bile" (attributed to 18th-century physician Pierre Jean Georges Cabanis); "metaphysical materialism" (matter composed of immutable particles); and 19th-century "mechanical materialism" (matter as random molecules interacting per the laws of mechanics). The philosophic solution that Lenin (and Engels) proposed was "dialectical materialism", wherein matter is defined as objective reality, theoretically consistent with (new) developments occurring in the sciences.
Lenin reassessed Feuerbach's philosophy and concluded that it was in line with dialectical materialism.
Trotsky's contributions
In 1926, Trotsky said in a speech:
In his book In Defence of Marxism, Leon Trotsky defended the dialectical method of scientific socialism during the factional schisms within the American Trotskyist movement in the period 1939–40. Trotsky viewed dialectics as an essential method of analysis to discern class nature of the Soviet Union. Specifically, he described scientific socialism as "the conscious expression of the unconscious historical process; namely, the instinctive and elemental drive of the proletariat to reconstruct society on communist beginnings".
Lukács's contributions
György Lukács, Minister of Culture in the brief Béla Kun government of the Hungarian Soviet Republic (1919), published History and Class Consciousness (1923), in which he defined dialectical materialism as the knowledge of society as a whole, knowledge which, in itself, was the class consciousness of the proletariat. In the first chapter "What is Orthodox Marxism?", Lukács defined orthodoxy as fidelity to the "Marxist method", not fidelity to "dogmas":
Orthodox Marxism, therefore, does not imply the uncritical acceptance of the results of Marx's investigations. It is not the "belief" in this or that thesis, nor the exegesis of a "sacred" book. On the contrary, orthodoxy refers exclusively to method. It is the scientific conviction that dialectical materialism is the road to truth and that its methods can be developed, expanded, and deepened, only along the lines laid down by its founders. (§1)
In his later works and actions, Lukács became a leader of Democratic Marxism. He modified many of the formulations of his 1923 work, went on to develop a Marxist ontology, and played an active role in democratic movements in Hungary in 1956 and the 1960s. He and his associates became sharply critical of the formulation of dialectical materialism in the Soviet Union that was exported to those countries under its control. In the 1960s, his associates became known as the Budapest School.
Lukács, in his philosophical criticism of Marxist revisionism, proposed an intellectual return to the Marxist method. So did Louis Althusser, who later defined Marxism and psychoanalysis as "conflictual sciences", stating that political factions and revisionism are inherent to Marxist theory and political praxis, because dialectical materialism is the philosophic product of class struggle:
For this reason, the task of orthodox Marxism, its victory over Revisionism and utopianism can never mean the defeat, once and for all, of false tendencies. It is an ever-renewed struggle against the insidious effects of bourgeois ideology on the thought of the proletariat. Marxist orthodoxy is no guardian of traditions, it is the eternally vigilant prophet proclaiming the relation between the tasks of the immediate present and the totality of the historical process. (§5)
...the premise of dialectical materialism is, we recall: 'It is not men's consciousness that determines their existence, but, on the contrary, their social existence that determines their consciousness'.... Only when the core of existence stands revealed as a social process can existence be seen as the product, albeit the hitherto unconscious product, of human activity. (§5)
Philosophically aligned with Marx is the criticism of the individualist, bourgeois philosophy of the subject, which is founded upon the voluntary and conscious subject. Against said ideology is the primacy of social relations. Existence—and thus the world—is the product of human activity, but this can be seen only by accepting the primacy of social process on individual consciousness. This type of consciousness is an effect of ideological mystification.
At the 5th Congress of the Communist International (July 1924), Grigory Zinoviev formally denounced Lukács's heterodox definition of Orthodox Marxism as exclusively derived from fidelity to the "Marxist method", and not to Communist party dogmas; and denounced the philosophical developments of the German Marxist theorist Karl Korsch.
Stalin's contributions
In the 1930s, Stalin and his associates formulated a version of dialectical and historical materialism that became the "official" Soviet interpretation of Marxism. It was codified in Stalin's work, Dialectical and Historical Materialism (1938), and popularized in textbooks used for compulsory education within the Soviet Union and throughout the Eastern Bloc.
Mao's contributions
In On Contradiction (1937), Mao Zedong outlined a version of dialectical materialism that subsumed two of Engels's three principal laws of dialectics, "the transformation of quantity into quality" and "the negation of the negation" as sub-laws (and not principal laws of their own) of the first law, "the unity and interpenetration of opposites".
Ho Chi Minh's contributions
In his 1947 article New Life, Ho Chi Minh described the dialectical relationship between the old and the new in building society, stating:
As a heuristic in science and elsewhere
Historian of science Loren Graham has detailed at length the role played by dialectical materialism in the Soviet Union in disciplines throughout the natural and social sciences. He has concluded that, despite the Lysenko period in genetics and constraints on free inquiry imposed by political authorities, dialectical materialism had a positive influence on the work of many Soviet scientists.
Some evolutionary biologists, such as Richard Lewontin and Stephen Jay Gould, have tried to employ dialectical materialism in their approach. They view dialectics as playing a precautionary heuristic role in their work. Lewontin's perspective offers the following idea:
Dialectical materialism is not, and never has been, a programmatic method for solving particular physical problems. Rather, a dialectical analysis provides an overview and a set of warning signs against particular forms of dogmatism and narrowness of thought. It tells us, "Remember that history may leave an important trace. Remember that being and becoming are dual aspects of nature. Remember that conditions change and that the conditions necessary to the initiation of some process may be destroyed by the process itself. Remember to pay attention to real objects in time and space and not lose them in utterly idealized abstractions. Remember that the qualitative effects of context and interaction may be lost when phenomena are isolated". And above all else, "Remember that all the other caveats are only reminders and warning signs whose application to different circumstances of the real world is contingent."
Gould shared similar views regarding a heuristic role for dialectical materialism. He wrote that:
...dialectical thinking should be taken more seriously by Western scholars, not discarded because some nations of the second world have constructed a cardboard version as an official political doctrine.
...when presented as guidelines for a philosophy of change, not as dogmatic precepts true by fiat, the three classical laws of dialectics embody a holistic vision that views change as interaction among components of complete systems and sees the components themselves not as a priori entities, but as both products and inputs to the system. Thus, the law of "interpenetrating opposites" records the inextricable interdependence of components: the "transformation of quantity to quality" defends a systems-based view of change that translates incremental inputs into alterations of state, and the "negation of negation" describes the direction given to history because complex systems cannot revert exactly to previous states.
This heuristic was also applied to the theory of punctuated equilibrium proposed by Gould and Niles Eldredge. They wrote that "history, as Hegel said, moves upward in a spiral of negations", and that "punctuated equilibria is a model for discontinuous tempos of change (in) the process of speciation and the deployment of species in geological time." They noted that "the law of transformation of quantity into quality... holds that a new quality emerges in a leap as the slow accumulation of quantitative changes, long resisted by a stable system, finally forces it rapidly from one state into another", a phenomenon described in some disciplines as a paradigm shift. Apart from the commonly cited example of water turning to steam with increased temperature, Gould and Eldredge noted another analogy in information theory, "with its jargon of equilibrium, steady state, and homeostasis maintained by negative feedback", and "extremely rapid transitions that occur with positive feedback".
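The tempo pattern Gould and Eldredge describe, slow accumulation long resisted by a stable system followed by a rapid jump between states, has a standard mathematical analogue in a fold (saddle-node) bifurcation. The sketch below is our own illustration, not from their work: it integrates the bistable system x' = x - x³ + p while the forcing p creeps upward, and the state drifts only slightly until the lower equilibrium vanishes near p ≈ 0.385, then jumps rapidly to the upper branch:

```python
# Bistable system x' = x - x**3 + p integrated with forward Euler while
# the forcing p increases very slowly. The lower stable equilibrium
# disappears in a fold bifurcation at p = 2/(3*sqrt(3)) ~ 0.385, after
# which the state jumps rapidly to the upper branch: long stasis, then
# a sudden punctuation.

def simulate(p_rate=1e-4, dt=0.01, steps=600_000):
    x, p = -1.0, 0.0              # start on the lower stable branch
    samples = []
    for i in range(steps):
        x += (x - x**3 + p) * dt  # fast state dynamics
        p += p_rate * dt          # slow quantitative drift
        if i % 10_000 == 0:
            samples.append((round(p, 3), round(x, 3)))
    return samples

traj = simulate()
print(traj[0], traj[-1])  # starts near x = -1, ends on the upper branch above x = 1
```

For most of the sweep the state tracks its equilibrium almost imperceptibly; the qualitative reorganization is confined to a short interval just past the bifurcation, which is the pattern of long stasis and brief, rapid change that punctuated equilibrium posits for speciation.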
Lewontin, Gould, and Eldredge were thus more interested in dialectical materialism as a heuristic than a dogmatic form of 'truth' or a statement of their politics. Nevertheless, they found a readiness for critics to "seize upon" key statements and portray punctuated equilibrium, and exercises associated with it, such as public exhibitions, as a "Marxist plot".
The Communist Party's official interpretation of Marxism, dialectical materialism, fitted Alexander Oparin's studies on the origins of life as "a flow, an exchange, a dialectical unity". This notion was reinforced by Oparin's association with Lysenko.
In 1972, China's Cultural Revolution slowed down and scientific research restarted. Astrophysicist and cosmologist Fang Lizhi found an opportunity to read some recent astrophysics papers in western journals, and soon wrote his first paper on cosmology, "A Cosmological Solution in Scalar-tensor Theory with Mass and Blackbody Radiation", which was published in the journal Wu Li (Physics), Vol. 1, 163 (1972). This was the first modern cosmological research paper in mainland China. Fang assembled a group of young faculty members of the University of Science and Technology of China (USTC) around him to conduct astrophysics research.
According to dialectical materialist philosophy, both time and space must be infinite, while the Big Bang theory allows for the possibility that space and time are finite.
Dialectical materialism influenced Zhu Zhixian's studies in child psychology.
Criticism
Philosopher Allen Wood argued that, in its form as an official Soviet philosophy, dialectical materialism was doomed to be superficial because "creativity or critical thinking" was impossible in an authoritarian environment. Nevertheless, he considered the basic aims and principles of dialectical materialism to be in harmony with rational scientific thought.
Economist and philosopher Ludwig von Mises wrote a critique of Marxist materialism which he published as a part of his 1957 work Theory and History: An Interpretation of Social and Economic Evolution. H. B. Acton described Marxism as "a philosophical farrago". Max Eastman argued that dialectical materialism lacks a psychological basis.
Leszek Kołakowski criticized the laws of dialectics in Main Currents of Marxism, arguing that they consist partly of truisms with no specific Marxist content, partly of philosophical dogmas, partly of nonsense, and partly of statements that could be any of these things depending on how they are interpreted.
Of the term
Joseph Needham, an influential historian of science and a Christian who nonetheless was an adherent of dialectical materialism, suggested that a more appropriate term might be "dialectical organicism".
Marxist rejection
The anti-communist and former Marxist humanist Leszek Kołakowski argued that dialectical materialism was not truly Marxist.
See also
Books
Fundamentals of Marxism–Leninism
Dialectical Materialism and Historical Materialism
Concepts
Classical Marxism
Critique of political economy
Dialectical monism
Marxist philosophy of nature
Methodological naturalism
Orthodox Marxism
Parametric determinism
Philosophical realism
Philosophy in the Soviet Union
People
Alexander Spirkin
Fidel Castro
Ludovico Geymonat
Maurice Cornforth
Shulamith Firestone
Teodor Oizerman
References
Further reading
First published in 1971 as "Главные философские направления" (The Main Philosophical Trends) – The author traces the struggle between materialism and idealism on the basis of the dialectical-materialist conception of the history of philosophy. In 1979, the book was awarded the Plekhanov Prize by decision of the USSR Academy of Sciences.
Dialectic
Ideology of the Communist Party of the Soviet Union
Marxist theory
Materialism | Dialectical materialism | [
"Physics"
] | 5,733 | [
"Materialism",
"Matter"
] |
44,380,573 | https://en.wikipedia.org/wiki/Muscle%20tissue%20engineering | Muscle tissue engineering is a subset of the general field of tissue engineering, which studies the combined use of cells and scaffolds to design therapeutic tissue implants. Within the clinical setting, muscle tissue engineering involves the culturing of cells from the patient's own body or from a donor, development of muscle tissue with or without the use of scaffolds, then the insertion of functional muscle tissue into the patient's body. Ideally, this implantation results in full regeneration of function and aesthetic within the patient's body. Outside the clinical setting, muscle tissue engineering is involved in drug screening, hybrid mechanical muscle actuators, robotic devices, and the development of engineered meat as a new food source.
Innovations within the field of muscle tissue engineering seek to repair and replace defective muscle tissue, thus restoring normal function. The practice begins by harvesting and isolating muscle cells from a donor site, then culturing those cells in media. The cultured cells form cell sheets and finally muscle bundles, which are implanted into the patient.
Overview
Muscle is a naturally aligned organ, with individual muscle fibers packed together into larger units called muscle fascicles. The uniaxial alignment of muscle fibers allows them to simultaneously contract in the same direction and properly propagate force on the bones via the tendons. Approximately 45% of the human body is composed of muscle tissue, and this tissue can be classified into three different groups: skeletal muscle, cardiac muscle, and smooth muscle. Muscle plays a role in structure, stability, and movement in mammalian bodies. The basic unit for a muscle is a muscle fiber, which is made up of myofilaments actin and myosin. This muscle fiber contains sarcomeres which generate the force required for contraction.
A major focus of muscle tissue engineering is to create constructs with the functionality of native muscle and ability to contract. To this end, alignment of the tissue engineered construct is extremely important. It has been shown that cells grown on substrates with alignment cues form more robust muscle fibers. Several other design criteria considered in muscle tissue engineering include the scaffold porosity, stiffness, biocompatibility, and degradation timeline. Substrate stiffness should ideally be in the myogenic range, which has been shown to be 10-15 kPa.
The purpose of muscle tissue engineering is to reconstruct functional muscular tissue which has been lost via traumatic injury, tumor ablation, or functional damage caused by myopathies. Until now, the only method used to restore muscular tissue function and aesthetics has been free tissue transfer. Full function is typically not restored, however, and the transfer results in donor site morbidity and volume deficiency. The success of tissue engineering as it pertains to the regeneration of skin, cartilage, and bone indicates that the same success may be found in engineering muscular tissue. Early innovations in the field yielded in vitro cell culturing and regeneration of muscle tissue which would be implanted in the body, but advances in recent years have shown that there may be potential for in vivo muscle tissue engineering using scaffolding.
Etymology
The term muscle tissue engineering, while it is a subset of the much larger discipline, tissue engineering, was first coined in 1988 when Herman Vandenburgh, a surgeon, cultured avian myotubes in collagen-coated culture plates. This started a new era of in vitro tissue engineering. The term was officially adopted in 1988 in Vandenburgh's publication titled Maintenance of Highly Contractile Tissue-Cultured Avian Skeletal Myotubes in Collagen Gel. In 1989, the same group determined that mechanical stimulation of myoblasts in vitro facilitates engineered skeletal muscle growth.
History
19th Century
A rudimentary understanding of muscle tissue began to develop as early as 1835, when embryonic myogenesis was first described. In the 1860s, it was shown that muscle is capable of regeneration and an experimental regeneration was conducted to better understand the specific method by which this was done in vivo. Following this discovery, muscle generation and degeneration in man were described for the first time. Researchers consequently assessed several aspects of muscle regeneration in vivo, including "the continuous or discontinuous regeneration depending on tissue type" to increase functional understanding of the phenomena. It was not until the 1960s, however, that researchers determined what components were required for muscle regeneration.
20th Century
In 1957, it was determined via DNA content that myoblasts proliferate, but myonuclei do not. Following this discovery, satellite cells were experimentally identified by Mauro and Katz as stem cells that sit on the surface of the myofibre and are capable of differentiating into muscle cells. Satellite cells provide myoblasts for growth, differentiation, and repair of muscle tissue. Muscle tissue engineering officially began as a discipline in 1988 when Herman Vandenburgh cultured avian myotubes in collagen-coated culture plates. Following this development, it was found in 1989 that mechanical stimulation of myoblasts in vitro facilitates engineered skeletal muscle growth. Most of the modern innovations in the field of muscle tissue engineering are found in the 21st century.
21st Century
Between 2000 and 2010, the effects of volumetric muscle loss (VML) were assessed as it pertains to muscle tissue engineering. VML can be caused by a variety of injuries or diseases, including general trauma, postoperative damage, cancer ablation, congenital defects, and degenerative myopathy. Although muscle contains a stem cell population called satellite cells that are capable of regenerating small muscle injuries, muscle damage in VML is so extensive that it overwhelms muscle's natural regenerative capabilities. Currently VML is treated through an autologous muscle flap or graft but there are various problems associated with this procedure. Donor site morbidity, lack of donor tissue, and inadequate vascularization all limit the ability of doctors to adequately treat VML. The field of muscle tissue engineering attempts to address this problem through the design of a functional muscle construct that can be used to treat the damaged muscle instead of harvesting an autologous muscle flap from elsewhere on the patient's body.
Research conducted between 2000 and 2010 informed the conclusion that functional analysis of a tissue engineered muscle construct is important to illustrate its potential to help regenerate muscle. A variety of assays are generally used to evaluate a tissue engineered muscle construct including immunohistochemistry, RT-PCR, electrical stimulation and resulting peak-to-peak voltage, scanning electron microscope imaging, and in vivo response.
The most recent advances in the field include cultured meat, biorobotic systems, and biohybrid implants in regenerative medicine or disease modeling.
Examples
The majority of current advancements in muscle tissue engineering reside in the skeletal muscle category, so most of the following examples concern skeletal muscle engineering and regeneration. A few examples of smooth muscle and cardiac muscle tissue engineering are also reviewed in this section.
Skeletal Muscle Tissue Engineering (SMTE)
Avian myotubes: highly contractile skeletal myotubes cultured and differentiated in vitro on collagen-coated culture plates
Cultured Meat (CM): cultured, cell based, lab grown, in vitro, clean meat obtained through cellular agriculture
Human Bio-Artificial Muscle (BAM): formed through a seven day, in vitro tissue engineering procedure in which human myoblasts fuse and differentiate into aligned myofibres in an extracellular matrix; these constructs are used for intramuscular drug injection to replace pre- or non-clinical injection models and complement animal studies
Myoblast transfer in the treatment of Duchenne's Muscular Dystrophy (DMD): an in vivo technique to replace dystrophin, a skeletal muscle protein which is deficient in patients with DMD; myoblasts fuse with muscle fibers and contribute their nuclei which then replace deficient gene products in the host nuclei
Autologous hematopoietic stem cell transplantation (AHSCT) as a method for treating Multiple Sclerosis (MS): an in vivo technique for treating MS in which the immune system is destroyed and then reconstituted with hematopoietic stem cells; it has been shown to reduce the effects of MS for 4–5 years in 70–80% of patients
Volumetric muscle loss repair using Muscle Derived Stem Cells (MDSCs): an in situ technique for muscle loss repair in which patients have suffered from trauma or combat injuries; MDSCs cast in an in situ fibrin gel were capable of forming new myofibres that became engrafted in a muscle defect that was created by a partial-thickness wedge resection in the tibialis anterior muscle of laboratory mice
Development of skeletal muscle organoids to model neuromuscular disorders and muscular dystrophies: an in vitro technique in which human pluripotent stem cells (hPSCs) are differentiated into functional 3D human skeletal muscle organoids (hSkMOs); hPSCs were guided towards the paraxial mesodermal lineage, which then gives rise to myogenic progenitor cells and myoblasts, in well plates with no scaffold; organoids were round, uniformly sized, and exhibited homogeneous morphology upon full development, and were shown to successfully model muscle development and regeneration
Bioprinted Tibialis Anterior (TA) Muscle in Rats: an in vitro technique in which bioengineered skeletal muscle tissue composed of human primary muscle progenitor cells (hMPCs) was fabricated; upon implantation, the bioprinted material reached 82% functional recovery in rodent models of the TA muscle
Smooth Muscle Tissue Engineering
Autologous MDSC Injections to Treat Urinary Incontinence: an in vivo injection technique for pure stress incontinence in female subjects in which defective muscle cells were replaced with stem cells that would differentiate to become functioning smooth muscle cells in the urinary sphincter
Vascular Smooth Muscle regeneration using induced pluripotent stem cells (iPSCs): an in vitro technique in which iPSCs were differentiated into proliferative smooth muscle cells using a nanofibrous scaffold.
Formation of coiled three-dimensional (3D) cellular constructs containing smooth muscle-like cells differentiated from dedifferentiated fat (DFAT) cells: an in vitro technique for controlling the 3D organization of smooth muscle cells in which DFAT cells are suspended in a mixture of extracellular proteins with optimized stiffness so that they differentiate into smooth muscle-like cells with specific 3D orientation; a muscle tissue engineered construct for a smooth muscle cell precursor
Cardiac Muscle Tissue Engineering
Intracoronary Administration of Bone Marrow-Derived Progenitor Cells: an in vivo technique in which progenitor cells derived from bone marrow are administered into an infarct artery to differentiate into functional cardiac cells and recover contractile function after an acute, ST-elevation myocardial infarction, thus preventing adverse remodeling of the left ventricle.
Human Cardiac Organoids: an in vitro, scaffold-free technique for producing a functioning cardiac organoid; cardiac spheroids made from a mixed cell population derived from human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs), cultured on gelatin-coated well plates without a scaffold, resulted in the generation of a functioning cardiac organoid
Methods
Muscle tissue engineering methods are consistently categorized across the literature into three groups: in situ, in vivo, and in vitro muscle tissue engineering. Each of these categories is assessed below, with the specific practices used in each detailed.
In Situ
“In situ” is a Latin phrase whose literal translation is “on site.” It has been used in the English language since the mid-eighteenth century to describe something that is in its original place or position. In the context of muscle tissue engineering, in situ tissue engineering involves the introduction and implantation of an acellular scaffold into the site of injury or degenerated tissue. The goal of in situ muscle tissue engineering is to encourage host cell recruitment, natural scaffold formation, and proliferation and differentiation of host cells. The main idea on which in situ muscle tissue engineering is based is the self-healing, regenerative capacity of the mammalian body. The primary method for in situ muscle tissue engineering is described in the following section:
As described in Biomaterials for In Situ Tissue Regeneration: A Review (Abdulghani & Mitchell, 2019), in situ muscle tissue engineering requires very specific biomaterials which have the capability to recruit stem cells or progenitor cells to the site of the muscle defect, thus allowing regeneration of tissue without implantation of seed cells. The key to a successful scaffold is the appropriate properties (i.e. biocompatibility, mechanical strength, elasticity, biodegradability) and the correct shape and volume for the specific muscle defect in which they are implanted. This scaffold should effectively mimic the cellular response of the host tissue, and Mann et al. have found that Polyethylene glycol-based hydrogels are very successful as in situ biomaterial scaffolds because they are chemically modified to be degraded by biological enzymes, thus encouraging cell migration and proliferation. Beyond Polyethylene glycol-based hydrogels, synthetic biomaterials such as PLA and PCL are successful in situ scaffolds as they can be fully customized to each specific patient. These materials' stiffness, degradation, and porosity properties are tailored to the degenerated tissue's topology, volume, and cell type so as to provide the optimal environment for host cell migration and proliferation.
In situ engineering promotes natural regeneration of damaged tissue by effectively mimicking the mammalian body's own wound healing response. The use of both biological and synthetic biomaterials as scaffolds promotes host cell migration and proliferation directly to the defect site, thus decreasing the amount of time required for muscle tissue regeneration. Furthermore, in situ engineering effectively bypasses the risk of implant rejection by the immune system due to the biodegradable qualities in each scaffold.
In Vivo
"In vivo" is a Latin phrase whose literal translation is "in a living thing." This term is used in the English language to describe a process which occurs inside a living organism. In the realm of muscle tissue engineering, the term applies to the seeding of cells into a biomaterial scaffold immediately prior to implantation. The goal of in vivo muscle tissue engineering is to create a cell-seeded scaffold that, once implanted into the wound site, will preserve cell efficacy. In vivo methods provide greater control over cell phenotype, mechanical properties, and functionality of the tissue construct.
As described in Skeletal Muscle Tissue Engineering: Biomaterials-Based Strategies for the Treatment of Volumetric Muscle Loss (Carnes & Pins, 2020), in vivo muscle tissue engineering builds on the concept of in situ engineering by not only implanting a biomaterial scaffold with specific mechanical and chemical properties, but also seeding the scaffold with the specific cell type needed for regeneration of the tissue. Reid et al. describe common scaffolds utilized in the in vivo muscle tissue engineering process. These scaffolds include hydrogels infused with hyaluronic acid (HA), gelatin silk fibroin, and chitosan as these materials promote muscle cell migration and proliferation. For example, a biodegradable and renewable material derived from chitin known as chitosan, has unique mechanical properties which support smooth muscle cell differentiation and retention in the tissue regeneration site. When this scaffold is further functionalized with Arginine-Glycine-Aspartic Acid (RGD), it provides a better growth environment for smooth muscle cells. Another scaffold commonly used is decellularized extracellular matrix (ECM) tissue as it is fully biocompatible, biodegradable, and contains all of the necessary protein binding sites for full functional recovery and integration of muscle tissue. Once seeded with cells, this material becomes an optimal environment for cell proliferation and integration with existing tissue as it effectively mimics the environment in which tissue naturally regenerates in the mammalian body.
The in vivo muscle tissue engineering technique provides the wound healing process with a "head start" in development, as the body no longer needs to recruit host cells to begin regeneration. This approach also bypasses the need for cell manipulation prior to implantation, thus ensuring that they maintain all of their mechanical and functional properties.
In Vitro
"In vitro" is a Latin phrase whose literal translation is "within the glass." This term is used in the English language to describe a process which occurs outside of a living organism. Within the context of muscle tissue engineering, the term "in vitro" applies to the seeding of cells into a biomaterial scaffold with growth factors and nutrients, then culturing these constructs until a functional construct, such as myofibres, is developed. These developed constructs are then implanted into the wound site with the expectation that they will continue to proliferate and integrate into host muscle tissue. The goal of in vitro muscle tissue engineering is to increase the functionality of the tissue before it is ever implanted into the body, thus improving its mechanical properties and potential to thrive in the host body.
Abdulghani & Mitchell describe in vitro muscle tissue engineering as a concept which utilizes the same basic strategies as in vivo tissue engineering. The difference between the two methods, however, is the development of a fully functional tissue engineered muscle graft (TEMG) that occurs in the in vitro technique. In vitro muscle tissue engineering includes the seeding of cells onto a biomaterial scaffold, but goes a step further by adding growth factors and biochemical and biophysical cues to promote cell growth, proliferation, differentiation, and finally regeneration into a functional muscle tissue construct. Typically, in vitro scaffolds contain specific surface features which guide the direction of cell proliferation. They are usually fibrous with aligned pores, as these features encourage cell adhesion during regeneration. Beyond the types of scaffolds used, a largely important aspect of this technique is the electrical and mechanical stimulation, which mimics the natural regeneration environment and encourages the expansion of intracellular communication pathways. Before TEMGs are introduced into the wound defect, they must be vascularized to promote proper integration with the host tissue. To achieve vascularization, researchers typically seed a scaffold with multiple cell types in order to develop both muscle tissue and vascular pathways. This process prevents rejection of the TEMG upon implantation, as it is able to thrive in the host tissue environment. There is always a risk of immune rejection when implanting fully developed tissue, though, so this method of tissue regeneration is the most closely monitored post-implantation.
The in vitro muscle tissue engineering technique is used to create muscle tissue with more successful functional and mechanical properties. According to Carnes & Pins in Skeletal Muscle Tissue Engineering: Biomaterials-Based Strategies for the Treatment of Volumetric Muscle Loss, this approach develops a microenvironment that is more conducive to enhancing tissue regeneration upon implantation, thus restoring full functionality to patients.
Future Work
Current muscle tissue engineering trends lead towards the development of skeletal muscle regeneration techniques over smooth muscle or cardiac muscle regeneration. A current trend found throughout literature is the treatment of Volumetric Muscle Loss (VML) using muscle tissue engineering techniques. VML is the result of abrupt loss of skeletal muscle due to surgical resection, trauma, or combat injuries. It has been observed that tissue grafts, the current treatment plan, do not restore full functionality or aesthetic integrity to the site of injury. Muscle tissue engineering offers an optimistic possibility for patients, as in situ, in vivo, and in vitro techniques have been proven to restore functionality to muscle tissue in the wound site. Methods being explored include acellular scaffold implantation, cell-seeded scaffold implantation, and in vitro fabrication of muscle grafts. Preliminary data from each of these methods promises a solution for patients suffering from VML.
Beyond specific technological advances in the field of muscle tissue engineering, researchers are working to establish a connection with the larger umbrella that is tissue engineering.
References
Tissue engineering
Dinkus
https://en.wikipedia.org/wiki/Dinkus
In typography, a dinkus is a typographic symbol which often consists of three spaced asterisks or bullets in a horizontal row, i.e. ∗∗∗ or •••. The symbol has a variety of uses, and it usually denotes an intentional omission or a logical "break" of varying degree in a written work. This latter use is similar to a subsection, and it indicates to the reader that the subsequent text should be re-contextualized. When used this way, the dinkus typically appears centrally aligned on a line of its own, with vertical spacing before and after the symbol. The dinkus has been in use in various forms since . Historically, the dinkus was often represented as an asterism, ⁂, though this use has fallen out of favor and is now nearly obsolete.
Etymology
The word was coined by an artist on the Australian periodical, The Bulletin, in the 1920s and is derived from the word dinky.
Usage
The dinkus is used for various purposes, but many of them are related to an intentional break in the flow of the text.
Subsection break
A dinkus can be used to accentuate a break between subsections of a single overarching section. When an author chooses to use a dinkus to divide a larger section, the intent is to maintain an overall sense of continuity within the overall chapter or section while changing elements of the setting or timeline. For instance, when the writer is introducing a flashback or other jarring scene change, a dinkus can help denote the change in setting within the overall theme of the chapter; in that case, it can be preferable to the initiation of a new chapter. This technique is used especially in literary fiction.
Intentionally omitted information
Many applications of the dinkus, including those that were common historically, have indicated intentional omission of information. In these cases, the dinkus is used to inform the reader that the information has been omitted. It can also be used to mean "untitled" or that the author or title was withheld. This is evident, for example, in some editions of Album for the Young by composer Robert Schumann (№ 21, 26, and 30).
A dinkus can also be used in any context as a simple means of abbreviation of any text. The dinkus is also used specifically in this capacity within the sphere of lawmaking, particularly for city ordinances. When used in legal text, the dinkus indicates an abbreviation within amendments to code while not implying the repeal of the omitted sections.
Ornamentation
Newspapers, magazines, and other works can use dinkuses as simple ornamentation of typography, for solely aesthetic reasons. When a dinkus is used primarily for aesthetic purposes, it often takes the form of a fleuron, e.g. ❧, or sometimes a dingbat. While fleurons, dingbats, and dinkuses are usually distinct, their uses can overlap.
Poetic symbolism
In some cases, the use of a dinkus has been employed in poetry in order to convey non-verbal meaning. This is exemplified in the poem Thresholes by Lara Mimosa Montes, in which the poet makes frequent use of a circular dinkus, ○ , as a form of "punctuation at the level of the full text, rather than the phrase or the sentence" throughout the course of the work.
Variations
Many variations of dinkuses are composed partially or entirely of asterisks, although other symbols can be used to achieve the same goals. Some examples include a series of dots, fleurons, asterisms, or small drawings. Esperanto Braille punctuation commonly uses a series of colons, , as a dinkus.
Other uses of the term "dinkus"
Among older Hungarian Americans and Polish Americans, dinkus is an archaic term for Easter Monday.
In Australian English, particularly in the news media, the word "dinkus" refers to a small photograph of the author of a news article. Outside of Australia, this is often referred to as a headshot.
References
Further reading
Daisy Alioto's analysis of the dinkus in The Paris Review: Ode to the Dinkus.
Typographical symbols
Punctuation
China Dark Matter Experiment
https://en.wikipedia.org/wiki/China%20Dark%20Matter%20Experiment
The China Dark Matter Experiment (CDEX) is a search for dark matter WIMP particles at the China Jinping Underground Laboratory (CJPL). CDEX was the first experiment to be hosted at CJPL, beginning construction of its shield in June 2010, the same month that laboratory construction was completed, and before CJPL's official opening on 12 December.
CDEX uses a p-type point-contact germanium detector surrounded by NaI(Tl) crystals, similar to the CoGeNT experiment. The CDEX-0 prototype was used to develop the current CDEX-1 detector, which has a detector mass of roughly 1 kg. Future plans include scaling to CDEX-10 and CDEX-1T.
CDEX-1 produced its first low-mass results in 2013 and published limits on WIMP masses of 6–20 GeV in 2014.
References
Experiments for dark matter search
Physics experiments
Transition modeling
https://en.wikipedia.org/wiki/Transition%20modeling
Transition modeling is the use of a model to predict the change from laminar to turbulent flow in a fluid and the respective effects on the overall solution. The complexity of the underlying physics, and the limited understanding of it, make the interaction between laminar and turbulent flow difficult to simulate and very case specific. Transition modeling does not have the wide range of established options that turbulence modeling offers in most computational fluid dynamics (CFD) applications, for the following reasons:
Transition involves a wide range of scales where the energy and momentum transfer are strongly influenced by inertial or non-linear effects that are unique to the simulation.
Transition also occurs by different means, such as natural and bypass, and modeling all possibilities is difficult.
Most CFD programs use Reynolds-averaged Navier–Stokes equations, in which averaging eliminates linear disturbances.
Common models
The following is a list of commonly employed transition models in modern engineering applications.
Stability theory approach
Intermittency Transport
Laminar Fluctuation Energy Method
Direct numerical simulation
Large Eddy Simulation
Gamma-Re Transition Model
References
Aerodynamics
Turbulence models
Gamma-Re Transition Model
https://en.wikipedia.org/wiki/Gamma-Re%20Transition%20Model
The Gamma-Re (γ-Re) transition model is a two-equation model used in computational fluid dynamics (CFD) to modify turbulent transport equations in order to simulate laminar, laminar-to-turbulent, and turbulent states in a fluid flow. The Gamma-Re model does not attempt to model the physics of the problem but instead attempts to fit a wide range of experiments and transition methods into its formulation. The transition model calculates an intermittency factor that creates (or extinguishes) turbulence by slowly introducing turbulent production at the laminar-to-turbulent transition location.
Principle
The goal of developing the gamma-Re transition model was to create a transition model based on local variables which could easily be implemented into modern CFD codes with unstructured grids and massively parallel execution. Most earlier transition models, such as the e^N method, need to know the structure of the boundary layer and require integration along it; both concepts are hard to implement in three dimensions across the many subdivisions of a grid. Another key insight in the formulation of this model is that the Reynolds vorticity number can be related to the Reynolds transition onset number, so there is a local way to determine the transition location. The gamma-Re transition model has two equations and builds on the two-equation turbulence models used in turbulence modeling; in this way both local and global trends can be modelled. The intermittency, or gamma, determines the fraction of time the flow is turbulent (0 = fully laminar, 1 = fully turbulent). The intermittency acts on the production term of the turbulent kinetic energy transport equation in the SST model to simulate laminar and turbulent flows.
Standard Gamma-Theta model
For intermittency
For Transition Momentum Thickness Reynolds Number
Modification to SST Turbulence Model
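The equations named in the three headings above are not reproduced in this article. A sketch of their standard form, following the γ-Reθ formulation published by Langtry and Menter (production, destruction, and diffusion terms written symbolically; model coefficients and correlation functions omitted):

```latex
% Transport equation for the intermittency gamma
\frac{\partial(\rho\gamma)}{\partial t}
  + \frac{\partial(\rho U_j \gamma)}{\partial x_j}
  = P_\gamma - E_\gamma
  + \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_\gamma}\right)
      \frac{\partial\gamma}{\partial x_j}\right]

% Transport equation for the local transition momentum-thickness
% Reynolds number
\frac{\partial(\rho\widetilde{Re}_{\theta t})}{\partial t}
  + \frac{\partial(\rho U_j \widetilde{Re}_{\theta t})}{\partial x_j}
  = P_{\theta t}
  + \frac{\partial}{\partial x_j}\!\left[\sigma_{\theta t}\,(\mu + \mu_t)\,
      \frac{\partial\widetilde{Re}_{\theta t}}{\partial x_j}\right]

% Modification to the SST model: the effective intermittency scales the
% production (and limits the destruction) of turbulent kinetic energy
\tilde{P}_k = \gamma_{\mathrm{eff}}\, P_k, \qquad
\tilde{D}_k = \min\bigl(\max(\gamma_{\mathrm{eff}},\, 0.1),\, 1.0\bigr)\, D_k
```

With γ_eff near zero the turbulence production in the SST k-equation is suppressed and the flow stays laminar; as γ_eff approaches one, full turbulent production is recovered.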
Applications
The model has been found appropriate for the prediction of expansion swirl flows.
Other models
Following are some other models which are usually employed:
e^N method
Low-Reynolds-number models
References
Computational fluid dynamics
Scientific models
Gliese 15 Ab
https://en.wikipedia.org/wiki/Gliese%2015%20Ab
Gliese 15 Ab (GJ 15 Ab), also called Groombridge 34 Ab or, rarely, GX Andromedae b, is an extrasolar planet approximately 11 light-years away in the constellation of Andromeda. It orbits the star Gliese 15 A, which lies at right ascension 00h 18m 22.89s and declination +44° 01′ 22.6″.
Discovery
It was discovered in August 2014, deduced from analysis of the radial velocities of the parent star by the Eta-Earth Survey using HIRES at Keck Observatory. It has a minimum mass of around 5.35 ± 0.75 Earth masses and is thought to be a super-Earth with a diameter greater than that of the Earth. However, research using the CARMENES spectrograph failed to detect the planet in 2017. The detection of the planet was recovered in 2018, with a revised minimum mass of 3.03 Earth masses.
Orbit
Gliese 15 Ab has a close inner orbit around Gliese 15 A, with a semi-major axis of only 0.0717 ± 0.0034 AU, giving an orbital period just a little longer than 11.4 days. The orbit appears to be relatively circular, with an orbital eccentricity of about 0.12. The planet orbits too close to Gliese 15 A to be located in the habitable zone and is unlikely to harbour life.
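The quoted period and semi-major axis are consistent with Kepler's third law. A quick check in Python, assuming a stellar mass of roughly 0.38 solar masses for the red dwarf Gliese 15 A (a value not stated in this article):

```python
import math

# Kepler's third law in solar units: P[yr]^2 = a[AU]^3 / M[solar masses]
a_au = 0.0717    # semi-major axis from the text above
m_star = 0.38    # assumed mass of Gliese 15 A (not given in this article)

period_days = math.sqrt(a_au**3 / m_star) * 365.25
print(f"orbital period ≈ {period_days:.1f} days")  # close to the quoted ~11.4 days
```

The result lands within a few percent of the published 11.4-day period, so the quoted orbital elements are mutually consistent under that assumed stellar mass.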
Notes
References
External links
Open Exoplanet Catalogue entry
Andromeda (constellation)
Exoplanets discovered in 2014
Terrestrial planets
Exoplanets detected by radial velocity
Hot Neptunes
Example-centric programming
https://en.wikipedia.org/wiki/Example-centric%20programming
Example-centric programming is an approach to software development that helps the user create software by locating and modifying small examples into a larger whole. The approach can be helped by tools that allow an integrated development environment (IDE) to show code examples or API documentation related to coding behaviors occurring in the IDE. Programmers often employ "borrow" tactics from online sources, leaving the IDE to troubleshoot.
The purpose of example-centric programming is to reduce the time spent by developers searching online. Ideally, in example-centric programming, the user interface integrates with help module examples for assistance without programmers leaving the IDE. The idea for this type of “instant documentation” is to reduce programming interruptions. The usage of this feature is not limited to experts, as some novices reap the benefits of an integrated knowledge base, without resorting to frequent web searches or browsing.
Background
The growth of the web has fundamentally changed the way software is built. The vast increase in information resources and the democratization of access and distribution are main factors in the development of example-centric programming for end-user development. Tutorials are available on the web within seconds, which broadens the range of people who write software: designers, scientists, or hobbyists. By 2012, an estimated 13 million people programmed as part of their job, yet only three million of those were professional programmers.
The prevalence of online code repositories, documentation, blogs, and forums enables programmers to build applications iteratively by searching for, modifying, and combining examples.
Using the web is integral to an opportunistic approach to programming when focusing on speed and ease of development over code robustness and maintainability. There is a widespread use of the web by programmers, novices and experts alike, to prototype, ideate, and discover.
To develop software quickly programmers often mash up various existing systems. As part of this process, programmers must often search for suitable components and learn new skills, thus they began using the web for this purpose.
When developing software, programmers spend about 19% of their programming time on the web. Individuals use the web to accomplish several different kinds of activities, and the intentions behind web use vary in form and time spent. Programmers spend the most time learning a new concept, the least time reminding themselves of details of a concept they already know, and, in between, they use the web to clarify their existing knowledge.
Example-centric programming tries to solve the issue of having to get out of the development environment to look for references and examples while programming. For instance, traditionally, to find API documentation and sample code, programmers will either visit the language reference website or go to search engines and make API specific queries. When trying to learn something new, programmers use web tutorials for just-in-time learning. Additionally, programmers deliberately choose not to remember complicated syntax and instead use the web as an external memory that can be accessed when needed.
Benefits
Some of the benefits of example-centric programming include:
Prevention of usage errors
Reduction of time searching for code examples
Reduction of time searching for API documentation
Clarification of existing knowledge and reminding of forgotten details
Emergent programming
Emergence can be defined as a process whereby larger entities, patterns, and regularities arise through interactions among smaller or simpler entities that themselves do not exhibit such properties. The extensive amount of code publicly available on the web can be used to find such patterns and regularities. By modeling how developers use programming languages in practice, algorithms for finding common idioms and detecting unlikely code can be created.
This process is limited to the amount of code that programmers are willing and able to share. Because people write more code than they share online there is a lot of duplicated effort. To fully use the power of the crowd, the effort required to publish code online should be reduced.
Examples
Blueprint
Blueprint is a plugin for Adobe Flash Builder that automatically augments queries with code context, presents a code-centric view of search results, embeds the search experience into the editor, and retains a link between copied code and its source. It is designed to help programmers with web searches and allow them to easily remember forgotten details and clarify existing knowledge.
It displays results from a varied set of web pages enabling users to browse and evaluate search results rapidly.
Blueprint is task-specific, meaning that it will specifically search for examples in the programming language.
Redprint
Redprint is a browser-based development environment for PHP that integrates API-specific "instant example" and "instant documentation" display interfaces. The prototype IDE was developed by Anant Bhardwaj, then at Stanford University, on the premise that task-specific example interfaces still leave programmers having to understand the example code that has been found; thus Redprint also includes an API-specific search interface, which searches for relevant API-specific examples and documentation.
Codex
Codex is a knowledge base that records common practices for Ruby. It uses crowdsourced data from developers and searches all code looking for patterns; that way, if someone is coding in a strange way, Codex lets them know that they may be doing something wrong.
Codex uses statistical linting to find poorly written code (code which is syntactically different from well-written code) and warn the user; pattern annotation to automatically discover common programming idioms and annotate them with metadata using crowdsourcing; and library generation to construct a utility package that encapsulates emergent software practice.
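Statistical linting of this kind can be illustrated with a small sketch (again, an illustration rather than Codex's actual implementation): build bigram counts over a reference corpus, with identifiers normalized so patterns generalize, and flag any bigram in new code that the corpus has never used.

```python
# Toy statistical lint: flag token bigrams in a snippet that never occur
# in a reference corpus of "well written" code.
import io
import keyword
import tokenize
from collections import Counter

def norm_tokens(source):
    """Token stream with identifiers collapsed to 'ID' so patterns generalize."""
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.OP:
            out.append(tok.string)
        elif tok.type == tokenize.NAME:
            out.append(tok.string if keyword.iskeyword(tok.string) else "ID")
    return out

def bigrams(source):
    toks = norm_tokens(source)
    return list(zip(toks, toks[1:]))

def lint(snippet, reference_counts):
    """Return bigrams in `snippet` that never occur in the reference corpus."""
    return [bg for bg in bigrams(snippet) if reference_counts[bg] == 0]

reference = Counter()
for src in ["if x is None: x = 0\n", "if y is None: return\n"]:
    reference.update(bigrams(src))

print(lint("if z == None: z = 1\n", reference))  # the '== None' idiom is flagged
```

Here the corpus always writes `is None`, so a snippet using `== None` produces bigrams the model has never seen and gets warned about, mirroring the "syntactically different from well written code" criterion.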
Codelets
A codelet is a block of example code accompanied by an interactive helper widget that assists the user in understanding and integrating the example.
Bing Code Search
Bing Code Search is an extension to Microsoft Visual Studio, developed by a team drawn from Visual Studio, Bing and Microsoft Research, that allows developers to search for code examples and documentation from Bing directly from IntelliSense.
Bing Code Search gathers its code samples from MSDN, StackOverflow, Dotnetperls and CSharp411.
Codota
Codota helps developers find typical Java code examples by analyzing millions of code snippets available on sites such as GitHub and StackOverflow. Codota ranks these examples by criteria such as commonality of the coding patterns, credibility of the origin and clarity of the code.
The Codota plugin for the IntelliJ IDEA and Android Studio IDEs allows developers to get code examples for using Java and Android APIs without having to leave their editor.
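Ranking by criteria such as commonality, credibility and clarity can be modelled as a weighted score. The sketch below is purely illustrative; the weights, score values and field names are invented, not Codota's.

```python
# Toy weighted ranking of example snippets by three invented quality scores.
def rank(examples, weights=(0.5, 0.3, 0.2)):
    criteria = ("commonality", "credibility", "clarity")
    def score(ex):
        return sum(w * ex[k] for w, k in zip(weights, criteria))
    return sorted(examples, key=score, reverse=True)

examples = [
    {"id": "snippet-a", "commonality": 0.9, "credibility": 0.6, "clarity": 0.5},
    {"id": "snippet-b", "commonality": 0.4, "credibility": 0.9, "clarity": 0.9},
]
print([ex["id"] for ex in rank(examples)])  # snippet-a wins on commonality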
UpCodeIn
UpCodeIn is a source code search engine that allows developers to find and reuse software components from the Internet. A unique feature of UpCodeIn compared to other source code search engines is its ability to search for code by syntax element; for example, it can find methods with a specific parameter type, annotation, or variable.
UpCodeIn understands the syntax of many programming languages, including Java, JavaScript, Python and C#.
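Search by syntax element, as opposed to plain text search, means querying a parsed representation of the code. A minimal sketch of the idea using Python's `ast` module (UpCodeIn's own engine and indexing are not described in this text): find every function definition that takes a parameter annotated with a given type.

```python
# Sketch of "search by syntax element": query the syntax tree, not the text.
import ast

def functions_with_param_type(source, type_name):
    """Return names of functions with a parameter annotated `type_name`."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            for arg in node.args.args:
                ann = arg.annotation
                if isinstance(ann, ast.Name) and ann.id == type_name:
                    hits.append(node.name)
                    break
    return hits

source = """
def load(path: str) -> bytes: ...
def scale(img: Image, factor: float) -> Image: ...
def save(img: Image, path: str) -> None: ...
"""
print(functions_with_param_type(source, "Image"))  # ['scale', 'save']
```

A text search for "Image" would also match comments and strings; the syntax-level query matches only genuine parameter annotations, which is the distinction the paragraph above describes.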
See also
Emergence
List of human–computer interaction topics
User experience
User experience design
Web usability
Crowdsourcing
References
External links
Joel Brandt Talk
Human–computer interaction
Computer programming
Software features
Software design | Example-centric programming | [
"Technology",
"Engineering"
] | 1,431 | [
"Computer programming",
"Software features",
"Software engineering",
"Software design",
"Human–machine interaction",
"Design",
"Computers",
"Human–computer interaction"
] |
44,383,793 | https://en.wikipedia.org/wiki/Nu%20Coronae%20Borealis | The Bayer designation ν Coronae Borealis (Nu Coronae Borealis) is an optical pair of stars in the constellation Corona Borealis:
ν1 Coronae Borealis
ν2 Coronae Borealis
As of 2011, the pair had an angular separation of along a position angle of 164°.
References
Coronae Borealis, Nu
Corona Borealis
Double stars | Nu Coronae Borealis | [
"Astronomy"
] | 73 | [
"Corona Borealis",
"Constellations"
] |
44,383,980 | https://en.wikipedia.org/wiki/Nu1%20Coronae%20Borealis | {{DISPLAYTITLE:Nu1 Coronae Borealis}}
Nu1 Coronae Borealis is a solitary, red-hued star located in the northern constellation of Corona Borealis. It is faintly visible to the naked eye, having an apparent visual magnitude of 5.20. Based upon an annual parallax shift of , it is located roughly 650 light years from the Sun. At its distance, the visual magnitude is diminished by an extinction of 0.1 due to interstellar dust. This object is drifting closer with a radial velocity of −13 km/s.
This is an evolved red giant star with a stellar classification of M2 III. It is a variable star of uncertain type, showing a change in brightness with an amplitude of 0.0114 magnitude and a frequency of 0.22675 cycles per day, or 4.41 days/cycle. It has about 81 times the Sun's radius and is radiating nearly 1,300 times the Sun's luminosity from its photosphere at an effective temperature of 3,828 K.
References
Corona Borealis, Nu1
Corona Borealis
Corona Borealis, Nu1
Durchmusterung objects
Coronae Borealis, 20
147749
080197
6107 | Nu1 Coronae Borealis | [
"Astronomy"
] | 255 | [
"Corona Borealis",
"Constellations"
] |
44,384,009 | https://en.wikipedia.org/wiki/Nu2%20Coronae%20Borealis | {{DISPLAYTITLE:Nu2 Coronae Borealis}}
Nu2 Coronae Borealis is a solitary, orange-hued star located in the northern constellation of Corona Borealis. It is faintly visible to the naked eye, having an apparent visual magnitude of +5.4. Based upon an annual parallax shift of 5.49 mas, it is located roughly 590 light years from the Sun. At that distance, the visual magnitude is diminished by an extinction of 0.1 due to interstellar dust.
This is an evolved red giant star with a stellar classification of K5 III. The measured angular diameter of Nu2 Coronae Borealis is . At its estimated distance, this yields a physical size of about 50 times the radius of the Sun. Nu2 Coronae Borealis is radiating 530 times the Sun's luminosity from its photosphere at an effective temperature of 3,940 K.
References
K-type giants
Corona Borealis
Corona Borealis, Mu
Durchmusterung objects
Coronae Borealis, 21
147767
080214
6108 | Nu2 Coronae Borealis | [
"Astronomy"
] | 225 | [
"Corona Borealis",
"Constellations"
] |
44,384,039 | https://en.wikipedia.org/wiki/Svetlana%20Gerasimenko | Svetlana Ivanovna Gerasimenko (; ; born 1945) is a Soviet and Tajikistani astronomer origin and discoverer of comet 67P/Churyumov–Gerasimenko.
Early life
Gerasimenko was born in the Ukrainian SSR in 1945. She is an ethnic Ukrainian; her father was Ukrainian and her mother Polish.
Discovery of comet 67P/Churyumov–Gerasimenko
On 11 September 1969, while working at the Alma-Ata Astrophysical Institute near Almaty, the then-capital of the Kazakh Soviet Socialist Republic, Soviet Union, Gerasimenko photographed the comet 32P/Comas Solà using a 50-cm Maksutov telescope.
After she returned to her home institute, Klim Ivanovych Churyumov of the Kyiv National University's Astronomical Observatory examined this photograph and found a cometary object near the edge of the plate, but assumed that it was Comas Solà. On 22 October, about a month after the photograph was taken, he discovered that the object could not be Comas Solà, because it was 2–3 degrees off the expected position. Further scrutiny produced a faint image of Comas Solà at its expected position on the plate, thus proving that the other object was a different comet. By looking through all the material collected, they found this new object on four more plates, dated 9 and 21 September.
Honors
Named after her
Periodic comet 67P/Churyumov–Gerasimenko
Minor planet 3945 Gerasimenko
See also
Timeline of women in science
References and notes
1945 births
Tajikistani people of Russian descent
Soviet cosmologists
Discoverers of comets
Women astronomers
Living people
20th-century Tajikistani scientists
Soviet astronomers
Taras Shevchenko National University of Kyiv alumni
People from Baryshivka
Tajikistani women scientists
21st-century Tajikistani scientists | Svetlana Gerasimenko | [
"Astronomy"
] | 378 | [
"Women astronomers",
"Astronomers"
] |
44,385,148 | https://en.wikipedia.org/wiki/UNIX%20Network%20Programming | Unix Network Programming is a book written by W. Richard Stevens. It was published in 1990 by Prentice Hall and covers many topics regarding UNIX networking and Computer network programming. The book focuses on the design and development of network software under UNIX. The book provides descriptions of how and why a given solution works and includes 15,000 lines of C code. The book's summary describes it as "for programmers seeking an in depth tutorial on sockets, transport level interface (TLI), interprocess communications (IPC) facilities under System V and BSD UNIX." The book has been translated into several languages, including Chinese, Italian, German, Japanese and others.
Later editions have expanded into two volumes, Volume 1: The Sockets Networking API and Volume 2: Interprocess Communications.
In the movie Wayne's World 2, the book is briefly referenced.
References
External links
Unix Network Programming, Vol. 1
Prentice Hall interview with Rich Stevens, author of Unix Programming, Volume 1: Networking APIs, Sockets and XTI, 2/e
UNIX Network Programming, Volume 1, Second Edition Aug 1, 1998, By David Bausum
Computer books
1990 books | UNIX Network Programming | [
"Technology"
] | 238 | [
"Works about computing",
"Computer books"
] |
44,385,352 | https://en.wikipedia.org/wiki/ISO/IEC%2020248 | ISO/IEC 20248 Automatic Identification and Data Capture Techniques – Data Structures – Digital Signature Meta Structure is an international standard specification under development by ISO/IEC JTC 1/SC 31/WG 2. This development is an extension of SANS 1368, which is the current published specification. ISO/IEC 20248 and SANS 1368 are equivalent standard specifications. SANS 1368 is a South African national standard developed by the South African Bureau of Standards.
ISO/IEC 20248 [and SANS 1368] specifies a method whereby data stored within a barcode and/or RFID tag is structured and digitally signed. The purpose of the standard is to provide an open and interoperable method, between services and data carriers, to verify data originality and data integrity in an offline use case. The ISO/IEC 20248 data structure is also called a "DigSig" which refers to a small, in bit count, digital signature.
ISO/IEC 20248 also provides an effective and interoperable method to exchange data messages in the Internet of Things [IoT] and machine to machine [M2M] services allowing intelligent agents in such services to authenticate data messages and detect data tampering.
Description
ISO/IEC 20248 can be viewed as an X.509 application specification similar to S/MIME. Classic digital signatures are typically too big (the digital signature size is typically more than 2k bits) to fit in barcodes and RFID tags while maintaining the desired read performance. ISO/IEC 20248 digital signatures, including the data, are typically smaller than 512 bits. X.509 digital certificates within a public key infrastructure (PKI) are used for key and data description distribution. This method ensures the open verifiable decoding of data stored in a barcode and/or RFID tag into a tagged data structure, for example JSON or XML.
ISO/IEC 20248 addresses the need to verify the integrity of physical documents and objects. The standard counters the verification costs of online services and device-to-server malware attacks by providing a method for multi-device and offline verification of the data structure. Example documents and objects are education and medical certificates, tax and share/stock certificates, licences, permits, contracts, tickets, cheques, border documents, birth/death/identity documents, vehicle registration plates, art, wine, gemstones and medicine.
A DigSig stored in a QR code or near field communications (NFC) RFID tag can easily be read and verified using a smartphone with an ISO/IEC 20248 compliant application. The application only needs to go online once to obtain the appropriate DigSig certificate, after which it can verify offline all DigSigs generated with that DigSig certificate.
A DigSig stored in a barcode can be copied without influencing the data verification. For example; a birth or school certificate containing a DigSig barcode can be copied. The copied document can also be verified to contain the correct information and the issuer of the information. A DigSig barcode provides a method to detect tampering with the data.
A DigSig stored in an RFID/NFC tag provides for the detection of copied and tampered data, therefore it can be used to detect the original document or object. The unique identifier of the RFID tag is used for this purpose.
The DigSig Envelope
ISO/IEC 20248 calls the digital signature meta structure a DigSig envelope. The DigSig envelope structure contains the DigSig certificate identifier, the digital signature and the timestamp. Fields can be contained in a DigSig envelope in three ways. Consider the envelope DigSig{a, b, c}, which contains field sets a, b and c.
a fields are signed and included in the DigSig envelope. All the information (both the signed field value and the field value itself are stored on the AIDC) is available to verify when the data structure is read from the AIDC (barcode and/or RFID).
b fields are signed but NOT included in the DigSig envelope - only the signed field value is stored on the AIDC. Therefore the value of a b field must be collected by the verifier before verification can be performed. This is useful to link a physical object with a barcode and/or RFID tag to be used as an anti-counterfeiting measure; for example the seal number of a bottle of wine may be a b field. The verifier needs to enter the seal number for a successful verification since it is not stored in the barcode on the bottle. When the seal is broken the seal number may also be destroyed and rendered unreadable; the verification can therefore not take place since it requires the seal number. A replacement seal must display the same seal number; using holograms and other techniques may make the generation of a new copied seal number unviable. Similarly the unique tag ID, also known as the TID in ISO/IEC 18000, can be used in this manner to prove that the data is stored on the correct tag. In this case the TID is a b field. The interrogator will read the DigSig envelope from the changeable tag memory and then read the non-changeable unique TID to allow for the verification. If the data was copied from one tag to another, then the verification process of the signed TID, as stored in the DigSig envelope, will reject the TID of the copied tag.
c fields are NOT signed but included in the DigSig envelope - only the field value is stored on the AIDC. A c field can therefore NOT be verified, but it can be extracted from the AIDC. This field value may be changed without affecting the integrity of the signed fields.
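The a/b/c field handling can be modelled in a few lines of code. The sketch below is illustrative only: an HMAC with a demo key stands in for the real asymmetric (X.509-backed) signature, and the envelope is plain JSON rather than the standard's compact binary encoding; the field names are invented.

```python
# Illustrative model of DigSig{a, b, c} field handling.
# a fields: signed and stored; b fields: signed but NOT stored (the verifier
# must supply them); c fields: stored but NOT signed.
import hashlib
import hmac
import json

KEY = b"demo-key"  # stand-in for the issuer's real key material

def sign(fields):
    payload = json.dumps(fields, sort_keys=True).encode()
    return hmac.new(KEY, payload, hashlib.sha256).hexdigest()

def make_envelope(a_fields, b_fields, c_fields):
    # a and b are both signed; only a and c are stored in the envelope.
    signature = sign({**a_fields, **b_fields})
    return {"a": a_fields, "c": c_fields, "sig": signature}

def verify(envelope, supplied_b_fields):
    # The verifier collects b values itself (e.g. a seal number or tag TID).
    expected = sign({**envelope["a"], **supplied_b_fields})
    return hmac.compare_digest(expected, envelope["sig"])

env = make_envelope(
    a_fields={"name": "Jane Doe", "licence": "A-123"},
    b_fields={"seal": "987654"},       # signed but not stored on the carrier
    c_fields={"note": "unsigned extra"},
)
print(verify(env, {"seal": "987654"}))  # True: correct external value
print(verify(env, {"seal": "000000"}))  # False: wrong seal / copied carrier
```

Note how verification fails when the supplied b value is wrong, which is exactly the mechanism the standard uses to bind a signature to a seal number or to a tag's non-changeable TID.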
The DigSig Data Path
Typically data stored in a DigSig originates as structured data; JSON or XML. The structured data field names map directly onto the DigSig Data Description [DDD]. This allows the DigSig Generator to digitally sign the data, store it in the DigSig envelope and compact the DigSig envelope to fit in the smallest bit size possible. The DigSig envelope is then programmed into an RFID tag or printed within a barcode symbology.
The DigSig Verifier reads the DigSig envelope from the barcode or RFID tag. It then identifies the relevant DigSig certificate, which it uses to extract the fields from the DigSig envelope and obtain the external fields. The Verifier then performs the verification and makes the fields available as structured data for example JSON or XML.
Examples
QR example
The following education certificate examples use the URI-RAW DigSig envelope format. The URI format allows a generic barcode reader to read the DigSig, after which it can be verified online using the URI of the trusted issuer of the DigSig. Often the ISO/IEC 20248 compliant smartphone application (App) will be available on this website for download, after which the DigSig can be verified offline. Note, a compliant App must be able to verify DigSigs from any trusted DigSig issuer.
The university certificate example illustrates the multi-language support of SANS 1368.
RFID and QR Example
In this example a vehicle registration plate is fitted with an ISO/IEC 18000-63 (Type 6C) RFID tag and printed with a QR barcode. The plate is offline verifiable both using a smartphone, when the vehicle is stopped, and using an RFID reader, when the vehicle drives past the reader.
Note the 3 DigSig Envelope formats; RAW, URI-RAW and URI-TEXT.
The DigSig stored in the RFID tag is typically in a RAW envelope format to reduce the size relative to the URI envelope formats. Barcodes will typically use the URI-RAW format to allow generic barcode readers to perform an online verification. The RAW format is the most compact, but it can only be verified with a SANS 1368 compliant application.
The DigSig stored in the RFID tag will also contain the TID (Unique Tag Identifier) within the signature part. A DigSig Verifier will therefore be able to detect data copied onto another tag.
QR with External data example
The following QR barcode is attached to a computer or smartphone to prove it belongs to a specific person. It uses a b type field, described above, to contain a secure personal identification number [PIN] remembered by the owner of the device. The DigSig Verifier will ask for the PIN to be entered, before the verification can take place. The verification will be negative if the PIN is incorrect. The PIN for the example is "123456".
The DigSig Data Description for the above DigSig is as follows:
{ "defManagementFields":
{ "mediasize":"50000",
"specificationversion":1,
"country":"ZAR",
"DAURI":"https://www.idoctrust.com/",
"verificationURI":"http://sbox.idoctrust.com/verify/",
"revocationURI":"https://sbox.idoctrust.com/chkrevocation/",
"optionalManagementFields":{}}},
"defDigSigFields":
[{ "fieldid":"cid",
"type":"unsignedInt",
"benvelope":false},
{ "fieldid":"signature",
"type":"bstring",
"binaryformat":"{160}",
"bsign":false},
{ "fieldid":"timestamp",
"type":"date",
"binaryformat":"Tepoch"},
{ "fieldid":"name",
"fieldname":{"eng":"Name"},
"type":"string",
"range":"[a-zA-Z ]",
"nullable":false},
{ "fieldid":"idnumber",
"fieldname":{"eng":"Employee ID Number"},
"type":"string",
"range":"[0-9 ]"},
{ "fieldid":"sn",
"fieldname":{"eng":"Asset Serial Number"},
"type":"string",
"range":"[0-9a-zA-Z ]"},
{ "fieldid":"PIN",
"fieldname":{"eng":"6 number PIN"},
"type":"string",
"binaryformat":"{6}",
"range":"[0-9]",
"benvelope":false,
"pragma":"enterText"}]}
References
SANS 1368, Automatic identification and data capture techniques — Data structures — Digital Signature meta structure
FIPS PUB 186-4, Digital Signature Standard (DSS) – Computer security – Cryptography
IETF RFC 3076, Canonical XML Version 1.0
IETF RFC 4627, The application/JSON media type for JavaScript Object Notation (JSON)
IETF RFC 3275, (Extensible Markup Language) XML-Signature syntax and processing
IETF RFC 5280, Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile
ISO 7498-2, Information processing systems – Open systems interconnection – Basic reference model – Part 2: Security architecture
ISO/IEC 9594-8 (ITU X.509), Information technology – Open Systems Interconnection – The Directory: Public-key and attribute certificate frameworks
ISO/IEC 10181-4, Information technology – Open Systems Interconnection – Security frameworks for open systems: Non-repudiation framework
ISO/IEC 11770-3, Information technology – Security techniques – Key management – Part 3: Mechanisms using asymmetric techniques
ISO/IEC 11889 (all parts), Information technology – Trusted Platform Module
ISO/IEC 15415, Information technology – Automatic identification and data capture techniques – Bar code print quality test specification – Two-dimensional symbols
ISO/IEC 15419, Information technology – Automatic identification and data capture techniques – Bar code digital imaging and printing performance testing
ISO/IEC 15423, Information technology – Automatic identification and data capture techniques – Bar code scanner and decoder performance testing
ISO/IEC 15424, Information technology – Automatic identification and data capture techniques – Data Carrier Identifiers (including Symbology Identifiers)
ISO/IEC 15963, Information technology – Radio frequency identification for item management – Unique identification for RF tags
ISO/IEC 16022, Information technology – Automatic identification and data capture techniques – Data Matrix bar code symbology specification
ISO/IEC 16023, Information technology – International symbology specification – MaxiCode
ISO/IEC 18000 (all parts), Information technology – Radio frequency identification for item management
ISO/IEC 18004, Information technology – Automatic identification and data capture techniques – QR Code 2005 bar code symbology specification
ISO/IEC TR 14516, Information technology – Security techniques – Guidelines for the use and management of Trusted Third Party services
ISO/IEC TR 19782, Information technology – Automatic identification and data capture techniques– Effects of gloss and low substrate opacity on reading of bar code symbols
ISO/IEC TR 19791, Information technology – Security techniques – Security assessment of operational systems
ISO/IEC TR 29162, Information technology – Guidelines for using data structures in AIDC media
ISO/IEC TR 29172, Information technology – Mobile item identification and management –Reference architecture for Mobile AIDC services
External links
http://csrc.nist.gov
http://www.ietf.org
https://web.archive.org/web/20141217133239/http://idoctrust.com/
http://www.iso.org
http://www.itu.int
http://www.sabs.co.za
Barcodes
Radio-frequency identification | ISO/IEC 20248 | [
"Engineering"
] | 2,980 | [
"Radio-frequency identification",
"Radio electronics"
] |
44,386,524 | https://en.wikipedia.org/wiki/Manta%20Matcher | Manta Matcher is a global online database for manta rays.
Creation
It is one of the Wildbook Web applications developed by Wild Me, a 501(c)(3) not-for-profit organization in the United States, and was created in partnership with Andrea Marshall of the Marine Megafauna Foundation.
Manta rays have unique spot patterning on their undersides, which allows for individual identification. Scuba divers around the world can photograph mantas and upload their manta identification photographs to the Manta Matcher website, supporting global research and conservation efforts.
Identification of rays
Manta Matcher is pattern-matching software that eases researchers' workload; key spot pattern features are extracted using a scale-invariant feature transform (SIFT) algorithm, which can cope with complications presented by highly variable spot patterns and low-contrast photographs.
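Once SIFT (or any similar algorithm) has reduced each photograph to a set of feature descriptors, individuals are matched by comparing descriptor sets. A standard trick in such pipelines is Lowe's ratio test: accept a match only when the nearest candidate descriptor is clearly closer than the second nearest. The toy sketch below illustrates only this matching stage, with tiny invented vectors standing in for real 128-dimensional SIFT descriptors; it is not Manta Matcher's actual code.

```python
# Toy descriptor matching with Lowe's ratio test (descriptors assumed to have
# been extracted already, e.g. by SIFT).
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_matches(query, candidates, ratio=0.75):
    """Keep a match only when the best candidate is clearly better than the
    second best, which suppresses ambiguous matches."""
    matches = []
    for qi, q in enumerate(query):
        ds = sorted((dist(q, c), ci) for ci, c in enumerate(candidates))
        if len(ds) >= 2 and ds[0][0] < ratio * ds[1][0]:
            matches.append((qi, ds[0][1]))
    return matches

query = [(0.0, 1.0), (5.0, 5.0)]
candidates = [(0.1, 1.1), (4.0, 4.0), (4.1, 4.1)]
print(ratio_test_matches(query, candidates))  # [(0, 0)]
```

The second query descriptor is rejected because two candidates are nearly equally close, exactly the ambiguity the ratio test is designed to filter out of low-contrast, variable spot patterns.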
Purpose and research supported
This citizen science tool is free to use by researchers worldwide. Manta Matcher represents a global initiative to centralize manta ray sightings and facilitate research on these vulnerable species through collaborative studies, including the cross-referencing of regional databases.
Manta Matcher has already supported research that contributed to the listing of reef mantas (Manta alfredi) on Appendix 1 of the Convention on Migratory Species in November 2014.
References
External links
Myliobatidae
Online databases
Biodiversity databases | Manta Matcher | [
"Biology",
"Environmental_science"
] | 271 | [
"Biodiversity databases",
"Environmental science databases",
"Biodiversity"
] |
44,387,756 | https://en.wikipedia.org/wiki/V%20Coronae%20Borealis | V Coronae Borealis (V CrB) is a Mira-type long period variable star and carbon star in the constellation Corona Borealis. Its apparent magnitude varies between 6.9 and 12.6 over a period of 357 days.
V Coronae Borealis is too far from Earth for its parallax to be measured effectively. Based on a period of 357 days, the absolute magnitude of V Coronae Borealis has been calculated to be -4.62. It is estimated to be from Earth, has a luminosity of and a rather cool effective temperature of 1,800 K, implying a very large radius of about , making V Coronae Borealis one of the largest stars so far discovered. If placed in the center of the Solar System, its size would engulf all rocky planets and reach parts of the asteroid belt.
Notes
References
Corona Borealis
Mira variables
Coronae Borealis, V
Carbon stars
Durchmusterung objects
141826
077501 | V Coronae Borealis | [
"Astronomy"
] | 201 | [
"Corona Borealis",
"Constellations"
] |
44,387,871 | https://en.wikipedia.org/wiki/HD%20145457 | HD 145457 is a star located in the northern constellation of Corona Borealis (The Northern Crown) at a distance of around 442 light-years from the Sun, as determined through parallax measurements. It has been formally named Kamuy by the IAU, after a spiritual or divine being in Ainu mythology. With an apparent magnitude of 6.57, it is barely visible to the unaided eye on dark nights clear of light pollution. It is drifting closer to the Sun with a radial velocity of −3.2 km/s.
HD 145457 is an aging giant star with a stellar classification of K0 III that has cooled and expanded off the main sequence after exhausting its core hydrogen supply. With the assumption that it is a helium-burning object, the properties of HD 145457 can be derived by comparison with evolutionary tracks. With an age of 5.2 billion years old, it is around 1.57 times as massive as the Sun and has swollen to around 10 times its diameter. It is radiating 50 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 4,738 K.
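The quoted luminosity, temperature and size are linked by the Stefan–Boltzmann relation, L = 4πR²σT⁴, which in solar units reduces to R/R☉ = √(L/L☉) · (T☉/T)². A quick consistency check of the values above (assuming T☉ = 5772 K):

```python
# Consistency check of the quoted values via the Stefan-Boltzmann relation,
# worked in solar units so the physical constants cancel.
import math

def radius_in_solar_units(lum_solar, t_eff, t_sun=5772.0):
    """R/Rsun = sqrt(L/Lsun) * (Tsun/Teff)**2."""
    return math.sqrt(lum_solar) * (t_sun / t_eff) ** 2

r = radius_in_solar_units(50.0, 4738.0)
print(round(r, 1))  # close to the "around 10" solar radii quoted above
```

The result, about 10.5 solar radii, agrees with the article's "around 10 times its diameter"; the same relation reproduces the sizes quoted for the other giants in this group.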
It is a lithium-rich giant, unusual since lithium is rapidly destroyed once a star becomes a red giant. One explanation for the excess lithium in these stars has been a recent engulfment of a planet, but it is now thought more likely to be due to nucleosynthesis in the star. It is generally assumed that these lithium-rich giants are members of the red clump, core helium burning stars at the cool end of the horizontal branch.
Planetary system
HD 145457 has an exoplanetary companion called HD 145457 b discovered in 2010. 2.9 times as massive as Jupiter, it orbits about every 176 days with an orbital eccentricity of . Its semimajor axis is 0.76 AU. HD 145457 b was discovered by precise Doppler measurements with the Subaru Telescope.
As part of the IAU NameExoWorlds project in 2019, HD 145457 b has been formally named Chura. The name was selected by Japan. Chura is a word in the Ryukyuan/Okinawan language meaning natural beauty.
References
K-type giants
Planetary systems with one confirmed planet
Corona Borealis
BD+27 2595
145457
079219
Kamuy | HD 145457 | [
"Astronomy"
] | 489 | [
"Corona Borealis",
"Constellations"
] |
44,388,426 | https://en.wikipedia.org/wiki/Josef-Maria%20Jauch | Josef Maria Jauch (September 20, 1914 in Lucerne – August 30, 1974 in Geneva) was a Swiss/American theoretical physicist, known for his work on quantum electrodynamics and on the foundations of quantum theory, and leader of the "Geneva School" of mathematical physics.
Biography
Early life
Jauch was born on 20 September 1914 in Lucerne, Switzerland, the son of Josef Alois Jauch (a telegraph operator) and Emma Laura Rosa Jauch (née Conti). He had two older siblings: Adelheid Jauch and Emil Josef Karl Jauch. After his mother died in 1916, his father remarried, and a half-sister was born: Margrit Jauch (Fuchs). At the age of twelve he became fascinated with a fact he found stated in a popular astronomy book, that a body in a circular orbit with period , if brought to a stop, would fall into the central mass in time , which he showed could be derived from Kepler's law. Jauch was also interested in music, studying the violin from age twelve with his father, and then professionally after his father died when he was fifteen, performing chamber music from the age of sixteen, and continuing throughout his student years in Zurich.
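The result behind the anecdote (the specific formulas are elided in the text above) is the classical one that a body stopped on a circular orbit of period T falls to the centre in T/(4√2), which follows from Kepler's third law applied to the degenerate ellipse of the radial fall. A quick numerical check in normalized units (GM = 1, starting radius 1, so the circular period is 2π): by energy conservation the fall time is t = ∫₀¹ dr/√(2(1/r − 1)), and the substitution r = sin²θ removes the endpoint singularity.

```python
# Numerical check: a body stopped on a circular orbit of period T falls to
# the centre in T / (4*sqrt(2)).  Normalized units: GM = 1, r0 = 1, T = 2*pi.
# Fall time t = integral_0^1 dr / sqrt(2*(1/r - 1)); with r = sin(theta)^2
# the integrand becomes the smooth sqrt(2)*sin(theta)^2 on [0, pi/2].
import math

def fall_time(n=100000):
    total = 0.0
    h = (math.pi / 2) / n
    for i in range(n):
        theta = (i + 0.5) * h  # midpoint rule on a smooth integrand
        total += math.sqrt(2) * math.sin(theta) ** 2 * h
    return total

T = 2 * math.pi
print(fall_time() / T)  # close to 1 / (4*sqrt(2)) = 0.17678...
```

The computed ratio matches 1/(4√2) to high accuracy, confirming the derivation the twelve-year-old Jauch reportedly reproduced from Kepler's law.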
In 1933 Jauch began studies at the ETH Zürich, paying his fees with loans from friends in Lucerne because he had no money, and taking courses on thermodynamics from Wolfgang Pauli, on probability and graph theory from George Pólya, and on Galois theory and topology from Heinz Hopf. His Diplom Thesis was written under Pauli in 1938 on higher-spin particles in Dirac theory, and he presented his results to the Swiss Physical Society in 1938. Upon hearing the results, Pauli reportedly said after a few minutes simply, "Das habe ich mir auch so gedacht" ("I thought so too").
The War Years
With few academic jobs available in Switzerland at the outbreak of World War II, Jauch became a part-time teacher at Trogen in Appenzell, where he received an international exchange fellowship to study a Ph.D. at the University of Minnesota on Pólya's recommendation. There he studied higher symmetries of classical and quantum systems under Edward Lee Hill, for a dissertation entitled On Contact Transformations and Group Theory in Quantum Mechanical Problems, which in particular gave a prototype model for strong interactions using the representation theory of .
During his doctoral studies in Minneapolis, Jauch met Anna Tonette "Tonia" Hegland, a graduate student in the School of Social Work at the University of Minnesota, and the two were married on 1 January 1940. After receiving his doctorate in 1940, Jauch returned to Zurich to take up a research assistantship offered to him by Pauli at the ETH. However, it was extremely difficult to carry out research in Switzerland during the war: as Heinrich Behnke wrote to Erich Hecke in a letter of 8 March 1940, "The Paulis would be very happy if you paid them a visit again [in Zurich]. However, it is probably immensely difficult to obtain permission for this. I had an official invitation, and nevertheless had fabulously many difficulties. ... on the day of departure, my nerves had had it." Soon after Jauch's arrival, the Paulis left Zurich for Princeton, where they stayed for the remainder of the war. Meanwhile, Jauch continued working alongside Pauli's students under Gregor Wentzel, working on pair theory until 1942. During these years, Jauch and his wife Tonia became acquainted with Carl Jung, and met regularly with him for dream analyses. As the war became increasingly dangerous and unpredictable, the Jauchs returned to the United States on the last civilian ship to leave Europe during the war, the SS Drottningholm.
After arriving in the U.S., Jauch looked for a job and received an offer to join Pauli in Princeton as an Assistant Professor in 1942. There Pauli and Jauch studied the magnetic moment of the neutron, as well as the infrared divergence problem using Dirac field theory, reporting their results to the American Physical Society in 1944. In 1943, Jauch also taught classes in advanced quantum mechanics every other week at Cornell University. During their time in Princeton, Jauch and his wife had three children: Karl (1943), Eldri (1944), and Aletha (1945) (Aletha Solter).
After the War
In March 1946, Jauch decided to explore new directions by joining Bell Laboratories in Murray Hill, New Jersey as a research scientist for four months, where he studied luminescence in solids. In the autumn of 1946 he was appointed Assistant Professor at the University of Iowa, becoming a U.S. citizen in 1946. While at Iowa, Jauch continued to perform as a violinist. He also developed a lasting friendship and collaboration with Fritz Rohrlich, with whom he wrote his first book, Theory of Photons and Electrons. Jauch conceived of this book on quantum electrodynamics while on a Fulbright Program research fellowship at Trinity College, Cambridge from 1950 to 1951, and it became noted for its "uncommonly neat and painstaking treatment of details." Upon reading it, Pauli reportedly told Jauch, "Your book... oh, your book... I like better and better." Jauch was soon appointed Associate Professor, and then Full Professor at the University of Iowa, and continued working on scattering theory during his time there. During his years at the University of Iowa, Jauch accepted several summer teaching and research positions, including at the University of Chicago, Brandeis University, and Oak Ridge National Laboratory.
In 1958 Jauch and his family returned to Europe, where he spent one year working at CERN (The European Center for Nuclear Research) in Geneva. He spent the following year stationed in London as a scientific liaison officer for the U.S. Office of Naval Research (from 1959 to 1960), where he wrote reports on the state of physics around Europe.
In 1960, the University of Geneva offered Jauch the directorship of the Institute of Theoretical Physics, which he accepted, and where he remained until his death in 1974. Jauch's work at Geneva focused on the foundations of quantum theory. With his student Constantin Piron he proved an important no-go result for hidden variables, now known as the Jauch-Piron theorem. While giving a lecture at CERN on the impossibility of hidden variables in 1963, Jauch met John Stewart Bell, with whom he had "some intense discussion". Jauch pointed out to Bell that Gleason's theorem could be used to rule out a certain class of hidden variables on the basis of only quantum logic, which led to Bell's "other theorem", discovered independently by Kochen and Specker and now known as the Kochen-Specker theorem. Indeed, in his famous paper of 1964 on hidden variables, Bell writes of Gleason's theorem, "I am much indebted to Professor Jauch for drawing my attention to this work." In 1964 Jauch went on to prove what is now known as Jauch's theorem, that electromagnetic gauge invariance can be recovered in quantum theory from an assumption of Galilei covariance. His work on quantum foundations culminated with a book, The Foundations of Quantum Mechanics, published in 1968.
Jauch became a founding member of the European Physical Society at its inception in 1968. In his later work, he turned his attention to the mathematical foundations of equilibrium thermodynamics, producing a novel derivation of the thermodynamic entropy function on the basis of energy conservation. His third and final book, a popular work called Are Quanta Real? A Galilean Dialogue, with a preface by Douglas Hofstadter, was published in 1973. Jauch’s interest in Galileo also inspired him to research Galileo’s trial, and he delivered a lecture at CERN on February 20, 1964, called “The Trial of Galileo Galilei.”
Jauch and his wife divorced in 1969, and that same year he married Mercédès Viviane France Sabine de Cambourg, whom he later also divorced. He died suddenly of a stroke on August 30, 1974, and was buried in Cimetière de Saint-Georges, Geneva. His final work was the first of a two-part treatise on the mathematical foundations of equilibrium thermodynamics, published posthumously in 1975. No second part was ever published. Among his doctoral students were Gérard Emch, Marcel André Guenin, Andrew Lenard, Constantin Piron, and Kenneth Watson. He was the author of three books and over 80 scientific papers.
Books
The Theory of Photons and Electrons. The Relativistic Quantum Field Theory of Charged Particles with Spin One-half (with Fritz Rohrlich) (Addison-Wesley Publishing Company, 1955)
Foundations of Quantum Mechanics (Addison-Wesley Publishing Company, 1968)
Are Quanta Real? A Galilean Dialogue (Indiana University Press, 1973)
References
Further reading
Obituary written by Fritz Rohrlich, Jauch's co-author and colleague.
1914 births
1974 deaths
Quantum physicists
Swiss physicists
Princeton University faculty
People from Lucerne
ETH Zurich alumni
University of Minnesota alumni
People associated with CERN
Swiss expatriates in the United States
"Physics"
] | 1,917 | [
"Quantum physicists",
"Quantum mechanics"
] |
Écoscience is a quarterly peer-reviewed scientific journal originally published by Université Laval (1994–2014), and by Taylor & Francis since 2015. It was founded by Serge Payette, and it covers all aspects of ecology. In 2021 it had an impact factor of 1.344.
References
External links
Ecology journals
Quarterly journals
Multilingual journals
Academic journals established in 1994
Taylor & Francis academic journals
"Environmental_science"
] | 81 | [
"Environmental science journals",
"Ecology journals",
"Environmental science journal stubs"
] |
The Intel Binary Compatibility Standard (iBCS) is a standardized application binary interface (ABI) for Unix operating systems on Intel-386-compatible computers, published by AT&T, Intel and SCO in 1988, and updated in 1990. It extends source-level standards such as POSIX and XPG3 by standardizing various operating system interfaces, including the filesystem hierarchy layout (i.e., the locations of system files and installed programs), so that Unix programs would run on the various vendor-specific Unix implementations for Intel hardware (such as Xenix, SCO Unix and System V implementations). The second edition, announced in 1990, added an interface specification for VGA graphics.
iBCS, edition 2, was supported by various Unix versions, such as UnixWare and third-party implementations. A Linux implementation was developed ca. 1994, enabling Linux to run commercial Unix applications such as WordPerfect.
There have been several security issues in various iBCS implementations over the years.
See also
Filesystem Hierarchy Standard (FHS)
Linux Standard Base (LSB)
References
Unix history
Unix standards
"Technology"
] | 231 | [
"Computer standards",
"Unix standards"
] |
Varley Fullerton Sears (born 1937, died at age 81 in Deep River, Ontario on June 9, 2019) was a Canadian physicist, notable for his contributions to the methodological foundations of neutron scattering.
In 1960, Sears obtained a Ph.D. from the University of Toronto with a thesis on The rotational absorption spectrum of solid and liquid parahydrogen. From 1963 to 1965, the National Research Council of Canada sent him as an Overseas Postdoctoral Fellow to the Clarendon Laboratory in Oxford where he was hosted by Roger James Elliott and worked on Raman scattering by semiconductors. Back in Canada, he became a staff scientist in the Theoretical Physics Branch of Chalk River Laboratories. In 1966/67, he published seminal papers on neutron spectra of molecular rotors. By the 1980s, he had become a leading expert in neutron optics, publishing a review and a textbook on the subject. Based on these foundations, he compiled authoritative tables of neutron scattering lengths. In 1997, he published a generic solution of the Darwin-Hamilton equations that provide an approximative description of multiple Bragg reflection by a mosaic crystal.
He was elected a Fellow of the American Physical Society in 1990.
References
Obituary: Neutron News 30 (4) 16 (2020).
1937 births
2019 deaths
Canadian physicists
Neutron scattering
University of Toronto alumni
Fellows of the American Physical Society
"Chemistry"
] | 269 | [
"Scattering",
"Neutron scattering"
] |
Circuit Scribe is a ball-point pen containing silver conductive ink one can use to draw circuits instantly on flexible substrates like paper. Circuit Scribe made its way onto Kickstarter (an online site where people can fund projects) on November 19, 2013, with its goal of raising $85,000 for the manufacturing of the first batch of pens. By December 31, 2013, Circuit Scribe was able to raise a total of $674,425 with 12,277 'backers' or donors.
Much as when drawing a picture, users can use a Circuit Scribe pen to draw lines on a simple piece of paper. They can then attach special electrical components to the drawn lines, allowing electrical current to run through the components. This replaces the use of breadboards and wires.
Development
A team of researchers in Electroninks Incorporated, a startup company located at Research Park of the University of Illinois at Urbana-Champaign, created a water-based, non-toxic conductive ink that was noted as the Invention of the Month by Popular Science. The team began by developing a prototype using pens from a different company and replacing the ink with their special silver ink. Once completed, they started a Kickstarter campaign to earn funding for a mass production of the final form of the pens.
Team
The researching team consists of S. Brett Walker, Jennifer A. Lewis, Michael Bell, Analisa Russo, and Nancy Beardsly. Walker is the CEO of Electroninks and the co-founder along with Lewis, Bell, and the director of product development, Russo. Bell is also the chief operating officer while Beardsley is the technical support and user experience.
Prototype
The prototype pens are hand-cleaned Sakura Gelly Roll Metallic pens. The ink is replaced with the researchers' silver conductive ink. In order to have the right amount of ink flow to make smooth lines, the ink is precisely tuned.
Ink
The ink is created by placing an aqueous solution of silver nitrate into a flask of water combined with polyacrylic acid (PAA) and diethanolamine (DEA), the capping agent and reducing agent, respectively. After about twenty hours, the silver nitrate is reduced, forming silver particles with a diameter of about 5 nanometers. To enlarge the particles to an average diameter of about 400 nanometers, the flask is placed on a heated sonicator, a device that produces high-intensity ultrasound. Once cooled, the solution is poured into a larger flask and the thick precipitate (the insoluble solid that has formed) is scraped out. From there, ethanol is added to coagulate the particles, causing them to clump together. Most of the supernatant, the liquid lying above the layer of precipitate, is then poured off so the remaining liquid can be centrifuged, separating the particles. After centrifugation, the particles are placed back in water and forced through a syringe filter to remove oversized particles from the solution. Next, hydroxyethyl cellulose (HEC) is added as a binder and the entire mixture is homogenized. The solvents are allowed to evaporate until the ink reaches the desired viscosity, or thickness.
Once the ink is created, a roller ball pen is dismantled and cleaned so the ink can be placed inside using a flat-tip spatula. After the roller ball tip is replaced, a few blasts of compressed air are shot from the back end to force the ink into the tip. The outer cover of the pen is replaced, completing the Circuit Scribe prototype. From there, the team launched its Kickstarter campaign.
Kickstarter Campaign
Circuit Scribe launched its campaign on Kickstarter to receive funding and included a list of pledge levels at which people could donate a certain amount and get a corresponding gift:
Pledge $5+: STEM Education Workbook
Pledge $20+: Circuit Scribe
Pledge $25+: Early Bird Basic Kit
Pledge $30+: Basic Kit
Pledge $35+: Early Bird Basic Kit + Book
Pledge $40+: Basic Kit + Book
Pledge $45+: Early Bird Maker Kit
Pledge $50+: Maker Kit
Pledge $90+: Gift Pack
Pledge $100+: Developer Kit
Pledge $175+: Circuit Scribe Bundle
Pledge $190+: Early Bird Classroom Kit
Pledge $200+: Classroom Kit
Pledge $500+: Component Designer
Pledge $5,000+: Electroninks Show & Tell
They also included stretch goals which include:
$250,000: Circuit Scribe Edu Platform & STEM Outreach
$650,000: Magnetic Sheet for Kit Activity Books & Maker Notebooks
$1,000,000: Resistor Pen
Modules
Circuit Scribe can be used to draw circuits that connect different types of modules, or individual components, such as:
Power
USB Power Adapter: Allows user to power the drawn circuits with either a USB port or a wall outlet.
9V Battery Adapter: Supplies a nine-volt power to the circuits.
Input
SPST Switch: An on/off switch that allows users to control the electrical circuit.
DPDT Switch: Two switches that direct the flow of current through the circuit.
Light Sensor: Shines light on the phototransistor to control an output.
Potentiometer 10k Ohm: A knob that controls the dimness, volume, and speed of circuit.
Connect
2-Pin Adapter: Allows user to connect resistors, capacitors, or sensors to circuits.
NPN Transistor: An electrical amplifier that converts small signals into large currents.
Blinker: Blinks output components on and off at adjustable rates.
DIY Boards: Allows user to solder 2, 4, 6, or 8 pin components to the board.
Connector Cables: Connects the paper circuit to DIY hardware platform.
Output
Bi-LED: Two LEDs in one that can flip directions to change the color.
Buzzer: Vibrates in response to the voltage.
Motor: Rotates with an applied voltage.
RGB LED: A red, blue, and green LED.
Uses
The Circuit Scribe allows the user to draw electrical circuits in any shape with its silver ink. With this aspect and its ability to connect different types of modules, it is possible to produce simple designs like an Arduino, an open-source electronic platform based on hardware and software.
Arduino
Circuit Scribe allows users to create a paper Arduino (or a ‘paperduino’), which is demonstrated by the research team. The team first found the schematics on the Arduino website and modified them so that they would work on a pen plotter. With a few modifications, they arranged the components and traces so that the board could be printed in a single layer. The alignments are set to 0.6 millimeters to match the width of the pen traces with a minimum distance of 0.1 millimeters. The pen plotter only prints lines and does not fill the patterns, so they designed large pads out of concentric circles and built up the pads for the components with some extra line features. This allows for a stronger conductivity. It is important to put chips close together to minimize the line resistance between them, but not so close that it is difficult to place the components. They used components from the 1206 package which are a bit larger than the original components from the Arduino. Before exporting the layout, they deselected every layer of the file except the top layer of traces. After exporting the file in .dxf format, both the wire width and the fill area options were deselected and the files were saved. Finally, the team measured the size of the board layout and dragged it onto a new sheet on Silhouette Studio. From there, the vertical height was adjusted to 2.945 inches and the speed was set to 1 in order to lay down the most ink when printed. The team went on to place components like resistors, capacitors, and LEDs on the printed silver ink. Components can be attached using tweezers and super glue, but can be reinforced using conductive epoxy.
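The concern above about minimizing line resistance between chips can be made concrete with the standard sheet-resistance model, R = Rs·(L/W). This is a generic sketch; the sheet-resistance figure used below is an assumed illustrative value, not a published Circuit Scribe specification.

```python
# Sheet-resistance model for a drawn trace: R = Rs * (L / W).
# Rs (ohms per square) below is an assumed illustrative value,
# NOT a published Circuit Scribe specification.

def trace_resistance(sheet_resistance_ohm_per_sq, length_mm, width_mm):
    """Resistance in ohms of a uniform rectangular ink trace."""
    squares = length_mm / width_mm  # number of "squares" along the trace
    return sheet_resistance_ohm_per_sq * squares

# A 100 mm trace drawn at the 0.6 mm pen width, assuming Rs = 0.2 ohm/sq:
r = trace_resistance(0.2, 100.0, 0.6)  # ~33 ohms
```

Halving the trace length (or doubling its width) halves the resistance, which is why the layout keeps chips close together and builds up pads with extra line features.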
Extras
Resistor Pen
As posted on their Kickstarter campaign, the creators of Circuit Scribe planned to develop resistor pens one can use to draw resistors the way one can use the Circuit Scribe to draw circuits. Although their stretch goal of $1,000,000 was not met by the deadline on the campaign, the team still managed to create them.
References
Electrical engineering
Pens
"Engineering"
] | 1,745 | [
"Electrical engineering"
] |
A nanoelectromechanical (NEM) relay is an electrically actuated switch that is built on the nanometer scale using semiconductor fabrication techniques. They are designed to operate in replacement of, or in conjunction with, traditional semiconductor logic. While the mechanical nature of NEM relays makes them switch much slower than solid-state relays, they have many advantageous properties, such as zero current leakage and low power consumption, which make them potentially useful in next generation computing.
A typical NEM relay requires a potential on the order of tens of volts in order to "pull in", and has a contact resistance on the order of gigaohms. Coating the contact surfaces with platinum can reduce the achievable contact resistance to as low as 3 kΩ. Compared to transistors, NEM relays switch relatively slowly, on the order of nanoseconds.
Operation
A NEM relay can be fabricated in two, three, or four terminal configurations. A three terminal relay is composed of a source (input), drain (output), and a gate (actuation terminal). Attached to the source is a cantilevered beam that can be bent into contact with the drain in order to make an electrical connection. When a significant voltage differential is applied between the beam and gate, and the electrostatic force overcomes the elastic force of the beam enough to bend it into contact with the drain, the device "pulls in" and forms an electrical connection. In the off position, the source and drain are separated by an air gap. This physical separation allows NEM relays to have zero current leakage, and very sharp on/off transitions.
The nonlinear nature of the electric field, and adhesion between the beam and drain cause the device to "pull out" and lose connection at a lower voltage than the voltage at which it pulls in. This hysteresis effect means there is a voltage between the pull in voltage, and the pull out voltage that will not change the state of the relay, no matter what its initial state is. This property is very useful in applications where information needs to be stored in the circuit, such as in static random-access memory.
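For intuition about why pull-in occurs at a well-defined voltage, the textbook parallel-plate electrostatic approximation gives V_pi = sqrt(8·k·g0³ / (27·ε0·A)). This is an idealized model, not the article's specific cantilever geometry, and every numeric value below is assumed for illustration.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k_spring_n_per_m, gap_m, area_m2):
    """Parallel-plate estimate of the electrostatic pull-in voltage:
    k: effective spring constant, gap: initial beam-gate gap,
    area: electrode overlap area."""
    return math.sqrt(8 * k_spring_n_per_m * gap_m**3 / (27 * EPS0 * area_m2))

# Assumed illustrative values: k = 10 N/m, 100 nm gap, 1 um^2 electrode area.
v_pi = pull_in_voltage(10.0, 100e-9, 1e-12)  # roughly tens of volts
```

With these assumed numbers the estimate lands in the tens-of-volts range quoted earlier; pull-out happens at a lower voltage because of the nonlinear field and surface adhesion, producing the hysteresis window described above.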
Fabrication
NEM relays are usually fabricated using surface micromachining techniques typical of microelectromechanical systems (MEMS). Laterally actuated relays are constructed by first depositing two or more layers of material on a silicon wafer. The upper structural layer is photolithographically patterned in order to form isolated blocks of the uppermost material. The layer below is then selectively etched away, leaving thin structures, such as the relay's beam, cantilevered above the wafer, and free to bend laterally. A common set of materials used in this process is polysilicon as the upper structural layer, and silicon dioxide as the sacrificial lower layer.
NEM relays can be fabricated using a back-end-of-line-compatible process, allowing them to be built on top of CMOS. This property allows NEM relays to be used to significantly reduce the area of certain circuits. For example, a CMOS–NEM relay hybrid inverter occupies 0.03 μm², one-third the area of a 45 nm CMOS inverter.
History
The first switch made using silicon micro-machining techniques was fabricated in 1978. Those switches were made using bulk micromachining processes and electroplating. In the 1980s, surface micromachining techniques were developed and the technology was applied to the fabrication of switches, allowing for smaller, more efficient relays.
A major early application of MEMS relays was switching radio frequency signals, for which solid-state relays had poor performance. The switching time for these early relays was above 1 μs. By shrinking dimensions below one micrometer and moving into the nanoscale, MEMS switches have achieved switching times in the range of hundreds of nanoseconds.
Applications
Mechanical computing
Due to transistor leakage, there is a limit to the theoretical efficiency of CMOS logic. This efficiency barrier ultimately prevents continued increases in computing power in power-constrained applications. While NEM relays have significant switching delays, their small size and fast switching speed when compared to other relays means that mechanical computing utilizing NEM Relays could prove a viable replacement for typical CMOS based integrated circuits, and break this CMOS efficiency barrier.
A NEM relay switches mechanically about 1000 times slower than a solid-state transistor takes to switch electrically. While this makes using NEM relays for computing a significant challenge, their low resistance would allow many NEM relays to be chained together and switch all at once, performing a single large calculation. On the other hand, transistor logic has to be implemented in small cycles of calculations, because their high resistance does not allow many transistors to be chained together while maintaining signal integrity. Therefore, it would be possible to create a mechanical computer using NEM relays that operates at a much lower clock speed than CMOS logic, but performs larger, more complex calculations during each cycle. This would allow a NEM relay based logic to perform to standards comparable to current CMOS logic.
There are many applications, such as in the automotive, aerospace, or geothermal exploration businesses, in which it would be beneficial to have a microcontroller that could operate at very high temperatures. However, at high temperatures, semiconductors used in typical microcontrollers begin to fail as the electrical properties of the materials they are made of degrade, and the transistors no longer function. NEM relays do not rely on the electrical properties of materials to actuate, so a mechanical computer utilizing NEM relays would be able to operate in such conditions. NEM relays have been successfully tested at up to 500 °C, but could theoretically withstand much higher temperatures.
Field-programmable gate arrays
The zero leakage current, low energy usage, and ability to be layered on top of CMOS properties of NEM relays make them a promising candidate for usage as routing switches in field-programmable gate arrays (FPGA). A FPGA utilizing a NEM relay to replace each routing switch and its corresponding static random-access memory block could allow for a significant reduction in programming delay, power leakage, and chip area compared to a typical 22nm CMOS based FPGA. This area reduction mainly comes from the fact that the NEM relay routing layer can be built on top of the CMOS layer of the FPGA.
See also
Nanoelectromechanical systems
References
Relays
Microelectronic and microelectromechanical systems
Nanoelectronics
"Materials_science",
"Engineering"
] | 1,380 | [
"Microtechnology",
"Materials science",
"Nanoelectronics",
"Nanotechnology",
"Microelectronic and microelectromechanical systems"
] |
Hydrogeophysics is a cross-disciplinary area of research that uses geophysics to determine parameters (characteristics; measurements of limitations or boundaries) and monitor processes for hydrological studies of matters such as water resources, contamination, and ecological studies. The field uses knowledge and researchers from geology, hydrology, physics, geophysics, engineering, statistics, and rock physics. It uses geophysics to provide quantitative information about hydrogeological parameters, using minimally invasive methods. Hydrogeophysics differs from geophysics in its specific uses and methods. Although geophysical knowledge and methods have existed and grown over the last half century for applications in mining and petroleum industries, hydrogeological study sites have different subsurface conditions than those industries. Thus, the geophysical methods for mapping subsurface properties combine with hydrogeology to use proper, accurate methods to map shallow hydrological study sites.
Background
The field of hydrogeophysics developed out of a need to use minimally invasive methods for determining and studying hydrogeological parameters and processes. Determination of hydrogeological parameters is important for finding water resources, which is a growing need, and learning about water contamination, which has become relevant with the growing use of potentially hazardous chemicals.
The methods and knowledge of geophysics had been developed for mining and petroleum industries, which involve consolidated subsurface environments with high pressure and temperature. Since the subsurface environments in hydrogeological studies are less consolidated and have low temperature and pressure, combining geophysics with hydrogeology was necessary to develop proper geophysical methods that work for hydrological purposes.
Traditional hydrogeological methods for characterizing the subsurface usually involved drilling and taking soil samples from the site, which can disturb the study site, cost too much time or money, or expose researchers and people to harmful chemicals and contaminants. They also only provide localized information, rather than the necessary field-scale information. Using geophysical methods and digital technology allows hydrogeologists to more quickly study hydrological characteristics on a larger scale with a lower cost and less invasive techniques.
A Hydrogeophysics Advanced Study Institute was held at the Trest Castle in the Czech Republic in July 2002 and funded by NATO when they acknowledged the necessity for fully developed, minimally invasive procedures for investigating and monitoring hydrogeological processes and parameters in shallow subsurface conditions. The institute brought together geophysicists working in hydrogeological characterization with hydrogeologists interested in using geophysical methods and data for characterization. This group, plus other international researchers, discussed the possibilities and challenges of using geophysical methods for investigating hydrogeological parameters.
They determined the main obstacles of hydrogeophysics are gaps in the knowledge and understanding of the correlation between hydrogeological parameters and geophysical characteristics, and difficulty in being able to integrate those different sets of information. One of the biggest challenges is using an organized, methodical, and efficient way to combine geophysical and hydrogeological data sets that measure different parameters over different spatial scales. This is the largest obstacle because the foundation of hydrogeophysics is integrating hydrogeology with geophysics.
Methods
There are many different methods for determining subsurface properties and features that can be done from different locations/ proximities to the study sites:
Electric and electromagnetic methods (surface, airborne) - measuring the resistivity of the subsurface
Remote sensing (airborne)- mapping bedrock, water interfaces, and water quality assessment
Seismic refraction (surface)- mapping top of bedrock, faults, and water table
Seismic reflection (surface)- mapping top of bedrock, boundaries of faults and fracture zones, and stratigraphy
Ground-penetrating radar (surface)- mapping stratigraphy and water table; monitoring water content
Hydraulic tomography (crosshole)- measuring hydraulic conductivity
Neutron probe (wellbore)- monitoring water content
Permeameter (laboratory)- measuring hydraulic conductivity
Sieves (laboratory)- estimation of hydraulic conductivity
Time-domain reflectometer (laboratory)- measuring water content
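The electrical methods in the list above depend on empirical petrophysical relations to turn a measured resistivity into hydrogeological parameters. A standard example (a general relation, not one singled out by this article, with all input values assumed for illustration) is Archie's law:

```python
# Archie's law for a clay-free porous medium:
#   rho = a * rho_w * porosity**(-m) * saturation**(-n)
# a, m, n are empirical constants; the defaults below are common
# textbook choices, and the example inputs are assumed values.

def archie_resistivity(rho_w, porosity, saturation, a=1.0, m=2.0, n=2.0):
    """Bulk resistivity (ohm-m) from pore-water resistivity,
    porosity, and water saturation."""
    return a * rho_w * porosity**(-m) * saturation**(-n)

# 20 ohm-m pore water, 30% porosity, fully saturated:
rho = archie_resistivity(20.0, 0.3, 1.0)  # ~222 ohm-m
```

Inverting such a relation is how a surface resistivity survey yields estimates of porosity or water content at a field scale.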
Applications
Geophysics helps to learn about many hydrogeological matters such as:
Determining aquifer geometry
Determining fractured rock characteristics- faults/fissures and fluid circulation characteristics
Gaining knowledge of an aquifer's hydraulic properties- transmissivity (the rate at which groundwater flows horizontally through the aquifer), porosity, and permeability (a measure of the ability of a porous material to allow fluid to flow through it)
Determining water quality
Monitoring dynamic processes- seepage through the vadose zone
These parameters are then used to investigate matters including searching for underground water resources, aquifer control or contamination from sea water or industrial sources, and storing harmful substances underground. Having a good measurement of these hydrogeological parameters helps to better understand water contamination transport and develop more sustainable water resources.
References
Geophysics
Hydrology
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 986 | [
"Environmental engineering",
"Hydrology",
"Applied and interdisciplinary physics",
"Geophysics"
] |
OrCam devices such as OrCam MyEye are portable, artificial vision devices that allow visually impaired people to understand text and identify objects through audio feedback, describing what they are unable to see.
Reuters described an important part of how it works as "a wireless smartcamera" which, when attached outside eyeglass frames, can read and verbalize text, and also supermarket barcodes. This information is converted to spoken words and entered "into the user’s ear." Face-recognition is also part of OrCam's feature set.
Devices
OrCam Technologies Ltd has created three devices; OrCam MyEye 2.0, OrCam MyEye 1, and OrCam MyReader.
OrCam My Eye 2.0:
OrCam debuted the second-generation model, the OrCam MyEye 2.0 in December 2017.
About the size of a finger, the MyEye 2.0 is battery-powered, and has been compressed into a self-contained device.
The device snaps onto any eyeglass frame magnetically.
Orcam 2.0 is small and light (22.5 grams/0.8 ounces) with functionality to restore independence to the visually impaired.
It comes in two versions. The basic model can read text, and a more advanced one adds features such as face recognition and barcode reading.
As of July 2023, the retail cost is between $4000 and $6000 (USD).
Clinical Studies
JAMA Ophthalmology:
In 2016 JAMA Ophthalmology conducted a study involving 12 legally blind participants to evaluate the usefulness of a portable artificial vision device (OrCam) for patients with low vision. The results showed that the OrCam device improved the patient's ability to perform tasks simulating those of daily living, such as reading a message on an electronic device, a newspaper article or a menu.
Wills Eye:
Wills Eye was a clinical study designed to measure the impact of the OrCam device on the quality of life of patients with End-stage Glaucoma. The conclusion was that OrCam, a novel artificial vision device using a mini-camera mounted on eyeglasses, allowed legally blind patients with end-stage glaucoma to read independently, subsequently improving their quality of life.
Employee testing
The New York Times described how a pre-release OrCam device was used by a Coloboma-impaired employee of the device's developer in 2013 for grocery shopping. It was the small size of the prototype rather than the functionality that gave her added mobility in an Israeli store's aisles.
Added life-enhancement was described: "to both recognize and speak .. bus numbers .. traffic lights."
Social aspects
In contrast to an early version of Google Glass, which "failed ... because .. Glass wearers were ..mocked", early OrCam devices used designs that "clip unobtrusively on your shirt or perhaps your belt."
In addition, it does not record sounds or images, avoiding what was called "the privacy puzzle that stumped Google."
One 2018 technology reviewer wrote that he wished it had a headphone jack "so it would be less disruptive in places where others are working." An attempt was made to use bone conduction.
USA introduction
In 2018 a team headed by New York Assemblyman Dov Hikind introduced use of OrCam devices to ten individuals screened for what he termed "new Israeli technology that really makes a difference to the blind."
Although not the first USA success, it was more focused than a publicly funded project that was authorized in 2016 by a California government agency. Also in 2016 the Chicago Lighthouse for the Blind demonstrated its use.
Technology
In the area of hardware, miniaturization has been quite important, but one major area, software, was highlighted by Assemblyman Hikind and reported by The Times of Israel: the "AI-driven algorithms" that "reports .. how many people are in a room."
Features
While early language support was for English, French, German, Hebrew and Spanish, others now available include Danish, Dutch, Finnish, Italian, Norwegian, Portuguese and Swedish.
History
OrCam Technologies Ltd was founded in 2010 by Professor Amnon Shashua and Ziv Aviram. Before co-founding OrCam, the two in 1999 co-founded Mobileye, an Israeli company that develops vision-based advanced driver-assistance systems (ADAS) providing warnings for collision prevention and mitigation, which was acquired by Intel for $15.3 billion in 2017.
OrCam launched OrCam MyEye in 2013 after years of development and testing, and began selling it commercially in 2015.
In its early years, the company raised $22 million, $6 million of which came from Intel Capital. By 2014, Intel, which was also investing in Google Glass, had invested $15 million in Orcam. In March 2017, OrCam had raised $41 million in capital, making it worth $600 million.
Marketing
One outcome of initial marketing in the USA was that they "reached a deal with the California Department of Rehabilitation, ...qualifying blind and visually impaired state residents."
OrCam Technologies Ltd
OrCam Technologies Ltd. is the Israeli-based company producing these OrCam devices, operating in the wearable artificial intelligence space. The company develops and manufactures assistive technology devices for individuals who are visually impaired, partially sighted, or blind, or who have print or other disabilities.
OrCam has over 150 employees, is headquartered in Jerusalem, and has offices in New York, Toronto, and London.
Awards
2018 Last Gadget Standing Winner
2018 CES Innovation Awards Honoree in Accessible Tech
2017 NAIDEX Innovation Award
2016 Louise Braille Corporate Recognition Award
2016 Silmo-d-Or Award
References
External links
OrCam Facebook page
Foundation Fighting Blindness website
National Center for Health Statistics website
Blindness equipment
Computer vision
Products introduced in 2013
Medical device manufacturers
Companies based in Jerusalem
Israeli inventions
Hebrew University of Jerusalem
Israeli companies established in 2010
Wearable computers | OrCam device | [
"Engineering"
] | 1,282 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
44,390,320 | https://en.wikipedia.org/wiki/List%20of%20arbitrary-precision%20arithmetic%20software | This article lists libraries, applications, and other software which enable or support arbitrary-precision arithmetic.
Libraries
Stand-alone application software
Software that supports arbitrary precision computations:
bc: the POSIX arbitrary-precision arithmetic language that comes standard on most Unix-like systems.
dc: "Desktop Calculator" arbitrary-precision RPN calculator that comes standard on most Unix-like systems.
KCalc, Linux based scientific calculator
Maxima: a computer algebra system whose bignum integers are directly inherited from its implementation language, Common Lisp. In addition, it supports arbitrary-precision floating-point numbers ("bigfloats").
Maple, Mathematica, and several other computer algebra software include arbitrary-precision arithmetic. Mathematica employs GMP for approximate number computation.
PARI/GP, an open source computer algebra system that supports arbitrary precision.
Qalculate!, an open-source free software arbitrary precision calculator with autocomplete.
SageMath, an open-source computer algebra system
SymPy, a CAS
Symbolic Math toolbox (MATLAB)
Windows Calculator, since Windows 98, uses arbitrary precision for basic operations (addition, subtraction, multiplication, division) and 32 digits of precision for advanced operations (square root, transcendental functions).
SmartXML, a free programming language with integrated development environment (IDE) for mathematical calculations. Variables of BigNumber type can be used, or regular numbers can be converted to big numbers using conversion operator # (e.g., #2.3^2000.1). SmartXML big numbers can have up to 100,000,000 decimal digits and up to 100,000,000 whole digits.
Languages
Programming languages that support arbitrary precision computations, either built-in, or in the standard library of the language:
Ada: the upcoming Ada 202x revision adds the Ada.Numerics.Big_Numbers.Big_Integers and Ada.Numerics.Big_Numbers.Big_Reals packages to the standard library, providing arbitrary precision integers and real numbers.
Agda: the BigInt datatype on Epic backend implements arbitrary-precision arithmetic.
Common Lisp: The ANSI Common Lisp standard supports arbitrary precision integer, ratio, and complex numbers.
C#: System.Numerics.BigInteger, from .NET 5
ColdFusion: the built-in PrecisionEvaluate() function evaluates one or more string expressions, dynamically, from left to right, using BigDecimal precision arithmetic to calculate the values of arbitrary precision arithmetic expressions.
D: standard library module std.bigint
Dart: the built-in int datatype implements arbitrary-precision arithmetic.
Emacs Lisp: supports integers of arbitrary size, starting with Emacs 27.1.
Erlang: the built-in Integer datatype implements arbitrary-precision arithmetic.
Go: the standard library package math/big implements arbitrary-precision integers (Int type), rational numbers (Rat type), and floating-point numbers (Float type)
Guile: the built-in exact numbers are of arbitrary precision. Example: (expt 10 100) produces the expected (large) result. Exact numbers also include rationals, so (/ 3 4) produces 3/4. One of the languages implemented in Guile is Scheme.
Haskell: the built-in Integer datatype implements arbitrary-precision arithmetic and the standard Data.Ratio module implements rational numbers.
Idris: the built-in Integer datatype implements arbitrary-precision arithmetic.
ISLISP: The ISO/IEC 13816:1997(E) ISLISP standard supports arbitrary precision integer numbers.
J: built-in extended precision
Java: Class java.math.BigInteger (integer), Class java.math.BigDecimal (decimal)
JavaScript: as of ES2020, BigInt is supported in most browsers; the gwt-math library provides an interface to java.math.BigDecimal, and libraries such as DecimalJS, BigInt and Crunch support arbitrary-precision integers.
Julia: the built-in BigFloat and BigInt types provide arbitrary-precision floating point and integer arithmetic respectively.
newRPL: integers and floats can be of arbitrary precision (up to at least 2000 digits); maximum number of digits configurable (default 32 digits)
Nim: bigints and multiple GMP bindings.
OCaml: The Num library supports arbitrary-precision integers and rationals.
OpenLisp: supports arbitrary precision integer numbers.
Perl: The bignum and bigrat pragmas provide BigNum and BigRational support for Perl.
PHP: The BC Math module provides arbitrary precision mathematics.
PicoLisp: supports arbitrary precision integers.
Pike: the built-in int type will silently change from machine-native integer to arbitrary precision as soon as the value exceeds the former's capacity.
Prolog: ISO standard compatible Prolog systems can check the Prolog flag "bounded". Most of the major Prolog systems support arbitrary precision integer numbers.
Python: the built-in int (3.x) / long (2.x) integer type is of arbitrary precision. The Decimal class in the standard library module decimal has user definable precision and limited mathematical operations (exponentiation, square root, etc. but no trigonometric functions). The Fraction class in the module fractions implements rational numbers. More extensive arbitrary precision floating point arithmetic is available with the third-party "mpmath" and "bigfloat" packages.
Racket: the built-in exact numbers are of arbitrary precision. Example: (expt 10 100) produces the expected (large) result. Exact numbers also include rationals, so (/ 3 4) produces 3/4. Arbitrary precision floating point numbers are included in the standard library math/bigfloat module.
Raku: Rakudo supports Int and FatRat data types that promote to arbitrary-precision integers and rationals.
Rexx: variants including Open Object Rexx and NetRexx
RPL (only on HP 49/50 series in exact mode): calculator treats numbers entered without decimal point as integers rather than floats; integers are of arbitrary precision only limited by the available memory.
Ruby: the built-in Bignum integer type is of arbitrary precision. The BigDecimal class in the standard library module bigdecimal has user definable precision.
Scheme: R5RS encourages, and R6RS requires, that exact integers and exact rationals be of arbitrary precision.
Scala: Class BigInt and Class BigDecimal.
Seed7: bigInteger and bigRational.
Self: arbitrary precision integers are supported by the built-in bigInt type.
Smalltalk: variants including Squeak, Smalltalk/X, GNU Smalltalk, Dolphin Smalltalk, etc.
Standard ML: The optional built-in IntInf structure implements the INTEGER signature and supports arbitrary-precision integers.
Tcl: As of version 8.5 (2007), integers are arbitrary-precision by default. (Behind the scenes, the language switches to using an arbitrary-precision internal representation for integers too large to fit in a machine word. Bindings from C should use library functions such as Tcl_GetLongFromObj to get values as C-native data types from Tcl integers.)
Wolfram Language, like Mathematica, employs GMP for approximate number computation.
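As a concrete illustration of the language-level support listed above, the following sketch uses Python (one of the listed languages); the specific values are arbitrary:

```python
from decimal import Decimal, getcontext
from fractions import Fraction

# Built-in int is arbitrary precision: no overflow, ever.
big = 2 ** 512
print(len(str(big)))          # 155 decimal digits

# Fraction gives exact rational arithmetic.
total = Fraction(1, 3) + Fraction(1, 3) + Fraction(1, 3)
print(total)                  # exactly 1, with no rounding error

# Decimal has user-definable precision (here 50 significant digits).
getcontext().prec = 50
seventh = Decimal(1) / Decimal(7)
print(seventh)
```

Floating-point trigonometric and other transcendental functions at arbitrary precision are not in the standard library; as noted above, third-party packages such as mpmath cover that case.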
Online calculators
For one-off calculations; these run on a server or in the browser, and require no installation or compilation.
1. https://www.mathsisfun.com/calculator-precision.html 200 places
2. http://birrell.org/andrew/ratcalc/ arbitrary; select rational or fixed-point and number of places
3. PARI/GP online calculator - https://pari.math.u-bordeaux.fr/gp.html (PARI/GP is a widely used computer algebra system designed for fast computations in number theory (factorizations, algebraic number theory, elliptic curves, modular forms, L functions...), but also contains a large number of other useful functions to compute with mathematical entities such as matrices, polynomials, power series, algebraic numbers etc., and a lot of transcendental functions. PARI is also available as a C library to allow for faster computations.)
4.1. AutoCalcs - allows users to search, create, store, and share multi-step calculations using explicit expressions, featuring automated unit conversion. It is a platform that goes beyond unit conversion, bringing significantly improved efficiency. Many sample calculations can be found at the AutoCalcs Docs site, and calculations created with AutoCalcs can be embedded into third-party websites.
4.2. AutoCalcs Docs - with the above-mentioned AutoCalcs as its calculation engine, this Docs site is a library hosting a large set of calculations, where each calculation is essentially a web app that can run online and be further customized. Imagine reading a book with many calculations: this is the book/manual in which every calculation can be used on the fly. When units are involved in the calculations, the unit conversion is automated.
References
Lists of software
Computer arithmetic | List of arbitrary-precision arithmetic software | [
"Mathematics",
"Technology"
] | 2,007 | [
"Computing-related lists",
"Computer arithmetic",
"Arithmetic",
"Lists of software"
] |
44,390,326 | https://en.wikipedia.org/wiki/Arizona%20Journal%20of%20Environmental%20Law%20and%20Policy | The Arizona Journal of Environmental Law and Policy is a biannual student-run open access law journal covering environmental issues from legal, scientific, economic, and public policy perspectives. It was established in 2010 and was originally sponsored by the Udall Center for Studies in Public Policy (University of Arizona). It is now published by the University of Arizona College of Law.
Impact
In 2017, Washington and Lee University's law journal rankings placed the journal 14th out of 81 environmental, natural resources, and land use law journals by impact factor. Articles published in the journal have been cited by the Arizona Court of Appeals. Articles published in the journal have also been cited by many other legal treatises and journals, including American Jurisprudence, Law of Independent Power, Rogers' Hornbook on Environmental Law (Second Edition), Virginia Law Review, UCLA Law Review, George Washington Law Review, and Southern California Law Review.
References
External links
American law journals
Academic journals established in 2010
Biannual journals
English-language journals
University of Arizona
Open access journals
Environmental law journals
2010 establishments in Arizona | Arizona Journal of Environmental Law and Policy | [
"Environmental_science"
] | 215 | [
"Environmental science journals",
"Environmental social science stubs",
"Environmental social science",
"Environmental science journal stubs"
] |
44,390,392 | https://en.wikipedia.org/wiki/Log%20Gabor%20filter | In signal processing it is useful to simultaneously analyze the space and frequency characteristics of a signal. While the Fourier transform gives the frequency information of the signal, it is not localized. This means that we cannot determine which part of a (perhaps long) signal produced a particular frequency. It is possible to use a short time Fourier transform for this purpose, however the short time Fourier transform limits the basis functions to be sinusoidal. To provide a more flexible space-frequency signal decomposition several filters (including wavelets) have been proposed. The Log-Gabor filter is one such filter that is an improvement upon the original Gabor filter. The advantage of this filter over the many alternatives is that it better fits the statistics of natural images compared with Gabor filters and other wavelet filters.
Applications
The Log-Gabor filter is able to describe a signal in terms of the local frequency responses. Because this is a fundamental signal analysis technique, it has many applications in signal processing. Indeed, any application that uses Gabor filters, or other wavelet basis functions may benefit from the Log-Gabor filter. However, there may not be any benefit depending on the particulars of the design problem. Nevertheless, the Log-Gabor filter has been shown to be particularly useful in image processing applications, because it has been shown to better capture the statistics of natural images.
In image processing, there are a few low-level examples of the use of Log-Gabor filters. Edge detection is one such primitive operation, where the edges of the image are labeled. Because edges appear in the frequency domain as high frequencies, it is natural to use a filter such as the Log-Gabor to pick out these edges. These detected edges can be used as the input to a segmentation algorithm or a recognition algorithm. A related problem is corner detection. In corner detection the goal is to find points in the image that are corners. Corners are useful to find because they represent stable locations that can be used for image matching problems. The corner can be described in terms of localized frequency information by using a Log-Gabor filter.
In pattern recognition, the input image must be transformed into a feature representation that is easier for a classification algorithm to separate classes. Features formed from the response of Log-Gabor filters may form a good set of features for some applications because it can locally represent frequency information. For example, the filter has been successfully used in face expression classification. There is some evidence that the human visual system processes visual information in a similar way.
There are a host of other applications that require localized frequency information. The Log-Gabor filter has been used in applications such as image enhancement, speech analysis, contour detection, texture synthesis and image denoising among others.
Existing approaches
There are several existing approaches for computing localized frequency information. These approaches are advantageous because unlike the Fourier transform, these filters can more easily represent discontinuities in the signal. For example, the Fourier transform can represent an edge, but only by using an infinite number of sine waves.
Gabor filters
When considering filters that extract local frequency information, there is a relationship between the frequency resolution and the time/space resolution. When more samples are taken the resolution of the frequency information is higher, however the time/space resolution will be lower. Likewise taking only a few samples means a higher spatial/temporal resolution, but this is at the cost of less frequency resolution. A good filter should be able to obtain the maximum frequency resolution given a set time/space resolution, and vice versa. The Gabor filter achieves this bound. Because of this, the Gabor filter is a good method for simultaneously localizing spatial/temporal and frequency information. A Gabor filter in the space (or time) domain is formulated as a Gaussian envelope multiplied by a complex exponential. It was found that the cortical responses in the human visual system can be modeled by the Gabor filter. The Gabor filter was modified by Morlet to form an orthonormal continuous wavelet transform.
Although the Gabor filter achieves a sense of optimality in terms of the space-frequency tradeoff, in certain applications it might not be an ideal filter. At certain bandwidths, the Gabor filter has a non-zero DC component. This means that the response of the filter depends on the mean value of the signal. If the output of the filter is to be used for an application such as pattern recognition, this DC component is undesirable because it gives a feature that changes with the average value. As we will soon see, the Log-Gabor filter does not exhibit this problem. Also the original Gabor filter has an infinite length impulse response. Finally, the original Gabor filter, while optimum in the sense of uncertainty, does not properly fit the statistics of natural images. It has also been shown that it is better to choose a filter with a longer sloping tail in an image coding task.
In certain applications, other decompositions have advantages. Although there are many such decompositions possible, here we briefly present two popular methods: Mexican hat wavelets and the steerable pyramid.
Mexican Hat wavelet
The Ricker wavelet, commonly called the Mexican hat wavelet is another type of filter that is used to model data. In multiple dimensions this becomes the Laplacian of a Gaussian function. For reasons of computational complexity, the Laplacian of a Gaussian function is often approximated using a difference of Gaussians. This difference of Gaussian function has found use in several computer vision applications such as keypoint detection. The disadvantage of the Mexican hat wavelet is that it exhibits some aliasing and does not represent oblique orientations well.
Steerable pyramid
The steerable pyramid decomposition was presented as an alternative to the Morlet (Gabor) and Ricker wavelets. This decomposition ignores the orthogonality constraint of the wavelet formulation, and by doing this is able to construct a set of filters which are both translation and rotation independent. The disadvantage of the steerable pyramid decomposition is that it is overcomplete. This means that more filters than truly necessary are used to describe the signal.
Definition
Field introduced the Log-Gabor filter and showed that it is able to better encode natural images compared with the original Gabor filter. Additionally, the Log-Gabor filter does not have the same DC problem as the original Gabor filter. A one dimensional Log-Gabor function has the frequency response:

G(f) = exp(−(ln(f/f₀))² / (2·(ln(σ/f₀))²))

where f₀ and σ are the parameters of the filter. f₀ gives the center frequency of the filter. σ affects the bandwidth of the filter. It is useful to maintain the same shape while the frequency parameter is varied. To do this, the ratio σ/f₀ should remain constant. The following figure shows the frequency response of the Gabor compared with the Log-Gabor:
Another definition of the Log-Gabor filter is to consider it as a probability distribution function, with a normal distribution, but considering the logarithm of frequencies. This makes sense in contexts where the Weber–Fechner law applies, such as in visual or auditive perception. Following the change of variable rule, a one dimensional Log-Gabor function has thus the modified frequency response:

G(f) ∝ (1/f) · exp(−(ln(f/f₀))² / (2σ²))

Note that this extends to the origin and that we still have G(0) = 0.
In both definitions, because of the zero at the DC value, it is not possible to derive an analytic expression for the filter in the space domain. In practice the filter is first designed in the frequency domain, and then an inverse Fourier transform gives the time domain impulse response.
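A minimal sketch of that design procedure in Python/NumPy: build the response in the frequency domain, then take an inverse FFT to obtain the impulse response. The function name and the default parameter values here are illustrative, not taken from the article:

```python
import numpy as np

def log_gabor_1d(n, f0=0.25, sigma_ratio=0.65):
    """Sample a 1-D log-Gabor frequency response on n frequency bins.

    f0          : center frequency (cycles/sample, 0 < f0 < 0.5)
    sigma_ratio : the ratio sigma/f0, kept constant so the filter
                  shape is preserved as f0 varies.
    """
    f = np.fft.fftfreq(n)            # signed frequency of each FFT bin
    g = np.zeros(n)
    pos = f > 0                      # the response is defined for f > 0
    g[pos] = np.exp(-np.log(f[pos] / f0) ** 2
                    / (2 * np.log(sigma_ratio) ** 2))
    return g                         # g[0] == 0: no DC component

g = log_gabor_1d(256)
h = np.fft.ifft(g)                   # impulse response via inverse FFT
```

The zero at DC falls out automatically: bins with f ≤ 0 are simply left at zero, which is the property the Gabor filter lacks at some bandwidths.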
Bi-dimensional Log-Gabor filter
Like the Gabor filter, the log-Gabor filter has seen great popularity in image processing. Because of this it is useful to consider the 2-dimensional extension of the log-Gabor filter. With this added dimension the filter is not only designed for a particular frequency, but also is designed for a particular orientation. The orientation component is a Gaussian distance function according to the angle in polar coordinates:

G(f, θ) = exp(−(ln(f/f₀))² / (2·(ln(σ_f/f₀))²)) · exp(−(θ − θ₀)² / (2σ_θ²))

where there are now four parameters: the center frequency f₀, the width parameter σ_f for the frequency, the center orientation θ₀, and the width parameter σ_θ of the orientation. An example of this filter is shown below.
The bandwidth in the frequency is given by:

B = 2·√(2/ln 2)·|ln(σ_f/f₀)|

Note that the resulting bandwidth is in units of octaves.

The angular bandwidth is given by:

Δθ = 2·σ_θ·√(2·ln 2)
In many practical applications, a set of filters are designed to form a filter bank. Because the filters do not form a set of orthogonal basis, the design of the filter bank is somewhat of an art and may depend upon the particular task at hand. The necessary parameters that must be chosen are: the minimum and maximum frequencies, the filter bandwidth, the number of orientations, the angular bandwidth, the filter scaling and the number of scales.
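A sketch of such a filter bank in Python/NumPy follows. The function, its parameter names, and the chosen scales and orientations are all illustrative assumptions, not a prescription from the article:

```python
import numpy as np

def log_gabor_2d(size, f0, theta0, sigma_f=0.65, sigma_theta=np.pi / 8):
    """A 2-D log-Gabor filter sampled in the frequency domain.

    f0, theta0  : center frequency and center orientation
    sigma_f     : sigma/f0 ratio for the radial (frequency) term
    sigma_theta : angular spread in radians
    """
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size),
                         indexing="ij")
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0               # avoid log(0); the DC bin is zeroed below
    radial = np.exp(-np.log(radius / f0) ** 2
                    / (2 * np.log(sigma_f) ** 2))
    radial[0, 0] = 0.0               # zero DC component
    theta = np.arctan2(fy, fx)
    # wrap the angular difference into [-pi, pi]
    dtheta = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
    angular = np.exp(-dtheta ** 2 / (2 * sigma_theta ** 2))
    return radial * angular

# A small bank: 2 scales x 4 orientations
bank = [log_gabor_2d(64, f0, t)
        for f0 in (0.1, 0.25)
        for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
```

As the text notes, there is no single right choice of scales, orientations, and bandwidths; the bank above merely shows how the four parameters combine.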
See also
Gabor transform
Gabor wavelet
Gabor filter
Gabor atom
Feature detection (computer vision) for other low-level feature detectors
Image derivative
Image noise reduction
Ridge detection for relations between edge detectors and ridge detectors
References
External links
Signal processing
Linear filters | Log Gabor filter | [
"Technology",
"Engineering"
] | 1,848 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
42,936,953 | https://en.wikipedia.org/wiki/HR%205401 | HR 5401 is a possible astrometric binary star system in the southern constellation of Lupus. With an apparent visual magnitude of 5.83, it is just visible to the naked eye under good seeing conditions. The distance to HR 5401 can be estimated from its annual parallax shift of , yielding a range of 205 light years. It is moving closer to Earth with a heliocentric radial velocity of −30 km/s, and is expected to come within in ~524,000 years.
This is an Am star with a stellar classification of A1m A5/7-F2. Lu (1991) lists it as a likely dwarf barium star. It is radiating 13 times the Sun's luminosity from its photosphere at an effective temperature of 7,300 K. This system is a source of X-ray emission which may be coming from the companion.
HR 5401 has two visual companions. Component B is a magnitude 11.50 star at an angular separation of along a position angle (PA) of 114°, as of 1999. The second companion, designated component C, is magnitude 11.16 with a separation of at a PA of 164°, as of 2000.
References
Am stars
Astrometric binaries
Lupus (constellation)
Durchmusterung objects
126504
070663
5401 | HR 5401 | [
"Astronomy"
] | 275 | [
"Constellations",
"Lupus (constellation)"
] |
42,937,599 | https://en.wikipedia.org/wiki/HD%2097413 | HD 97413 is a binary star located in the southern constellation Centaurus. The system has a combined magnitude of 6.27, placing it near the limit for naked eye visibility. Based on parallax measurements from the Gaia spacecraft, the system is located 320 light years away from the Solar System.
The object's binarity was detected in a Hipparcos survey. The two components cannot be distinguished individually because the stars have an angular separation of . Nevertheless, speckle interferometry revealed the components to have a 2.6 magnitude difference. They are located along a position angle of 250°.
The visible component – HD 97413 A – has a stellar classification of A1 V, indicating that it is an ordinary A-type main-sequence star. It has 1.94 times the mass of the Sun and a radius of . It radiates 19.6 times the luminosity of the Sun from its photosphere at an effective temperature of , giving it a white hue. However, this is not typical for an A1 star. Parameters determined by Gaia's extinction reveal HD 97413 A to have an iron abundance half of the Sun's, making it metal deficient.
References
Centaurus
A-type main-sequence stars
CD-45 06771
097413
054718
Binary stars
Centauri, 7 | HD 97413 | [
"Astronomy"
] | 280 | [
"Centaurus",
"Constellations"
] |
42,937,704 | https://en.wikipedia.org/wiki/HD%2098176 | HD 98176, also designated as HIP 55133 and rarely 22 G. Centauri, is a solitary, white hued star located in the southern constellation Centaurus. It has an apparent magnitude of 6.44, placing it near the limit for naked eye visibility. Based on parallax measurements from Gaia DR3, the object is estimated to be 348 light years distant. At its current distance, its brightness is diminished by 0.32 magnitudes due to interstellar dust. Pauzen et al. (2001) lists it as a potential λ Boötis star.
This is an ordinary A-type main-sequence star with a stellar classification of A0 V. Pauzen et al. (2001) gives it a slightly cooler class of A1 Vn, which includes broad absorption lines due to rapid rotation. It has 2.5 times the mass of the Sun and double its radius. It radiates 28.3 times the luminosity of the Sun from its photosphere at an effective temperature of . Based on parameters derived from extinction in the Gaia passband, HD 98176 has an iron abundance 19% below solar levels.
References
Centaurus
A-type main-sequence stars
098176
Durchmusterung objects
055133 | HD 98176 | [
"Astronomy"
] | 263 | [
"Centaurus",
"Constellations"
] |
42,937,847 | https://en.wikipedia.org/wiki/HD%2063399 | HD 63399 is an orange hued star located in the southern constellation Puppis, the poop deck. It has an apparent magnitude of 6.45, placing it near the limit for naked eye visibility. Based on parallax measurements from Gaia DR3, the object is estimated to be 445 light years distant. It appears to be receding with a spectroscopic radial velocity of . At its current distance, HD 63399 is diminished by 0.29 magnitudes due to interstellar dust.
HD 63399 is a red giant star that is currently on the red giant branch, fusing hydrogen in a shell around its core. It has a stellar classification of K1 III. At present it has a mass ranging from 1.3 to 1.7 times the mass of the Sun, depending on the study. HD 63399 has expanded to 10.8 times the Sun's girth and now radiates 54.8 times the luminosity of the Sun from its photosphere at an effective temperature of . The star has an iron abundance 13% below solar levels, making it slightly metal deficient.
References
Puppis
K-type giants
CD-35 03874
063399
037996 | HD 63399 | [
"Astronomy"
] | 255 | [
"Puppis",
"Constellations"
] |
42,938,388 | https://en.wikipedia.org/wiki/Virtual%20collective%20consciousness | Virtual collective consciousness (VCC) is a term rebooted and promoted by two behavioral scientists, Yousri Marzouki and Olivier Oullier in their 2012 Huffington Post article titled: "Revolutionizing Revolutions: Virtual Collective Consciousness and the Arab Spring", after its first appearance in 1999-2000. VCC is now defined as an internal knowledge catalyzed by social media platforms and shared by a plurality of individuals driven by the spontaneity, the homogeneity, and the synchronicity of their online actions. VCC occurs when a large group of persons, brought together by a social media platform think and act with one mind and share collective emotions. Thus, they are able to coordinate their efforts efficiently, and could rapidly spread their word to a worldwide audience. When interviewed about the concept of VCC that appeared in the book - Hyperconnectivity and the Future of Internet Communication - he edited, Professor of Pervasive Computing, Adrian David Cheok mentioned the following: "The idea of a global (collective) virtual consciousness is a bottom-up process and a rather emergent property resulting from a momentum of complex interactions taking place in social networks. This kind of collective behaviour (or intelligence) results from a collision between a physical world and a virtual world and can have a real impact in our life by driving collective action."
Etymology
In 1999-2000, Richard Glen Boire provided a cursory mention and the only occurrence of the term "Virtual collective consciousness" in his text as follows:
The recent definition of VCC evolved from the first empirical study that provided a cyberpsychological insight into the contribution of Facebook to the 2011 Tunisian revolution. In this study, the concept was originally called "collective cyberconsciousness". The latter is an extension of the idea of "collective consciousness" coupled with "citizen media" usage. The authors of this study also made a parallel between this original definition of VCC and other comparable concepts such as Durkheim's collective representation, Žižek's "collective mind" or Boguta's "new collective consciousness" that he used to describe the computational history of the Internet shutdown during the Egyptian revolution. Since VCC is the byproduct of the network's successful actions, then these actions must be timely, acute, rapid, domain-specific, and purpose-oriented to successfully achieve their goal. Before reaching a momentum of complexity, each collective behavior starts by a spark that triggers a chain of events leading to a crystallized stance of a tremendous amount of interactions. Thus, VCC is an emergent global pattern from these individual actions.
In 2012, the term virtual collective consciousness resurfaced and was brought to light after extending its applications to the Egyptian case and the whole social networking major impact on the success of the so-called Arab Spring. Moreover, the acronym VCC was suggested to identify the theoretical framework covering on-line behaviors leading to a virtual collective consciousness. Hence, online social networks have provided a new and faster way of establishing or modifying "collective consciousness" that was paramount to the 2011 uprisings in the Arab world.
Theoretical underpinnings of VCC
Various theoretical references ranging from sociology to computer science were mentioned in order to account for the key features that render the framework for a virtual collective consciousness. The following list is not exhaustive, but the references it contains are often highlighted:
Émile Durkheim's collective representations are at the heart of VCC, since decisions taken collectively, according to Durkheim's assumptions, will approve or disapprove of individuals' actions and help them eventually reach their final goal.
Marshall McLuhan's global village: The shrinking of our big world to a small place called cyberspace is made possible by technological extensions of human consciousness.
Carl Jung's collective unconscious: When a society is witnessing significant changes, the anchoring of archetypal images (e.g., political leaders) seems to be deeply rooted in individuals' collective unconscious that is likely to bias their political choices. Individual memories of public events were also supposed to convey a "collective awareness" that can be subconsciously altered by the instantaneous spread of information through social networking around the world.
Daniel Wegner's transactive memory (TM): social networking platforms such as Facebook during the Tunisian revolution or Twitter during the Egyptian revolution served as placeholders of a VCC where information can be harnessed and steered to the highly specific revolutionary purpose. Although research on TM has been originally limited to couples, small groups, and organizations, recent studies strongly suggest that an effective TM can operate on a very large scale too.
James Surowiecki's wisdom of crowds
Collective influence algorithm: The CI (Collective influence) algorithm is effective in finding influential nodes in a variety of networks, including social networks, communication networks, and biological networks. It has been used to identify influencers on social media platforms, to identify key nodes in transportation networks, and to identify potential drug targets in biological networks.
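The CI computation mentioned above can be sketched as follows. The formula CI_ℓ(i) = (k_i − 1) · Σ over the frontier of the ball of radius ℓ of (k_j − 1) follows Morone and Makse's description of the algorithm; the toy graph, node names, and the choice ℓ = 1 are illustrative assumptions:

```python
from collections import deque

def collective_influence(adj, node, ell=1):
    """CI_l(i) = (k_i - 1) * sum of (k_j - 1) over nodes j exactly
    l hops from i (the frontier of the ball of radius l).

    adj : dict mapping each node to the set of its neighbours
    """
    dist = {node: 0}
    queue = deque([node])
    frontier = []
    while queue:                     # BFS out to distance ell
        u = queue.popleft()
        if dist[u] == ell:
            frontier.append(u)       # keep only the frontier nodes
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    k = len(adj[node])
    return (k - 1) * sum(len(adj[j]) - 1 for j in frontier)

# Toy network: a hub "h" bridging two small stars
edges = [("h", "a"), ("h", "b"),
         ("a", "a1"), ("a", "a2"), ("b", "b1"), ("b", "b2")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

scores = {n: collective_influence(adj, n) for n in adj}
```

On this toy graph the bridging hub gets the highest CI score even though its degree is lower than that of its neighbours, which is the point of the measure: influence comes from position in the network, not raw connectivity.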
Some illustrations of VCC
Besides the studied effect of social networking on the Tunisian and Egyptian revolutions, the former via Facebook and the latter via Twitter other applications were studied under the prism of VCC framework:
The Whitacre virtual choir: A compelling example of the degree of autonomy and self-identity that members of a spontaneously created network can achieve through a VCC is Eric Whitacre's unique musical project, which involved a collection of singers performing remotely to create a virtual choir. The resulting effect of all the voices illustrated a genuine virtual collective empathy, merging the artist's mind with all the singers through his silent conducting gestures.
The Harlem Shake dance:
The Bitcoin protocol: It was questioned whether or not the Bitcoin protocol can morph into virtual collective consciousness. The Byzantine generals problem was used as an analogy to understand the behavioral complexity of the community of Bitcoin's users.
Artificial Social Networking Intelligence (ASNI): refers to the application of artificial intelligence within social networking services and social media platforms. It encompasses various technologies and techniques used to automate, personalize, enhance, and synchronize users' interactions and experiences within social networks. ASNI is expected to evolve rapidly, influencing how users interact online and shaping their digital experiences. Transparency, ethical considerations, media influence bias, and user control over data will be crucial to ensure responsible development and positive impact.
See also
Algorithmic curation
Ambient awareness
Collective consciousness
Collective influence algorithm
Collective intelligence
Collective unconscious
Crowdsourcing
Hyperconnectivity
Media intelligence
Sentiment analysis
Social cloud computing
Social media intelligence
Social media optimization
Wisdom of the crowd
References
External links
VCC Entry in P2P Foundation
Learning Enhancement Center Blog entry on VCC and global empowerment
The 1999-2000 article of Richard Glen Boire
Social media
Information society
Crowd psychology
Collective intelligence
Open-source intelligence
Social information processing
Cybernetics | Virtual collective consciousness | [
"Technology"
] | 1,412 | [
"Computing and society",
"Information society",
"Social media"
] |
42,939,159 | https://en.wikipedia.org/wiki/7%20Cephei | 7 Cephei is a single star located approximately 820 light years away, in the northern circumpolar constellation of Cepheus. It is visible to the naked eye as a dim, blue-white hued star with an apparent visual magnitude of 5.42.
This is a B-type main-sequence star with a stellar classification of B7 V. It is a candidate variable star with an amplitude of 9 micromagnitudes and a period of . This object has 4.5 times the mass of the Sun and about three times the Sun's radius. It is spinning rapidly with a projected rotational velocity of 236 km/s. 7 Cephei is radiating 769 times the luminosity of the Sun from its photosphere at an effective temperature of 12,560 K.
References
B-type main-sequence stars
Cepheus (constellation)
Durchmusterung objects
Cephei, 07
204770
105972
8227 | 7 Cephei | [
"Astronomy"
] | 197 | [
"Constellations",
"Cepheus (constellation)"
] |
42,939,494 | https://en.wikipedia.org/wiki/66%20Eridani | 66 Eridani is a binary star in the constellation of Eridanus. The combined apparent magnitude of the system is 5.12 on average. Parallax measurements by Hipparcos put the system at some 309 light-years (95 parsecs) away.
This is a spectroscopic binary: the two stars cannot be individually resolved, but periodic Doppler shifts in its spectrum mean there must be orbital motion. The two stars orbit each other every 5.5226013 days. Their orbit is only mildly eccentric, with an eccentricity of 0.0844.
The combined spectrum of 66 Eridani matches that of a B-type main-sequence star, and the two stars have similar masses. The spectrum also shows an excess of mercury and manganese, marking it as a type of chemically peculiar star called a mercury-manganese star. 66 Eridani is an Alpha2 Canum Venaticorum variable; for this reason, it has been given the variable star designation EN Eridani.
References
Eridanus (constellation)
Eridani, 66
B-type main-sequence stars
Spectroscopic binaries
Alpha2 Canum Venaticorum variables
Durchmusterung objects
032964
023794
1657
Eridani, EN | 66 Eridani | [
"Astronomy"
] | 257 | [
"Eridanus (constellation)",
"Constellations"
] |
42,939,954 | https://en.wikipedia.org/wiki/Gloiocephala%20lutea | Gloiocephala lutea is a species of fungus native to Ecuador. It was described as new to science by Rolf Singer in 1976.
References
Physalacriaceae
Fungi described in 1976
Fungi of Ecuador
Taxa named by Rolf Singer
Fungus species | Gloiocephala lutea | [
"Biology"
] | 52 | [
"Fungi",
"Fungus species"
] |
42,940,005 | https://en.wikipedia.org/wiki/NGC%20988 | NGC 988 is a spiral galaxy located in the constellation Cetus. It lies at a distance of 50 million light years from Earth, which, given its apparent dimensions, means that NGC 988 is about 75,000 light years across. The magnitude 7.1 star HD 16152 is superposed 52" northwest of the center of NGC 988. The galaxy was discovered by Édouard Jean-Marie Stephan in 1880. One ultraluminous X-ray source has been detected in NGC 988.
NGC 988 is the brightest galaxy in NGC 1052 group (also known as NGC 988 group), which includes the galaxies NGC 991, NGC 1022, NGC 1035, NGC 1042, NGC 1047, NGC 1051, NGC 1052, NGC 1084, and NGC 1110. It belongs in the same galaxy cloud as Messier 77.
One supernova has been observed in NGC 988: SN 2017gmr, a Type II supernova discovered on 4 September 2017.
References
External links
Spiral galaxies
0988
009843
02330-0934
-02-07-037
Astronomical objects discovered in 1880
035
Cetus
Discoveries by Édouard Stephan | NGC 988 | [
"Astronomy"
] | 244 | [
"Cetus",
"Constellations"
] |
42,940,879 | https://en.wikipedia.org/wiki/1%2C1-Diethoxyethane | 1,1-Diethoxyethane (acetaldehyde diethyl acetal) is a major flavoring component of distilled beverages, especially malt whisky and sherry. Although it is just one of many compounds containing an acetal functional group, this specific chemical is sometimes called simply acetal.
References
Acetals
Distilled drinks
Ethoxy compounds | 1,1-Diethoxyethane | [
"Chemistry"
] | 77 | [
"Acetals",
"Distillation",
"Functional groups",
"Distilled drinks"
] |
42,942,751 | https://en.wikipedia.org/wiki/Creative%20Writers%20and%20Day-Dreaming | Creative Writers and Day-Dreaming () was an informal talk given in 1907 by Sigmund Freud, and subsequently published in 1908, on the relationship between unconscious phantasy and creative art.
Freud's argument – that artists, reviving memories of childhood daydreams and play activities, succeeded in making them acceptable through their aesthetic technique – was to be widely influential for interwar modernism.
Artistic sources
Freud began his talk by raising the question of where writers drew their material from, suggesting that children at play, and adults day-dreaming, both provided cognate activities to those of the literary artist. Heroic and erotic daydreams or preconscious phantasies in both men and women were seen by Freud as providing substitute satisfactions for everyday deprivations; and the same phantasies were in turn transformed into shareable (public) artistic constructs by the creative writer, where they could serve as cultural surrogates for the universal instinctual renunciations inherent in civilization.
Artistic technique
Freud saw the aesthetic principle as the ability to turn the private phantasy into a public artefact, using artistic pleasure to release a deeper pleasure founded on the release of forbidden (unconscious) material. The process allowed the writer him/herself to emerge from their introversion and return to the public world. If the phantasies came too close to the unconscious repressed, however, the process would fail, leading either to creative inhibition or to a rejection of the artwork itself.
Freud himself epitomised his essay's argument a decade later in his Introductory Lectures, stating of the true artist that: he understands how to work over his daydreams in such a way as to make them lose what is too personal in them and repels strangers, and to make it possible for others to share in the enjoyment of them. He understands, too, how to tone them down so that they do not easily betray their origin from proscribed sources....he has thus achieved through his phantasy what originally he had achieved only in his phantasy – honour, power and the love of women.
See also
D. W. Winnicott
Edmund Wilson
F. Scott Fitzgerald
Hanns Sachs
Sublimation
References
Further reading
J. J. Spector, The Aesthetics of Freud (1972)
Joseph J. Sandler ed, On Freud's Creative Writers and Daydreaming (2013)
External links
Creative Writers and Day-Dreaming, Notes
Literary Encyclopedia
Creativity
Essays by Sigmund Freud
Freudian psychology
1907 speeches
Aesthetics literature
Philosophy lectures | Creative Writers and Day-Dreaming | [
"Biology"
] | 517 | [
"Creativity",
"Behavior",
"Human behavior"
] |
42,943,213 | https://en.wikipedia.org/wiki/TAE%20Technologies | TAE Technologies, formerly Tri Alpha Energy, is an American company based in Foothill Ranch, California developing aneutronic fusion power. The company's design relies on an advanced beam-driven field-reversed configuration (FRC), which combines features from accelerator physics and other fusion concepts in a unique fashion, and is optimized for hydrogen-boron fuel, also known as proton-boron or p-11B. It regularly publishes theoretical and experimental results in academic journals with hundreds of publications and posters at scientific conferences and in a research library hosting these articles on its website. TAE has developed five generations of original fusion platforms with a sixth currently in development. It aims to manufacture a prototype commercial fusion reactor by 2030.
Organization
The company was founded in 1998, and is backed by private capital. It operated as a stealth company for many years, refraining from launching its website until 2015. It generally did not discuss progress or any schedule for commercial production. However, it has registered and renewed various patents.
As of 2021, TAE Technologies reportedly had more than 250 employees and had raised over US$880 million.
Funding
Main financing has come from Goldman Sachs and venture capitalists such as Microsoft co-founder Paul Allen's Vulcan Inc., Rockefeller's Venrock, and Richard Kramlich's New Enterprise Associates. The Government of Russia, through the joint-stock company Rusnano, invested in Tri Alpha Energy in October 2012, and Anatoly Chubais, Rusnano CEO, became a board member. Other investors include the Wellcome Trust and the Kuwait Investment Authority. As of July 2017 the company reported that it had raised more than $500 million in backing. As of 2020, it had raised over $600 million, which rose to around $880 million in 2021 and $1.2 billion as of 2022.
Leadership and board of directors
TAE was co-founded by physicist Norman Rostoker as a spin-out of his work at the University of California, Irvine. Steven Specker, former CEO of the Electric Power Research Institute (EPRI), was CEO from October 2016 to July 2018. Michl Binderbauer, who earned his PhD in plasma physics under the guidance of Rostoker at UCI, moved from CTO to CEO following Specker's retirement. Specker remains an advisor. Additional board members include Jeff Immelt, former CEO of General Electric; John J. Mack, former CEO of Morgan Stanley; and Ernest Moniz, former United States Secretary of Energy at the US Department of Energy, who joined the company's board of directors in May 2017.
Collaborators
Since 2014 TAE Technologies has worked with Google to develop a process to analyze the data collected on plasma behavior in fusion reactors. In 2017, using a machine learning tool developed through the partnership and based on the "Optometrist Algorithm", it found significant improvements in plasma containment and stability over the previous C-2U machine. The study's results were published in Scientific Reports.
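The "Optometrist Algorithm" frames experiment tuning as a sequence of pairwise choices: the machine proposes a variation of the current settings, and a human expert keeps whichever of the two performs better, much like an eye exam's "option A or option B". The sketch below is a heavily simplified illustration of that loop; the function names, the synthetic score function standing in for the expert's judgement, and all parameters are invented for the example, not taken from the published work.

```python
import random

def optometrist_search(score, propose, start, rounds=50, rng=None):
    """Human-in-the-loop search in the spirit of the 'Optometrist Algorithm':
    each round compares the current best setting against a machine-proposed
    variation and keeps whichever the (simulated) expert prefers."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    best = start
    for _ in range(rounds):
        candidate = propose(best, rng)
        # "Option A or option B?" -- keep the setting judged better
        if score(candidate) > score(best):
            best = candidate
    return best

# Toy stand-in for plasma performance, peaked at settings (3.0, -1.5)
score = lambda s: -((s[0] - 3.0) ** 2 + (s[1] + 1.5) ** 2)
# Proposals are small Gaussian perturbations of the current best
propose = lambda s, rng: (s[0] + rng.gauss(0, 0.3), s[1] + rng.gauss(0, 0.3))

best = optometrist_search(score, propose, start=(0.0, 0.0), rounds=400)
print(best)  # converges toward (3.0, -1.5)
```

In the real system the "score" is a human judgement of messy, high-dimensional shot data rather than a closed-form function, which is precisely why the pairwise-comparison framing is useful.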
In November 2017 the company was admitted to a United States Department of Energy program, "Innovative and Novel Computational Impact on Theory and Experiment", that gave it access to the Cray XC40 supercomputer.
In 2021, TAE Technologies announced a joint research project with Japan's National Institute for Fusion Science (NIFS), a three-year study on the effects of hydrogen-boron fuel reactions in the NIFS Large Helical Device (LHD).
Subsidiaries
TAE Life Sciences
In March 2018 TAE Technologies announced it had raised $40 million to create TAE Life Sciences, a subsidiary focused on refining boron neutron capture therapy (BNCT) for cancer treatment, with funding led by ARTIS Ventures. TAE Life Sciences also announced that it would partner with Neuboron Medtech, which would be the first to install the company's beam system. TAE Life Sciences shares common board members with TAE Technologies and is led by Bruce Bauer.
TAE Power Solutions
In September 2021, TAE Technologies announced the formation of a new division, Power Solutions, to commercialize the power management systems developed on the C-2W/Norman reactor for the electric vehicle, charging infrastructure, and energy storage markets, with veteran industrialist David Roberts as its CEO.
Design
Underlying theory
In mainline fusion approaches, the energy needed to allow reactions, the Coulomb barrier, is provided by heating the fusion fuel to millions of degrees. In such fuel, the electrons disassociate from their ions, to form a gas-like mixture known as a plasma. In any gas-like mixture, the particles will be found in a wide variety of energies, according to the Maxwell–Boltzmann distribution. In these systems, fusion occurs when two of the higher-energy particles in the mix randomly collide. Keeping the fuel together long enough for this to occur is a major challenge.
TAE's machines spin plasma up into a looped structure called a field-reversed configuration (FRC) which is a loop of hot, dense plasma. Material inside an FRC is self-contained by the fields the plasma creates. As the plasma current moves around the loop, it creates a magnetic field perpendicular to the direction of motion, much like current in a wire would do. This self-created field helps to hold in the plasma current and keeps the loop stable.
The challenge with field-reversed configurations is that they slow down over time, wobble, and eventually collapse. The company's innovation was to continuously apply particle beams along the surface of the FRC to keep it rotating. This beam and hoop system was key to increasing the machines' longevity, stability and performance.
TAE's design
The TAE design forms a field-reversed configuration (FRC), a self-stabilized rotating toroid of particles similar to a smoke ring. In the TAE system, the ring is made as thin as possible, about the same aspect ratio as an opened tin can. Particle accelerators inject fuel ions tangentially to the surface of the cylinder, where they either react or are captured into the ring as additional fuel.
Unlike other magnetic confinement fusion devices such as the tokamak, FRCs provide a magnetic field topology whereby the axial field inside the reactor is reversed by eddy currents in the plasma, as compared to the ambient magnetic field externally applied by solenoids. The FRC is less prone to magnetohydrodynamic and plasma instabilities than are other magnetic confinement fusion methods. The science behind the colliding beam fusion reactor is used in the company's C-2, C-2U and C-2W projects.
A key concept in the TAE system is that the FRC is kept in a useful state over an extended period. To do this, the accelerators inject the fuel such that when the particles scatter within the ring they cause the fuel already there to speed up in rotation. This process would normally slowly increase the positive charge of the fuel mass, so electrons are also injected to keep the charge roughly neutralized.
The FRC is held in a cylindrical, truck-sized vacuum chamber containing solenoids. It appears the FRC will then be compressed, either using adiabatic compression similar to those proposed for magnetic mirror systems in the 1950s, or by forcing two such FRCs together using a similar arrangement.
The design must achieve the "hot enough/long enough" (HELE) threshold to achieve fusion. The required temperature is 3 billion degrees Celsius (~250 keV), while the required duration (achieved with C2-U) is multiple milliseconds.
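For reference, the quoted temperature is consistent with the standard kelvin–electronvolt conversion 1 keV ≈ 1.16×10⁷ K (a consistency check added here, not a figure from the source):

```latex
T \;\approx\; 250\,\mathrm{keV} \times 1.16\times 10^{7}\,\tfrac{\mathrm{K}}{\mathrm{keV}}
  \;\approx\; 2.9\times 10^{9}\,\mathrm{K}
  \;\approx\; 3\ \text{billion}\ {}^{\circ}\mathrm{C}
```

At these temperatures the distinction between kelvin and degrees Celsius is negligible.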
The 11B(p,α)αα aneutronic reaction
An essential component of the design is the use of "advanced fuels", i.e. fuels with primary reactions that do not produce neutrons, such as hydrogen and boron-11. FRC fusion products are all charged particles, for which highly efficient direct energy conversion is feasible. Neutron flux and the associated on-site radioactivity are virtually non-existent. So unlike other nuclear fusion research involving deuterium and tritium, and unlike nuclear fission, no radioactive waste is created. The hydrogen and boron-11 fuel used in this type of reaction is also much more abundant.
TAE Technologies relies on the clean 11B(p,α)αα reaction, also written 11B(p,3α), which produces three helium nuclei called α-particles (hence the name of the company) as follows:

p + 11B → 3 α + 8.7 MeV
A proton (identical to the most common hydrogen nucleus) striking boron-11 creates a resonance in carbon-12, which decays by emitting one high-energy primary α-particle. This leads to the first excited state of beryllium-8, which decays into two low-energy secondary α-particles. This is the model commonly accepted in the scientific community, as it accounts for the published results of a 1987 experiment.
TAE claimed that the reaction products should release more energy than what is commonly envisaged. In 2010, Henry R. Weller and his team from the Triangle Universities Nuclear Laboratory (TUNL) used the high intensity γ-ray source (HIγS) at Duke University, funded by TAE and the U.S. Department of Energy, to show that the mechanism first proposed by Ernest Rutherford and Mark Oliphant in 1933, then Philip Dee and C. W. Gilbert from the Cavendish Laboratory in 1936, and the results of an experiment conducted by French researchers from IN2P3 in 1969, was correct. The model and the experiment predicted two high energy α-particles of almost equal energy. One was the primary α-particle and the other a secondary α-particle, both emitted at an angle of 155 degrees. A third secondary α-particle is also emitted, of lower energy.
Inverse cyclotron converter (ICC)
Direct energy conversion systems for other fusion power generators, involving collector plates and "Venetian blinds" or a long linear microwave cavity filled with a 10-Tesla magnetic field and rectennas, are not suitable for fusion with ion energies above 1 MeV. The company employed a much shorter device, an inverse cyclotron converter (ICC) that operated at 5 MHz and required a magnetic field of only 0.6 tesla. The linear motion of fusion product ions is converted to circular motion by a magnetic cusp. Energy is collected from the charged particles as they spiral past quadrupole electrodes. More classical collectors collect particles with energy less than 1 MeV.
The estimation of the ratio of fusion power to radiation loss for a 100 MW FRC has been calculated for different fuels, assuming a converter efficiency of 90% for α-particles, 40% for Bremsstrahlung radiation through photoelectric effect, and 70% for the accelerators, with 10T superconducting magnetic coils:
Q = 35 for deuterium and tritium
Q = 3 for deuterium and helium-3
Q = 2.7 for hydrogen and boron-11
Q = 4.3 for polarized hydrogen and boron-11.
The spin polarization enhances the fusion cross section by a factor of 1.6 for 11B. A further increase in Q should result from the nuclear quadrupole moment of 11B. And another increase in Q may also result from the mechanism allowing the production of a secondary high-energy α-particle.
TAE Technologies plans to use the p-11B reaction in their commercial FRC for safety reasons and because the energy conversion systems are simpler and smaller: since no neutron is released, thermal conversion is unnecessary, hence no heat exchanger or steam turbine.
The "truck-sized" 100 MW reactors designed in TAE presentations are based on these calculations.
Progression of Machines
Sewer Pipe
Developed in 1998, the company's proof-of-concept machine was built from a common sewer pipe and first demonstrated the viability of forming a field-reversed configuration.
CBFR-SPS
The CBFR-SPS is a 100 MW-class, magnetic field-reversed configuration, aneutronic fusion rocket concept. The reactor is fueled by an energetic-ion mixture of hydrogen and boron (p-11B). Fusion products are helium ions (α-particles) expelled axially out of the system. α-particles flowing in one direction are decelerated and their energy directly converted to power the system, while particles expelled in the opposite direction provide thrust. Since the fusion products are charged particles and the reaction releases no neutrons, the system does not require a massive radiation shield.
C-2
Various experiments have been conducted by TAE Technologies on the world's largest compact toroid device, called "C-2". Results began to be regularly published in 2010, with papers listing as many as 60 authors. C-2 results showed peak ion temperatures of 400 electronvolts (5 million degrees Celsius), electron temperatures of 150 electronvolts, plasma densities of 1×10¹⁹ m⁻³, and 1×10⁹ fusion neutrons per second for 3 milliseconds.
Budker Institute
The Budker Institute of Nuclear Physics, Novosibirsk, built a powerful plasma injector, shipped in late 2013 to the company's research facility. The device produces a neutral beam in the range of 5 to 20 MW, and injects energy inside the reactor to transfer it to the fusion plasma.
C-2U
In March 2015, the upgraded C-2U with edge-biasing beams showed a 10-fold improvement in lifetime, with FRCs heated to 10 million degrees Celsius and lasting 5 milliseconds with no sign of decay. The C-2U functions by firing two donut-shaped plasmas at each other at 1 million kilometers per hour; the result is a cigar-shaped FRC as much as 3 meters long and 40 centimeters across. The plasma was controlled with magnetic fields generated by electrodes and magnets at each end of the tube. The upgraded particle beam system provided 10 megawatts of power.
C-2W/Norman
In 2017, TAE Technologies renamed the C-2W reactor "Norman" in honor of the company's co-founder Norman Rostoker who died in 2014. In July 2017, the company announced that the Norman reactor had achieved plasma. The Norman reactor is reportedly able to operate at temperatures between 50 million and 70 million°C. In February 2018, the company announced that after 4,000 experiments it had reached a high temperature of nearly 20 million°C. In 2018, TAE Technologies partnered with the Applied Science team at Google to develop the technology inside Norman to maximize electron temperature, aiming to demonstrate breakeven fusion. In 2021, TAE Technologies stated Norman was regularly producing a stable plasma at temperatures over 50 million degrees, meeting a key milestone for the machine and unlocking an additional $280 million in financing, bringing its total of funding raised up to $880 million. In 2023, the company published a peer-reviewed paper reporting the first measurement of p-11B fusion in magnetically confined plasma at the LHD in Japan.
Copernicus
The Copernicus device will operate using hydrogen and is expected to attain net energy gain around 2025. The approximate cost of the reactor is $200 million, and it is intended to reach temperatures of around 100 million°C to validate conditions needed for deuterium-tritium fusion while the company scales to p-11B fuel for its superior environmental and cost profile. TAE intends to start construction in 2022.
Da Vinci
The Da Vinci device is a proposed successor device to Copernicus, and a prototype for a commercially scalable reactor. It is scheduled to be developed in the second half of the 2020s and is expected to achieve 3 billion°C and produce fusion energy from the p-11B fuel cycle.
See also
China Fusion Engineering Test Reactor
Commonwealth Fusion Systems
Dense plasma focus
Fusion Industry Association
General Fusion
Helion Energy
Polywell
Spherical Tokamak for Energy Production
References
External links
Accelerator physics
Fusion power companies
Nuclear power companies of the United States
Nuclear technology companies of the United States | TAE Technologies | [
"Physics"
] | 3,288 | [
"Applied and interdisciplinary physics",
"Accelerator physics",
"Experimental physics"
] |
42,943,431 | https://en.wikipedia.org/wiki/Developing%20Unconventional%20Gas | Developing Unconventional Gas or DUG is a series of annual regional conferences of the unconventional oil industry. Several notable key note speakers have visited DUG conferences including Leon Panetta at Pittsburgh's in 2014, T. Boone Pickens in 2011 and George W. Bush in 2013.
Annual Conferences include:
DUG East - held annually at the David L. Lawrence Convention Center in Pittsburgh.
DUG Australia - held annually in Brisbane, Australia.
DUG Eagle Ford - held annually in San Antonio, Texas.
DUG Midcontinent - held annually in Tulsa.
DUG Bakken and Niobrara - held annually in Denver.
DUG Permian - held annually in Fort Worth.
Executive Oil Conference - held annually in Midland, Texas.
Crude in Motion Conference - held annually in Houston.
Marcellus-Utica Midstream Conference - held annually in Pittsburgh.
Offshore Executive Conference - held annually in Houston.
Energy Capital Conference - held annually in Houston.
A&D Strategies and Opportunities Conference - held annually in Dallas.
External links
DUG conference list
References
Unconventional oil
Peak oil
Petroleum production
Recurring events established in 2009
Technology conventions | Developing Unconventional Gas | [
"Chemistry"
] | 218 | [
"Petroleum",
"Unconventional oil"
] |
42,943,508 | https://en.wikipedia.org/wiki/Triadimefon | Triadimefon is a fungicide used in agriculture to control various fungal diseases. As a seed treatment, it is used on barley, corn, cotton, oats, rye, sorghum, and wheat. In fruit it is used on pineapple and banana. Non-food uses include pine seedlings, Christmas trees, turf, ornamental plants, and landscaping.
References
Aromatase inhibitors
Fungicides
Triazoles
4-Chlorophenyl compounds | Triadimefon | [
"Chemistry",
"Biology"
] | 95 | [
"Fungicides",
"Organic compounds",
"Biocides",
"Organic compound stubs",
"Organic chemistry stubs"
] |
42,943,520 | https://en.wikipedia.org/wiki/Halobiforma%20haloterrestris | Halobiforma haloterrestris is an extremely halophilic member of the Halobacteria and the type species of the genus Halobiforma. H. haloterrestris is aerobic and motile. The cells are red-pigmented, neutrophilic and show rod, coccus and slightly pleomorphic morphology.
References
Further reading
Rothschild, Lynn J., and Rocco L. Mancinelli. "Life in extreme environments." Nature 409.6823 (2001): 1092–1101.
Rehm, Bernd, ed. Microbial bionanotechnology: biological self-assembly systems and biopolymer-based nanostructures. Horizon Scientific Press, 2006.
Seckbach, Joseph, Aharon Oren, and Helga Stan-Lotter, eds. Polyextremophiles: life under multiple forms of stress. Vol. 27. Springer, 2013.
Stan-Lotter, Helga, and Sergiu Fendrihan. Adaption of microbial life to environmental extremes. Springer Wien, New York, 2012.
Bej, Asim K., Jackie Aislabie, and Ronald M. Atlas, eds. Polar microbiology: the ecology, biodiversity and bioremediation potential of microorganisms in extremely cold environments. CRC Press, 2009.
External links
LPSN
Type strain of Halobiforma haloterrestris at BacDive - the Bacterial Diversity Metadatabase
Halobacteria
Archaea described in 2002 | Halobiforma haloterrestris | [
"Biology"
] | 320 | [
"Archaea",
"Archaea stubs"
] |
42,943,539 | https://en.wikipedia.org/wiki/Muscodor%20roseus | Muscodor roseus is an anamorphic fungus in the family Xylariaceae. It is an endophyte that colonizes the inner bark, sapwood and outer xylem of the plants Grevillea pteridifolia and Erythrophleum chlorostachys, found in the Northern Territory of Australia. It grows as a pinkish, felt-like mycelium on several media, and produces a mixture of volatile antibiotics. Cultures tend to have a musty odour. The specific epithet roseus means "pink".
References
Further reading
Grimme, Eva. (2004). Effects of mycofumigation using Muscodor albus and Muscodor roseus on diseases of sugar beet and chrysanthemum [electronic resource]/by Eva Grimme. Diss. Montana State University-Bozeman, College of Agriculture.
External links
Xylariales
Fungi described in 2002
Fungi of Australia
Fungus species | Muscodor roseus | [
"Biology"
] | 201 | [
"Fungi",
"Fungus species"
] |
42,943,848 | https://en.wikipedia.org/wiki/Grapevine%20leafroll-associated%20viruses | Grapevine leafroll-associated virus (GLRaV) is a name for a group of viruses that infect grapevine.
Obscure mealybugs (Pseudococcus viburni) feed on the phloem of vines and woody-stemmed plants, especially pear and apple trees and grape vines. Some individuals are vectors for infectious pathogens and can transmit them from plant to plant while feeding; mealybug-spread grapevine leafroll-associated virus 3 (GLRaV-3), in particular, has wreaked havoc among the grapes of New Zealand, reducing the crop yield of infected vineyards by up to 60%.
The biggest problems in Grapevine Leafroll Disease are reduced grape yield, altered grape ripening, and altered grape chemistry. Leafroll viruses are associated with rugose wood condition of grapevine.
References
Closteroviridae
Viral grape diseases | Grapevine leafroll-associated viruses | [
"Biology"
] | 179 | [
"Virus stubs",
"Viruses"
] |
42,944,052 | https://en.wikipedia.org/wiki/Causal%20fermion%20systems | The theory of causal fermion systems is an approach to describe fundamental physics. It provides a unification of the weak, the strong and the electromagnetic forces with gravity at the level of classical field theory. Moreover, it gives quantum mechanics as a limiting case and has revealed close connections to quantum field theory. Therefore, it is a candidate for a unified physical theory.
Instead of introducing physical objects on a preexisting spacetime manifold, the general concept is to derive spacetime as well as all the objects therein as secondary objects from the structures of an underlying causal fermion system. This concept also makes it possible to generalize notions of differential geometry to the non-smooth setting. In particular, one can describe situations when spacetime no longer has a manifold structure on the microscopic scale (like a spacetime lattice or other discrete or continuous structures on the Planck scale). As a result, the theory of causal fermion systems is a proposal for quantum geometry and an approach to quantum gravity.
Causal fermion systems were introduced by Felix Finster and collaborators.
Motivation and physical concept
The physical starting point is the fact that the Dirac equation in Minkowski space has solutions of negative energy which are usually associated to the Dirac sea. Taking the concept seriously that the states of the Dirac sea form an integral part of the physical system, one finds that many structures (like the causal and metric structures as well as the bosonic fields) can be recovered from the wave functions of the sea states. This leads to the idea that the wave functions of all occupied states (including the sea states) should be regarded as the basic physical objects, and that all structures in spacetime arise as a result of the collective interaction of the sea states with each other and with the additional particles and "holes" in the sea. Implementing this picture mathematically leads to the framework of causal fermion systems.
More precisely, the correspondence between the above physical situation and the mathematical framework is obtained as follows. All occupied states span a Hilbert space of wave functions in Minkowski space. The observable information on the distribution of the wave functions in spacetime is encoded in the local correlation operators F(x), which in an orthonormal basis (ψᵢ) have the matrix representation

(F(x))ⁱⱼ = −ψ̄ᵢ(x) ψⱼ(x)

(where ψ̄ = ψ†γ⁰ is the adjoint spinor).
In order to make the wave functions into the basic physical objects, one considers the set $\{F(x)\}$ of all local correlation operators as a set of linear operators on an abstract Hilbert space. The structures of Minkowski space are all disregarded, except for the volume measure $d^4x$, which is transformed to a corresponding measure on the linear operators (the "universal measure"). The resulting structures, namely a Hilbert space together with a measure on the linear operators thereon, are the basic ingredients of a causal fermion system.
The above construction can also be carried out in more general spacetimes. Moreover, taking the abstract definition as the starting point, causal fermion systems allow for the description of generalized "quantum spacetimes." The physical picture is that one causal fermion system describes a spacetime together with all structures and objects therein (like the causal and the metric structures, wave functions and quantum fields). In order to single out the physically admissible causal fermion systems, one must formulate physical equations. In analogy to the Lagrangian formulation of classical field theory, the physical equations for causal fermion systems are formulated via a variational principle, the so-called causal action principle. Since one works with different basic objects, the causal action principle has a novel mathematical structure where one minimizes a positive action under variations of the universal measure. The connection to conventional physical equations is obtained in a certain limiting case (the continuum limit) in which the interaction can be described effectively by gauge fields coupled to particles and antiparticles, whereas the Dirac sea is no longer apparent.
General mathematical setting
In this section the mathematical framework of causal fermion systems is introduced.
Definition
A causal fermion system of spin dimension $n \in \mathbb{N}$ is a triple $(\mathcal{H}, \mathcal{F}, \rho)$ where
$\mathcal{H}$ is a complex Hilbert space.
$\mathcal{F}$ is the set of all self-adjoint linear operators of finite rank on $\mathcal{H}$ which (counting multiplicities) have at most $n$ positive and at most $n$ negative eigenvalues.
$\rho$ is a measure on $\mathcal{F}$.
The measure $\rho$ is referred to as the universal measure.
As will be outlined below, this definition is rich enough to encode analogs of the mathematical structures needed to formulate physical theories. In particular, a causal fermion system gives rise to a spacetime together with additional structures that generalize objects like spinors, the metric and curvature. Moreover, it comprises quantum objects like wave functions and a fermionic Fock state.
The causal action principle
Inspired by the Lagrangian formulation of classical field theory, the dynamics of a causal fermion system is described by a variational principle defined as follows.
Given a Hilbert space $\mathcal{H}$ and the spin dimension $n$, the set $\mathcal{F}$ is defined as above. Then for any $x, y \in \mathcal{F}$, the product $xy$ is an operator of rank at most $2n$. It is not necessarily self-adjoint because in general $(xy)^* = yx \neq xy$. We denote the non-trivial eigenvalues of the operator $xy$ (counting algebraic multiplicities) by
$\lambda^{xy}_1, \ldots, \lambda^{xy}_{2n} \in \mathbb{C}.$
Moreover, the spectral weight $|A|$ of an operator $A$ with non-trivial eigenvalues $\lambda^A_i$ is defined by
$|A| := \sum_i \big|\lambda^A_i\big|.$
The Lagrangian is introduced by
$\mathcal{L}(x,y) := \big|(xy)^2\big| - \frac{1}{2n}\, |xy|^2.$
The causal action is defined by
$\mathcal{S}(\rho) := \iint_{\mathcal{F} \times \mathcal{F}} \mathcal{L}(x,y)\, d\rho(x)\, d\rho(y).$
The causal action principle is to minimize $\mathcal{S}$ under variations of $\rho$ within the class of (positive) Borel measures under the following constraints:
Boundedness constraint: $\iint_{\mathcal{F} \times \mathcal{F}} |xy|^2 \, d\rho(x)\, d\rho(y) \leq C$ for some positive constant $C$.
Trace constraint: $\int_{\mathcal{F}} \operatorname{tr}(x) \, d\rho(x)$ is kept fixed.
The total volume $\rho(\mathcal{F})$ is preserved.
Here on $\mathcal{F}$ one considers the topology induced by the sup-norm on the bounded linear operators on $\mathcal{H}$.
The constraints prevent trivial minimizers and ensure existence, provided that $\mathcal{H}$ is finite-dimensional.
This variational principle also makes sense in the case that the total volume $\rho(\mathcal{F})$ is infinite if one considers variations $\tilde{\rho}$ of bounded variation with $(\tilde{\rho} - \rho)(\mathcal{F}) = 0$.
Inherent structures
In contemporary physical theories, the word spacetime refers to a Lorentzian manifold $(\mathcal{M}, g)$. This means that spacetime is a set of points enriched by topological and geometric structures. In the context of causal fermion systems, spacetime does not need to have a manifold structure. Instead, spacetime is a set of operators on a Hilbert space (a subset of $\mathcal{F}$). This implies additional inherent structures that correspond to and generalize usual objects on a spacetime manifold.
For a causal fermion system $(\mathcal{H}, \mathcal{F}, \rho)$,
we define spacetime $M$ as the support of the universal measure,
$M := \operatorname{supp} \rho \subset \mathcal{F}.$
With the topology induced by $\mathcal{F}$,
spacetime $M$ is a topological space.
Causal structure
For $x, y \in M$, we denote the non-trivial eigenvalues of the operator $xy$ (counting algebraic multiplicities) by $\lambda^{xy}_1, \ldots, \lambda^{xy}_{2n}$.
The points $x$ and $y$ are defined to be spacelike separated if all the $\lambda^{xy}_j$ have the same absolute value. They are timelike separated if the $\lambda^{xy}_j$ do not all have the same absolute value and are all real. In all other cases, the points $x$ and $y$ are lightlike separated.
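As a toy illustration of this trichotomy (the code and the function name are mine, not part of the theory's literature), the classification can be computed directly from a list of the non-trivial eigenvalues of the operator product:

```python
def classify_separation(eigenvalues, tol=1e-9):
    """Classify the separation of two spacetime points x, y from the
    non-trivial eigenvalues of the operator product xy.

    spacelike: all eigenvalues have the same absolute value
    timelike:  absolute values differ, but all eigenvalues are real
    lightlike: every remaining case
    """
    values = [complex(v) for v in eigenvalues]
    magnitudes = [abs(v) for v in values]
    if max(magnitudes) - min(magnitudes) <= tol:
        return "spacelike"
    if all(abs(v.imag) <= tol for v in values):
        return "timelike"
    return "lightlike"
```

For instance, a complex-conjugate pair such as 1 ± i has equal absolute values and is classified as spacelike, while two distinct real eigenvalues give a timelike separation.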
This notion of causality fits together with the "causality" of the above causal action in the sense that if two spacetime points are space-like separated, then the Lagrangian vanishes. This corresponds to the physical notion of causality that spatially separated spacetime points do not interact. This causal structure is the reason for the notion "causal" in causal fermion system and causal action.
Let $\pi_x$ denote the orthogonal projection on the subspace $S_x := x(\mathcal{H}) \subset \mathcal{H}$. Then the sign of the functional
$\mathcal{C}(x, y) := i \operatorname{tr} \big( y\, x\, \pi_y\, \pi_x - x\, y\, \pi_x\, \pi_y \big)$
distinguishes the future from the past. In contrast to the structure of a partially ordered set, the relation "lies in the future of" is in general not transitive. But it is transitive on the macroscopic scale in typical examples.
Spinors and wave functions
For every $x \in \mathcal{F}$ the spin space is defined by $S_x := x(\mathcal{H})$; it is a subspace of $\mathcal{H}$ of dimension at most $2n$. The spin scalar product defined by
$\prec \psi \,|\, \phi \succ_x := -\langle \psi \,|\, x\, \phi \rangle_{\mathcal{H}} \quad \text{for } \psi, \phi \in S_x$
is an indefinite inner product on $S_x$ of signature $(p, q)$ with $p, q \leq n$.
A wave function $\psi$ is a mapping
$\psi \;:\; M \to \mathcal{H} \quad \text{with} \quad \psi(x) \in S_x \ \text{for all } x \in M.$
On wave functions for which the norm $|||\psi|||$ defined by
$|||\psi|||^2 := \int_M \big\langle \psi(x) \,\big|\, |x|\, \psi(x) \big\rangle_{\mathcal{H}} \, d\rho(x)$
is finite (where $|x|$ is the absolute value of the symmetric operator $x$), one can define the inner product
$\langle \psi \,|\, \phi \rangle := \int_M \prec \psi(x) \,|\, \phi(x) \succ_x \, d\rho(x).$
Together with the topology induced by the norm $|||\cdot|||$, one obtains a Krein space $(\mathcal{K}, \langle \cdot | \cdot \rangle)$.
To any vector $u \in \mathcal{H}$ we can associate the wave function
$\psi^u(x) := \pi_x u$
(where $\pi_x$ is again the orthogonal projection to the spin space $S_x$).
This gives rise to a distinguished family of wave functions, referred to as the
wave functions of the occupied states.
The fermionic projector
The kernel of the fermionic projector $P(x,y)$ is defined by
$P(x,y) := \pi_x \, y|_{S_y} \;:\; S_y \to S_x$
(where $\pi_x$ is again the orthogonal projection on the spin space $S_x$,
and $\cdot|_{S_y}$ denotes the restriction to $S_y$). The fermionic projector $P$ is the operator
$(P\psi)(x) := \int_M P(x,y)\, \psi(y)\, d\rho(y),$
which has the dense domain of definition given by all vectors satisfying the conditions
As a consequence of the causal action principle, the kernel of the fermionic projector has additional normalization properties which justify the name projector.
Connection and curvature
Being an operator from one spin space to another, the kernel of the fermionic projector gives relations between different spacetime points. This fact can be used to introduce a spin connection
$D_{x,y} \;:\; S_y \to S_x.$
The basic idea is to take a polar decomposition of $P(x,y)$. The construction is made more involved by the fact that the spin connection should induce a corresponding metric connection
$\nabla_{x,y} \;:\; T_y \to T_x,$
where the tangent space $T_x$ is a specific subspace of the linear operators on $S_x$ endowed with a Lorentzian metric.
The spin curvature is defined as the holonomy of the spin connection,
$\mathfrak{R}(x,y,z) := D_{x,y}\, D_{y,z}\, D_{z,x} \;:\; S_x \to S_x.$
Similarly, the metric connection gives rise to metric curvature. These geometric structures give rise to a proposal for a quantum geometry.
The Euler–Lagrange equations and the linearized field equations
A minimizer $\rho$ of the causal action satisfies corresponding Euler–Lagrange equations. They state that the function $\ell$
defined by
$\ell(x) := \int_M \mathcal{L}_\kappa(x,y)\, d\rho(y) - \mathfrak{s}$
(with two Lagrange parameters $\kappa$ and $\mathfrak{s}$, where $\mathcal{L}_\kappa := \mathcal{L} + \kappa\, |xy|^2$) vanishes and is minimal on the support of $\rho$,
$\ell|_{\operatorname{supp} \rho} \equiv 0 = \min_{\mathcal{F}} \ell.$
For the analysis, it is convenient to introduce jets $\mathfrak{u} = (a, u)$ consisting of a real-valued function $a$ on $M$ and a vector field $u$ on $\mathcal{F}$ along $M$, and to denote the combination of multiplication and directional derivative by $\nabla_{\mathfrak{u}} \ell(x) := a(x)\, \ell(x) + (D_u \ell)(x)$. Then the Euler–Lagrange equations imply that the weak Euler–Lagrange equations
$\nabla_{\mathfrak{u}} \ell|_M = 0$
hold for any test jet $\mathfrak{u}$.
Families of solutions of the Euler–Lagrange equations are generated infinitesimally by a jet $\mathfrak{v}$ which satisfies the linearized field equations
$\langle \mathfrak{u}, \Delta \mathfrak{v} \rangle(x) = 0,$
to be satisfied for all test jets $\mathfrak{u}$ and all $x \in M$, where the Laplacian $\Delta$ is defined by
$\langle \mathfrak{u}, \Delta \mathfrak{v} \rangle(x) := \nabla_{\mathfrak{u}} \Big( \int_M \big( \nabla_{1, \mathfrak{v}} + \nabla_{2, \mathfrak{v}} \big) \mathcal{L}(x,y)\, d\rho(y) - \nabla_{\mathfrak{v}}\, \mathfrak{s} \Big).$
The Euler–Lagrange equations describe the dynamics of the causal fermion system, whereas small perturbations of the system are described by the linearized field equations.
Conserved surface layer integrals
In the setting of causal fermion systems, spatial integrals are expressed by so-called surface layer integrals. In general terms, a surface layer integral is a double integral of the form
$\int_\Omega d\rho(x) \int_{M \setminus \Omega} d\rho(y)\; (\cdots)(x,y),$
where one variable is integrated over a subset $\Omega \subset M$, and the other variable is integrated over the complement of $\Omega$. It is possible to express the usual conservation laws for charge, energy, ... in terms of surface layer integrals. The corresponding conservation laws are a consequence of the Euler–Lagrange equations of the causal action principle and the linearized field equations. For the applications, the most important surface layer integrals are the current integral, the symplectic form, the surface layer inner product and the nonlinear surface layer integral.
Bosonic Fock space dynamics
Based on the conservation laws for the above surface layer integrals, the dynamics of a causal fermion system as described by the Euler–Lagrange equations corresponding to the causal action principle can be rewritten as a linear, norm-preserving dynamics on the bosonic Fock space built up of solutions of the linearized field equations. In the so-called holomorphic approximation, the time evolution respects the complex structure, giving rise to a unitary time evolution on the bosonic Fock space.
A fermionic Fock state
If $\mathcal{H}$ has finite dimension $f$, choosing an orthonormal basis $u_1, \ldots, u_f$ of $\mathcal{H}$ and taking the wedge product of the corresponding wave functions
$\Psi := \psi^{u_1} \wedge \cdots \wedge \psi^{u_f}$
gives a state of an $f$-particle fermionic Fock space. Due to the total anti-symmetrization, this state depends on the choice of the basis of $\mathcal{H}$ only by a phase factor. This correspondence explains why the vectors in the particle space are to be interpreted as fermions. It also motivates the name causal fermion system.
Underlying physical principles
Causal fermion systems incorporate several physical principles in a specific way:
A local gauge principle: In order to represent the wave functions in components, one chooses bases of the spin spaces. Denoting the signature of the spin scalar product at $x$ by $(p,q)$, a pseudo-orthonormal basis $(\mathfrak{e}_\alpha)_{\alpha=1,\ldots,p+q}$ of $S_x$ is given by
$\prec \mathfrak{e}_\alpha \,|\, \mathfrak{e}_\beta \succ_x = s_\alpha\, \delta_{\alpha\beta} \quad \text{with } s_1, \ldots, s_p = 1,\ s_{p+1}, \ldots, s_{p+q} = -1.$
Then a wave function $\psi$ can be represented with component functions,
$\psi(x) = \sum_{\alpha=1}^{p+q} \psi^\alpha(x)\, \mathfrak{e}_\alpha(x).$
The freedom of choosing the bases $(\mathfrak{e}_\alpha)$ independently at every spacetime point corresponds to local unitary transformations of the wave functions,
$\psi^\alpha(x) \,\to\, \sum_\beta U(x)^\alpha{}_\beta\, \psi^\beta(x) \quad \text{with } U(x) \in \mathrm{U}(p,q).$
These transformations have the interpretation as local gauge transformations. The gauge group is determined to be the isometry group of the spin scalar product. The causal action is gauge invariant in the sense that it does not depend on the choice of spinor bases.
The equivalence principle: For an explicit description of spacetime one must work with local coordinates. The freedom in choosing such coordinates generalizes the freedom in choosing general reference frames in a spacetime manifold. Therefore, the equivalence principle of general relativity is respected. The causal action is generally covariant in the sense that it does not depend on the choice of coordinates.
The Pauli exclusion principle: The fermionic Fock state associated to the causal fermion system makes it possible to describe the many-particle state by a totally antisymmetric wave function. This gives agreement with the Pauli exclusion principle.
The principle of causality is incorporated by the form of the causal action in the sense that spacetime points with spacelike separation do not interact.
Limiting cases
Causal fermion systems have mathematically sound limiting cases that give a connection to conventional physical structures.
Lorentzian spin geometry of globally hyperbolic spacetimes
Starting on any globally hyperbolic Lorentzian spin manifold $(\mathcal{M}, g)$ with spinor bundle $S\mathcal{M}$, one gets into the framework of causal fermion systems by choosing $\mathcal{H}$ as a subspace of the solution space of the Dirac equation. Defining the so-called local correlation operator $F(p)$ for $p \in \mathcal{M}$ by
$\langle u \,|\, F(p)\, v \rangle_{\mathcal{H}} = -\prec u(p) \,|\, v(p) \succ_p$
(where $\prec \cdot \,|\, \cdot \succ_p$ is the inner product on the fibre $S_p\mathcal{M}$) and introducing the universal measure as the push-forward of the volume measure $\mu$ on $\mathcal{M}$,
$\rho := F_* \mu,$
one obtains a causal fermion system. For the local correlation operators to be well-defined, $\mathcal{H}$ must consist of continuous sections, typically making it necessary to introduce a regularization on the microscopic scale $\varepsilon$. In the limit $\varepsilon \searrow 0$, all the intrinsic structures on the causal fermion system (like the causal structure, connection and curvature) go over to the corresponding structures on the Lorentzian spin manifold. Thus the geometry of spacetime is encoded completely in the corresponding causal fermion systems.
Quantum mechanics and classical field equations
The Euler–Lagrange equations corresponding to the causal action principle have a well-defined limit if the spacetimes of the causal fermion systems go over to Minkowski space. More specifically, one considers a sequence of causal fermion systems (for example with finite-dimensional in order to ensure the existence of the fermionic Fock state as well as of minimizers of the causal action), such that the corresponding wave functions go over to a configuration of interacting Dirac seas involving additional particle states or "holes" in the seas. This procedure, referred to as the continuum limit, gives effective equations having the structure of the Dirac equation coupled to classical field equations. For example, for a simplified model involving three elementary fermionic particles
in spin dimension two, one obtains an interaction via a classical axial gauge field described by the coupled Dirac– and Yang–Mills equations
Taking the non-relativistic limit of the Dirac equation, one obtains the Pauli equation or the Schrödinger equation, giving the correspondence to quantum mechanics. Here and depend on the regularization and determine the coupling constant as well as the rest mass.
Likewise, for a system involving neutrinos in spin dimension 4, one gets effectively a massive gauge field coupled to the left-handed component of the Dirac spinors. The fermion configuration of the standard model can be described in spin dimension 16.
The Einstein field equations
For the just-mentioned system involving neutrinos, the continuum limit also yields the Einstein field equations coupled to the Dirac spinors,
up to corrections of higher order in the curvature tensor. Here the cosmological constant $\Lambda$ is undetermined, and $T_{jk}$ denotes the energy-momentum tensor of the spinors and the gauge field. The gravitational constant depends on the regularization length.
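For orientation, the field equations referred to here take, up to the mentioned higher-order curvature corrections, the familiar form below. The notation is supplied here for illustration ($\kappa$ for the gravitational coupling, $T_{jk}$ for the energy-momentum tensor of the spinors and the gauge field); the text above fixes neither symbol:

```latex
R_{jk} \;-\; \frac{1}{2}\, R\, g_{jk} \;+\; \Lambda\, g_{jk} \;=\; \kappa\, T_{jk}
```

Here $\Lambda$ remains undetermined and $\kappa$ depends on the regularization length, as stated above.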
Quantum field theory in Minkowski space
Starting from the coupled system of equations obtained in the continuum limit and expanding in powers of the coupling constant, one obtains integrals which correspond to Feynman diagrams on the tree level. Fermionic loop diagrams arise due to the interaction with the sea states, whereas bosonic loop diagrams appear when taking averages over the microscopic (in general non-smooth) spacetime structure of a causal fermion system (so-called microscopic mixing). The detailed analysis and comparison with standard quantum field theory is work in progress.
References
Further reading
Web platform on causal fermion systems
Quantum gravity
Mathematical physics
Quantum field theory | Causal fermion systems | [
"Physics",
"Mathematics"
] | 3,486 | [
"Quantum field theory",
"Applied mathematics",
"Theoretical physics",
"Unsolved problems in physics",
"Quantum mechanics",
"Quantum gravity",
"Mathematical physics",
"Physics beyond the Standard Model"
] |
42,944,845 | https://en.wikipedia.org/wiki/First-magnitude%20star | First-magnitude stars are the brightest stars in the night sky, with apparent magnitudes lower (i.e. brighter) than +1.50. Hipparchus, in the 1st century BC, introduced the magnitude scale. He allocated the first magnitude to the 20 brightest stars and the sixth magnitude to the faintest stars visible to the naked eye.
In the 19th century, this ancient scale of apparent magnitude was logarithmically defined, so that a star of magnitude 1.00 is exactly 100 times as bright as one of 6.00. The scale was also extended to even brighter celestial bodies such as Sirius (-1.5), Venus (-4), the full Moon (-12.7), and the Sun (-26.7).
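This logarithmic definition can be made concrete with a small helper function (illustrative only; the function name is not from the source):

```python
def flux_ratio(m1, m2):
    """Apparent-brightness (flux) ratio f1/f2 of two objects with
    apparent magnitudes m1 and m2.

    A difference of 5 magnitudes is defined as exactly a factor of 100,
    so one magnitude step corresponds to 100**(1/5), about 2.512.
    """
    return 100 ** ((m2 - m1) / 5)
```

For example, `flux_ratio(1.0, 6.0)` returns exactly 100.0, matching the definition above, and comparing Sirius (−1.5) with Bellatrix (+1.6) gives a factor of roughly 17.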
Hipparchus
Hipparchus ranked his stars in a very simple way. He listed the brightest stars as "of the first magnitude", which meant "the biggest." Stars less bright Hipparchus called "of the second magnitude", or second biggest. The faintest stars visible to the naked eye he called "of the sixth magnitude".
Naked-eye magnitude system
During a series of lectures given in 1736 at the University of Oxford, its then Professor of Astronomy explained:
Distribution on the Sky
In the modern scale, the 20 brightest stars of Hipparchus have magnitudes between −1.5 (Sirius) and +1.6 (Bellatrix, γ Orionis). The table below shows 22 stars brighter than magnitude +1.5, five of which were probably unknown to the Greek astronomers because of their far southern position.
Epsilon Canis Majoris has an apparent magnitude of almost exactly 1.5, so it is sometimes considered a first-magnitude star due to minor brightness variations.
Twelve of the 22 brightest stars lie in the northern celestial hemisphere and ten in the southern. On the seasonal evening sky, however, they are unevenly distributed: in Europe and the USA, 12–13 of these stars are visible on winter evenings but only 6–7 in summer. Nine of the brightest winter stars are part of the Winter Hexagon or surrounded by it.
Table of the 22 first-magnitude stars
(18 of them visible from Hipparchus' Greece)
First-magnitude deep-sky objects
Besides stars, there are also deep-sky objects that are first-magnitude objects, cumulatively brighter than +1.50, such as the Large Magellanic Cloud, the Milky Way, the Carina Nebula, the Hyades, the Pleiades and the Alpha Persei Cluster (with Eta Carinae, Theta Tauri, Alcyone and Mirfak as the brightest stars of the latter four).
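The cumulative brightness of such an extended object or cluster is obtained by converting each member's magnitude to a flux, summing, and converting back (a standard formula; the function name here is illustrative):

```python
import math

def combined_magnitude(mags):
    """Integrated apparent magnitude of several sources.

    Magnitudes are logarithmic, so they cannot be added directly:
    convert each to a relative flux 10**(-0.4 * m), sum the fluxes,
    then convert the total back to a magnitude.
    """
    total_flux = sum(10 ** (-0.4 * m) for m in mags)
    return -2.5 * math.log10(total_flux)
```

For instance, two equal sources of magnitude 1.0 combine to about magnitude 0.25, i.e. roughly 0.75 magnitudes brighter than either alone.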
See also
Absolute magnitude
List of brightest stars
Literature
Jeffrey Bennett et al., 2010: Astronomie. Die kosmische Perspektive (Ed. Harald Lesch), Chapter 15.1 (pp. 735–737). Pearson Studium Verlag, München,
H. Bernhard, D. Bennett, H. Rice, 1948: New Handbook of the Heavens, Chapter 5 (Stars of the Southern Sky). McGraw-Hill, New York
Patrick Moore, 1996: Brilliant Stars. Cassell Publishers Limited
James B. Kaler, 2013: "First Magnitude: A Book of the Bright Sky". World Scientific. 239 pages. , 9789814417426
References
Stellar astronomy
Observational astronomy
Photometry | First-magnitude star | [
"Astronomy"
] | 693 | [
"Observational astronomy",
"Astronomical sub-disciplines",
"Stellar astronomy"
] |
42,946,330 | https://en.wikipedia.org/wiki/Interpersonal%20theory%20of%20suicide | The interpersonal theory of suicide attempts to explain why individuals engage in suicidal behavior and to identify individuals who are at risk. It was developed by Thomas Joiner and is outlined in Why People Die By Suicide. The theory consists of three components that together lead to suicide attempts. According to the theory, the simultaneous presence of thwarted belongingness and perceived burdensomeness produce the desire for suicide. While the desire for suicide is necessary, it alone will not result in death by suicide. Rather, Joiner asserts that one must also have acquired capability (that is, the acquired ability) to overcome one's natural fear of death.
A number of risk factors have been linked to suicidal behavior, and there are many theories of suicide that integrate these established risk factors, but few are capable of explaining all of the phenomena associated with suicidal behavior as the interpersonal theory of suicide does. Another strength of this theory lies in its ability to be tested empirically. It is constructed in a way that allows for falsifiability. A number of studies have found at least partial support for the interpersonal theory of suicide. Specifically, a systematic review of 66 studies using the interpersonal theory of suicide found that the effect of perceived burdensomeness on suicide ideation was the most tested and supported relationship. The theory’s other predictions, particularly in terms of critical interaction effects, are less strongly supported.
Desire for suicide
Thwarted belongingness
Belongingness—feeling accepted by others—is believed to be a fundamental need, something that is essential for an individual's psychological health and well-being. Increased social connectedness—a construct related to belongingness—has been shown to lower risk for suicide. More specifically, being married, having children, and having more friends are associated with a lower risk of suicidal behavior. Additionally, "pulling together" (e.g., gathering for sporting events, celebrations) with others has been shown to have a preventive effect. For example, suicide rates have been lower on Super Bowl Sundays than on other Sundays, and it is believed that the social connectedness that comes from being a fan of a sports team increases one's feeling of belongingness. In contrast, social isolation is frequently reported prior to death by those who die by suicide.
Perceived burdensomeness
Perceived burdensomeness is the extent to which an individual perceives themselves as a burden on others or society. Joiner describes perceived burdensomeness as the belief that "my death is worth more than my life". Unemployment, medical or health problems, and incarceration are examples of situations in which a person may feel like they are a burden to others. It is important to note that the burdensomeness is "perceived", and is often a false belief. According to the theory, thwarted belongingness and perceived burdensomeness together, when perceived as stable and unchanging (that is, when one experiences hopelessness regarding these states), are enough to give rise to active suicidal desire.
Acquired capability
Joiner terms this "acquired" capability because it is not an ability with which humans are born. Rather, this ability to engage in suicidal behaviors is only acquired through life experiences. Fear of death is a natural and powerful instinct. According to the theory, one's fear of death is weakened when one is exposed to physical pain or provocative life experiences as these experiences often lead to fearlessness and pain insensitivity. These experiences could include childhood trauma, witnessing a traumatic event, suffering from a severe illness, or engaging in self-harm behaviors.
These behaviors are thought to result in desensitization to painful stimuli and to increase one's ability to engage in suicidal behaviors. This component is important in identifying individuals who are likely to attempt or die by suicide. For example, certain professions (e.g., soldiers, surgeons, and police officers) are exposed to physical pain or provocative experiences. More specifically, soldiers with a history of combat have likely been exposed to grave injuries and the deaths of others, and are habituated to fearsome and painful experiences. This is consistent with data indicating an increased rate of suicide in soldiers. Additionally, past suicide attempts have been found to be the strongest predictor of future attempts. This is consistent with Joiner's theory; individuals who attempt suicide will habituate to the fear of death, and this weakened fear will make an individual more likely to make a subsequent attempt.
Implications
A survey study of a large population-based cohort provides support for the interpersonal theory in that the interaction between thwarted belongingness and perceived burdensomeness predicted suicidal ideation, and suicidal ideation together with acquired capability predicted plans to attempt suicide and actual attempts.
The interpersonal theory of suicide identifies factors clinicians should assess for increased suicide risk and factors that should be targeted in prevention and treatment. Furthermore, the theory provides avenues of future research for scientists.
See also
Anomie
Workism
References
Interpersonal conflict
Suicide | Interpersonal theory of suicide | [
"Biology"
] | 992 | [
"Behavior",
"Human behavior",
"Suicide"
] |
42,946,792 | https://en.wikipedia.org/wiki/Phillipsia%20subpurpurea | Phillipsia subpurpurea is a species of fungus in the family Sarcoscyphaceae. It is found in Australia where it grows as a saprophyte on wood. The fungus was first described scientifically by English mycologists Miles Joseph Berkeley and Christopher Edmund Broome. Its cup-shaped fruit bodies lack stipes and have purplish interior surfaces.
References
External links
Fungi described in 1883
Fungi of Australia
Sarcoscyphaceae
Taxa named by Miles Joseph Berkeley
Taxa named by Christopher Edmund Broome
Fungus species | Phillipsia subpurpurea | [
"Biology"
] | 108 | [
"Fungi",
"Fungus species"
] |
42,946,915 | https://en.wikipedia.org/wiki/Phillipsia%20lutea | Phillipsia lutea is a species of fungus in the family Sarcoscyphaceae. It was originally described in 1969 by William Clark Denison from collections made in Costa Rica.
References
External links
Fungi described in 1969
Fungi of Central America
Sarcoscyphaceae
Fungus species | Phillipsia lutea | [
"Biology"
] | 56 | [
"Fungi",
"Fungus species"
] |
42,947,694 | https://en.wikipedia.org/wiki/Cell%20lineage | Cell lineage denotes the developmental history of a tissue or organ from the fertilized egg. This is based on the tracking of an organism's cellular ancestry due to the cell divisions and relocation as time progresses, this starts with the originator cells and finishing with a mature cell that can no longer divide.
This type of lineage can be studied by marking a cell (with fluorescent molecules or other traceable markers) and following its progeny after cell division. Some organisms, such as C. elegans, have a predetermined pattern of cell progeny, and the adult male always consists of 1031 cells; this is because cell division in C. elegans is genetically determined, a property known as eutely. This causes cell lineage and cell fate to be highly correlated. Other organisms, such as humans, have variable lineages and somatic cell numbers.
C. elegans: model organism
As one of the pioneers of cell lineage research, Dr. Sydney Brenner first began observing cell differentiation and succession in the nematode Caenorhabditis elegans in the 1960s. Dr. Brenner chose this organism for its transparent body, quick reproduction, ease of access, and small size, which made it ideal for following cell lineage under a microscope.
By 1976, Dr. Brenner and his associate, Dr. John Sulston, had identified part of the cell lineage in the developing nervous system of C. elegans. Initial results showed that the nematode was eutelic (each individual experiences the same differentiation pathways); however, work by Sulston and H. Robert Horvitz showed that several cells necessary for reproduction differentiate after hatching. These cells include vulval cells as well as muscle cells and neurons. This research also led to the initial observations of programmed cell death, or apoptosis.
After mapping various sections of the C. elegans cell lineage, Dr. Brenner and his associates were able to piece together the first complete and reproducible fate map of a cell lineage. They later received the 2002 Nobel Prize for their work on the genetic regulation of organ development and programmed cell death. C. elegans hermaphrodites possess both male and female reproductive organs, enabling them to store sperm and self-fertilize. The adult hermaphrodite contains 302 neurons and 959 somatic cells; during development 1090 somatic cells are generated, of which 131 undergo apoptosis, or programmed cell death. This makes C. elegans a model organism for studying cell lineage, since its transparent body allows the cell divisions to be observed directly.
History of cell lineage
One of the first studies of cell lineages took place in the 1870s, when Whitman studied cleavage patterns in leeches and small invertebrates. He found that some groups, such as nematode worms and ascidians, form a pattern of cell division which is identical between individuals and invariable. This high correlation between cell lineage and cell fate was thought to be determined by segregating factors within the dividing cells. Other organisms had stereotyped patterns of cell division and produced sublineages which were the progeny of particular precursor cells. These more variable cell fates are thought to be due to the cells' interaction with the environment. Breakthroughs in tracking cells with greater accuracy have aided the biological community, since a variety of fluorescent colors can now be used to distinguish the original cells and trace them easily. These fluorescent markers are attached to proteins by injection, allowing the labeled cells to be traced.
Techniques of fate mapping
Cell lineage can be determined by two methods: direct observation or clonal analysis. In early work, direct observation was used; however, it was highly limiting, as only small transparent samples could be studied. The invention of the confocal microscope allowed larger, more complicated organisms to be studied.
Perhaps the most popular method of cell fate mapping in the genetic era is through site-specific recombination mediated by the Cre-Lox or FLP-FRT systems. By utilizing the Cre-Lox or FLP-FRT recombination systems, a reporter gene (usually encoding a fluorescent protein) is activated and permanently labels the cell of interest and its offspring cells, hence the name cell lineage tracing. With these systems, researchers can investigate the function of a gene of interest in determining cell fate by designing a genetic model in which one recombination event manipulates the gene of interest and another recombination event activates a reporter gene. One minor issue is that the two recombination events may not occur simultaneously; thus, the results need to be interpreted with caution. Furthermore, some fluorescent reporters have such an extremely low recombination threshold that they may label cell populations at undesired time-points in the absence of induction.
Synthetic biology approaches and the CRISPR/Cas9 system to engineer new genetic systems that enable cells to autonomously record lineage information in their own genome have been developed. These systems are based on engineered, targeted mutation of defined genetic elements. By generating new, random genomic alterations in each cell generation these approaches facilitate reconstruction of lineage trees. These approaches promise to provide more comprehensive analysis of lineage relationships in model organisms. Computational tree reconstruction methods are also being developed for datasets generated by such approaches.
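As a toy illustration of this recording idea (a hypothetical marker model, not any specific published system; all names below are invented for the sketch), one can simulate cells that accumulate unique heritable markers at each division and then read relatedness off the shared markers:

```python
import itertools

# Global source of unique marker identifiers (one per recording event).
_new_marker = itertools.count()

def divide(cell):
    """Both daughters inherit the parent's marker set; each also acquires
    one new, unique marker (the heritable edit left by the recorder)."""
    return [cell | {next(_new_marker)} for _ in range(2)]

def simulate(n_generations):
    """Grow an unmarked zygote through synchronous rounds of division."""
    cells = [set()]
    for _ in range(n_generations):
        cells = [daughter for cell in cells for daughter in divide(cell)]
    return cells

def shared_markers(a, b):
    """Closer relatives share more markers, so pairwise overlaps
    suffice to reconstruct the lineage tree."""
    return len(a & b)
```

After two generations, sister cells share the marker created in their parent while cousins share none, so the pairwise overlaps recover the two-generation tree.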
Early developmental asymmetries
In humans after fertilization, the zygote divides into two cells. Somatic mutations that arise directly after the formation of the zygote, as well as later in development, can be used as markers to trace cell lineages throughout the body. Beginning with cleavages of the zygote, lineages were observed to contribute unequally to blood cells. As much as 90% of blood cells were found to be derived from just one of the first two blastomeres. In addition, normal development may result in unequal characteristics of symmetrical organs, such as between the left and right frontal and occipital cerebral cortex. It was proposed that the efficiency of DNA repair contributes to lineage imbalance, as additional time spent by a cell on DNA repair may decrease proliferation rate.
See also
Cell potency
GESTALT
References
Cell biology
Developmental biology concepts | Cell lineage | [
"Biology"
] | 1,280 | [
"Developmental biology concepts",
"Cell biology"
] |
47,474,831 | https://en.wikipedia.org/wiki/List%20of%20mechanical%20keyboards | Mechanical keyboards (or mechanical-switch keyboards) are computer keyboards which have an individual switch for each key.
The following table is a compilation list of mechanical keyboard models, brands, and series:
Mechanical keyboards
References
Computer keyboards
Mechanical keyboards | List of mechanical keyboards | [
"Technology"
] | 47 | [
"Computing-related lists",
"Lists of computer hardware"
] |
47,476,753 | https://en.wikipedia.org/wiki/Andrey%20Terekhov | Andrey Nikolaevich Terekhov (born 3 September 1949) is a Russian IT developer who created the Algol 68 LGU compiler and telecommunication systems.
Education
Terekhov studied computer science at Leningrad State University, graduating with honors. He holds a doctorate in Physical and Mathematical Sciences.
Memberships
Terekhov is a member of ACM and the IEEE Computer Society. In 2004 he became Chairman of the Board of Directors of RUSSOFT.
Research positions
In 1971, Terekhov began working at Leningrad State University as a junior research associate, and was ultimately promoted to head of systems programming there. In 1984 he was appointed Deputy Director at Zvezda and Krasnaya Zarya. Seven years later he founded and became director of the software company Lanit-Tercom, and in 1996 he founded and led the Software Engineering Chair of St. Petersburg State University.
In 2002 Terekhov organized, and subsequently guided, the Scientific Research Institute of Information Technology of St. Petersburg State University.
References
Soviet computer scientists
1949 births
Computing in the Soviet Union
Saint Petersburg State University alumni
Academic staff of Saint Petersburg State University
Living people
Russian businesspeople in information technology
ALGOL 68 | Andrey Terekhov | [
"Technology"
] | 239 | [
"Computing in the Soviet Union",
"History of computing"
] |
47,476,755 | https://en.wikipedia.org/wiki/Theory%20of%20fructification | In economics, the theory of fructification is a theory of the interest rate which was proposed by French economist and finance minister Anne Robert Jacques Turgot. The term theory of fructification is due to Eugen von Böhm-Bawerk who considered Turgot as the first economist who tried to develop a scientific explanation of the interest rate.
According to Turgot, a capitalist can either lend his money, or employ it in the purchase of a plot of land. Because fruitful land yields an annual rent forever, its price is given by the formula of a perpetual annuity: If A denotes the land's annual rent and r denotes the interest rate, the land price is simply A/r. From this formula, Turgot concluded that "the lower the interest rate, the more valuable is the land." Specifically, if the interest rate approached zero, the land price would become infinite. Because land prices must be finite, it follows that the interest rate is strictly positive. Turgot argued also that the mechanism which keeps interest rates above zero crowds out inefficient capital formation.
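Turgot's pricing rule is the closed form of an infinite stream of discounted rents. A minimal numerical check in Python (the rent and rate figures are arbitrary illustrations, and the function names are not from any source):

```python
def perpetuity_price(annual_rent, interest_rate):
    """Turgot's land price: the present value A/r of a perpetual annuity."""
    return annual_rent / interest_rate

def discounted_rents(annual_rent, interest_rate, years):
    """Truncated sum of discounted rents, sum over t=1..T of A / (1 + r)^t,
    which converges to A/r as the horizon T grows."""
    return sum(annual_rent / (1 + interest_rate) ** t
               for t in range(1, years + 1))

# A plot yielding 100 per year is worth 2,000 at a 5% rate, 10,000 at 1%:
assert abs(perpetuity_price(100, 0.05) - 2000) < 1e-9
assert abs(perpetuity_price(100, 0.01) - 10000) < 1e-9
# The finite sum approaches the perpetuity value from below:
assert abs(discounted_rents(100, 0.05, 500) - 2000) < 1e-6
```

Turgot's conclusion follows directly: as the interest rate approaches zero the quotient A/r grows without bound, so a finite land price forces a strictly positive interest rate.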
Böhm-Bawerk, who advanced a different theory of interest, considered Turgot's approach circular. However, according to Joseph Schumpeter, the eminent economic historian, "Turgot's contribution is not only by far the greatest performance in the field of interest theory the eighteenth century produced but it clearly foreshadowed much of the best thought of the last decades of the nineteenth."
Much later, economists demonstrated that the theory of fructification can be stated rigorously in a general equilibrium model. They also generalized Turgot's proposition in two respects. First, land which is useful for residential or industrial purposes can be substituted for agricultural land. Second, in a growing economy, the existence of land implies that the interest rate exceeds the growth rate if the land's income share is bounded away from zero. The latter result is notable because it states that land ensures dynamic efficiency.
References
Interest
Finance theories
Exponentials
Mathematical finance
Actuarial science
Economic history studies | Theory of fructification | [
"Mathematics"
] | 430 | [
"Applied mathematics",
"E (mathematical constant)",
"Actuarial science",
"Exponentials",
"Mathematical finance"
] |
47,477,895 | https://en.wikipedia.org/wiki/Cape%20Provinces | The Cape Provinces of South Africa is a biogeographical area used in the World Geographical Scheme for Recording Plant Distributions (WGSRPD). It is part of the WGSRPD region 27 Southern Africa. The area has the code "CPP". It includes the South African provinces of the Eastern Cape, the Northern Cape and the Western Cape, together making up most of the former Cape Province.
The area includes the Cape Floristic Region, the smallest of the six recognised floral kingdoms of the world, an area of extraordinarily high diversity and endemism, home to more than 9,000 vascular plant species, of which 69 percent are endemic.
See also
Northern Provinces
References
Bibliography
Biogeography | Cape Provinces | [
"Biology"
] | 148 | [
"Biogeography"
] |
47,478,045 | https://en.wikipedia.org/wiki/EGSY8p7 |
EGSY8p7 (EGSY-2008532660) is a distant galaxy in the constellation of Boötes, with a spectroscopic redshift of z = 8.68 (photometric redshift 8.57) measured using the W. M. Keck Observatory, and a light travel distance of 13.2 billion light-years from Earth. It is therefore observed as it existed 570 million years after the Big Bang, which occurred 13.8 billion years ago. In July 2015, EGSY8p7 was announced as the oldest and most distant known object, surpassing the previous record holder, EGS-zs8-1, which had been identified in May 2015 as the oldest and most distant object then known. In March 2016, Pascal Oesch, one of the discoverers of EGSY8p7, announced the discovery of GN-z11, an older and more distant galaxy.
The galaxy contains a supermassive black hole, CEERS 1019.
Detection
The light of the EGSY8p7 galaxy appears to have been magnified twofold by gravitational lensing in the light's travel to Earth, enabling the detection of EGSY8p7, which would not have been possible without the magnification. EGSY8p7's distance from Earth was determined by measuring the redshift of Lyman-alpha emissions. EGSY8p7 is the most distant known detection of hydrogen's Lyman-alpha emissions. The distance of this detection was surprising, because neutral hydrogen (atomic hydrogen) clouds filling the early universe should have absorbed these emissions, even by some hydrogen cloud sources closer to Earth, according to the standard cosmological model. A possible explanation for the detection would be that reionization progressed in a "patchy" manner, rather than homogeneously throughout the universe, creating patches where the EGSY8p7 hydrogen Lyman-alpha emissions could travel to Earth, because there were no neutral hydrogen clouds to absorb the emissions.
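The measurement rests on the basic redshift relation λ_obs = λ_rest × (1 + z). A back-of-the-envelope check in Python (the rest wavelength is standard atomic data; the function name is illustrative) shows why the Lyman-alpha line of EGSY8p7 had to be observed in the near-infrared:

```python
LYMAN_ALPHA_REST_NM = 121.567  # rest wavelength of hydrogen Lyman-alpha, in nm

def observed_wavelength_nm(rest_nm, z):
    """Cosmological redshift stretches wavelengths: lambda_obs = lambda_rest * (1 + z)."""
    return rest_nm * (1 + z)

# At z = 8.68 the far-ultraviolet Lyman-alpha line arrives in the near-infrared:
obs = observed_wavelength_nm(LYMAN_ALPHA_REST_NM, 8.68)
assert 1176 < obs < 1178  # roughly 1177 nm
```

Converting the redshift into a light travel distance additionally requires a cosmological model (values of the Hubble constant and density parameters), which is beyond this sketch.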
After studying the galaxy with James Webb Space Telescope, researchers "have concluded that the intense star-forming activity within these interacting galaxies energised hydrogen emission and cleared swathes of gas from their surroundings, allowing the unexpected hydrogen emission to escape." The NASA/ESA/CSA James Webb Space Telescope, through its NIRCam instrument as part of the CEERS survey, has made groundbreaking discoveries regarding hydrogen emission in the early Universe. This includes capturing a detailed image of the galaxy EGSY8p7 and its companions, revealing intense star-forming activity within a cluster of interacting galaxies. Moreover, Webb's unprecedented sensitivity uncovered not only EGSY8p7 but also its two companion galaxies, revolutionizing our understanding of this cosmic region. This observation provides crucial insights into the visibility of hydrogen emission, indicating the impact of intense star-forming activity in clearing gas from the surroundings and addressing long-standing astronomical puzzles.
See also
GN-z11 (z=11.1)
MACS0647-JD (z=10.7)
UDFy-38135539 (z=8.55)
EGS-zs8-1 (z=7.73)
References
External links
Galaxies
Ursa Major
Boötes | EGSY8p7 | [
"Astronomy"
] | 679 | [
"Ursa Major",
"Galaxies",
"Boötes",
"Constellations",
"Astronomical objects"
] |
47,478,742 | https://en.wikipedia.org/wiki/Penicillium%20rudallense | Penicillium rudallense is a species of fungus in the genus Penicillium isolated from the Karlamilyi National Park in Western Australia.
References
rudallense
Fungi described in 2014
Fungus species | Penicillium rudallense | [
"Biology"
] | 46 | [
"Fungi",
"Fungus species"
] |
47,479,125 | https://en.wikipedia.org/wiki/Marek%20Sikora%20%28astronomer%29 | Marek Sikora is a Polish astronomer.
He achieved his habilitation in astrophysics in 1990 from the University of Warsaw. He received the title of professor in 1999. He currently works as a professor at the Nicolaus Copernicus Astronomical Center (Centrum Astronomiczne im. Mikołaja Kopernika) of the Polish Academy of Sciences in Warsaw. He is interested mainly in high-energy astrophysics, astrophysical jets, the nuclei of active galaxies, and the sources of cosmic radiation.
Published works
2008, "3C 454.3 Reveals the Structure and Physics of its Blazar Zone", The Astrophysical Journal, 675, p. 71; Marek Jan Sikora, Rafal Moderski, Gregory Maria Madejski
2008, "Multiwavelength Observations of the Powerful Gamma-Ray Quasar PKS 1510-089: Clues on the Jet Composition", The Astrophysical Journal, 672, p. 787; J. Kataoka et al., including Marek Jan Sikora and Rafal Moderski
2008, "Radio-loudness of Active Galaxies and the Black Hole Evolution", New Astronomy Reviews, 51, p. 891; Marek Jan Sikora, Lukasz Stawarz, J.-P. Lasota
2007, "Radio-loudness of Active Galactic Nuclei: Observational Facts and Theoretical Implications", The Astrophysical Journal, 658, p. 815; Marek Jan Sikora, Lukasz Stawarz, J.-P. Lasota
2007, "On Magnetic Field in Broad-line Blazars", Proceedings of the Rencontres de Moriond, 2007; Marek Jan Sikora, Rafal Moderski
2007, "Radio-loudness of AGNs: Host Galaxy Morphology and the Spin Paradigm", Proceedings of "Extragalactic Jets: Theory and Observations from Radio to Gamma Rays", 2007; Marek Jan Sikora, Lukasz Stawarz, J.-P. Lasota
2006, "Dynamics and High-energy Emission of the Flaring HST-1 Knot in the M87 Jet", Monthly Notices of the RAS, 370, p. 981; Marek Jan Sikora, Lukasz Stawarz, F. Aharonian, J. Kataoka, M. Ostrowski, A. Siemiginowska
2005, "Klein-Nishina Effects in the Spectra of Non-thermal Sources Immersed in External Radiation Fields", Monthly Notices of the RAS, 363, p. 954; Marek Jan Sikora, Rafal Moderski, P. S. Coppi, F. Aharonian
References
Living people
21st-century Polish astronomers
Year of birth missing (living people)
20th-century Polish astronomers | Marek Sikora (astronomer) | [
"Astronomy"
] | 551 | [
"Astronomers",
"Astronomer stubs",
"Astronomy stubs"
] |
47,479,941 | https://en.wikipedia.org/wiki/Appellation%20d%27origine%20prot%C3%A9g%C3%A9e%20%28Switzerland%29 | In Switzerland, the appellation d'origine protégée (AOP) is a geographical indication (see also Appellation) protecting the origin and the quality of traditional food products other than wines (wines have a separate label, the appellation d'origine contrôlée, AOC, 'controlled designation of origin').
In the past, the appellation d'origine contrôlée certification was used for both wines and other food products. In 2013, to match the system of the European Union, the appellation d'origine contrôlée was replaced by the appellation d'origine protégée for agricultural products other than wine.
Geographical indications and traditional specialities in Switzerland
The appellation d'origine protégée (AOP, protected designation of origin) certifies that "everything, from the raw material to the processing and the final product, comes from one clearly defined region of origin".
The indication géographique protégée (IGP, protected geographical indication) certifies that products were "either manufactured, processed or prepared at their place of origin".
The appellation d'origine contrôlée (AOC, controlled designation of origin) certifies wines.
Products
Appellation d'origine protégée (AOP)
Abricotine / Eau-de-vie d’abricot du Valais
Berner Alpkäse / Berner Hobelkäse
Boutefas
Cardon épineux genevois
Cuchaule
Damassine
Eau-de-vie de poire du Valais
Huile de noix vaudoise
Jambon de la Borne
Munder Safran
Pain de seigle valaisan
Poire à Botzi
Rheintaler Ribel
Zuger / Rigi Kirsch
Cheeses
Berner Alpkäse/Berner Hobelkäse
Emmentaler
L'Etivaz
Formaggio d'alpe ticinese
Glarner Alpkäse
Gruyère
Raclette du Valais / Walliser Raclette
Sbrinz
Tête de Moine, Fromage de Bellelay
Vacherin Fribourgeois
Vacherin Mont d'Or
Werdenberger Sauerkäse, Liechtensteiner Sauerkäse und Bloderkäse
AOP candidates
Jambon de la borne
Grappa Ticino
Indication géographique protégée (IGP)
Appenzeller Mostbröckli
Appenzeller Siedwurst
Appenzeller Pantli
Berner Zungenwurst
Bündnerfleisch
Glarner Kalberwurst
Longeole
Saucisse d'Ajoie
Saucisson neuchâtelois et Saucisse neuchâteloise
Saucisson vaudois
Saucisse aux choux vaudoise
St. Galler Bratwurst
Walliser Rohschinken
Walliser Trockenfleisch
Walliser Trockenspeck
Zuger Kirschtorte
IGP candidates
Absinthe de Val-de-Travers
See also
Geographical indications and traditional specialities in the European Union
Agriculture in Switzerland
Culinary Heritage of Switzerland
Notes and references
Bibliography
Stéphane Boisseaux and Dominique Barjolle, La bataille des AOC en Suisse. Les appellations d'origine contrôlées et les nouveaux terroirs, collection « Le savoir suisse », Presses polytechniques et universitaires romandes, 2004.
Certification marks
Agriculture in Switzerland
Appellations | Appellation d'origine protégée (Switzerland) | [
"Mathematics"
] | 732 | [
"Symbols",
"Certification marks"
] |
47,480,399 | https://en.wikipedia.org/wiki/Laurence%20D.%20Marks | Laurence Daniel Marks is an American emeritus professor of materials science and engineering at Northwestern University. He has contributed to the study of nanoparticles and worked in the fields of electron microscopy, diffraction, and crystallography.
Early life and education
Marks attended Trinity School of John Whitgift in Croydon; he played chess competitively for the school and won the British Chess Championship Under 21 in 1973.
Marks attended King's College at the University of Cambridge and graduated in 1976 with a B.A. in chemistry. From 1976 to 1980, he was a research student at the Cavendish Laboratory at Cambridge, where he worked with Archibald Howie on electron microscopy and the structure of metal crystals. He received his Ph.D. in physics from Cambridge in 1980. His dissertation topic was The Structure of Small Silver Particles.
Career
From 1980 to 1983, Marks was a post-doctoral research assistant at the Cavendish Laboratory. From 1983 to 1985, he was a post-doctoral research assistant with the Department of Physics at Arizona State University in Tempe, Arizona. He studied nanotwinning, leading toward a way to directly image the atomic scale of nano-surfaces.In March 1985, Marks joined the faculty of Northwestern University as an assistant professor in the Department of Materials Science & Engineering. He received a Sloan Research Fellowship for physics in 1987. One of his early research efforts led to the discovery of a type of nanoparticle now known as the Marks decahedron.
Marks was promoted to professor in June 1992. In 2019, he was a senior visiting scientist with the Suzhou Institute of Nano-tech and Nano-bionics (SINANO) of the Chinese Academy of Sciences (CAS). In July 2023, Marks was selected for a Fulbright U.S. Scholar Program that allowed him to study triboelectricity in Australia. As of September 2023, Marks is an emeritus professor at Northwestern University.
Awards and honors
In 1989, Marks received the Burton Award from the Microscopy Society of America for achievements in the fields of microscopy and microanalysis by a scientist under 40 years of age. He received the Bertram E. Warren Award from the American Crystallographic Association in 2015 and the International Conference on the Structure of Surfaces Prize in 2017.
Marks was elected as a fellow of the American Physical Society in 2001, for his "contributions to quantitative imaging and diffraction methods for determining the atomic structure of surfaces and bulk materials", and a fellow of the Microscopy Society of America in 2017.
Selected publications
References
External links
1954 births
Living people
Immigrants to the United States
21st-century American physicists
Fellows of the American Physical Society
Tribologists
American crystallographers
Alumni of the University of Cambridge
Northwestern University faculty | Laurence D. Marks | [
"Materials_science"
] | 550 | [
"Tribology",
"Tribologists"
] |
47,480,525 | https://en.wikipedia.org/wiki/Distinguishing%20coloring | In graph theory, a distinguishing coloring or distinguishing labeling of a graph is an assignment of colors or labels to the vertices of the graph that destroys all of the nontrivial symmetries of the graph. The coloring does not need to be a proper coloring: adjacent vertices are allowed to be given the same color. For the colored graph, there should not exist any one-to-one mapping of the vertices to themselves that preserves both adjacency and coloring. The minimum number of colors in a distinguishing coloring is called the distinguishing number of the graph.
Distinguishing colorings and distinguishing numbers were introduced by , who provided the following motivating example, based on a puzzle previously formulated by Frank Rubin: "Suppose you have a ring of keys to different doors; each key only opens one door, but they all look indistinguishable to you. How few colors do you need, in order to color the handles of the keys in such a way that you can uniquely identify each key?" This example is solved by using a distinguishing coloring for a cycle graph. With such a coloring, each key will be uniquely identified by its color and the sequence of colors surrounding it.
Examples
A graph has distinguishing number one if and only if it is asymmetric. For instance, the Frucht graph has a distinguishing coloring with only one color.
In a complete graph K_n, the only distinguishing colorings assign a different color to each vertex. For, if two vertices were assigned the same color, there would exist a symmetry that swapped those two vertices, leaving the rest in place. Therefore, the distinguishing number of the complete graph K_n is n. However, the graph obtained from K_n by attaching a degree-one vertex to each vertex of K_n has a significantly smaller distinguishing number, despite having the same symmetry group: it has a distinguishing coloring with ⌈√n⌉ colors, obtained by using a different ordered pair of colors for each pair of a vertex and its attached neighbor.
For a cycle graph of three, four, or five vertices, three colors are needed to construct a distinguishing coloring. For instance, every two-coloring of a five-cycle has a reflection symmetry. In each of these cycles, assigning a unique color to each of two adjacent vertices and using the third color for all remaining vertices results in a three-color distinguishing coloring. However, cycles of six or more vertices have distinguishing colorings with only two colors. That is, Frank Rubin's keyring puzzle requires three colors for rings of three, four or five keys, but only two colors for six or more keys or for two keys. For instance, in the ring of six keys shown, each key can be distinguished by its color and by the length or lengths of the adjacent blocks of oppositely-colored keys: there is only one key for each combination of key color and adjacent block lengths.
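The keyring claims for small cycles are easy to verify by brute force, since the symmetries of an n-cycle form the dihedral group of n rotations and n reflections. A short sketch in Python (function names are illustrative):

```python
def cycle_automorphisms(n):
    """All symmetries of an n-cycle: n rotations plus n reflections (dihedral group)."""
    rotations = [[(i + k) % n for i in range(n)] for k in range(n)]
    reflections = [[(k - i) % n for i in range(n)] for k in range(n)]
    return rotations + reflections

def is_distinguishing(coloring):
    """True if the identity is the only automorphism preserving every vertex's color."""
    n = len(coloring)
    preserving = [p for p in cycle_automorphisms(n)
                  if all(coloring[p[i]] == coloring[i] for i in range(n))]
    return preserving == [list(range(n))]

# Six keys, two colors: an asymmetric pattern exists (distinguishing number 2).
assert is_distinguishing([0, 0, 1, 0, 1, 1])
# Five keys, two colors: every one of the 32 patterns keeps some symmetry...
assert not any(is_distinguishing([(m >> i) & 1 for i in range(5)])
               for m in range(32))
# ...but three colors suffice: two adjacent unique colors, the third elsewhere.
assert is_distinguishing([1, 2, 0, 0, 0])
```

The same preserving-automorphism test works for any graph once its automorphism group is enumerated, which is one reason the problem is closely tied to graph isomorphism.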
Hypercube graphs exhibit a similar phenomenon to cycle graphs. The two- and three-dimensional hypercube graphs (the 4-cycle and the graph of a cube, respectively) have distinguishing number three. However, every hypercube graph of higher dimension has distinguishing number only two.
The Petersen graph has distinguishing number 3. However other than this graph and the complete graphs, all Kneser graphs have distinguishing number 2. Similarly, among the generalized Petersen graphs, only the Petersen graph itself and the graph of the cube have distinguishing number 3; the rest have distinguishing number 2.
Computational complexity
The distinguishing numbers of trees, planar graphs, and interval graphs can be computed in polynomial time.
The exact complexity of computing distinguishing numbers is unclear, because it is closely related to the still-unknown complexity of graph isomorphism. However, it has been shown to belong to the complexity class AM. Additionally, testing whether the distinguishing chromatic number is at most three is NP-hard, and testing whether it is at most two is "at least as hard as graph automorphism, but no harder than graph isomorphism".
Additional properties
A coloring of a given graph is distinguishing for that graph if and only if it is distinguishing for the complement graph. Therefore, every graph has the same distinguishing number as its complement.
For every graph G, the distinguishing number of G is at most proportional to the logarithm of the number of automorphisms of G. If the automorphisms form a nontrivial abelian group, the distinguishing number is two, and if they form a dihedral group then the distinguishing number is at most three.
For every finite group, there exists a graph with that group as its group of automorphisms, with distinguishing number two. This result extends Frucht's theorem that every finite group can be realized as the group of symmetries of a graph.
Variations
A proper distinguishing coloring is a distinguishing coloring that is also a proper coloring: each two adjacent vertices have different colors. The minimum number of colors in a proper distinguishing coloring of a graph is called the distinguishing chromatic number of the graph.
References
Graph coloring | Distinguishing coloring | [
"Mathematics"
] | 1,002 | [
"Graph coloring",
"Mathematical relations",
"Graph theory"
] |
47,480,824 | https://en.wikipedia.org/wiki/Mary%20Mulvihill | Mary Mulvihill (1 September 1959 – 11 June 2015) was an Irish scientist, radio and television presenter, author and educator. She founded and served as the first chairperson of Women in Technology and Science (WITS), and is viewed as a pioneer of science communication in Ireland. She was featured in Silicon Republic's 100 Top Women in STEM list.
Early life
Mulvihill studied at Trinity College Dublin, where she was elected a Scholar in Natural Science in 1979, and graduated in 1981 with a degree in genetics. She then went on to complete a master's degree in statistics in 1982 at Trinity.
Until 1987, she worked as a Research Officer for An Foras Taluntais (now Teagasc). She later attended Dublin City University to study journalism, earning a diploma in 1988.
Career
Mulvihill worked primarily as a self-employed freelancer, writing, broadcasting, and developing the online resource Ingenious Ireland with its accompanying walking tours. She served on the Irish Council for Bioethics, and as a council member of the Industrial Heritage Association of Ireland.
Broadcasting
Mulvihill was the creator and host of a number of popular science series for RTÉ Radio 1 and Lyric FM. Two of the radio series she developed centred on the collections of the National Botanic Gardens, Washed, Pressed and Dried (2007), and of the Natural History Museum, Chopped, Pickled, and Stuffed (2006).
Her work in broadcasting led her to develop a series of walking tours of Dublin, which took in the scientific history. These tours were also available as podcasts. One of the trails she developed was Dublin by Numbers, in conjunction with Institution of Engineers of Ireland, which focused on the places in Dublin relating to mathematics. The accompanying website maps places of historic interest linked to STEM in Ireland, as well as sites of ecological and archaeological interest. A similar set of audio tours were developed by Mulvihill, in collaboration with Matthew Jebb for the National Botanic Gardens.
Women in Technology and Science
Mulvihill was an advocate for science, technology, engineering, and mathematics (STEM), in particular the history and biographies of women involved in STEM. She founded the group Women in Technology and Science (WITS) in 1990, and served as the organisation's first chairperson. WITS is an advocacy and networking group for women in STEM fields in Ireland. One of the resources WITS provides is a register of Irish women in STEM interested in serving on boards and professional or conference panels.
In 2014, she launched the exhibition SeaScience and Exploration Zone at the Galway City Museum.
Writing
Mulvihill served as the co-editor of Enterprise Ireland’s bi-monthly magazine Technology Ireland. She was also a regular contributor to The Irish Times. She wrote a number of books, and edited two volumes of historical biographies of women in STEM for WITS. For her book Ingenious Ireland: A County-by-County Exploration of Irish Mysteries and Marvels, she received the Irish National Science and Technology Journalist of the Year 2002-3 award, which the judges described as "a meticulously researched and hugely impressive book." With this book she also won the IBM Science Journalist of the Year award.
Mulvihill, M. (ed) (1997). Stars, Shells, & Bluebells: Women Scientists and Pioneers. Dublin: Women in Technology and Science (WITS).
Mulvihill. M. (2002). Ingenious Ireland: A County-by-County Exploration of Irish Mysteries and Marvels. Dublin: Town House.
Mulvihill, M. (ed) (2009). Lab Coats and Lace. Dublin: Women in Technology and Science (WITS).
Mulvihill, M. (2009). Drive Like a Woman, Shop Like a Man. Dublin: New Island Books.
Mulvihill, M. (2012). Ingenious Dublin. e-book: Ingenious Ireland.
Mulvihill was also a blogger, and was involved in Silicon Republic's Women Invent initiative and curated their list of Ireland's Greatest Women Inventors, in which younger people were encouraged to vote for their favourite. For 15 years Mulvihill published a science communications email newsletter (1995–2010) which in 2008 she titled Science@Culture Bulletin. The Mary Mulvihill Association plans to introduce a Science@Culture talk series in June 2022.
Personal life
Mulvihill was married to Scottish theoretical physicist Brian Dolan of Maynooth University. She died on 11 June 2015, aged 55. WITS celebrated its 25th anniversary on 3 November 2015 with a lecture in her memory and a lecture at the 2015 Robert Boyle Summer School in Lismore, County Waterford, was also dedicated to her.
Legacy
In 2016 the family and friends of Mary Mulvihill established the Mary Mulvihill Memorial Award to commemorate her work in science journalism and science communication. The award will go to a student at an Irish higher education institution who best represents the "curiosity, creativity and storytelling imagination".
In June 2020 Dublin City University announced a posthumous DCU Alumni Award for Mulvihill for Outstanding Achievement in the area of Societal Impact.
References
External links
Ingenious Ireland
Women in Technology and Science
Dictionary of Irish Biography entry for Mulvihill, Mary Rita
1959 births
2015 deaths
Alumni of Trinity College Dublin
Irish women scientists
Scientists from Dublin (city)
Radio personalities from the Republic of Ireland
RTÉ Radio 1 presenters
Scholars of Trinity College Dublin
Science communicators
Women science writers
20th-century women scientists
20th-century Irish women writers
Alumni of Dublin City University
Broadcasters from County Dublin | Mary Mulvihill | [
"Technology"
] | 1,149 | [
"Women science writers",
"Women in science and technology"
] |
47,480,994 | https://en.wikipedia.org/wiki/Project%20Piaba | Project Piaba is a fishery initiative located on the Rio Negro tributary of the Amazon River. The program both promotes and researches sustainable aquarium pet fish collection and its impact on the environment. The name of the project comes from the Brazilian Portuguese word piaba, which means "little fish", referring specifically to the cardinal tetra (Paracheirodon axelrodi). Project Piaba is an ongoing project with annual research expeditions to the Rio Negro region. Because of the sustainable nature of the project, its slogan is "Buy a Fish, Save a Tree!"
Background
Many ornamental freshwater aquarium fish, including the cardinal tetra and the discus (Symphysodon ssp.), are sourced from the Amazon River Basin area. The Rio Negro region is the home of more than 100 different species of fish that are important to the pet fish trade. In fact, several species, including cardinal tetras, show the adaptive trait of iridescence which may provide lower visibility in a blackwater environment.
Project Piaba started with an ecological baseline study of the region which was conducted in 1989 by a group of researchers and students from the Universidade do Amazonas (UA) and the National Institute of Amazon Research (INPA). This initial survey discovered and documented the importance of the fish trade to the local economy, and it led the researchers to wonder about the impact the fishing had on the environment.
The ornamental fish trade in the Rio Negro region is considered "substantial by local standards, representing approximately US$ 3 million per year with over 30 million live fish exported annually." About 40,000 people in the region, many of them caboclos (river-dwelling families) are dependent on the income from their fisheries.
Development
In the 1950s, Herbert R. Axelrod and Willi Schwarz had begun shipping aquarium fish out of Barcelos in Brazil. In 1991, Ning Labbish Chao and Gregory Prang founded Project Piaba in order to support the local fisheries and, in concert with them, help protect the habitat of collected fishes. Because of the "gentle" way the fish are caught, and because most of the fish caught for the aquarium trade are short-lived and would naturally die off during the dry season, the ecological impact of catching the fish is considered minimal. The fish are also not caught during their breeding season. The cardinal tetra, especially, is considered a renewable resource. Project Piaba assesses the sustainability of the species fished in the Rio Negro area by using the "F value", which estimates the portion of the catch from the total biomass.
The center of the Rio Negro aquarium trade, Barcelos, now celebrates ornamental fish in a festival held every January in conjunction with the annual research expedition of Project Piaba. A stadium, known as Piabodrome, was even built for the festival. The first festival took place in 1994, and a permanent exhibit highlighting the fish was installed in Barcelos by Project Piaba that same year. Money donated by ichthyologist Herbert Axelrod helped support a lab and then later, the Centre for Aquatic Conservation, which has helped educate, support research and awareness of the project. The Centre was first opened in 1997. Other funds have come from the Association of Ornamental Fish Breeders and Exporters of Amazonas (ACEPOAM) for research on both the fish and the welfare of the fish farmers.
Structure
Scott Dowd is, as of 2015, the director of Project Piaba. He leads the yearly expeditions with experts from around the world, volunteers, and even families visiting the Amazon region.
The project has acted as a case study for other, similar projects. Areas such as the Western Ghats in India and areas of Bali are beginning to use similar practices to make money from the fish trade and sustain the environment of the fish. Project Piaba is often used to show how groups can support the environment while providing economic stimulus to a poor region of the world. In addition, the sustainable sourcing of fish is also a stimulus to the idea of "beneficial home fishkeeping", which emphasizes proper fish care, which, in turn, supports those who catch the fish in the wild. When no incentive exists to fish, individuals in the Rio Negro area turn to less environmentally friendly means of support, such as logging or cattle ranching. In fact, Project Piaba aims to actively discourage domestic farming of fish that are also sustainable resources, like the cardinal tetra, because it takes the financial incentive away from protecting the rain forest of the Rio Negro area.
Legacy
The project has the support of aquaria and zoos around the world and also from the International Union for Conservation of Nature.
References
External links
Official Page
Project Piaba: For Ornamental Fish (video)
Fishkeeping
Ecological experiments
Environmental research
Rio Negro (Amazon)
Sustainable fishery
Fisheries science
Nature conservation in Brazil
Amazon rainforest | Project Piaba | [
"Environmental_science"
] | 980 | [
"Environmental research"
] |
47,481,239 | https://en.wikipedia.org/wiki/Herta%20Regina%20Leng | Herta Regina Leng (24 February 1903 – 17 July 1997) was an Austrian-American physicist and educator.
Leng was born on 24 February 1903 in Vienna, Austria. She was the daughter of Arthur Leng and Paula Leng, and sister of Leopold Ignaz Leng. Leng fled Austria in 1939 and eventually emigrated to the United States in 1940. She died on 17 July 1997 in Troy, New York.
Purdue and RPI
Dr. Karl Lark-Horovitz, professor of physics at Purdue, had a keen interest in the development of the cyclotron and the application of physical techniques to solving biological problems, and sought to develop methods that utilized radioactive tracers produced by the cyclotron. With the assistance of Leng and Donald Tendam, radioactive tracers were employed in an intense regimen to develop these methods. Key studies concerned sodium and potassium in the human body and their uptake, distribution, and excretion; sodium and potassium distribution in human blood cells; and the analysis of enteric coatings for medications. Leng was awarded an American Association of University Women fellowship for work at Purdue. The fellowship gave her the freedom to pursue pioneering research on radioactive tracer materials.
In 1943, Leng moved to New York City to accept a faculty appointment in physics at Rensselaer Polytechnic Institute (RPI) and in 1966 was promoted to become RPI's first female full professor.
Professional associations
Sigma Xi, Rensselaer Polytechnic Institute Chapter
Awards and honors
Herta Leng Memorial Lecture Series, Rensselaer Polytechnic Institute
Every year, RPI honors Leng with the Herta Leng Memorial Lecture Series.
Select publications
Adsorptionsversuche an Gläsern und Filtersubstanzen nach der Methode der radioaktiven Indikatoren. (Adsorption experiments on glasses and filter substances according to the method of radioactive indicators.)
Radioactive indicators, enteric coatings and intestinal absorption.
A new method of testing enteric coatings.
On the Existence of Single Magnetic Poles.
Pioneer woman in nuclear science.
References
1903 births
1997 deaths
20th-century American physicists
20th-century Austrian physicists
20th-century American women scientists
American women physicists
20th-century Austrian women scientists
Scientists from Vienna
Austrian emigrants to the United States
Purdue University faculty
Rensselaer Polytechnic Institute faculty
Fellows of the American Association of University Women
Radioactivity
Particle accelerators | Herta Regina Leng | [
"Physics",
"Chemistry"
] | 490 | [
"Radioactivity",
"Nuclear physics"
] |
47,482,405 | https://en.wikipedia.org/wiki/Stephen%20L.%20Buchwald | Stephen L. Buchwald (born 1955) is an American chemist and the Camille Dreyfus Professor of Chemistry at MIT. He is known for his involvement in the development of the Buchwald-Hartwig amination and the discovery of the dialkylbiaryl phosphine ligand family for promoting this reaction and related transformations. He was elected as a fellow of the American Academy of Arts and Sciences and as a member of the National Academy of Sciences in 2000 and 2008, respectively.
Early life and education
Stephen Buchwald was born in Bloomington, Indiana. He credits his "young and dynamic" high school chemistry teacher, William Lumbley, with instilling in him an enthusiasm for chemistry.
In 1977 he received his Sc.B. from Brown University where he worked with Kathlyn A. Parker and David E. Cane as well as Gilbert Stork from Columbia University. In 1982 he received his Ph.D from Harvard University working under Jeremy R. Knowles.
Career
Buchwald was a postdoctoral fellow at Caltech with Robert H. Grubbs. In 1984, he joined MIT faculty as an assistant professor of chemistry. He was promoted to associate professor in 1989 and to Professor in 1993. He was named the Camille Dreyfus Professor in 1997. He has coauthored over 435 accepted academic publications and 47 accepted patents.
He is known for his involvement in the development of the Buchwald-Hartwig amination and the discovery of the dialkylbiaryl phosphine ligand family for promoting this reaction and related transformations. He was elected as a fellow of the American Academy of Arts and Sciences and as a member of the National Academy of Sciences in 2000 and 2008, respectively.
He served as an associate editor of the academic journal Advanced Synthesis & Catalysis.
Notable awards
Awards received by Buchwald include:
2005 - CAS Science Spotlight Award
2005 - Bristol-Myers Squibb Distinguished Achievement Award
2006 – American Chemical Society Award for Creative Work in Synthetic Organic Chemistry
2006 – Siegfried Medal Award in Chemical Methods which Impact Process Chemistry
2010 – Gustavus J. Esselen Award for Chemistry in the Public Interest
2013 – Arthur C. Cope Award
2014 – Ulysses Medal, University College Dublin
2014 – Linus Pauling Award
2014 – BBVA Foundation Frontiers of Knowledge Award in Basic Sciences
2015 – Honorary Doctorate, University of South Florida
2016 - William H. Nichols Medal
2019 – Wolf Prize in Chemistry
2019 – Roger Adams Award, American Chemical Society
2020 – Clarivate Citation Laureate
References
External links
21st-century American chemists
Massachusetts Institute of Technology School of Science faculty
Living people
Harvard University alumni
Brown University alumni
1955 births
American organic chemists
California Institute of Technology fellows
Fellows of the American Academy of Arts and Sciences
Members of the United States National Academy of Sciences | Stephen L. Buchwald | [
"Chemistry"
] | 559 | [
"Organic chemists",
"American organic chemists"
] |
47,482,411 | https://en.wikipedia.org/wiki/Matroid%20girth | In matroid theory, a mathematical discipline, the girth of a matroid is the size of its smallest circuit or dependent set. The cogirth of a matroid is the girth of its dual matroid. Matroid girth generalizes the notion of the shortest cycle in a graph, the edge connectivity of a graph, Hall sets in bipartite graphs, even sets in families of sets, and general position of point sets. It is hard to compute, but fixed-parameter tractable for linear matroids when parameterized both by the matroid rank and the field size of a linear representation.
Examples
The "girth" terminology generalizes the use of girth in graph theory, meaning the length of the shortest cycle in a graph: the girth of a graphic matroid is the same as the girth of its underlying graph.
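For a graphic matroid, computing the girth therefore reduces to finding the shortest cycle of the underlying graph. A minimal sketch in Python (the adjacency-map representation and function name are illustrative choices, not part of the article): breadth-first search is run from every vertex, and each non-tree edge between vertices at depths d(u) and d(v) witnesses a closed walk of length d(u) + d(v) + 1; the minimum over all roots is the girth.

```python
from collections import deque

def girth(adj):
    """Length of the shortest cycle in an undirected graph, or None if acyclic.

    For a graphic matroid this equals the matroid girth: the size of the
    smallest circuit.  `adj` maps each vertex to the set of its neighbours.
    """
    best = None
    for root in adj:
        depth = {root: 0}
        parent = {root: None}
        queue = deque([root])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in depth:
                    # Tree edge: record depth and parent, continue the BFS.
                    depth[v] = depth[u] + 1
                    parent[v] = u
                    queue.append(v)
                elif parent[u] != v and parent[v] != u:
                    # Non-tree edge: closes a walk of length d(u) + d(v) + 1,
                    # which always contains a cycle no longer than itself.
                    cycle = depth[u] + depth[v] + 1
                    if best is None or cycle < best:
                        best = cycle
    return best
```

Running BFS from every vertex guarantees exactness: for any shortest cycle, the search rooted at one of its vertices reports precisely its length, while every reported value bounds some cycle from above.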
The girth of other classes of matroids also corresponds to important combinatorial problems. For instance, the girth of a co-graphic matroid (or the cogirth of a graphic matroid) equals the edge connectivity of the underlying graph, the number of edges in a minimum cut of the graph. The girth of a transversal matroid gives the cardinality of a minimum Hall set in a bipartite graph: this is a set of vertices on one side of the bipartition that does not form the set of endpoints of a matching in the graph.
Any set of points in Euclidean space gives rise to a real linear matroid by interpreting the Cartesian coordinates of the points as the vectors of a matroid representation.
The girth of the resulting matroid equals one plus the dimension of the space when the underlying set of points is in general position, and is smaller otherwise.
Girths of real linear matroids also arise in compressed sensing, where the same concept is referred to as the spark of a matrix.
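The spark can be computed by brute force: test subsets of columns in order of increasing size and return the size of the first linearly dependent one. The sketch below (function names are illustrative) uses exact rational arithmetic so that rank decisions are free of floating-point error; the search is exponential, consistent with the hardness results discussed below.

```python
from fractions import Fraction
from itertools import combinations

def _rank(cols):
    """Exact rank of a list of equal-length column vectors,
    via Gaussian elimination over the rationals."""
    rows = [[Fraction(c[i]) for c in cols] for i in range(len(cols[0]))]
    rank = 0
    for j in range(len(cols)):
        # Find a pivot for column j at or below the current rank row.
        pivot = next((i for i in range(rank, len(rows)) if rows[i][j]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][j]:
                f = rows[i][j] / rows[rank][j]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def spark(columns):
    """Smallest number of linearly dependent columns -- the girth of the
    real linear matroid they represent.  Returns None if the columns are
    independent (the matroid has no circuit)."""
    for k in range(1, len(columns) + 1):
        for subset in combinations(columns, k):
            if _rank(list(subset)) < k:
                return k
    return None
```

For example, the columns (1,0), (0,1), (1,1) are pairwise independent but jointly dependent, so their spark is 3: one plus the dimension of the plane, as expected for points in general position.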
The girth of a binary matroid gives the cardinality of a minimum even set, a subcollection of a family of sets that includes an even number of copies of each set element.
Computational complexity
Determining the girth of a binary matroid is NP-hard.
Additionally, determining the girth of a linear matroid given by a matrix representing the matroid is W[1]-hard when parameterized by the girth or by the rank of the matroid, but fixed-parameter tractable when parameterized by a combination of the rank and the size of the underlying field.
For an arbitrary matroid, given by an independence oracle, it is impossible to find the girth using a subexponential number of matroid queries. Similarly, for a real linear matroid of rank , with elements, described by an oracle that gives the orientation of any -tuple of elements, it requires oracle queries to determine the girth.
Computations using a girth oracle (an oracle that reports the smallest dependent subset of a given set of elements) have also been considered.
References
Girth | Matroid girth | [
"Mathematics"
] | 614 | [
"Matroid theory",
"Combinatorics"
] |
47,482,565 | https://en.wikipedia.org/wiki/Wolfiporia%20castanopsis | Wolfiporia castanopsis is a species of wood-decay fungus in the order Polyporales. It is found in Yunnan, China, where it grows on the rotten wood of Castanopsis orthacantha. The type locality was the Zixishan Nature Reserve in Chuxiong. The fungus, described as new to science in 2011 by mycologist Yu-Cheng Dai, is named for the tree with which it associates.
References
Polyporaceae
Fungi described in 2011
Fungi of China
Taxa named by Yu-Cheng Dai
Fungus species | Wolfiporia castanopsis | [
"Biology"
] | 112 | [
"Fungi",
"Fungus species"
] |
47,482,566 | https://en.wikipedia.org/wiki/Wolfiporia%20curvispora | Wolfiporia curvispora is a species of fungus in the order Polyporales. It is found in Jilin, China, where it grows on the rotting wood of Pinus koraiensis. The fungus was described as new to science in 1998 by mycologist Yu-Cheng Dai. The fruitbodies of the fungus are resupinate, meaning they lie flat on the substrate, and have dimensions of up to long by wide by thick. They are creamy white (buff when dry), soft, and light. The hyphal system is dimitic, comprising generative and skeletal hyphae. The specific epithet curvispora refers to the curved spores.
References
Polyporaceae
Fungi described in 1998
Fungi of China
Taxa named by Yu-Cheng Dai
Fungus species | Wolfiporia curvispora | [
"Biology"
] | 161 | [
"Fungi",
"Fungus species"
] |
47,482,567 | https://en.wikipedia.org/wiki/Wolfiporia%20dilatohypha | Wolfiporia dilatohypha is a species of fungus in the order Polyporales. Although it was first described as Poria inflata by Lee Oras Overholts, he neglected to include a Latin description of the species, (then required by the International Code of Nomenclature for algae, fungi, and plants), and so the name was not validly published. Mycologists Leif Ryvarden and Robert Lee Gilbertson published the species validly in 1984 in a revision of Overholts' work. The type collection was made in Oxford, Ohio in 1911.
References
Polyporaceae
Fungi described in 1984
Fungi of the United States
Taxa named by Leif Ryvarden
Fungi without expected TNC conservation status
Fungus species | Wolfiporia dilatohypha | [
"Biology"
] | 152 | [
"Fungi",
"Fungus species"
] |
47,482,568 | https://en.wikipedia.org/wiki/Wolfiporia%20sulphurea | Wolfiporia sulphurea is a species of fungus in the family Polyporaceae. First described in 1917 as Merulius sulphureus by Edward Angus Burt, it was transferred to the genus Wolfiporia by James Herbert Ginns in 1984.
References
Polyporaceae
Fungi described in 1917
Fungi of North America
Fungus species | Wolfiporia sulphurea | [
"Biology"
] | 69 | [
"Fungi",
"Fungus species"
] |
47,482,569 | https://en.wikipedia.org/wiki/Wolfiporia%20cartilaginea | Wolfiporia cartilaginea is a species of fungus in the order Polyporales. Found in northeastern China, it was described as new to science by Norwegian mycologist Leif Ryvarden in 1986. The type locality was the Changbaishan National Nature Reserve in Jilin province. Fruitbodies of the fungus are resupinate, with tiny pores measuring 3–4 per millimetre. The ellipsoidal spores are hyaline (translucent), non-amyloid, and measure 4–5 by 2–2.5 μm.
References
Polyporaceae
Fungi described in 1986
Fungi of China
Taxa named by Leif Ryvarden
Fungus species | Wolfiporia cartilaginea | [
"Biology"
] | 140 | [
"Fungi",
"Fungus species"
] |
47,482,811 | https://en.wikipedia.org/wiki/Spiral%20Dynamics | Spiral Dynamics (SD) is a model of the evolutionary development of individuals, organizations, and societies. It was initially developed by Don Edward Beck and Christopher Cowan based on the emergent cyclical theory of Clare W. Graves, combined with memetics. A later collaboration between Beck and Ken Wilber produced Spiral Dynamics Integral (SDi). Several variations of Spiral Dynamics continue to exist, both independently and incorporated into or drawing on Wilber's Integral theory. Spiral Dynamics has applications in management theory and business ethics, and as an example of applied memetics. However, it lacks mainstream academic support.
Overview
Spiral Dynamics describes how value systems and worldviews emerge from the interaction of "life conditions" and the mind's capacities. The emphasis on life conditions as essential to the progression through value systems is unusual among similar theories, and leads to the view that no level is inherently positive or negative, but rather is a response to the local environment, social circumstances, place and time. Through these value systems, groups and cultures structure their societies and individuals integrate within them. Each distinct set of values is developed as a response to solving the problems of the previous system. Changes between states may occur incrementally (first order change) or in a sudden breakthrough (second order change). The value systems develop in a specific order, and the most important question when considering the value system being expressed in a particular behavior is why the behavior occurs.
Overview of the levels
Development of the theory
University of North Texas (UNT) professor Don Beck sought out Union College psychology professor Clare W. Graves after reading about his work in The Futurist. They met in person in 1975, and Beck, soon joined by UNT faculty member Chris Cowan, worked closely with Graves until his death in 1986. Beck made over 60 trips to South Africa during the 1980s and 1990s, applying Graves's emergent cyclical theory in various projects. This experience, along with others Beck and Cowan had applying the theory in North America, motivated the development of Spiral Dynamics.
Beck and Cowan first published their extension and adaptation of Graves's emergent cyclical theory in Spiral Dynamics: Mastering Values, Leadership, and Change (Exploring the New Science of Memetics) (1996). They introduced a simple color-coding for the eight value systems identified by Graves (and a predicted ninth) which is better known than Graves's letter pair identifiers. Additionally, Beck and Cowan integrated ideas from the field of memetics as created by Dawkins and further developed by Csikszentmihalyi, identifying memetic attractors for each of Graves's levels. These attractors, which they called "VMemes", are said to bind memes into cohesive packages which structure the world views of both individuals and societies.
Diversification of views
While Spiral Dynamics began as a single formulation and extension of Graves's work, a series of disagreements and shifting collaborations have produced three distinct approaches. By 2010, these had settled as Christopher Cowan and Natasha Todorovic advocating their trademarked "SPIRAL DYNAMICS®" as fundamentally the same as Graves's emergent cyclical theory, Don Beck advocating Spiral Dynamics Integral (SDi) with a community of practice around various chapters of his Centers for Human Emergence, and Ken Wilber subordinating SDi to his similarly but-not-identically colored Integral AQAL "altitudes", with a greater focus on spirituality.
This state of affairs has led to practitioners noting the "lineage" of their approach in publications.
Timeline
The following timeline shows the development of the various Spiral Dynamics factions and the major figures involved in them, as well as the initial work done by Graves. Splits and changes between factions are based on publications or public announcements, or approximated to the nearest year based on well-documented events.
Vertical bars indicate notable publications, which are listed along with a few other significant events after the timeline.
Bolded years indicate publications that appear as vertical bars in the chart above:
1966: Graves: first major publication (in The Harvard Business review)
1970: Graves: peer reviewed publication in Journal of Humanistic Psychology
1974: Graves: article in The Futurist (Beck first becomes aware of Graves's theory; Cowan a bit later)
1977: Graves abandons manuscript of what would later become The Never Ending Quest
1979: Beck and Cowan found National Values Center, Inc. (NVC)
1981: Beck and Cowan resign from UNT to work with Graves; Beck begins applying theory in South Africa
1986: Death of Clare Graves
1995: Wilber: Sex, Ecology, Spirituality (introduces quadrant model, first mention of Graves's ECLET)
1996: Beck and Cowan: Spiral Dynamics: Mastering Values, Leadership, and Change
1998: Cowan and Todorovic form NVC Consulting (NVCC) as an "outgrowth" of NVC
1998: Cowan files for "Spiral Dynamics" service mark, registered to NVC
1999: Beck (against SD as service mark) and Cowan (against Wilber's Integral theory) cease collaborating
1999: Wilber: The Collected Works of Ken Wilber, Vol. 4: Integral Psychology (first Spiral Dynamics reference)
2000: Cowan and Todorovic: "Spiral Dynamics: The Layers of Human Values in Strategy" in Strategy & Leadership (peer reviewed)
2000: Wilber: A Theory of Everything (integrates SD with AQAL, defines MGM: "Mean Green Meme")
2000: Wilber founds the Integral Institute with Beck as a founding associate around this time
2002: Beck: "SDi: Spiral Dynamics in the Integral Age" (launches SDi as a brand)
2002: Todorovic: "The Mean Green Hypothesis: Fact or Fiction?" (refutes MGM)
2002: Graves; William R. Lee (annot.); Cowan and Todorovic (eds.): Levels of Human Existence, transcription of Graves's 1971 three-day seminar
2004: Beck founds the Center for Human Emergence (CHE)
2005: Beck, Elza S. Maalouf and Said E. Dawlabani found the Center for Human Emergence Middle East
2005: Graves; Cowan and Todorovic (eds.): The Never Ending Quest
2005: Beck and Wilber cease collaborating around this time, disagreeing on Wilber's changes to SDi
2006: Wilber: Integral Spirituality (adds altitudes colored to align with both SDi and chakras)
2009: NVC dissolved as business entity, original SD service mark (officially registered to NVC) canceled
2010: Cowan and Todorovic re-file for SD service mark and trademark, registered to NVC Consulting
2015: Death of Chris Cowan
2017: Wilber: Religion of Tomorrow (further elaborates on the altitude concept and coloring)
2018: Beck et al.: Spiral Dynamics in Action
2022: Death of Don Beck
Cowan and Todorovic's "Spiral Dynamics"
Chris Cowan's decision to trademark "Spiral Dynamics" in the US and form a consulting business with Natasha Todorovic contributed to the split between Beck and him in 1999. Cowan and Todorovic subsequently published an article on Spiral Dynamics in the peer-reviewed journal Strategy & Leadership, edited and published Graves's unfinished manuscript, and generally took the position that the distinction between Spiral Dynamics and Graves's ECLET is primarily one of terminology. Holding this view, they opposed interpretations seen as "heterodox."
In particular, Cowan and Todorovic's view of Spiral Dynamics stands in opposition to that of Ken Wilber. Wilber biographer Frank Visser describes Cowan as a "strong" critic of Wilber and his Integral theory, particularly the concept of a "Mean Green Meme." Todorovic produced a paper arguing that research refutes the existence of the "Mean Green Meme" as Beck and particularly Wilber described it.
Beck's "Spiral Dynamics integral" (SDi)
By early 2000, Don Beck was corresponding with integral philosopher Ken Wilber about Spiral Dynamics and using a "4Q/8L" diagram combining Wilber's four quadrants with the eight known levels of Spiral Dynamics. Beck officially announced SDi as launching on January 1, 2002, aligning Spiral Dynamics with integral theory and additionally citing the influence of John Petersen of the Arlington Institute and Ichak Adizes. By 2006, Wilber had introduced a slightly different color sequence for his AQAL "altitudes", diverging from Beck's SDi and relegating it to the values line, which is one of many lines within AQAL.
Later influences on SDi include the work of Muzafer Sherif and Carolyn Sherif in the fields of realistic conflict and social judgement, specifically their Assimilation Contrast Effect model and Robber's Cave study.
SD/SDi and Ken Wilber's Integral Theory
Ken Wilber briefly referenced Graves in his 1986 book (with Jack Engler and Daniel P. Brown) Transformations of Consciousness, and again in 1995's Sex, Ecology, Spirituality which also introduced his four quadrants model. However, it was not until the "Integral Psychology" section of 1999's Collected Works: Volume 4 that he integrated Gravesian theory, now in the form of Spiral Dynamics. Beck and Wilber began discussing their ideas with each other around this time.
AQAL "altitudes"
By 2006, Wilber was using SDi only for the values line, one of many lines in his All Quadrants, All Levels/Lines (AQAL) framework. In the book Integral Spirituality published that year, he introduced the concept of "altitudes" as an overall "content-free" system to correlate developmental stages across all of the theories on all of the lines integrated by AQAL.
The altitudes used a set of colors that were ordered according to the rainbow, which Wilber explained was necessary to align with color energies in the tantric tradition. This left only Red, Orange, Green, and Turquoise in place, changing all of the other colors to greater or lesser degrees. Furthermore, where Spiral Dynamics theorizes that the 2nd tier would have six stages repeating the themes of the six stages of the 1st tier, in the altitude system the 2nd tier contains only two levels (corresponding to the first two SD 2nd tier levels) followed by a 3rd tier of four spiritually-oriented levels inspired by the work of Sri Aurobindo. Beck and Cowan each consider this 3rd tier to be non-Gravesian.
Wilber critic Frank Visser notes that while Wilber gives a correspondence of his altitude colors to chakras, his correspondence does not actually match any traditional system for coloring chakras, despite Wilber's assertion that using the wrong colors would "backfire badly when any actual energies were used." He goes on to note that Wilber's criticism of the SD colors as "inadequate" ignores that they were not intended to correlate with any system such as chakras. In this context, Visser expresses sympathy for Beck and Cowan's dismay over what Visser describes as "vandalism" regarding the color scheme, concluding that the altitude colors are an "awkward hybrid" of the SD and rainbow/chakra color systems, both lacking the expressiveness of the former and failing to accurately correlate with the latter.
Criticism and limitations
As an extension of Graves's theory, most criticisms of that theory apply to Spiral Dynamics as well. Likewise, to the extent that Spiral Dynamics Integral incorporates Ken Wilber's integral theory, criticism of that theory, and the lack of mainstream academic support for it are also relevant.
In addition, there have been criticisms of various aspects of SD and/or SDi that are specific to those extensions. Nicholas Reitter, writing in the Journal of Conscious Evolution, observes:
On the other hand, the SD authors seem also to have magnified some of the weaknesses in Graves' approach. The occasional messianism, unevenness of presentation and constant business-orientation of Graves' (2005) manuscript is transmuted in the SD authors' book (Beck and Cowan 1996) into a sometimes-bewildering array of references to world history, pop culture and other topics, often made in helter-skelter fashion.
Spiral Dynamics has been criticized by some as appearing to be like a cult, with undue prominence given to the business and intellectual property concerns of its leading advocates.
Metamodernists Daniel Görtz and Emil Friis, writing as Hanzi Freinacht, who created a multi-part system combining aspects of SD with other developmental measurements, dismissed the Turquoise level, saying that while there will eventually be another level, it does not currently exist. They argue that attempts to build Turquoise communities are likely to lead to the development of "abusive cults".
Psychologist Keith Rice, discussing his application of SDi in individual psychotherapy, notes that it encounters limitations in accounting for temperament and the unconscious. However, regarding SDi's "low profile among academics," he notes that it can easily be matched to more well-known models "such as Maslow, Loevinger, Kohlberg, Adorno, etc.," in order to establish trust with clients.
Patrick Vermeren, HR expert, journalist, and author of A Skeptic's HR Dictionary - The ultimate self-defense Guide for CEO's, HR Professionals, I/O Students and Employees, sees Spiral Dynamics as an ideological construct that blatantly contradicts scientific facts and has no theoretical or empirical validity. His points of criticism are:
Scientific untenability: Vermeren criticizes the fact that Spiral Dynamics is not a scientifically sound theory. Clare W. Graves' speculative assumptions about human developmental stages contradict established findings in biology, physics and evolutionary psychology. In particular, Vermeren considers the idea that humans could overcome their competitive nature to be unscientific and unrealistic.
Incorrect dating: The dating of the various developmental stages of human existence is incorrect and contradicts the findings of evolutionary biology. Vermeren criticizes the fact that the chronological assignments of the theory have no scientific basis.
Arbitrary color assignments: The color codes assigned to the various stages of development in Spiral Dynamics have been arbitrarily determined without deeper meaning. This further undermines the credibility of the theory.
Lack of empirical evidence: Vermeren emphasizes that there is no robust empirical data to support Spiral Dynamics. Much of Graves' purported research has been lost, and there is no way to independently verify this data.
Metaphysical and esoteric elements: The theory contains increasingly metaphysical and esoteric aspects, further distancing it from a scientific basis. These elements contradict the principles of modern science and seem like an esoteric ideology.
Contradiction to the theory of evolution: Spiral Dynamics contradicts the theory of evolution and the findings of the theory of evolution, for example on the development of competition and selfish behavior in humans. According to Vermeren, the assumption that humans will reach a completely new stage of evolution in the near future is implausible.
Pseudoscientific propagation: Vermeren sees Spiral Dynamics as a pseudoscience that is falsely sold as a progressive theory. He criticizes the fact that, despite its obvious weaknesses, the theory is being adopted and disseminated by HR professionals and even business schools.
Influence and applications
Spiral Dynamics has influenced management theory, which was the primary focus of the 1996 Spiral Dynamics book. John Mackey and Rajendra Sisodia write that the vision and values of conscious capitalism as they articulate it are consistent with the "2nd tier" VMEMES of Spiral Dynamics. Rica Viljoen's case study of economic development in Ghana demonstrates how understanding the Purple VMEME allows for organizational storytelling that connects with diverse (non-Western) worldviews.
Spiral Dynamics has also been noted as an example of applied memetics. In his chapter, "'Meme Wars': A Brief Overview of Memetics and Some Essential Context" in the peer-reviewed book Memetics and Evolutionary Economics, Michael P. Schlaile includes Spiral Dynamics in the "organizational memetics" section of his list of "enlightening examples of applied memetics." Schlaile also notes Said Dawlabani's SDi-based "MEMEnomics" as an alternative to his own "economemetics" in his chapter examining memetics and economics in the same book. Elza Maalouf argues that SDi provides a "memetic" interpretation of non-Western cultures that Western NGOs often lack, focusing attention on the "indigenous content" of the culture's value system.
One of the main applications of Spiral Dynamics is to inform more nuanced and holistic systems-change strategies. Like the categories of any other framework, the various levels can be seen as memetic lenses through which to look at the world, helping those leading change take a bird's-eye view of the diverse perspectives on a single topic. At best, Spiral Dynamics can help synthesize these perspectives, recognize the strength in having a diversity of worldviews, and inform interventions that take into consideration the needs and values of individuals at every level of the spiral.
Spiral Dynamics continues to influence integral philosophy and spirituality, and the developmental branch of metamodern philosophy. Both integralists and metamodernists connect their philosophies to SD's Yellow VMEME. Integralism also identifies with Turquoise and eventually added further stages not found in SD or SDi, while metamodernism dismisses Turquoise as nonexistent.
SDi has also been referenced in the fields of education, urban planning, and cultural analysis.
Notes
Works cited
(Note on page ii: "This study was approved by Indiana University Institutional Review Board (IRB)." Note also that a previous report was published as: Nasser, Ilham (June 2020). "Mapping the Terrain of Education 2018–2019: A Summary Report". Journal of Education in Muslim Societies. Indiana University Press. 1 (2): 3–21. doi:10.2979/jems.1.2.08, but is not freely downloadable.)
Developmental psychology | Spiral Dynamics | [
"Biology"
] | 3,739 | [
"Behavioural sciences",
"Behavior",
"Developmental psychology"
] |
47,482,842 | https://en.wikipedia.org/wiki/Surface%20plasmon%20resonance%20microscopy | Surface plasmon resonance microscopy (SPRM), also called surface plasmon resonance imaging (SPRI), is a label free analytical tool that combines the surface plasmon resonance of metallic surfaces with imaging of the metallic surface.
The heterogeneity of the refractive index of the metallic surface imparts high-contrast images, caused by the shift in the resonance angle. SPRM can achieve sub-nanometer thickness sensitivity and lateral resolution on the micrometer scale. SPRM is used to characterize surfaces such as self-assembled monolayers, multilayer films, metal nanoparticles, oligonucleotide arrays, and binding and reduction reactions. Surface plasmon polaritons are surface electromagnetic waves coupled to oscillating free electrons of a metallic surface that propagate along a metal/dielectric interface. Since polaritons are highly sensitive to small changes in the refractive index of the metallic material, they can be used as a label-free biosensing tool. SPRM measurements can be made in real time, for example to measure the binding kinetics of membrane proteins in single cells, or DNA hybridization.
History
The concept of classical SPR has existed since 1968, but the SPR imaging technique was introduced in 1988 by Rothenhäusler and Knoll. Capturing a high-resolution image of low-contrast samples was a near-impossible task for optical measuring techniques until the introduction of the SPRM technique in 1988. In the SPRM technique, plasmon surface polariton (PSP) waves are used for illumination. In simple terms, SPRI technology is an advanced version of classical SPR analysis, in which the sample is monitored without labels through the use of a CCD camera. With the aid of the CCD camera, SPRI technology offers the advantage of recording sensorgrams and SPR images while simultaneously analyzing hundreds of interactions.
Principles
Surface plasmons or surface plasmon polaritons are generated by coupling of electrical field with free electrons in a metal. SPR waves propagate along the interface between dielectrics and a conducting layer rich in free electrons.
As shown in Figure 2, when light passes from a medium of high refractive index to a second medium with a lower refractive index, the light is totally reflected under certain conditions.
In order to obtain total internal reflection (TIR), the angles θ1 and θ2 must lie within a certain range, which can be explained through Snell's law. When light passes from a medium of higher refractive index into a medium of lower refractive index, it is refracted at an angle θ2, which is defined in Equation 1.
In the TIR process, a small portion of the electric field intensity leaks into medium 2 (η1 > η2). The light leaking into medium 2 penetrates as an evanescent wave. The intensity and penetration depth of the evanescent wave can be calculated according to Equations 2 and 3, respectively.
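The critical angle and the evanescent penetration depth can be sketched numerically. This is a minimal illustration using the standard textbook forms θc = arcsin(η2/η1) and d = λ/(4π√(η1² sin²θ − η2²)); the specific indices (glass/water) and the He–Ne wavelength below are assumptions, not values from the text.

```python
import math

def critical_angle(n1, n2):
    """Critical angle for TIR going from n1 (dense) into n2 (rare), in degrees."""
    return math.degrees(math.asin(n2 / n1))

def penetration_depth(wavelength, n1, n2, theta_deg):
    """1/e penetration depth of the evanescent intensity into medium 2 (textbook form)."""
    s = (n1 * math.sin(math.radians(theta_deg))) ** 2 - n2 ** 2
    return wavelength / (4 * math.pi * math.sqrt(s))

# Assumed example: BK7 glass (n1 = 1.515) against water (n2 = 1.333), 633 nm light
theta_c = critical_angle(1.515, 1.333)           # just above 60 degrees
d = penetration_depth(633e-9, 1.515, 1.333, 75)  # a few tens of nanometers
```

The shallow penetration depth (well under the wavelength) is what makes the evanescent wave so surface-sensitive.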
Figure 3 shows a schematic representation of surface plasmons coupled to electron density oscillations. The light wave is trapped on the surface of the metal layer by collective coupling to the electrons of the metal surface. When the oscillation frequency of the electron plasma couples to that of the electric field of the light wave, the two enter into resonance.
Recently, the light leaking through the metal surface has been imaged.
Radiation of different wavelengths (green, red and blue) was converted into surface plasmon polaritons, through the interaction of the photons at the metal/dielectric interface. Two different metal surfaces were used; gold and silver. The propagation length of the SPP along the x-y plane (metal plane) in each metal and photon wavelength were compared. The propagation length is defined as the distance traveled by the SPP along the metal before its intensity decreases by a factor of 1/e, as defined in Equation 4.
Figure 4 shows the leakage light captured by a color CCD camera, of the green, red and blue photons in gold (a) and silver (b) films. In part c) of Figure 4, the intensity of the surface plasmon polaritons with the distance is shown. It was determined that the leakage light intensity is proportional to the intensity in the waveguide.
where δSPP is the propagation length, ε′m and ε′′m are the real and imaginary parts of the relative permittivity of the metal, and λ0 is the free-space wavelength.
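Equation 4 can be evaluated numerically. The sketch below assumes the commonly quoted form δSPP = (λ0/2π)(ε′m²/ε′′m)((ε′m + εd)/(ε′m εd))^(3/2) together with literature permittivity values for gold at 633 nm; both the form and the numbers are assumptions for illustration.

```python
import math

def spp_propagation_length(lam0, eps_m_re, eps_m_im, eps_d=1.0):
    """1/e propagation length of an SPP along the metal plane (Equation 4 form)."""
    ratio = (eps_m_re + eps_d) / (eps_m_re * eps_d)  # positive: both factors negative
    return (lam0 / (2 * math.pi)) * (eps_m_re ** 2 / eps_m_im) * ratio ** 1.5

# Assumed literature values for gold at 633 nm: eps_m ~ -11.6 + 1.2i, in air
delta = spp_propagation_length(633e-9, -11.6, 1.2)  # on the order of 10 micrometers
```

A micrometer-scale propagation length for gold in the red is consistent with the Figure 4 comparison of metals and wavelengths.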
The metallic film is capable of absorbing light due to the coherent oscillation of the conduction band electrons induced by the interaction with an electromagnetic field.
Electrons in the conduction band induce polarization after interaction with the electric field of the radiation. A net charge difference is created in the surface of the metal film, creating a collective dipolar oscillation of electrons with the same phase.
When the electron motion matches the frequency of the electromagnetic field, absorption of the incident radiation occurs. The oscillation frequency of gold surface plasmons lies in the visible region of the electromagnetic spectrum, giving gold a red color, while silver appears yellow.
Nanorods exhibit two absorption peaks in the UV–vis region due to longitudinal and transverse oscillations: for gold nanorods, the transverse oscillation generates a peak at 520 nm, while the longitudinal oscillation generates absorption at longer wavelengths, within a range of 600 to 800 nm. Silver nanoparticles shift their absorption to higher energies, blue-shifting from 408 nm to 380 nm and 372 nm as the shape changes from sphere to rod and to wire, respectively.
The absorption intensity and wavelength of gold and silver depends on the size and shape of the particles.
In Figure 5, the size and shape of the silver nanoparticles influence the intensity and maximum wavelength of the scattered light. Triangular particles appear red, with maximum scattered light at 670–680 nm; pentagonal particles appear green (620–630 nm); and spherical particles, which scatter at higher energies (440–450 nm), appear blue.
Plasmon excitation methods
Surface plasmon polaritons are quasiparticles composed of electromagnetic waves coupled to the free electrons of the conduction band of metals.
One of the most widely used methods to couple p-polarized light to the metal–dielectric interface is prism-based coupling.
Prism couplers are the most widely used means of exciting surface plasmon polaritons. This method is also called the Kretschmann–Raether configuration, in which TIR creates an evanescent wave that couples to the free electrons of the metal surface.
High numerical aperture objective lenses have been explored as a variant of prism-coupling to excite surface plasmon polaritons. Waveguide coupling is also used to create surface plasmons.
Prism coupling
The Kretschmann–Raether configuration is used to achieve resonance between light and the free electrons of the metal surface. In this configuration a prism of high refractive index is interfaced with a metal film. Light from a source propagates through the prism and is made incident on the metal film. As a consequence of TIR, some of the field leaks through the metal film, forming an evanescent wave in the dielectric medium, as in Figure 6.
The evanescent wave penetrates a characteristic distance into the less optically dense medium where it is attenuated.
Figure 6 shows the Kretschmann–Raether configuration, where a prism with refractive index of η1 is coupled to a dielectric surface with a refractive index η2, the incidence angle of the light is θ.
The interaction between the light and the surface polaritons in the TIR can be explained by using the Fresnel multilayer reflection; the amplitude reflection coefficient (rpmd) is expressed as follows in Equation 5.
The power reflection coefficient R is defined as follows:
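As a sketch, the Fresnel multilayer reflection of Equations 5 and 6 can be evaluated to locate the SPR reflectivity dip. The layer parameters below (BK7 prism, a 50 nm gold film with an assumed permittivity of −11.6 + 1.2i at 633 nm, water as the dielectric) are illustrative assumptions, not values from the text.

```python
import cmath
import math

def reflectivity(theta_deg, lam, eps, d_metal):
    """|r|^2 for a prism/metal/dielectric stack, p-polarization (Fresnel multilayer)."""
    k0 = 2 * math.pi / lam
    kx = k0 * cmath.sqrt(eps[0]).real * math.sin(math.radians(theta_deg))
    kz = [cmath.sqrt(e * k0**2 - kx**2) for e in eps]  # normal wave-vector components

    def r(i, j):  # p-polarized interface reflection coefficient
        return (eps[j] * kz[i] - eps[i] * kz[j]) / (eps[j] * kz[i] + eps[i] * kz[j])

    phase = cmath.exp(2j * kz[1] * d_metal)
    rp = (r(0, 1) + r(1, 2) * phase) / (1 + r(0, 1) * r(1, 2) * phase)
    return abs(rp) ** 2

eps = [1.515**2, -11.6 + 1.2j, 1.333**2]  # prism / gold / water (assumed values)
angles = [a / 10 for a in range(600, 850)]
R = [reflectivity(a, 633e-9, eps, 50e-9) for a in angles]
dip_angle = angles[R.index(min(R))]       # resonance angle, roughly 70-75 degrees
```

Scanning the incidence angle and locating the minimum of R is exactly the "scanning angle SPR" measurement described later in the article.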
In Figure 7, a schematic representation of the Otto coupling prism is shown. The air gap is drawn exaggeratedly thick for clarity; in reality, the gap between the prism and the metal layer is very thin.
Waveguide coupling
The electromagnetic waves are conducted through an optical waveguide. When light enters the region with a thin metal layer, it evanescently penetrates through the metal layer, exciting a surface plasmon wave (SPW). In the waveguide-coupling configuration, the waveguide is created when the refractive index of the grating is greater than that of the substrate. Incident radiation propagates along the waveguide layer with the higher refractive index.
In Figure 8, electromagnetic waves are guided through a wave-guiding layer; once the optical waves reach the interface between the wave-guiding layer and the metal, an evanescent wave is created. The evanescent wave excites the surface plasmon at the metal–dielectric interface.
Grating coupling
Due to the periodic grating, the phase matching between the incident light and the guide mode is easy to obtain.
According to Equation 7, the propagation vector (Kz) in the z direction can be tuned by changing the periodicity Λ. The grating vector can be modified, and the angle of resonant excitation can be controlled.
In Figure 9, q is the diffraction order; it can take any integer value (positive, negative or zero).
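The grating phase-matching condition (the in-plane component of the incident light plus q times the grating vector 2π/Λ equaling the SPP propagation constant) can be sketched as follows. The grating period, diffraction order and gold permittivity are assumptions chosen for illustration.

```python
import math

def grating_coupling_angle(lam, period, q, eps_m_re, eps_d=1.0):
    """Incidence angle (degrees) that phase-matches order q to the SPP, or None."""
    n_spp = math.sqrt(eps_m_re * eps_d / (eps_m_re + eps_d))  # SPP effective index
    sin_theta = n_spp - q * lam / period  # k0*sin(theta) + q*(2*pi/period) = k_spp
    if abs(sin_theta) > 1:
        return None  # this order cannot be phase-matched at this period
    return math.degrees(math.asin(sin_theta))

# Assumed: gold/air (eps_m ~ -11.6 at 633 nm), 2 micrometer grating period, q = +1
theta = grating_coupling_angle(633e-9, 2e-6, 1, -11.6)
```

Changing the period Λ moves the required angle, which is the tunability the text describes for Equation 7.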
Resonance measurement methods
The propagation constant of a monochromatic beam of light parallel to the surface is defined by Equation 8.
where θ is the angle of incidence, ksp is the propagation constant of the surface plasmon, and n(p) is the refractive index of the prism. When the wave vector of the SPW, ksp, matches the wave vector of the incident light, kx, the SPW is expressed as:
Here εd and εm represent the dielectric constants of the dielectric and the metal, while λ is the wavelength of the incident light. kx and ksp can be represented as:
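The matching condition kx = ksp gives the resonance angle directly. This minimal sketch assumes a BK7 prism, gold with a real permittivity of about −11.6 at 633 nm, and water as the dielectric; all three values are illustrative assumptions.

```python
import math

def spr_resonance_angle(n_prism, eps_m_re, eps_d):
    """Angle (degrees) where k0*n_prism*sin(theta) equals Re(k_sp)."""
    n_spp = math.sqrt(eps_m_re * eps_d / (eps_m_re + eps_d))  # SPP effective index
    return math.degrees(math.asin(n_spp / n_prism))

# Assumed: BK7 prism (n = 1.515), gold, water (n = 1.333)
theta_sp = spr_resonance_angle(1.515, -11.6, 1.333**2)  # roughly 72-73 degrees
```

Because n_spp depends on εd, any adsorbate that changes the dielectric near the metal shifts this angle, which is the sensing principle discussed below.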
The surface plasmons are evanescent waves that have their maximum intensity at the interface and decay exponentially away from the phase boundary to a penetration depth.
The propagation of the surface plasmons is strongly affected by a thin film coating on the conducting layer. The resonance angle θ shifts when the metal surface is coated with a dielectric material, due to the change of the propagation vector k of the surface plasmon.
This sensitivity is due to the shallow penetration depth of the evanescent wave. Materials with a high density of free electrons are used, typically metal films of roughly 50 nm made of copper, titanium, chromium or gold. Au is the most common metal used in SPR as well as in SPRM.
Scanning angle SPR is the most widely used method for detecting biomolecular interactions.
It measures the reflectance percentage (%R) from a prism/metal-film assembly as a function of the incident angle at a fixed excitation wavelength. When the in-plane wave vector of the incident light matches the propagation constant of the interface mode, the mode is excited at the expense of the reflected light. As a consequence, the reflectivity shows a sharp dip at the resonance angle.
The propagation constant of the polaritons can be modified by varying the dielectric material. This modification causes the resonance angle to shift, as in the example shown in Figure 10 from θ1 to θ2, due to the change in the surface plasmon propagation constant.
The resonance angle can be found by using Equation 11.
where n1, n2 and ng are the refractive indices of medium 1, medium 2 and the metal layer, respectively.
Using TIR, two-dimensional imaging makes it possible to resolve spatial differences in %R at a fixed angle θ. A beam of monochromatic light irradiates the sample at a fixed incident angle, and the SPR image is created from the reflected light detected by a CCD camera.
The minimum in %R at the resonance angle provides the contrast in SPRM.
Huang and collaborators developed a microscope with a high-numerical-aperture (NA) objective, which improves the lateral resolution at the expense of the longitudinal resolution.
Lateral resolution
The resolution of conventional light microscopy is limited by the diffraction limit of light. In SPRM, the excited surface plasmons propagate in the surface plane away from the incident beam. The polaritons travel along the metal–dielectric interface for a certain distance before decaying back into photons. Therefore, the resolution achieved by SPRM is determined by the propagation length of the surface plasmons parallel to the incident plane.
To be resolved, the separation between two areas should be at least approximately the propagation length of the surface plasmons. Berger, Kooyman and Greve showed that the lateral resolution can be tuned by changing the excitation wavelength; better resolution is achieved as the excitation energy increases. Equations 4 and 12 define the magnitude of the wave vector of the surface plasmons.
where n2 is the refractive index of medium 2, ng is the refractive index of the metal film, and λ is the excitation wavelength.
Instrumentation
Surface plasmon resonance microscopy is based on surface plasmon resonance, with an instrument equipped with a CCD camera recording images of the structures present on the substrate. In the past decade, SPR sensing has proved an exceedingly powerful technique and is used extensively in research and development in materials science, biochemistry and the pharmaceutical sciences.
The SPRM instrument combines the following main components: a light source (typically a He–Ne laser) whose beam travels through a prism attached to a glass slide coated with a thin metal film (typically gold or silver), where the light reflects at the gold/solution interface at an angle greater than the critical angle. The light reflected from the interface is recorded by a CCD detector, producing an image. Beyond these core components, additional accessories such as polarizers, filters, beam expanders, focusing lenses and a rotating stage, similar to those of other imaging methods, are installed as demanded by the application. Figure 12 shows a typical SPRM. Depending on the application, and to optimize the imaging, researchers modify this basic instrumentation, sometimes even altering the source beam; one such design change resulted in an objective-type SPRM, shown in Figure 11, with a modified optical configuration.
SPRi systems are currently manufactured by well-known biomedical instrumentation manufacturers such as GE Life Sciences, HORIBA and Biosensing USA. The cost of an SPRi system ranges from USD 100k to 250k, although simple demonstration prototypes can be built for about USD 2,000.
Sample preparation
To perform SPRM measurements, sample preparation is a critical step. Two factors are affected by the immobilization step. One is the reliability and reproducibility of the acquired data: it is important to ensure the stability of the recognition element, such as antibodies, proteins or enzymes, under the experimental conditions. The other is that the stability of the immobilized specimens affects the sensitivity and/or the limit of detection (LOD).
One of the most popular immobilization methods is the self-assembled monolayer (SAM) on a gold surface. Jenkins and collaborators (2001) used mercaptoethanol patches surrounded by a SAM composed of octadecanethiol (ODT) to study the adsorption of egg phosphatidylcholine on the ODT SAM.
A pattern of ODT-mercaptoethanol was made on a 50 nm gold film. The gold film was obtained through thermal evaporation on LaSFN 9 glass. The lipid vesicles were deposited on the ODT SAM through adsorption, giving a final multilayer thickness greater than 80 Å.
An 11-mercaptoundecanoic acid self-assembled monolayer (MUA-SAM) was formed on gold-coated BK7 slides, and a PDMS plate was used to mask the MUA-SAM chip. Clenbuterol (CLEN) was attached to BSA molecules through an amide bond between the carboxylic group of BSA and the amine group of the CLEN molecules. To immobilize BSA on the gold surface, the spots created by the PDMS masking were functionalized with sulfo-NHS and EDC; subsequently a 1% BSA solution was poured onto the spots and incubated for 1 hour. Non-immobilized BSA was rinsed out with PBS, CLEN solution was poured onto the spots, and unbound CLEN was removed by a PBS rinse.
An alkanethiol SAM was prepared in order to simultaneously measure, by SPR, the concentrations of horseradish peroxidase (Px), human immunoglobulin E (IgE), human choriogonadotropin (hCG) and human immunoglobulin G (IgG). Alkanethiols with carbon chains of 11 and 16 carbons were self-assembled on the sensor chip, and the antibodies were attached to the C16 alkanethiol, which had a terminal carboxylic group.
A micropatterned electrode was fabricated by gold deposition on microscope slides. PDMS stamping was used to produce an array of hydrophilic/hydrophobic surfaces; ODT treatment followed by immersion in 2-mercaptoethanol solutions rendered a functionalized surface for lipid-membrane deposition. The patterned electrode was characterized by SPRM. In Figure 14B, the SPRM image reveals the size of the pockets, which were 100 μm × 100 μm and spaced 200 μm apart. The remarkable contrast of the image reflects the high sensitivity of the technique.
Applications
SPRM is a useful technique for measuring the concentration of biomolecules in solution, detecting binding molecules, and monitoring molecular interactions in real time. It can be used as a biosensor for surface interactions of biological molecules: antigen–antibody binding, mapping and sorption kinetics. For example, one possible cause of Type 1 diabetes in children is a high level of cow's-milk antibodies IgG, IgA and IgM (mainly IgA) in their serum.
Cow's milk antibodies can be detected in the milk and serum sample using SPRM.
SPRM is also advantageous for detecting the site-specific attachment of B or T lymphocytes on an antibody array. The technique is convenient for studying label-free, real-time interactions of cells with a surface, so SPRM can serve as a diagnostic tool for cell-surface adhesion kinetics.
Despite its merits, SPRM has limitations. It is not suitable for detecting low-molecular-weight molecules, and although it is label free, it requires very clean experimental conditions. The sensitivity of SPRM can be improved by coupling it with MALDI-MS.
There are a number of applications of SPRM, some of which are described here.
Membrane proteins
Membrane proteins are responsible for regulating cellular responses to extracellular signals. Investigating the involvement of membrane proteins in disease biomarkers and therapeutic targets, and their binding kinetics with their ligands, has been challenging. Traditional approaches could not reveal the structures and functions of membrane proteins clearly.
In order to understand the structural details of membrane proteins, an alternative analytical tool is needed that can provide three-dimensional and time-resolved information on membrane proteins. Atomic force microscopy (AFM) is an excellent method for obtaining high-spatial-resolution images of membrane proteins,
but it is of little help for investigating binding kinetics. Fluorescence-based microscopy (FLM) can be used to study the interactions of membrane proteins in individual cells, but it requires the development of proper labels and different strategies for different target proteins.
Furthermore, the host protein may be affected by the labeling.
The binding kinetics of membrane proteins in single living cells can be studied via a label-free imaging method based on SPR microscopy, without extracting the proteins from the cell membranes; this lets scientists work with the actual conformations of the membrane proteins. Furthermore, the distribution and local binding activities of membrane proteins in each cell can be mapped and quantified. SPR microscopy (SPRM) makes simultaneous optical and fluorescence imaging of the same sample possible, combining the advantages of label-based and label-free detection methods in a single setup.
Detection of DNA hybridization
SPR imaging is used to study multiple adsorption interactions in an array format under the same experimental conditions. Nelson and coworkers introduced a multistep procedure to create DNA arrays on gold surfaces for use with SPR imaging.
Affinity interactions can be studied for a variety of target molecules, e.g. proteins and nucleic acids. Mismatched bases in a DNA sequence lead to a number of serious diseases, such as Lynch syndrome, which carries a high risk of colon cancer.
SPR imaging is useful for monitoring the adsorption of molecules on the gold surface, which is possible because of the change in reflectivity from the surface. First, a G–G mismatch pair is stabilized by attaching the ligand naphthyridine dimer through hydrogen bonding, which forms hairpin structures in double-stranded DNA on the gold surface. Binding of the dimer to DNA enhances the free energy of hybridization, which causes a change in the index of refraction.
A DNA array was fabricated to test the G–G mismatch stabilizing properties of the naphthyridine dimer. Each of the four immobilized sequences in the array differed by one base; the position of this base is indicated by an X in sequence 1, as shown in Figure 16. The SPR difference image is detected only for the sequence with a cytosine (C) base at the X position of sequence 1, the complementary sequence to sequence 2. However, the SPR difference image corresponding to the addition of sequence 2 in the presence of the naphthyridine dimer shows that, in addition to its complement, sequence 2 also hybridizes to the sequence that forms a G–G mismatch. These results demonstrate that SPR imaging is a promising tool for monitoring single-base mismatches and screening hybridized molecules.
Antibody binding to protein arrays
SPR imaging can be used to study the binding of antibodies to a protein array. Amine functionalities on the gold surface are used to attach the protein array for studying antibody binding. The proteins were immobilized by flowing protein solutions through PDMS microchannels; the PDMS was then removed from the surface and antibody solutions were flowed over the array. A three-component protein array containing human fibrinogen, ovalbumin and bovine IgG is shown in Figure 17, in SPR images obtained by Kariuki and co-workers. The contrast in the array is due to the difference in refractive index resulting from local binding of antibodies. These images show a high degree of antibody-binding specificity and a small degree of non-specific adsorption of the antibody to the array background, which could be reduced by modifying the array background. Based on these results, the SPR imaging technique can be adopted as a diagnostic tool for studying antibody interactions with protein arrays.
Coupled with mass spectrometry
Discovery and validation of protein biomarkers are crucial for disease diagnosis. Coupling SPRM with a MALDI mass spectrometer (SUPRA-MS) enables multiplexed quantification of binding and molecular characterization on the basis of different masses. SUPRA-MS was used to detect, identify and characterize a potential breast cancer biomarker, the LAG3 protein, introduced into human plasma. Gold chips were prepared from glass slides by sputter-coating thin layers of chromium and gold. The gold surface was functionalized using a solution of 11-mercapto-1-undecanol (11-MUOH) and 16-mercapto-1-hexadecanoic acid (16-MHA), and this self-assembled monolayer was activated with sulfo-NHS and EDC. A pattern of sixteen droplets was deposited on the macroarray, and immunoglobulin G antibodies against lymphocyte activation gene 3 (α-LAG3) and rat serum albumin (α-RSA) were spotted. After the biochip was placed in the SPRi and buffer solution was run through the flow cell, α-LAG3 was injected. A special imaging station was used to locate the captured proteins; this station can also be mounted on the MALDI instrument. Before MALDI analysis, the captured proteins were reduced, digested and loaded with matrix in order to avoid contamination.
The antigen density is directly proportional to the change in reflectivity ΔR because the evanescent-wave penetration depth Lzc is larger than the thickness of the immobilized antigen layer.
where ∂n/∂C is the refractive index increment of the molecule and S is the sensitivity of the prism–reflectivity system.
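The proportionality between antigen density and ΔR can be sketched with a commonly used SPR surface-coverage relation, Γ ≈ (Lzc/2)·(ΔR/S)/(∂n/∂C). This exact form and every number below are assumptions for illustration, not the relation given in the text.

```python
def surface_coverage(delta_R, S, L_zc, dn_dC):
    """Adsorbed mass per unit area from a reflectivity change (assumed common SPR form).

    delta_R : reflectivity change (fraction); S : sensitivity (per refractive index unit);
    L_zc : evanescent decay length (cm); dn_dC : index increment (mL/g).
    Returns coverage in g/cm^2."""
    delta_n = delta_R / S        # effective refractive-index change in the probed volume
    conc = delta_n / dn_dC       # effective concentration within the probed layer
    return conc * (L_zc / 2)     # thin-layer approximation: layer thickness << L_zc

# Illustrative, assumed numbers only
gamma = surface_coverage(delta_R=0.01, S=50.0, L_zc=2e-5, dn_dC=0.182)
```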
A clean mass spectrum was obtained for the LAG3 protein owing to good tryptic digestion and the homogeneity of the matrix (α-cyano-4-hydroxycinnamic acid). A relatively high-intensity m/z peak of the LAG3 protein was found at 1,422.70 amu, with an average Mascot score of 87.9 ± 2.4. The MS results were further validated by MS-MS analysis and are similar to those of the classical analytical method of in-gel digestion.
A signal-to-noise ratio greater than 10, 100% reliability and on-chip detection at the femtomole level demonstrate the credibility of this coupling technique, which can also reveal protein–protein interactions and on-chip peptide distribution with high spatial resolution.
DNA aptamers
Aptamers are DNA ligands that target particular biomolecules such as proteins. An SPR imaging platform is a good choice for characterizing aptamer–protein interactions. To study the interaction, oligonucleotides are first grafted via formation of a thiol self-assembled monolayer (SAM) on a gold substrate using a piezoelectric dispensing system. Thiol groups are introduced onto the DNA oligonucleotides using N-hydroxysuccinimide (NHS) chemistry: target oligonucleotides bearing a primary amine group at their 5′ end are conjugated to HS-C(11)-NHS in phosphate buffer solution at pH 8.0 for one hour at room temperature. After rinsing, the aptamer-grafted biosensor is placed in the SPRM. Thrombin is then co-injected with an excess of cytochrome C to check signal specificity. The concentration of free thrombin is determined from a calibration curve obtained by plotting the initial slope of the signal at the beginning of the injection against concentration. The interaction of thrombin and the aptamer can be monitored on the microarray in real time during injections of thrombin at different concentrations. The solution-phase dissociation constant KDsol (3.16 ± 1.16 nM) is calculated from the measured concentrations of free thrombin.
Here [THR---APT] = cTHR – [THR] is the equilibrium concentration of thrombin bound to aptamers in solution, and [APT] = cAPT – [THR---APT] is the concentration of free aptamers in solution.
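From these definitions, KDsol = [THR][APT]/[THR---APT]. A minimal sketch with hypothetical total and measured concentrations (the concentrations below are invented for illustration; the experiment itself reported 3.16 ± 1.16 nM):

```python
def kd_solution(c_thr, c_apt, free_thr):
    """Solution-phase dissociation constant from totals and measured free thrombin."""
    bound = c_thr - free_thr    # [THR---APT], thrombin bound to aptamer
    free_apt = c_apt - bound    # [APT], free aptamer remaining in solution
    return free_thr * free_apt / bound

# Hypothetical concentrations in nM, for illustration only
kd = kd_solution(c_thr=50.0, c_apt=40.0, free_thr=20.0)  # (20*10)/30 ~ 6.7 nM
```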
The surface-phase dissociation constant KDsurf (3.84 ± 0.68) is obtained by fitting a Langmuir adsorption isotherm to the equilibrium signals. The two dissociation constants differ significantly because KDsurf depends on the surface grafting density, as shown in Figure 19; at low grafting density this dependence extrapolates linearly to the solution-phase affinity.
The difference in the SPRi image gives information on the presence and specificity of binding, but it is not suitable for quantifying free protein when there are multiple affinity sites. Real-time monitoring of the interaction with SPRM makes it possible to study the kinetics and affinity of the interactions.
Detection of polymer interaction
Although surface plasmon resonance imaging (SPRi) is used in biology to characterize interactions between two biological molecules, it is also useful for monitoring interactions between two polymers. In this approach, one polymer, called the host polymer (HP), is immobilized on the surface of a biochip, and the other, designated the guest polymer (GP), is introduced onto the SPRi biochip to study the interactions. For example, the host polymer can be an amine-functionalized poly(β-cyclodextrin) and the guest polymer PEG(ada)4.
The SPRi biochip was used to immobilize HP at different concentrations, producing an array of HP active sites on the chip. HP was attached through its amino groups to N-hydroxysuccinimide functionalities on the gold surface. The SPRi system was first filled with running buffer, and the SPRi biochip was placed in the analysis chamber. Two GP solutions, at 1 g/L and 0.1 g/L, were injected into the flow cell. The association and dissociation of the two polymers can be monitored in real time via the change in reflectivity, and the SPRM images can be differentiated on the basis of white spots (association phase) and black spots (dissociation phase). PEG without adamantyl groups showed no adsorption on the β-cyclodextrin cavities; conversely, there was no adsorption of GP on a chip without HP. The change in SPRi response at the reaction sites is captured as kinetic curves and real-time images from the CCD camera. Local changes in light reflectivity are directly related to the quantity of target molecules at each point, and variation at the surface of the chip provides comprehensive information on molecular binding and kinetic processes.
Bio-mineralization
One important class of biomaterials is polymer–hydroxyapatite, which is remarkably useful in the field of bone regeneration because of its resemblance to natural bone material. The advantage of hydroxyapatite, Ca10(PO4)6(OH)2, is that it starts to form inside the bone tissue through mineralization, which also promotes osteointegration. Biomineralization, also called calcification, is a process in which calcium cations come from cells and physiological fluids, while phosphate anions are produced from the hydrolysis of phosphoesters and phosphoproteins as well as from the body fluids. This phenomenon has also been examined in in vitro studies.
For in vitro studies, polyamidoamine (PAMAM) dendrimers with amino- and carboxylic-acid external reactive shells are considered as the sensing phase. These dendrimers must be immobilized on the gold surface but are not intrinsically reactive toward gold; hence thiol groups have to be introduced at the dendrimer terminals so that the dendrimers can attach to the gold surface. The carboxylic groups are functionalized with N,N-(3-dimethylaminopropyl)-N′-ethyl-carbodiimide hydrochloride (EDC) and N-hydroxysuccinimide (NHS) solutions in phosphate buffer. The functional groups (amide, amino and carboxyl) act as ionic pumps, capturing calcium ions from the test fluids; the calcium cations then bind with phosphate anions to generate calcium-phosphate mineral nuclei on the dendrimer surface.
SPRM is expected to be sensitive enough to provide important quantitative information on the occurrence and kinetics of mineralization. Detection of the mineralization is based on the specific mass change induced by mineral nucleus formation and growth. Nucleation and the progress of mineralization can be monitored by SPRM, as shown in Figure 20. PAMAM-containing sensors are fixed on the SPRi analysis platform and then exposed to the experimental fluids in the flow cell, as shown in Figure 21. SPRM cannot identify the origin and nature of the mass change, but it detects the modification of the refractive index due to mineral precipitation.
References
Microscopy
Plasmonics | Surface plasmon resonance microscopy | [
"Physics",
"Chemistry",
"Materials_science"
] | 6,701 | [
"Plasmonics",
"Surface science",
"Condensed matter physics",
"Microscopy",
"Nanotechnology",
"Solid state engineering"
] |
47,482,979 | https://en.wikipedia.org/wiki/Semiaquatic | In biology, being semi-aquatic refers to various macroorganisms that live regularly in both aquatic and terrestrial environments. When referring to animals, the term describes those that actively spend part of their daily time in water (in which case they can also be called amphibious), or land animals that have spent at least one life stages (e.g. as eggs or larvae) in aquatic environments. When referring to plants, the term describes land plants whose roots have adapted well to tolerate regular, prolonged submersion in water, as well as emergent and (occasionally) floating-leaved aquatic plants that are only partially immersed in water.
Examples of semi-aquatic animals and plants are given below.
Semiaquatic animals
Semiaquatic animals include:
Vertebrates
Amphibious fish; also several types of normally fully aquatic fish such as the grunion and plainfin midshipman that spawn in the intertidal zone
Some amphibians such as newts and salamanders, and some frogs such as fire-bellied toads and wood frogs.
Some reptiles such as crocodilians, turtles, water snakes and marine iguanas.
Waterbirds, especially penguins, waterfowl, storks and shorebirds.
Some rodents such as beavers, muskrats and capybaras.
Some insectivorous mammals such as desmans, water shrews and platypuses.
Some carnivoran mammals, including seals, otters and polar bears.
Some marsupials, including the water opossum and the two lutrine opossums.
Hippopotamuses.
Indian rhinoceros.
Water buffalo.
Tapirs.
Moose.
Semiterrestrial echinoderms of the intertidal zone, such as the "cliff-clinging" sea urchin Colobocentrotus atratus and the starfish Pisaster ochraceus
Arthropods
Aquatic insects (e.g., dragonflies) with at least one non-aquatic life stage (e.g., adults), or amphibious insects (e.g., amphibious caterpillars or the ant Polyrhachis sokolova). Members of the hemipteran infraorders Gerromorpha and Nepomorpha occupy a variety of semiaquatic and aquatic niches, with many of the former locomoting on the water surface; a few of these are marine (e.g., Halobates, Hermatobates).
Semiaquatic springtails, such as Anurida maritima
Semiterrestrial malacostracan crustaceans (e.g., many crabs, such as Pachygrapsus marmoratus, some amphipods, such as Orchestia gammarellus, some isopods, such as Ligia oceanica and some barnacles, such as Balanus glandula)
Horseshoe crabs are mostly aquatic but spawn in the intertidal zone; juveniles live in tidal flats
Semiaquatic spiders, such as Ancylometes or Dolomedes (these are distinct from the almost fully aquatic Argyroneta)
An amphibious centipede, Scolopendra cataracta
Semiaquatic annelids, such as the earthworm Sparganophilus
Molluscs
Intertidal bivalves, such as Enigmonia, which lives on mangroves
Intertidal chitons, such as Acanthopleura granulata
Semiterrestrial gastropods, such as the intertidal Patella vulgata, a limpet; also amphibious freshwater and marine snails, such as Pomatiopsis or Cerithideopsis scalariformis, respectively
Semiterrestrial flatworms of the intertidal zone, such as the acotylean Myoramyxa pardalota
Semiaquatic plants
Semiaquatic plants include:
Semiaquatic angiosperms (e.g., mangroves, reeds, water spinach and the entire order Nymphaeales)
Semiaquatic conifers, such as pond cypress
Semiaquatic ferns, such as Pilularia americana
A semiaquatic horsetail, Equisetum fluviatile
Semiaquatic quillworts, such as Isoetes melanospora
Semiaquatic club mosses, such as Lycopodiella inundata
Semiaquatic mosses, such as Sphagnum macrophyllum
Semiaquatic liverworts, such as Riccia fluitans
Notes
References
Broad-concept articles
Aquatic ecology | Semiaquatic | [
"Biology"
] | 957 | [
"Aquatic ecology",
"Ecosystems"
] |
47,484,124 | https://en.wikipedia.org/wiki/V372%20Carinae | V372 Carinae is a single star in the southern constellation of Carina, located around 1,300 light-years distant. It shines with a luminosity approximately 1742 times that of the Sun and has a surface temperature of 14132 K. It is a Beta Cephei variable. A magnitude 5.7 star, it will be faintly visible on moonless nights to the naked eye of a person located far from city lights.
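The quoted luminosity and temperature determine a rough stellar radius via the Stefan-Boltzmann law (L ∝ R²T⁴). A minimal sketch in Python, assuming a standard solar effective temperature of 5772 K (the solar value is an assumption, not from the article):

```python
import math

L_ratio = 1742.0   # luminosity in solar units, from the article
T_star = 14132.0   # effective temperature in K, from the article
T_SUN = 5772.0     # assumed nominal solar effective temperature

# From L ∝ R^2 T^4: R/R_sun = sqrt(L/L_sun) * (T_sun/T)^2
r_ratio = math.sqrt(L_ratio) * (T_SUN / T_star) ** 2
print(f"Estimated radius: {r_ratio:.1f} R_sun")
```

This yields a radius of roughly seven solar radii, consistent with a hot B-type star.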
In 1977, Mikołaj Jerzykiewicz and Christiaan Sterken announced their discovery that the star, then called HD 64722, is a variable star. It was given its variable star designation, V372 Carinae, in 1981.
References
B-type main-sequence stars
B-type subgiants
Beta Cephei variables
Carina (constellation)
Carinae, b1
Durchmusterung objects
064722
038438
3582
Carinae, V372 | V372 Carinae | [
"Astronomy"
] | 192 | [
"Carina (constellation)",
"Constellations"
] |
47,485,347 | https://en.wikipedia.org/wiki/Embedded%20analytics | Embedded analytics enables organisations to integrate analytics capabilities into their own, often software as a service, applications, portals, or websites. This differs from embedded software and web analytics (also commonly known as product analytics).
This integration typically provides contextual insights that are quickly and conveniently accessible, since they appear on the web page alongside the other, operational parts of the host application. Insights are provided through interactive data visualisations, such as charts, diagrams, filters, gauges, maps and tables, often combined as dashboards embedded within the system. This setup enables easier, in-depth data analysis without the need to switch and log in between multiple applications. Embedded analytics is also known as customer-facing analytics.
Embedded analytics is the integration of analytic capabilities into a host, typically browser-based, business-to-business, software as a service, application. These analytic capabilities would typically be relevant and contextual to the use-case of the host application.
The use-case is most commonly business-to-business, since businesses typically have more sophisticated analytic expectations and needs than consumers. Here, though, the word "business" in "business-to-business software as a service" could also refer to organisational, operational use-cases that ultimately benefit consumers (such as healthcare, for instance), e.g.: clinics & hospitals, care & correctional facilities, educational establishments (on/offline), government departments, municipalities, museums, not-for-profit organisations, and overseers & regulators, amongst others.
Business-to-business-to-consumer use-cases might also be possible, for example a wealth management software as a service application serving wealth management organisations, where a user might be an advisor to consumers.
History
The term "embedded analytics" was first used by Howard Dresner, a consultant, author, former Gartner analyst and inventor of the term "business intelligence", while he was working for Hyperion Solutions, a company that Oracle bought in 2007. Oracle then began using the term "embedded analytics" in its 2009 press release for Oracle Rapid Planning.
Considerations with embedded analytics
When evaluating embedding analytics, consideration would normally be given to integration at various levels, these would likely include: security integration, data integration, application logic integration, business rules integration, and user experience integration.
This is in contrast to traditional BI, which expects users to leave their workflow applications to look at data insights in a separate set of tools. This immediacy makes embedded analytics much more intuitive and likely to be valued by users. A December 2016 report from Nucleus Research found that using BI tools, which require toggling between applications, can take up as much as 1–2 hours of an employee's time each week, whereas embedded analytics eliminate the need to toggle between apps.
There is a spectrum of options for embedding analytics. At one end, at the outset, for example when developing a software as a service minimum viable product, developers will often select a visualisation library, since this is assumed to be the most flexible way to create unique and differentiated analytic experiences. At the other end of the spectrum are business intelligence tools; these might sacrifice some flexibility for developers, but make up for it with the maturity and sophistication of products optimised for data scientists and analysts.
With embedded analytics, developers and product managers are looking for some kind of compromise between those two extremes of flexibility and user sophistication: flexibility sufficient for product teams to innovate and differentiate, sophistication sufficient to provide advanced analytic capabilities yet without the user being a data scientist or necessarily having any analytic background experience or training. The objective would be intuitive, contextual analytics, consumed as regular web content, immersed into operational user experiences and workflows usable without any special knowledge or training required.
Use-cases for embedded analytics
The use-cases for embedded analytics are as diverse as the vertical (industry-specific) or horizontal (function, process or role-specific - across industries) host applications in which they are embedded, some examples include:
Vertical use-case examples
Automotive, reservation/rental & dealerships, education, energy management, Fintech (banking, asset management, wealth management), hospital management & healthcare (clinics, care-homes and in the field), learning management, property & facilities management, retail, staffing, supply chain management, transportation & fleet management, unified communications
Horizontal use-case examples
Advertising & multichannel marketing, customer relationship management, enterprise resource planning, human resources, human capital management, payroll & benefits, information technology service management, procurement and purchase-to-pay
Analytics vs analysis
A common perception is that analytics is mostly, or solely, about analysis. A key value from analytics is the ability to analyse; however, the potential for analytics can go substantially beyond this once embedded in the processes of a host software as a service application.
When considering user profiles, the difference becomes clearer:
With analysis, the user would be expected to be trained, experienced or at least familiar with the principles of analysis and might have a job title such as analyst, data analyst or data scientist. This person, for example, would understand chart selection; in other words, given a specific data set, which chart type(s) would best illustrate what can be learnt from the data. This person probably has a good understanding of data structures, might have the ability to write queries, might be familiar with data modelling and would likely have a strong statistical awareness.
With embedded analytics, the user would be expected to be trained, experienced or at least familiar with business processes and might have no formal or other data science skills. This person is more interested in outcomes that can be driven from analytics rather than exploration into inconsistencies or anomalies that can be found in the data.
With embedded analytics, a software as a service user is probably less interested in spending much time analysing, their purpose, with analytic content immersed as part of a business process, is to drive outcomes, potentially at scale. For this persona, an analytic journey might start with a dashboard helpful in highlighting an anomaly which needs prioritised attention. Clicking on that anomaly, guides the user in understanding the root-cause that's causing the need for action. Once a root-cause has been investigated, the embedded analytics can place the user in the part of the host application to act, potentially at scale. So rather than reschedule, reorder, reassign one thing, the embedded analytics can apply business rules and pass parameters to the host application to act 100 or 1,000 times, instead of once, where each action may be individually customised.
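The bulk-action pattern described above can be sketched in a few lines. All names here (Anomaly, bulk_reorder, place_order) are hypothetical illustrations, not from any real product: an embedded-analytics layer flags anomalies and passes parameters back to the host application, which applies the business rule once per flagged record instead of requiring one-at-a-time edits.

```python
from dataclasses import dataclass

@dataclass
class Anomaly:
    sku: str
    stock: int
    reorder_point: int

def bulk_reorder(anomalies, place_order):
    """Apply a simple business rule to every flagged record at once."""
    orders = []
    for a in anomalies:
        if a.stock < a.reorder_point:            # business rule
            qty = a.reorder_point * 2 - a.stock  # individually customised quantity
            orders.append(place_order(a.sku, qty))
    return orders

# place_order stands in for a call into the host application's API.
flagged = [Anomaly("A1", 3, 10), Anomaly("B2", 50, 10), Anomaly("C3", 0, 5)]
orders = bulk_reorder(flagged, lambda sku, qty: (sku, qty))
print(orders)  # A1 and C3 are reordered; B2 passes the rule
```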
Types of embedded analytics products
When considering the integration of analytics into your solution, you can choose from various categories of software products. These options can be broadly classified into three main groups:
Embedded analytics for SaaS software: Specifically designed for software as a service applications, this category offers specialized embedded analytics solutions. They are ideal for enhancing the analytics capabilities of software as a service platforms, enabling data-driven insights and features tailored to software as a service environments. Examples include GoodData, icCube, Logi Analytics, Looker, and Sisense.
Business intelligence software: If your goal is to incorporate pre-existing, comprehensive Business Intelligence software into your solution, you can opt for this category. It allows for seamless integration of generic BI tools for data analysis and reporting.
JavaScript graphics library: If you prefer to build analytics solutions from the ground up, utilizing JavaScript graphics libraries provides the flexibility to create custom analytics components tailored to your specific needs.
References
External links
Gartner Glossary - Embedded Analytics
Forbes - The Competitive Advantages Of Embedded Analytics
Embedded Analytics Technology Stack Comparisons
Types of analytics
Big data
Business intelligence terms
Data management | Embedded analytics | [
"Technology"
] | 1,601 | [
"Data management",
"Data",
"Big data"
] |
47,485,605 | https://en.wikipedia.org/wiki/FN%20Canis%20Majoris | FN Canis Majoris is a binary star system in the southern constellation Canis Major, near the northern constellation border with Monoceros. It is dimly visible to the naked eye with a combined apparent visual magnitude of 5.41. The system is located at a distance of approximately 3,000 light years from the Sun based on parallax, and is drifting further away with a radial velocity of +31 km/s. It is a runaway star associated with the Sh 2-296 nebula in the CMa OB1 association, and has a conspicuous bow-shock feature.
The brighter component is a visual magnitude 5.69 B-type star that has been assigned various stellar classifications from B0 III/IV to B2 Ia/ab, suggesting it is in an evolved state. In 1967, Graham Hill announced his discovery that the star, then known as HD 53974, is a variable star. It was given its variable star designation, FN Canis Majoris, in 1970. In the past it was classified as a Beta Cephei type variable star with an apparent magnitude measured to vary between +5.38 and +5.42 over a period of 36.7 hours, but it is no longer considered to be one. This is a massive star, with mass estimates ranging from 19 to 36 times the mass of the Sun and luminosity estimates of 122,079 to 690,000 times the Sun's luminosity. The magnitude 7.04 companion is located at an angular separation of from the primary at a position angle of 111°, as of 2003.
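The quoted 0.04-magnitude amplitude corresponds to only a few percent change in flux, which can be checked with Pogson's relation (flux ratio = 10^(0.4 Δm)); a quick sketch:

```python
# Pogson's relation between magnitude difference and flux ratio
m_bright, m_faint = 5.38, 5.42          # amplitude quoted in the article
ratio = 10 ** (0.4 * (m_faint - m_bright))
print(f"Peak-to-trough brightness ratio: {ratio:.4f}")
```

The ratio is about 1.04, i.e. the star's brightness varied by roughly 4%.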
References
B-type giants
Canis Major
Canis Majoris, FN
2678
053974
34301
Durchmusterung objects
Binary stars | FN Canis Majoris | [
"Astronomy"
] | 356 | [
"Canis Major",
"Constellations"
] |
47,485,661 | https://en.wikipedia.org/wiki/NW%20Puppis | NW Puppis, also known as υ2 Puppis, is a star in the constellation Puppis. Located around 910 light-years distant, it shines with a luminosity approximately 1,108 times that of the Sun and has a surface temperature of .
The star's variability was first detected in 1970 (based on observations made at La Silla Observatory), and announced by Armand van Hoof in 1973. It was given its variable star designation in 1977. Anamarija Stankov ruled this star out as a Beta Cephei variable, but the GCVS and the International Variable Star Index classify it as both a Beta Cephei variable and a rotating ellipsoidal variable. The GCVS lists its period as 0.125 days, but TESS data show lower-frequency, stochastic brightness variations.
Neither component of this double is given a letter in Lacaille's catalogue or the British Association star catalogue. Gould gave them the designations (Latin letter) v1 and v2 Puppis, but these are rarely used. Lacaille applied the Greek letter υ to the star now called υ Carinae. The designation υ2 first appeared in several catalogues at the end of the 19th century.
References
Puppis
B-type main-sequence stars
Puppis, NW
2790
057219
035406
Durchmusterung objects
Beta Cephei variables
Emission-line stars
Puppis, Upsilon2 | NW Puppis | [
"Astronomy"
] | 298 | [
"Puppis",
"Constellations"
] |
49,245,321 | https://en.wikipedia.org/wiki/Chinese%20Chemical%20Society%20%28Beijing%29 | The Chinese Chemical Society (CCS; ) is a professional society of chemists headquartered in Beijing. It is part of the China Association for Science and Technology. Current membership is at around 55,000.
History
The CCS was founded in Nanjing on August 4, 1932. It merged with the Chinese Chemical Engineering Society in 1959. The organizations were separated again in 1963. The CCS has been a member of the International Union of Pure and Applied Chemistry (IUPAC) since 1980 and of the Federation of Asian Chemical Societies (FACS) since 1984.
International affiliations
Pacific Polymer Federation (PPF)
International Society of Electrochemistry (ISE)
International Association of Catalysis Societies (IACS)
International Confederation for Thermal Analysis and Calorimetry (ICTAC)
Publications
The CCS publishes many academic journals, including:
CCS Chemistry
Acta Chimica Sinica
Chinese J. Chemistry
Chinese Chemical Letters
Chemistry Bulletin
Acta Physico-Chimica Sinica
Journal of Inorganic Chemistry
Organic Chemistry
Analytical Chemistry
Journal of Applied Chemistry
Journal of Chromatography
Acta Polymerica Sinica
Chinese J. Polym. Sci.
Polymer Bulletin
Electrochemistry
Journal of Catalysis
Chinese J. Molecular Science
Journal of Fuel Chemistry and Technology
Journal of Structural Chemistry
University Chemistry
Journal of Chemical Education
See also
Chemical Society Located in Taipei
References
External links
Chinese Chemical Society website
Professional associations based in China
Chemistry societies
Science and technology in China
1932 establishments in China | Chinese Chemical Society (Beijing) | [
"Chemistry"
] | 293 | [
"Chemistry societies",
"nan",
"Chemistry organization stubs"
] |
49,247,763 | https://en.wikipedia.org/wiki/Fluoroethyl-L-tyrosine%20%2818F%29 | Fluoroethyl-L-tyrosine (18F), commonly known as [18F]FET, is a radiopharmaceutical tracer used in positron emission tomography (PET) imaging. This synthetic amino acid, labeled with the radioactive isotope fluorine-18, is valuable in neuro-oncology for diagnosing, planning treatment for, and following up on brain tumors such as gliomas. The tracer's ability to provide detailed metabolic imaging of tumors makes it an essential tool in the clinical management of brain cancer patients. Continued advancements in PET imaging technology and the development of more efficient synthesis methods are expected to further enhance the clinical utility of [18F]FET.
Radiosynthesis
There are two common pathways for the radiosynthesis of [18F]FET. The first utilizes a nucleophilic 18F-fluorination of ethylene glycol 1,2-ditosylate with a subsequent 18F-fluoroethylation of a precursor, the di-potassium salt of L-tyrosine. This sequence requires two purification steps, two different precursors and a dual-reactor synthesis module, which is not widely available in research or commercial centers. A schematic for this pathway is presented in Figure 1.
The second route of radiosynthesis is a direct nucleophilic 18F-fluorination of a TET (O-(2-tosyloxy-ethyl)-N-trityl-L-tyrosine tert-butyl ester) protected precursor, followed by acidic hydrolysis of the protecting groups. A schematic for this pathway is presented in Figure 2.
Mechanism of action
The use of radiolabeled amino acids for brain tumor imaging exploits the increased proliferation of tumor cells and the overexpression of amino acid transport systems observed in malignant brain tumors.
As far as the [18F]FET is concerned following intravenous injection it is transported into cells primarily through amino acid transporters, particularly system L transporters, which are upregulated in many tumor cells. Once inside the cells, [18F]FET does not undergo significant further metabolism but accumulates in tumor tissues, allowing for their visualization and quantification using PET imaging.
The differential uptake provides a high tumor-to-background contrast, facilitating the detection of primary and recurrent brain tumors. Unlike some other PET tracers, [18F]FET does not significantly accumulate in inflammatory tissues, reducing false positives and improving diagnostic specificity.
Animal studies
Animal studies in rodents have demonstrated high uptake of [18F]FET in brain tumors, with a significant tumor-to-brain ratio, making it a useful tracer for brain tumor imaging.
Heiss et al. conducted an in vitro and in vivo investigation of the transport mechanism and uptake of [18F]FET. The experiments utilized human colon carcinoma cells (SW 707) and xenotransplanted, tumor-bearing mice. [18F]FET was shown to be transported mainly (>80%) by the L-type amino acid transporter system, which was inhibited by 2-amino-2-norbornanecarboxylic acid (BCH), and was not incorporated into proteins in SW 707 cells. This study also helped to establish the half-life of [18F]FET in the plasma (94 min) and the brain-to-blood ratio (0.86), and showed statistically significantly higher uptake of [18F]FET in the xenotransplanted tumor than in any other organ besides the pancreas.
In 1999, biodistribution studies in mice with colon carcinoma cells were conducted by Wester et al. The study showed a high uptake of radioactivity in the pancreas (18% injected dose (ID)/g) at 60 min after injection of [18F]FET. The brain (2.17% ID/g) and the tumors (6.37% ID/g) showed moderate uptake of the radiotracer. Rapid distribution of [18F]FET, complete in less than 5 min, was observed for liver, kidney and blood. The other organs showed only slightly elevated uptake over time. [18F]FET remained intact in the tested tissue samples (pancreas, brain, tumor and plasma) and no incorporation of the radiotracer into proteins was observed.
Another biodistribution study was carried out by Wang et al. In this study, a comparison between [18F]FDG and [18F]FET in rats with gliomas showed a moderate uptake and a long retention time of [18F]FET in liver, kidneys, lung, heart and blood, whereas a diminished uptake was observed in healthy brain. The maximum uptake of [18F]FET and [18F]FDG in the glioma was observed at 60 min post injection (1.49% and 2.77% ID/g, respectively). The tumor-to-brain ratios were 3.15 for [18F]FET and 1.44 for [18F]FDG. PET images of [18F]FET showed higher uptake and better contrast for tumor versus healthy tissue.
Biodistribution studies in mice and rats have shown that [18F]FET is retained in tumor tissues and exhibits low uptake in inflammatory tissues, enhancing its specificity for tumor imaging. In vivo experiments have also indicated that [18F]FET can effectively differentiate between high-grade and low-grade tumors based on the level of tracer uptake. Additionally, longitudinal studies in animal models have shown that [18F]FET PET imaging can be used to monitor tumor progression and response to therapy, providing valuable insights into the efficacy of treatment regimens. These preclinical findings have laid the groundwork for the successful translation of [18F]FET PET imaging into clinical practice.
Medical use
[18F]FET radiotracer has several clinical applications, particularly in neuro-oncology:
Diagnosis of Brain Tumors - [18F]FET is used to differentiate between malignant and benign brain lesions. It is particularly useful in identifying gliomas, which typically exhibit high [18F]FET uptake.
Tumor Grading - intensity of [18F]FET uptake provides insights into the aggressiveness of the tumor. Higher uptake values are often associated with higher tumor grades and more aggressive behavior.
Treatment Planning - [18F]FET PET imaging assists in delineating tumor boundaries more accurately than conventional imaging modalities, crucial for planning surgical resection or radiotherapy to ensure maximal tumor removal while sparing healthy tissue.
Monitoring Treatment Response - by comparing pre- and post-treatment scans, clinicians can assess the effectiveness of therapeutic interventions. A decrease in [18F]FET uptake post-treatment might indicate a positive response.
Detection of Recurrence - [18F]FET is effective in distinguishing between tumor recurrence and post-treatment changes such as radiation necrosis, critical for appropriate clinical management.
Dosimetry
Initial [18F]FET dosimetry was estimated by Pauleit et al. based on dynamic human PET scans acquired at 70 and 200 min after injection of 400 MBq of the radiotracer. The highest dose was received by the bladder (0.060 mGy/MBq), followed by the uterus (0.022 mGy/MBq) and kidneys (0.020 mGy/MBq). No increased uptake was observed in the liver, bone, intestine, lung, heart, or pancreas. The effective dose determined by the human study was 0.0165 mSv/MBq, whereas the effective dose based on biodistribution data from mice was estimated to be 0.009 mSv/MBq.
The recommended activity for an adult (weight 70 kg) is in the range of 180 to 250 MBq.
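Multiplying the recommended activity range by the effective-dose coefficient from the human study above gives the expected effective dose per administration; a quick check:

```python
# Effective dose = administered activity (MBq) x dose coefficient (mSv/MBq)
DOSE_COEFF = 0.0165  # mSv/MBq, from the human dosimetry study cited above

for activity_mbq in (180, 250):  # recommended adult activity range
    dose_msv = activity_mbq * DOSE_COEFF
    print(f"{activity_mbq} MBq -> {dose_msv:.2f} mSv effective dose")
```

For the recommended range this works out to roughly 3 to 4 mSv per examination.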
Based on the Radiation Dose to Patients from Radiopharmaceuticals (4th addendum), the absorbed doses in human organs are presented in the table below.
Distribution
[18F]FET has a relatively short shelf life as a result of the 109.8-minute half-life of the radioactive isotope fluorine-18. However, in comparison to radiotracers labelled with the carbon-11 isotope, this still allows the radiotracer to be distributed by land and air within a delivery radius of up to 6 hours.
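The 6-hour delivery radius can be related to the half-life by simple exponential decay. A sketch (the conclusion that roughly a tenth of the activity survives a 6-hour delivery is our own arithmetic, not stated in the source):

```python
HALF_LIFE_MIN = 109.8  # fluorine-18 half-life in minutes

def fraction_remaining(t_min: float) -> float:
    """Fraction of initial activity left after t_min minutes of decay."""
    return 0.5 ** (t_min / HALF_LIFE_MIN)

# After a 6-hour (360 min) delivery, only about 10% of the activity remains,
# so production batches must be scaled up accordingly.
print(f"Fraction remaining after 6 h: {fraction_remaining(360):.3f}")
```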
Currently, [18F]FET is commercially available in Europe as IASOglio© in France (MA number 34009 550 105 1 7/34009 550 105 2 4) and in Poland (MA number 27420). The Marketing Authorization Holder is the radiopharmaceutical company Curium™.
See also
List of PET Radiotracers
Fluorodeoxyglucose (18F)
Methionine
Fluorodopa
References
PET radiotracers
Alpha-Amino acids
Amino acid derivatives
Phenol ethers
Fluoroethyl ethers | Fluoroethyl-L-tyrosine (18F) | [
"Chemistry"
] | 1,931 | [
"Chemicals in medicine",
"Medicinal radiochemistry",
"PET radiotracers"
] |