source | text |
|---|---|
https://en.wikipedia.org/wiki/Toponomics | Toponomics is a discipline in systems biology, molecular cell biology, and histology concerned with the study of the toponome of organisms. It is the field of study that aims to decode the complete toponome in health and disease (the human toponome project), described as the next big challenge in human biotechnology after the decoding of the human genome.
A toponome is the spatial network code of proteins and other biomolecules in morphologically intact cells and tissues.
The spatial organization of biomolecules in cells is directly revealed by imaging cycler microscopy with parameter- and dimension-unlimited functional resolution. The resulting toponome structures are hierarchically organized and can be described by a three symbol code.
Etymology
The terms toponome and toponomics were introduced in 2003 by Walter Schubert based on observations with imaging cycler microscopes (ICM).
Toponome is derived from the ancient Greek nouns topos (τόπος, 'place, position') and nomos (νόμος, 'law'). Hence toponomics is a descriptive term reflecting the fact that the spatial network of biomolecules in cells follows topological rules enabling coordinated actions.
References
Systems biology
Omics
Topology |
https://en.wikipedia.org/wiki/Groupoid%20object | In category theory, a branch of mathematics, a groupoid object is both a generalization of a groupoid which is built on richer structures than sets, and a generalization of a group object when the multiplication is only partially defined.
Definition
A groupoid object in a category C admitting finite fiber products consists of a pair of objects (R, U) together with five morphisms
s, t : R → U (source and target), e : U → R (identity), m : R ×_U R → R (composition), i : R → R (inversion),
satisfying the following groupoid axioms
s ∘ e = t ∘ e = 1_U, s ∘ m = s ∘ p₁, t ∘ m = t ∘ p₂, where p₁, p₂ : R ×_U R → R are the two projections,
(associativity) m ∘ (1_R ×_U m) = m ∘ (m ×_U 1_R),
(unit) m ∘ (e ∘ s, 1_R) = m ∘ (1_R, e ∘ t) = 1_R,
(inverse) i ∘ i = 1_R, s ∘ i = t, t ∘ i = s, m ∘ (i, 1_R) = e ∘ t, m ∘ (1_R, i) = e ∘ s.
Examples
Group objects
A group object is a special case of a groupoid object, where U is a terminal object of C and R = G. One recovers therefore topological groups by taking the category of topological spaces, or Lie groups by taking the category of manifolds, etc.
Groupoids
A groupoid object in the category of sets is precisely a groupoid in the usual sense: a category in which every morphism is an isomorphism. Indeed, given such a category C, take U to be the set of all objects in C, R the set of all arrows in C, and the five morphisms given by s(f : x → y) = x, t(f : x → y) = y, e(x) = 1_x, m(f, g) = g ∘ f, and i(f) = f⁻¹. When the term "groupoid" could naturally refer to a groupoid object in some particular category in mind, the term groupoid set is used to refer to a groupoid object in the category of sets.
However, unlike in the previous example with Lie groups, a groupoid object in the category of manifolds is not necessarily a Lie groupoid, since the maps s and t fail to satisfy further requirements (they are not necessarily submersions).
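These axioms can be checked concretely in the category of finite sets. The following is a minimal Python sketch of the pair groupoid on a finite set U, where an arrow (x, y) goes from x to y; the encoding and names are illustrative, not part of the formal definition above.

```python
from itertools import product

# Pair groupoid on a finite set U: R = U x U, an arrow (x, y) goes from x to y.
U = {0, 1, 2}
R = set(product(U, U))

s = lambda r: r[0]            # source
t = lambda r: r[1]            # target
e = lambda x: (x, x)          # identity arrow at x
i = lambda r: (r[1], r[0])    # inversion

def m(r2, r1):
    """Compose r1 followed by r2; defined only when t(r1) == s(r2)."""
    assert t(r1) == s(r2)
    return (s(r1), t(r2))

# Unit axioms: composing with identities is a no-op.
assert all(m(r, e(s(r))) == r and m(e(t(r)), r) == r for r in R)
# Inverse axioms: an arrow composed with its inverse is an identity.
assert all(m(i(r), r) == e(s(r)) and m(r, i(r)) == e(t(r)) for r in R)
# Associativity on all composable triples.
triples = [(a, b, c) for a in R for b in R for c in R
           if t(a) == s(b) and t(b) == s(c)]
assert all(m(c, m(b, a)) == m(m(c, b), a) for a, b, c in triples)
print("all groupoid axioms hold")
```

Since every element of U × U is an arrow, every arrow is invertible by swapping endpoints, which is what makes the pair groupoid the simplest nontrivial example.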
Groupoid schemes
A groupoid S-scheme is a groupoid object in the category of schemes over some fixed base scheme S. If U = S, then a groupoid scheme (where s = t are necessarily the structure map) is the same as a group scheme. A groupoid scheme is also called an algebraic groupoid, to convey the idea that it is a generalization of algebraic groups and their actions.
For example, suppose an algebraic group G acts from the right on a scheme U. Then take R = U × G, s the projection, and t the given action. This determines a groupoid scheme.
Constructions |
https://en.wikipedia.org/wiki/Torsor%20%28algebraic%20geometry%29 | In algebraic geometry, a torsor or a principal bundle is an analogue of a principal bundle in algebraic topology. Because there are few open sets in the Zariski topology, it is more common to consider torsors in the étale topology or some other flat topologies. The notion also generalizes a Galois extension in abstract algebra. Though other notions of torsor are known in more general contexts (e.g. over stacks), this article focuses on torsors over schemes, the original setting in which torsors were conceived. The word torsor comes from the French torseur. Torsors are widely discussed, for instance, in Michel Demazure's and Pierre Gabriel's famous book Groupes algébriques, Tome I.
Definition
Let τ be a Grothendieck topology and X a scheme. Moreover, let G be a group scheme over X. A G-torsor (or principal G-bundle) P over X is the data of a scheme P and a morphism P → X with a G-invariant action on P that is locally trivial in τ, i.e. there exists a covering {U_i → X} such that the base change P ×_X U_i is isomorphic to the trivial torsor G ×_X U_i.
First remarks
A line bundle can be seen as a 𝔾_m-torsor and, more generally, a rank-n vector bundle can be seen as a GL_n-torsor, for some n.
It is common to consider a torsor for not just a group scheme but more generally for a group sheaf (e.g., fppf group sheaf).
The category of torsors over a fixed base forms a stack. Conversely, a prestack can be stackified by taking the category of torsors (over the prestack).
Examples and basic properties
Examples
For a group scheme G, a G-torsor on X is a principal G-bundle on X.
If L/K is a finite Galois extension, then Spec L → Spec K is a Gal(L/K)-torsor (roughly because the Galois group acts simply transitively on the roots). This fact is a basis for Galois descent. See integral extension for a generalization.
Remark: A G-torsor P over X is isomorphic to a trivial torsor if and only if the set of sections P(X) is nonempty. (Proof: if there is an s ∈ P(X), then g ↦ g · s is an isomorphism from the trivial torsor G to P.)
Let P be a G-torsor with a local trivialization in the étale topology. A trivial torsor admits a section: thus, the |
https://en.wikipedia.org/wiki/Thermodynamic%20solar%20panel | A thermodynamic solar panel is a type of air source heat pump. Instead of a large fan to take energy from the air, it has a flat plate collector. This means the system gains energy from the sun as well as the ambient air. Thermodynamic water heaters use a compressor to transfer the collected heat from the panel to the hot water system using refrigerant fluid that circulates in a closed cycle.
Renewable Heat Incentive
In the UK, thermodynamic solar panels cannot be used to claim the Renewable Heat Incentive. This is due to the lack of technical standards for testing and installation. The UK Microgeneration Certification Scheme is working to develop a testing standard, based either on MIS 3001 or MIS 3005, or a brand-new scheme document if appropriate.
Performance
Lab testing has been carried out by the Wärmepumpen-Testzentrum Buchs (WPZ) in Buchs, Switzerland, on an Energi Eco 200esm/i thermodynamic solar panel system. This showed a coefficient of performance (COP) of 2.8 or 2.9 (depending on tank volume).
In the UK, the first independent test is under way at Narec Distributed Energy. So far, data is available for January to April 2014. As with the Carnot cycle, the achievable efficiency depends strongly on the temperatures on both sides of the system.
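That temperature dependence can be made concrete with the Carnot limit on a heat pump's heating COP, which is T_hot / (T_hot − T_cold) with temperatures in kelvin. A small Python sketch follows; the tank and ambient temperatures are illustrative, not values from the tests above.

```python
def carnot_cop_heating(t_hot_c: float, t_cold_c: float) -> float:
    """Upper (Carnot) bound on heating COP, temperatures given in Celsius."""
    t_hot = t_hot_c + 273.15
    t_cold = t_cold_c + 273.15
    return t_hot / (t_hot - t_cold)

# A small panel-to-tank temperature lift gives a high ceiling; a large lift a low one.
for ambient in (-5, 5, 15):
    limit = carnot_cop_heating(55.0, ambient)   # 55 °C hot-water tank
    print(f"ambient {ambient:+3d} °C -> Carnot COP limit {limit:.1f}")
```

Real systems achieve only a fraction of this theoretical ceiling, which is consistent with the measured COP of 2.8 to 2.9 reported above.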
References
External links
Narec Distributed Energy thermodynamic solar panel test data
Sustainable technologies
Heat pumps
Heating
Energy conservation
Building engineering |
https://en.wikipedia.org/wiki/AES67 | AES67 is a technical standard for audio over IP and audio over Ethernet (AoE) interoperability. The standard was developed by the Audio Engineering Society and first published in September 2013. It is a layer 3 protocol suite based on existing standards and is designed to allow interoperability between various IP-based audio networking systems such as RAVENNA, Livewire, Q-LAN and Dante.
AES67 promises interoperability between previously competing networked audio systems and long-term network interoperation between systems. It also provides interoperability with layer 2 technologies, like Audio Video Bridging (AVB). Since its publication, AES67 has been implemented independently by several manufacturers and adopted by many others.
Overview
AES67 defines requirements for synchronizing clocks, setting QoS priorities for media traffic, and initiating media streams with standard protocols from the Internet protocol suite. AES67 also defines audio sample format and sample rate, supported number of channels, as well as IP data packet size and latency/buffering requirements.
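To make the packet accounting concrete: at AES67's default 48 kHz sample rate and 1 ms packet time, a stream carries 48 samples per channel per packet. The sketch below uses an illustrative helper name; the payload math is standard RTP accounting, and the header sizes are the usual IPv4/UDP/RTP values.

```python
def aes67_payload_bytes(channels: int, sample_rate: int = 48000,
                        packet_time_ms: float = 1.0,
                        bytes_per_sample: int = 3) -> int:
    """RTP payload size in bytes for a linear-PCM (L24) stream."""
    samples_per_packet = int(sample_rate * packet_time_ms / 1000)
    return samples_per_packet * channels * bytes_per_sample

# 8 channels of 24-bit audio at 1 ms packet time:
payload = aes67_payload_bytes(channels=8)
total = payload + 20 + 8 + 12   # IPv4 (20) + UDP (8) + RTP (12) headers
print(payload, total)           # the packet must stay under the Ethernet MTU
```

Shorter packet times lower latency but raise per-packet header overhead, which is the trade-off behind the standard's latency/buffering requirements.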
The standard calls out several protocol options for device discovery but does not require any to be implemented. Session Initiation Protocol is used for unicast connection management. No connection management protocol is defined for multicast connections.
Synchronization
AES67 uses IEEE 1588-2008 Precision Time Protocol (PTPv2) for clock synchronisation.
For standard networking equipment, AES67 defines configuration parameters for a "PTP profile for media applications", based on IEEE 1588 delay request-response sync and (optionally) peer-to-peer sync (IEEE 1588 Annexes J.3 and J.4); event messages are encapsulated in IPv4 packets over UDP transport (IEEE 1588 Annex D). Some of the default parameters are adjusted; specifically, logSyncInterval and logMinDelayReqInterval are reduced to improve accuracy and startup time.
Clock Grade 2 as defined in AES11 Digital Audio Reference Signal (DARS) is signal |
https://en.wikipedia.org/wiki/Algorithmic%20program%20debugging | Algorithmic debugging (also called declarative debugging) is a debugging technique that compares the results of sub-computations with what the programmer intended. The technique constructs an internal representation of all computations and sub-computations performed during the execution of a buggy program and then asks the programmer about the correctness of such computations. By asking the programmer questions or using a formal specification, the system can identify precisely where in a program a bug is located. Debugging techniques can dramatically reduce the time and effort spent on debugging.
Overview
Program debugging is an extremely common part of software development. Until the 1980s the craft of program debugging, practiced by every programmer, was without any theoretical foundation. In the early 1980s, systematic and principled approaches to program debugging were developed. In general, a bug occurs when a programmer has a specific intention regarding what the program should do, yet the program actually written exhibits a different behavior than intended in a particular case.
One way of organizing the debugging process is to automate it (at least partially) via an algorithmic debugging technique. The idea of algorithmic debugging is to have a tool that guides the programmer along the debugging process interactively: It does so by asking the programmer about possible bug sources.
The algorithmic debugging technique constructs an internal representation of all computations and sub-computations performed during the execution of a buggy program (an execution tree). Then, it asks the programmer about the correctness of such computations. The programmer answers "YES" when the result is correct or "NO" when the result is wrong. Some algorithmic debuggers also accept the answer "I don't know" when the programmer cannot give an answer (e.g., because the question is too complex). Thus, the answers of the programmer guide the search for the bug until it is isolated |
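The search procedure can be sketched in a few lines of Python. The execution tree, function names, and oracle below are invented for illustration: each node records a call and its observed result, and the debugger descends into the topmost wrong node all of whose sub-computations are right, which is where the bug must live.

```python
class Node:
    """One node of the execution tree: a call and its observed result."""
    def __init__(self, call, result, children=()):
        self.call, self.result, self.children = call, result, list(children)

def find_buggy(node, is_correct):
    """Return the node containing the bug, or None if this subtree is correct.
    A node is buggy when its result is wrong but all its children are right."""
    if is_correct(node):
        return None
    for child in node.children:
        culprit = find_buggy(child, is_correct)
        if culprit is not None:
            return culprit
    return node   # wrong result, all sub-computations right -> bug is here

# Toy run: a broken `square` makes `sum_of_squares` come out wrong.
tree = Node("sum_of_squares([2, 3])", 12, [
    Node("square(2)", 4),   # correct
    Node("square(3)", 8),   # wrong: intended result is 9
])
# The oracle stands in for the programmer's YES/NO answers.
intended = {"sum_of_squares([2, 3])": 13, "square(2)": 4, "square(3)": 9}
buggy = find_buggy(tree, lambda n: n.result == intended[n.call])
print("bug located in:", buggy.call)
```

In a real debugger the oracle is the programmer (or a formal specification), and smarter query strategies, such as divide-and-query, reduce the number of questions asked.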
https://en.wikipedia.org/wiki/Quantum%20Artificial%20Intelligence%20Lab | The Quantum Artificial Intelligence Lab (also called the Quantum AI Lab or QuAIL) is a joint initiative of NASA, Universities Space Research Association, and Google (specifically, Google Research) whose goal is to pioneer research on how quantum computing might help with machine learning and other difficult computer science problems. The lab is hosted at NASA's Ames Research Center.
History
The Quantum AI Lab was announced by Google Research in a blog post on May 16, 2013. At the time of launch, the Lab was using the most advanced commercially available quantum computer, D-Wave Two from D-Wave Systems.
On October 10, 2013, Google released a short film describing the current state of the Quantum AI Lab.
On October 18, 2013, Google announced that it had incorporated quantum physics into Minecraft.
In January 2014, Google reported results comparing the performance of the D-Wave Two in the lab with that of classical computers. The results were ambiguous and provoked heated discussion on the Internet.
On 2 September 2014, it was announced that the Quantum AI Lab, in partnership with UC Santa Barbara, would be launching an initiative to create quantum information processors based on superconducting electronics.
On October 23, 2019, the Quantum AI Lab announced in a paper that it had achieved quantum supremacy.
See also
Artificial intelligence
Glossary of artificial intelligence
Google Brain
Google X
References
External links
(NASA)
(USRA)
(Google)
(Google Quantum AI)
(Quantum Artificial Intelligence)
Google Plus profile
Applied machine learning
Google |
https://en.wikipedia.org/wiki/Windows%20Phone%208.1 | Windows Phone 8.1 is the third generation of Microsoft's Windows Phone mobile operating system, succeeding Windows Phone 8. Rolled out at Microsoft's Build Conference in San Francisco, California, on April 2, 2014, it was released in final form to Windows Phone developers on April 14, 2014 and reached general availability on August 4, 2014. All Windows Phones running Windows Phone 8 can be upgraded to Windows Phone 8.1, with release dependent on carrier rollout dates.
Windows Phone 8.1 is also the last version that uses the Windows Phone brand name, as it was succeeded by Windows 10 Mobile. Some Windows Phone 8.1 devices are capable of being upgraded to Windows 10 Mobile, although Microsoft delayed the upgrade and reduced the supported device list from its initial promise. Support for Windows Phone 8.1 ended on July 11, 2017.
History
Windows Phone 8.1 was first rumored to be Windows Phone Blue, a series of updates to Microsoft's mobile operating system that would coincide with the release of Windows 8.1. Although Microsoft had originally planned to release WP8.1 in late 2013, shortly after the release of its PC counterpart, general distribution of the new operating system was pushed back until early 2014.
Instead of waiting over a year to add new features to Windows Phone 8, Microsoft opted to release three incremental updates to its existing mobile OS. These updates are delivered with corresponding firmware updates for the specific devices. The updates included GDR2 (Lumia Amber), which introduced features such as "Data Sense", and GDR3 (Lumia Black), which brought support for quad-core processors, 1080p high-definition screens of up to six inches, the addition of a "Driving Mode," and extra rows of live tiles for larger "phablet" devices.
The updated operating system's final name was leaked to the public when Microsoft released the Windows Phone 8.1 SDK to developers on February 10, 2014, but it wasn't until Microsoft's Build conference keynote on April 2, 2014 w |
https://en.wikipedia.org/wiki/Zemor%27s%20decoding%20algorithm | In coding theory, Zemor's algorithm, designed and developed by Gilles Zemor, is a recursive low-complexity approach to code construction. It is an improvement over the algorithm of Sipser and Spielman.
Zemor considered a typical class of Sipser–Spielman constructions of expander codes, where the underlying graph is a bipartite graph. Sipser and Spielman introduced a constructive family of asymptotically good linear error-correcting codes together with a simple parallel algorithm that will always remove a constant fraction of errors. The article is based on Dr. Venkatesan Guruswami's course notes.
Code construction
Zemor's algorithm is based on a type of expander graph called a Tanner graph. The construction of the code was first proposed by Tanner. The codes are based on the double cover of a d-regular expander G, which is a bipartite graph. G = (V, E), where V is the set of vertices and E is the set of edges, and V = A ∪ B, where A and B denote the two sets of vertices. Let n be the number of vertices in each group, i.e. |A| = |B| = n. The edge set E is of size N = dn, and every edge in E has one endpoint in A and one in B. E(v) denotes the set of edges containing the vertex v.
Assume an ordering on V, and therefore an ordering on the edges of E(v) for every v ∈ V. Let F = GF(2) be the binary field, and for a word x in F^N, let the subword of the word be indexed by E(v). Let that subword be denoted by (x)_{E(v)}. The subsets of vertices A and B induce for every word x ∈ F^N a partition into non-overlapping subwords (x)_{E(v)}, where v ranges over the elements of A (and likewise over B).
For constructing a code C, consider a linear subcode C₀, which is a code of length d over the alphabet F, whose size q is 2. For any vertex v, let v(1), …, v(d) be some ordering of the d vertices adjacent to v. In this code, each bit x_e is linked with an edge e of E.
We can define the code C to be the set of binary vectors x = (x₁, …, x_N) of F^N such that, for every vertex v of V, (x)_{E(v)} is a codeword of C₀. In this case, we can consider a special case when every vertex of B is adjacent to exactly 2 vertices of A. It means that A and B make up, respectively, the vertex set and edge set of a regular |
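The membership condition (every vertex sees a local codeword on its incident edges) can be illustrated with a toy Python sketch. The graph here is a complete bipartite graph rather than an expander, and the local code is a simple parity check; both are illustrative choices, not Zemor's actual parameters.

```python
from itertools import product

# Complete bipartite graph K_{3,3}: A = {a0,a1,a2}, B = {b0,b1,b2};
# edge (i, j) joins a_i and b_j, so every vertex has degree d = 3.
edges = [(i, j) for i in range(3) for j in range(3)]

def in_C0(bits):
    """Local code C0: length-3 binary even-weight (parity-check) code."""
    return sum(bits) % 2 == 0

def in_tanner_code(x):
    """x maps each edge to a bit; require a C0 codeword at every vertex."""
    for i in range(3):   # vertices in A
        if not in_C0([x[(i, j)] for j in range(3)]):
            return False
    for j in range(3):   # vertices in B
        if not in_C0([x[(i, j)] for i in range(3)]):
            return False
    return True

codewords = []
for bits in product((0, 1), repeat=9):
    x = dict(zip(edges, bits))
    if in_tanner_code(x):
        codewords.append(x)
print(len(codewords), "codewords out of", 2 ** 9)
```

The local constraints cut the 2⁹ possible edge-labelings down to a small linear code; with a good expander and a stronger local code, the same principle yields asymptotically good codes with fast decoding.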
https://en.wikipedia.org/wiki/Shearlet | In applied mathematical analysis, shearlets are a multiscale framework which allows efficient encoding of anisotropic features in multivariate problem classes. Originally, shearlets were introduced in 2006 for the analysis and sparse approximation of functions f ∈ L²(ℝ²). They are a natural extension of wavelets, to accommodate the fact that multivariate functions are typically governed by anisotropic features such as edges in images, since wavelets, as isotropic objects, are not capable of capturing such phenomena.
Shearlets are constructed by parabolic scaling, shearing, and translation applied to a few generating functions. At fine scales, they are essentially supported within skinny and directional ridges following the parabolic scaling law, which reads length² ≈ width. Similar to wavelets, shearlets arise from the affine group and allow a unified treatment of the continuum and digital situation, leading to faithful implementations. Although they do not constitute an orthonormal basis for L²(ℝ²), they still form a frame allowing stable expansions of arbitrary functions f ∈ L²(ℝ²).
One of the most important properties of shearlets is their ability to provide optimally sparse approximations (in the sense of optimality in L²(ℝ²)) for cartoon-like functions f. In imaging sciences, cartoon-like functions serve as a model for anisotropic features; they are compactly supported in [0,1]² and smooth apart from a closed piecewise singularity curve with bounded curvature. The decay rate of the L²-error of the N-term shearlet approximation f_N, obtained by taking the N largest coefficients from the shearlet expansion, is in fact optimal up to a log-factor:
‖f − f_N‖₂² ≤ C N⁻² (log N)³, as N → ∞,
where the constant C depends only on the maximum curvature of the singularity curve and the maximum magnitudes of f and its derivatives. This approximation rate significantly improves upon the best N-term approximation rate of wavelets, which provide only O(N⁻¹) for this class of functions.
Shearlets are to date the only directional representation system that provides sparse approximation of ani |
https://en.wikipedia.org/wiki/OpenPIC%20and%20MPIC | In order to compete with Intel's Advanced Programmable Interrupt Controller (APIC), which had enabled the first Intel 486-based multiprocessor systems, in early 1995 AMD and Cyrix proposed the somewhat similar-in-purpose OpenPIC architecture, supporting up to 32 processors. The OpenPIC architecture had at least declarative support from IBM and Compaq around 1995. No x86 motherboard was released with OpenPIC, however. After OpenPIC's failure in the x86 market, AMD licensed the Intel APIC architecture for its AMD Athlon and later processors.
IBM, however, developed its Multiprocessor Interrupt Controller (MPIC) based on the OpenPIC register specification. In the reference IBM design, the processors share the MPIC over a DCR bus, with their access to the bus controlled by a DCR arbiter. MPIC supports up to four processors and up to 128 interrupt sources. Through various implementations, the MPIC was included in PowerPC reference designs and some retail computers.
IBM used an MPIC based on OpenPIC 1.0 in its RS/6000 F50 and one based on OpenPIC 1.2 in its RS/6000 S70. Both of these systems also used a dual 8259 on their PCI-ISA bridges. An IBM MPIC was also used in the RS/6000 7046 Model B50.
The Apple Hydra Mac I/O (MIO) chip (from the 1990s classic Mac OS era) implemented an MPIC alongside a SCSI controller, ADB controller, GeoPort controller, and timers. The Apple implementation of "Open PIC" (as the Apple documentation of this era spells it) in their first MIO chip for the Common Hardware Reference Platform was based on version 1.2 of the register specification and supported up to two processors and up to 20 interrupt sources. An MPIC was also incorporated in the newer K2 I/O controller used in the Power Mac G5s.
Freescale also uses an MPIC ("compatible with the Open PIC") on all its PowerQUICC and QorIQ processors. The Linux Kernel-based Virtual Machine (KVM) supports a virtualized MPIC with up to 256 interrupts, based on the Freescale variants.
See also
Pr |
https://en.wikipedia.org/wiki/Advanced%20Vehicle%20Technology%20Competitions | Advanced vehicle technology competitions (AVTCs) are competitions sponsored by the United States Department of Energy, in partnership with private industry and universities, which stimulate "the development of advanced propulsion and alternative fuel technologies and provide the training ground for the next generation of automotive engineers."
Overview
Since 1988, the U.S. Department of Energy has sponsored advanced vehicle technology competitions (AVTCs) in partnership with the North American automotive industry. Managed by the Argonne National Laboratory, AVTCs represent a unique coalition of government, industry and academic partners who join forces to execute North America's premier collegiate automotive engineering competitions. AVTCs provide a challenging, real-world training ground for North America's future engineers and automotive leaders and accelerate the development and demonstration of technologies of interest to the DOE and the automotive industry.
Early years
History of competitions
1988-1990 Methanol Marathon
The Methanol Marathon (1988-1990) was an alternative fuels competition for college and university students in the U.S. and Canada. General Motors Corporation provided 1988 Chevrolet Corsicas which were converted to operate on M85 fuel (85% methanol and 15% hydrocarbons). Competition organizers went on to further challenge the student teams by establishing more stringent and controlled tests of their 1988 Chevrolet Corsicas in 1989. An important addition to the Methanol Marathon was the inclusion of a gasoline-powered control vehicle that provided a baseline from which to judge the effectiveness of the conversions.
1990-1993 Natural Gas Vehicle (NGV) Challenge
The Natural Gas Vehicle Challenge (1990-1993) was conducted annually, which gave teams the opportunity to go back to the drawing board twice to improve their vehicles in areas believed to be lacking. In 1993, twenty-two colleges and universities from the U.S., Canada, and Mexico, i |
https://en.wikipedia.org/wiki/Nerdfighteria | Nerdfighteria is a mainly online-based community subculture that originated on YouTube in 2007, when the VlogBrothers (John and Hank Green) rose to prominence in the YouTube community. As their popularity grew, so did coverage on Nerdfighteria, whose followers are individually known as Nerdfighters. The term was coined when John saw a copy of the arcade game Aero Fighters and misread the title as Nerd Fighters.
Hank Green describes it as "a community that sprung up around our videos, and basically we just get together and try to do awesome things and have a good time and fight against world suck". He defines "world suck" as "the amount of suck in the world". The Greens established The Foundation to Decrease World Suck in order to raise funds and launch projects that would help a variety of causes. Nerdfighters believe in fighting world suck, promoting education, freedom of speech, and the use of the intellect in modern society. Nerdfighters and the Green brothers have collaborated on many projects, such as the charitable drive Project for Awesome, which launched in 2007, and VidCon, the convention focusing on topics surrounding the world of digital media. Nerdfighters have been documented by websites such as The Hollywood Reporter and The Wall Street Journal, with a following estimated to be in the millions.
Community topics
Nerdfighteria is known for its online collaborative nature: forums, spinoff blogs, meet-ups, and charitable events have been spawned by its members. Instances of the community collaborating can be observed in the creation of college campus groups at universities such as the University of Maryland, Texas Christian University, the University of British Columbia, and the University of California, Los Angeles. Another Nerdfighter club was founded at Auburn University, in which the members have stated their desire to do charity work with The Humane Society and This Star Won't Go Out.
The Nerdfighter subculture was able to force the release of the |
https://en.wikipedia.org/wiki/Order-4%20120-cell%20honeycomb | In the geometry of hyperbolic 4-space, the order-4 120-cell honeycomb is one of five compact regular space-filling tessellations (or honeycombs). With Schläfli symbol {5,3,3,4}, it has four 120-cells around each face. Its dual is the order-5 tesseractic honeycomb, {4,3,3,5}.
Related honeycombs
It is related to the (order-3) 120-cell honeycomb and the order-5 120-cell honeycomb.
It is analogous to the order-4 dodecahedral honeycomb and order-4 pentagonal tiling.
See also
List of regular polytopes
References
Coxeter, Regular Polytopes, 3rd ed., Dover Publications, 1973. (Tables I and II: Regular polytopes and honeycombs, pp. 294–296)
Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999. (Chapter 10: Regular honeycombs in hyperbolic space, Summary tables II, III, IV, V, pp. 212–213)
Honeycombs (geometry) |
https://en.wikipedia.org/wiki/Order-5%20120-cell%20honeycomb | In the geometry of hyperbolic 4-space, the order-5 120-cell honeycomb is one of five compact regular space-filling tessellations (or honeycombs). With Schläfli symbol {5,3,3,5}, it has five 120-cells around each face. It is self-dual. It also has 600 120-cells around each vertex.
Related honeycombs
It is related to the (order-3) 120-cell honeycomb and the order-4 120-cell honeycomb. It is analogous to the order-5 dodecahedral honeycomb and order-5 pentagonal tiling.
Birectified order-5 120-cell honeycomb
The birectified order-5 120-cell honeycomb is constructed from rectified 600-cells, with octahedron and icosahedron cells, and triangle faces with a 5-5 duoprism vertex figure; it has extended symmetry [[5,3,3,5]].
See also
List of regular polytopes
References
Coxeter, Regular Polytopes, 3rd ed., Dover Publications, 1973. (Tables I and II: Regular polytopes and honeycombs, pp. 294–296)
Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999. (Chapter 10: Regular honeycombs in hyperbolic space, Summary tables II, III, IV, V, pp. 212–213)
Honeycombs (geometry)
Self-dual tilings |
https://en.wikipedia.org/wiki/Cubic%20honeycomb%20honeycomb | In the geometry of hyperbolic 4-space, the cubic honeycomb honeycomb is one of two paracompact regular space-filling tessellations (or honeycombs). It is called paracompact because it has infinite facets, whose vertices exist on 3-horospheres and converge to a single ideal point at infinity. With Schläfli symbol {4,3,4,3}, it has three cubic honeycombs around each face, and with a {3,4,3} vertex figure. It is dual to the order-4 24-cell honeycomb.
Related honeycombs
It is related to the Euclidean 4-space 16-cell honeycomb, {3,3,4,3}, which also has a 24-cell vertex figure.
It is analogous to the paracompact tesseractic honeycomb honeycomb, {4,3,3,4,3}, in 5-dimensional hyperbolic space, square tiling honeycomb, {4,4,3}, in 3-dimensional hyperbolic space, and the order-3 apeirogonal tiling, {∞,3} of 2-dimensional hyperbolic space, each with hypercube honeycomb facets.
See also
List of regular polytopes
References
Coxeter, Regular Polytopes, 3rd ed., Dover Publications, 1973. (Tables I and II: Regular polytopes and honeycombs, pp. 294–296)
Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999. (Chapter 10: Regular honeycombs in hyperbolic space, Summary tables II, III, IV, V, pp. 212–213)
Honeycombs (geometry) |
https://en.wikipedia.org/wiki/Order-4%2024-cell%20honeycomb | In the geometry of hyperbolic 4-space, the order-4 24-cell honeycomb is one of two paracompact regular space-filling tessellations (or honeycombs). It is called paracompact because it has infinite vertex figures, with all vertices as ideal points at infinity. With Schläfli symbol {3,4,3,4}, it has four 24-cells around each face. It is dual to the cubic honeycomb honeycomb.
Related honeycombs
It is related to the regular Euclidean 4-space 24-cell honeycomb, {3,4,3,3}, with 24-cell facets.
See also
List of regular polytopes
References
Coxeter, Regular Polytopes, 3rd ed., Dover Publications, 1973. (Tables I and II: Regular polytopes and honeycombs, pp. 294–296)
Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999. (Chapter 10: Regular honeycombs in hyperbolic space, Summary tables II, III, IV, V, pp. 212–213)
Honeycombs (geometry) |
https://en.wikipedia.org/wiki/Head%20II | Head II is an oil and tempera on hardboard painting by the Irish-born British figurative artist Francis Bacon. Completed in 1948, it is the second in a series of six heads, painted from the winter of 1948 in preparation for a November 1949 exhibition at the Hanover Gallery, London.
The figure seems half human, half animal, and has disintegrated to such an extent that, like the preceding Head I of the series, the entire upper head has disappeared, leaving only mouth and jaw. The figure is set in a shallow pictorial space and is positioned behind curtains that borrow from Titian's 1558 Portrait of Cardinal Filippo Archinto. The curtains are fastened at one point by a safety pin. John Russell sees the curtains as enclosing the figure, as if within the walls of a prison or an execution dock. Remarking on their dreary and drab appearance, he further speculates that they seem "stiffened by fifty years' crasse of a tenth-rate lodging-house; or they could be sliding shutters that have been pulled apart to admit a new victim."
The painting's overall grisaille appearance gives the impression of X-ray photographs, a look that may have been inspired by K.C. Clark's Positioning in Radiography, a book Bacon often acknowledged as a key source for his work. The painting contains a small arrow just below the figure's mouth, the first appearance of a motif the artist was to continue using for the rest of his career.
References
Notes
Sources
Dawson, Barbara; Sylvester, David. Francis Bacon in Dublin. London: Thames & Hudson, 2000.
Farr, Dennis; Peppiatt, Michael; Yard, Sally. Francis Bacon: A Retrospective. NY: Harry N Abrams, 1999.
Peppiatt, Michael. Anatomy of an Enigma. London: Westview Press, 1996.
Russell, John. Francis Bacon (World of Art). NY: Norton, 1971.
1949 paintings
Paintings by Francis Bacon
Heads in the arts |
https://en.wikipedia.org/wiki/List%20of%20chemical%20compounds%20in%20coffee | There are more than 1,000 chemical compounds in coffee, and their molecular and physiological effects are areas of active research in food chemistry.
Overview
There are a large number of ways to organize coffee compounds. The major texts in the area variously sort by effects on flavor, physiology, pre- and post-roasting effects, growing and processing effects, botanical variety differences, country of origin differences, and many others. Interactions between chemical compounds are also a frequent area of taxonomy, as are the major organic chemistry categories (protein, carbohydrate, lipid, etc.) that are relevant to the field. In the field of aroma and flavor alone, Flament gives a list of 300 contributing chemicals in green beans, and over 850 after roasting. He lists 16 major categories to cover those compounds related to aroma and flavor.
The chemical complexity of coffee is still emerging, especially because of observed physiological effects that cannot be attributed to caffeine alone. Moreover, coffee contains an exceptionally large amount of antioxidants such as chlorogenic acids, hydroxycinnamic acids, caffeine, and Maillard reaction products such as melanoidins. Chemical groups, such as alkaloids and caffeoylquinic acids, are common insecticides; their effects on coffee quality and flavor have been investigated in most studies. Although health effects are certainly a valid taxonomy category, fewer than 30 of the over 1,000 compounds have been subjected to juried, health-related research (e.g. official potential carcinogen classification — see furans, for example), so health categorization has been avoided.
On the other hand, physiological effects are well documented in some (e.g. stimulant effects of caffeine), and those are listed where they are relevant and well-documented. Internet claims for individual chemicals, or compound synergies, such as preventing dental cavities (speculative but unproven effect of the alkaloid trigonelline with in vitr |
https://en.wikipedia.org/wiki/Biclique%20attack | A biclique attack is a variant of the meet-in-the-middle (MITM) method of cryptanalysis. It utilizes a biclique structure to extend the number of possibly attacked rounds by the MITM attack. Since biclique cryptanalysis is based on MITM attacks, it is applicable to both block ciphers and (iterated) hash-functions. Biclique attacks are known for having weakened both full AES and full IDEA, though only with slight advantage over brute force. It has also been applied to the KASUMI cipher and preimage resistance of the Skein-512 and SHA-2 hash functions.
The biclique attack is still the best publicly known single-key attack on AES. The computational complexity of the attack is 2^126.1, 2^189.7 and 2^254.4 for AES-128, AES-192 and AES-256, respectively. It is the only publicly known single-key attack on AES that attacks the full number of rounds. Previous attacks have attacked round-reduced variants (typically variants reduced to 7 or 8 rounds).
As the computational complexity of the attack is on the order of 2^126, it is a theoretical attack, which means the security of AES has not been broken, and the use of AES remains relatively secure. The biclique attack is nevertheless an interesting attack, which suggests a new approach to performing cryptanalysis on block ciphers. The attack has also revealed more information about AES, as it has brought into question the safety margin in the number of rounds used therein.
History
The original MITM attack was first suggested by Diffie and Hellman in 1977, when they discussed the cryptanalytic properties of DES. They argued that the key-size was too small, and that reapplying DES multiple times with different keys could be a solution to the key-size; however, they advised against using double-DES and suggested triple-DES as a minimum, due to MITM attacks (MITM attacks can easily be applied to double-DES to reduce the security from 2^112 to just 2^57, since one can independently brute-force the first and the second DES encryption if they have the plain- and ciphertext).
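The key-space collapse described above can be demonstrated on a toy cipher. This is an illustrative sketch, not DES: the 8-bit block cipher and its keys are invented for the example, but the meet-in-the-middle bookkeeping is the real technique, recovering a double-encryption key pair in roughly 2·2^8 operations instead of 2^16.

```python
# Toy 8-bit "block cipher" (NOT DES; chosen only so the example runs instantly).
def enc(key, block):
    return ((block ^ key) * 7 + key) % 256

def dec(key, block):
    # 183 is the modular inverse of 7 mod 256 (7 * 183 = 1281 = 5*256 + 1).
    return (((block - key) * 183) % 256) ^ key

def double_enc(k1, k2, block):
    return enc(k2, enc(k1, block))

def mitm(plain, cipher):
    """Recover all (k1, k2) candidates from one known plaintext/ciphertext pair."""
    forward = {}
    for k1 in range(256):               # 2^8 forward encryptions, stored in a table
        forward.setdefault(enc(k1, plain), []).append(k1)
    for k2 in range(256):               # 2^8 backward decryptions, matched in the middle
        mid = dec(k2, cipher)
        for k1 in forward.get(mid, []):
            yield (k1, k2)

k1, k2 = 42, 137
p = 99
c = double_enc(k1, k2, p)
candidates = list(mitm(p, c))
assert (k1, k2) in candidates           # the true key pair is always among the matches
```

A second known pair would normally be used to filter the remaining false-positive candidates down to the true key.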
Since |
https://en.wikipedia.org/wiki/Driver%20%28software%29 | A driver in software provides a programming interface to control and manage specific lower-level interfaces that are often linked to a specific type of hardware, or other low-level service. In the case of hardware, the specific subclass of drivers controlling physical or virtual hardware devices are known as device drivers.
Example
A client library for connecting to a database is often known as a driver, for example, the MySQL native driver for PHP.
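As an illustration of the concept (using Python's built-in SQLite driver rather than the MySQL driver named above), a database driver exposes a generic connect/execute/fetch interface and translates it into the database engine's own protocol:

```python
import sqlite3  # Python's built-in SQLite driver, implementing DB-API 2.0 (PEP 249)

# The driver translates generic API calls (connect/execute/fetch) into the
# database engine's own protocol; the application never speaks that protocol directly.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
conn.commit()
cur.execute("SELECT name FROM users WHERE id = ?", (1,))
name = cur.fetchone()[0]
print(name)  # -> alice
conn.close()
```

Swapping in a different DB-API driver (e.g. one for MySQL or PostgreSQL) leaves this application code essentially unchanged, which is the point of the driver abstraction.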
References
Computing terminology
Application programming interfaces
Computer libraries |
https://en.wikipedia.org/wiki/Steam%20infusion | Steam Infusion is a direct-contact heating process in which steam condenses on the surface of a pumpable food product. Its primary use is for the gentle and rapid heating of a variety of food ingredients and products including milk, cream, soymilk, ketchup, soups and sauces.
Unlike steam injection and traditional vesselled steam heating, the steam infusion process surrounds the liquid food product with steam, as opposed to passing steam through the liquid.
Steam Infusion allows food product to be cooked, mixed and pumped within a single unit, often removing the need for multiple stages of processing.
History
Steam infusion was first used in pasteurization and has since been developed for further liquid heating applications.
First generation
In the 1960s, APV PLC launched the first steam infusion system under the Palarisator brand name. This involves a two-stage process whereby the liquid is cascaded into a large pressurized steam chamber and is sterilized while falling as a film or droplets through the chamber. The liquid is then condensed at the chilled bottom of the chamber, as illustrated in the image on the right-hand side of the page.
Second generation
The Steam Infusion process was first developed in 2000 by Pursuit Dynamics PLC as a method for marine propulsion. The process has since been developed to be used for applications in brewing, food and beverages, public health and safety, bioenergy, industrial licensing, and waste treatment worldwide. On the right a diagram shows how the process creates an environment of vaporised product surrounded by high energy steam. The supersonic steam flow entrains and vaporises the process flow to form a multiphase flow, which heats the suspended particles by surface conduction and condensation. The condensation of the steam causes the process flow to return to a liquid state. This causes rapid and uniform heating over the unit making it applicable to industrial cooking processes. This process has been use |
https://en.wikipedia.org/wiki/Reflectometry | Reflectometry is a general term for the use of the reflection of waves or pulses at surfaces and interfaces to detect or characterize objects, sometimes to detect anomalies as in fault detection and medical diagnosis.
There are many different forms of reflectometry. They can be classified in several ways: by the radiation used (electromagnetic, ultrasound, particle beams), by the geometry of wave propagation (unguided versus wave guides or cables), by the involved length scales (wavelength and penetration depth in relation to the size of the investigated object), by the method of measurement (continuous versus pulsed, polarization resolved, ...), and by the application domain.
Radiation sources
Electromagnetic radiation of widely varying wavelength is used in many different forms of reflectometry:
Radar: Reflections of radiofrequency pulses are used to detect the presence and to measure the location and speed of objects such as aircraft, missiles, ships, vehicles.
Lidar: Reflections of light pulses are used, for example, to penetrate vegetation ground cover in aerial archaeological surveys.
Characterization of semiconductor and dielectric thin films: Analysis of reflectance data utilizing the Forouhi Bloomer dispersion equations can determine the thickness, refractive index, and extinction coefficient of thin films utilized in the semiconductor industry.
X-ray reflectometry: A surface-sensitive analytical technique used in chemistry, physics, and materials science to characterize surfaces, thin films and multilayers.
Time domain reflectometry (TDR): The propagation of electric pulses and their reflection at discontinuities in cables is used to detect and localize defects in electric wiring.
Skin reflectance: In anthropology, reflectometry devices are often used to gauge human skin color through the measurement of skin reflectance. These devices are typically pointed at the upper arm or forehead, with the emitted waves then interpreted at various percentages. Lower fr |
https://en.wikipedia.org/wiki/Credential%20lag | Credential lag usually occurs for a user who is attempting to log in to a system that relies on updating its cached or otherwise saved user credentials by conferring with Active Directory or similar database.
When a user changes or resets their password, it may take some time for third-party software to retrieve the new credentials from the Active Directory catalog; for instance, an intranet service that queries AD for permissions.
Example
User "ANOther" is prompted to change her password, as it has expired on her Windows domain account. Once it is changed, Active Directory is updated, and the user proceeds to log in.
However, the internal intranet site may refresh its credential database only every 15 minutes; until it does, the user is unable to log into the intranet service, for up to 15 minutes.
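A minimal sketch of this failure mode (all names hypothetical) models the intranet as a credential cache that refreshes from the authoritative directory only at a fixed interval:

```python
import time

# Hypothetical sketch: a service caching credentials from a directory,
# refreshing only every `refresh_interval` seconds -- the credential-lag window.
class CachedCredentialStore:
    def __init__(self, directory, refresh_interval):
        self.directory = directory              # authoritative source (e.g. AD)
        self.refresh_interval = refresh_interval
        self._cache = dict(directory)           # snapshot taken at startup
        self._last_refresh = time.monotonic()

    def check(self, user, password):
        # Re-pull the directory only when the refresh interval has elapsed.
        if time.monotonic() - self._last_refresh >= self.refresh_interval:
            self._cache = dict(self.directory)
            self._last_refresh = time.monotonic()
        return self._cache.get(user) == password

directory = {"ANOther": "old-pw"}
intranet = CachedCredentialStore(directory, refresh_interval=900)  # 15 minutes

directory["ANOther"] = "new-pw"        # the password reset hits AD immediately
assert intranet.check("ANOther", "new-pw") is False  # lag: cache is still stale
assert intranet.check("ANOther", "old-pw") is True   # old password still accepted
```

Shortening the refresh interval, or pushing change notifications from the directory to consumers, narrows the window at the cost of more directory traffic.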
References
Computer access control |
https://en.wikipedia.org/wiki/Emerald%20Dragon | Emerald Dragon is a role-playing video game developed by Glodia that was released for multiple platforms in Japan. It was released for NEC Corporation's PC-8801 and PC-9801 home computers on December 22, 1989, followed by conversions for the X68000 (released on December 6, 1990), MSX2 (released on December 26 of the same year) and FM Towns (released on May 28, 1992). Developer Alfa System later produced console versions of the game for the PC Engine in Super CD-ROM² format (released on January 28, 1994) and the Super Famicom (released on July 28, 1995). The game features characters and locations based on Zoroastrian mythology.
Gameplay
The game utilises a top-down overhead perspective, where players move the controllable character in two dimensions. As players move around in a world map, they may encounter battles, which are turn-based with a time point system: both movements and attacks sap a bar on the top of the screen, and the character's turn ends when the bar is depleted. Experience points, which are used to level up playable characters, are collected for defeating enemies.
Stronger attacks for the main protagonist, Atorushan, are made available through collecting key items called the Emerald Graces. These transform him into a dragon to unleash a powerful attack, at the cost of reducing his HP when used.
Plot
A long time ago, dragons and humans lived in peace in the land of Ishbahn. Lord Tiridates, believing the existence of dragons among humans defiles Ishbahn, places a curse that kills dragons in the area. Some of the dragons (now collectively called the Dragon Tribe) manage to escape and find refuge in Draguria, where a dimensional rift prevents humans from crossing it.
At the start of the game, a ship wrecks on the coast of Draguria. The protagonist, a Dragon Tribe youth named Atorushan, is entrusted with the care of the sole survivor, a human girl named Tamryn, by the White Dragon, leader of the tribe. The girl is nurtured by the dragons of the land, but 12 years later she |
https://en.wikipedia.org/wiki/Amata%20cerbera | Amata cerbera, the heady maiden, is a moth of the subfamily Arctiinae. It was described by Carl Linnaeus in 1764. It has an extensive range in sub-Saharan Africa.
Range
It is found in Angola, the DRC, Gabon, Ghana, Guinea, Guinea-Bissau, Kenya, Malawi, Nigeria, Senegal, Sierra Leone, South Africa, Tanzania and Uganda.
Food plants
The larvae feed on Rumex, Corylus, Plantago and Rubus species, but have also been recorded feeding on various grasses (including Festuca and Anthoxanthum) as well as Thapsia, Taraxacum, Urtica and Sonchus species, and even hay and paper.
Description
Upperside: Antennae and head black. Thorax and abdomen shining blueish green; the latter having on the middle three rings of scarlet extending from side to side, but not meeting underneath. Anterior wings dark green, with six transparent spots like glass on them; the smallest, near the base, is round; three others, placed next the external margin, are oblong; the other two, which are in the middle, are oval and triangular. Posterior wings dark green, with two transparent spots; the largest next the shoulders; the other, which is round and small, beyond the middle.
Underside: Breast, abdomen, and legs shining mazarine blue, inclining to green; on the former is a small red spot, close to the shoulders of the superior wings. The hinder legs have one joint white. Wings of the same colour as on the upper side.
Subspecies
Amata cerbera cerbera
Amata cerbera hanningtoni (Seitz, 1926) – DRC, Malawi, Tanzania
References
External links
Bode, J. (2011). Amata cerbera mating, video of A. cerbera mating habits, taken near Darling, West Coast of South Africa, YouTube
cerbera
Moths of Africa
Moths described in 1764
Taxa named by Carl Linnaeus
Descriptions from Illustrations of Exotic Entomology |
https://en.wikipedia.org/wiki/Processor%20Control%20Region | Processor Control Region (PCR) is a Windows kernel mode data structure that contains information about the current processor. It can be accessed via the fs segment register on x86 versions, or the gs segment register on x64 versions respectively.
Structure
In Windows, the PCR is known as KPCR. It contains information about the current processor.
Processor Control Block
The PCR contains a substructure called Processor Control Block (KPRCB), which contains information such as CPU step and a pointer to the thread object of the current thread.
See also
Process Environment Block
Process control block
References
http://www.nirsoft.net/kernel_struct/vista/KPCR.html
http://www.nirsoft.net/kernel_struct/vista/KPRCB.html
Windows NT kernel
Data structures by computing platform |
https://en.wikipedia.org/wiki/ANSI/ASA%20S1.1-2013 | ANSI/ASA S1.1-2013, published by the American National Standards Institute (ANSI), is the current American National Standard on Acoustical Terminology. ANSI S1.1 was first published in 1960 and has its roots in a 1942 standard published by the American Standards Association, the predecessor of ANSI. It includes the following sections:
Scope
General
Levels
Oscillation, vibration, and shock
Transmission and propagation
Transducers and linear systems
Acoustical apparatus and instruments
Underwater acoustics
Sonics and ultrasonic testing
Architectural acoustics
Physiological and psychological acoustics
Musical acoustics
External links
ANSI/ASA S1.1 & S3.20 Standard Acoustical & Bioacoustical Terminology Database
ANSI website
References
American National Standards Institute standards |
https://en.wikipedia.org/wiki/Adobe%20Experience%20Cloud | Adobe Experience Cloud (AEC), formerly Adobe Marketing Cloud (AMC), is a collection of integrated online marketing and web analytics products by Adobe Inc.
History
Adobe Experience Cloud includes a set of analytics, social, advertising, media optimization, targeting, web experience management, journey orchestration and content management products, hosted on Microsoft Azure.
The Adobe Marketing Cloud collection was introduced to the public in October 2012 as Adobe began retiring the Omniture branding it acquired in October 2009. Products of the defunct company were then integrated, step-by-step, into the new Cloud service which includes the following eight applications: Adobe Analytics, Adobe Target, Adobe Social, Adobe Experience Manager, Adobe Media Optimizer, Adobe Campaign (Classic and Standard), Audience Manager and Primetime. In November 2013, Adobe Systems introduced mobile features to its Marketing Cloud, making smartphones and other mobile devices new targets for analytics.
On September 15, 2009, Omniture, Inc. and Adobe Systems announced that Adobe would purchase Omniture, an online marketing and web analytics company based in Orem, Utah. The deal, worth $1.8 billion, was completed on October 23, 2009; Omniture is now joined by other Adobe-owned assets, such as Day Software and Efficient Frontier, as the main components of Adobe's Digital Marketing business unit. Around 2012, Adobe withdrew the Omniture brand while its products were being integrated into the Adobe Marketing Cloud.
In 2013, Adobe also acquired Satellite TMS from Search Discovery and renamed it Adobe Dynamic Tag Management (Adobe DTM), replacing Adobe Tag Manager.
Using what was learned from Adobe DTM, Adobe built Adobe Launch, its next-generation tag management system, and released it in 2018.
On May 21, 2018, Adobe announced the acquisition of Magento for $1.68 billion. The addition of the Magento Commerce enables commerce features to be integrated into the Adobe Experience Cloud.
|
https://en.wikipedia.org/wiki/Artin%27s%20criterion | In mathematics, Artin's criteria are a collection of related necessary and sufficient conditions on deformation functors which prove the representability of these functors as either algebraic spaces or algebraic stacks. In particular, these conditions are used in the construction of the moduli stack of elliptic curves and the construction of the moduli stack of pointed curves.
Notation and technical notes
Throughout this article, let S be a scheme of finite type over a field or an excellent DVR, let F be a category fibered in groupoids over the category (Sch/S) of S-schemes, and let F(T) be the groupoid lying over an S-scheme T.
A stack F is called limit preserving if it is compatible with filtered direct limits in (Sch/S), meaning that for a filtered system of rings {A_i} there is an equivalence of categories F(lim A_i) ≅ lim F(A_i). An element of F(A) is called an algebraic element if A is the henselization of an O_S-algebra of finite type.
A limit preserving stack F over (Sch/S) is called an algebraic stack if
For any pair of elements x ∈ F(X), y ∈ F(Y), the fiber product X ×_F Y is represented by an algebraic space
There is a scheme U, locally of finite type, and an element u ∈ F(U) which is smooth and surjective, in the sense that for any element y ∈ F(Y) the induced map U ×_F Y → Y is smooth and surjective.
See also
Artin approximation theorem
Schlessinger's theorem
References
Deformation theory and algebraic stacks - overview of Artin's papers and related research
Algebraic geometry |
https://en.wikipedia.org/wiki/Alcazar%3A%20The%20Forgotten%20Fortress | Alcazar: The Forgotten Fortress is a dungeon action-adventure game, similar to Dungeon Master and The Legend of Zelda. It was released in 1985 for the Coleco Adam computer along with a port for the ColecoVision. It was created by Tom Loughry of Activision, with graphics by Keri (Janssen) Longaway. The game was later ported to the Commodore 64.
Plot
The goal of Alcazar is to reach the main castle, "Alcazar", by traveling through multiple enemy castles to retrieve the stolen Crown.
Gameplay
The game starts on a world map, which contains 22 castles. The player's main goal is to move the character through the various castles to ultimately arrive at the main castle fortress on the right side of the map. Each castle has multiple rooms, traps and floors. The map and routes change every time a new game is started. The game has four difficulty levels: beginner, intermediate, advanced, and expert. Various items may also be obtained in the game, such as a pistol used for attacking or a "hook" that can be used to cross gaps. A map in the bottom-left corner helps prevent the player from becoming lost in the castle dungeons.
There are also many traps or enemies present for the player to either fight or evade. If the player incurs too much damage, they lose a life, and losing all lives ends the game, though extra lives may also be obtained as well. The game has no "continues", and only the "Adam" version of the game supports saving.
The game is one of the earliest adventure games to have a demo mode that shows a demonstration of gameplay: if the player waits at the title screen for 30 seconds, the game enters this mode.
Reception
Ahoy! praised Alcazar's "beautifully written theme song", but stated that its graphics were insufficiently detailed. The magazine concluded that the game was "an enticing blend of mental and physical stimulation ... an electronic passport to hours of entertainment".
Legacy
The game, specifically the MSX port, would later serve as an inspir |
https://en.wikipedia.org/wiki/LaunchCode | LaunchCode, headquartered in St. Louis, Missouri, is a non-profit organization that helps people enter the technology field by providing free and accessible education, training, and paid apprenticeship placements.
Its courses and programs include:
LC101 is LaunchCode's part-time evening flagship course. In a classroom with mentoring from instructors, teaching fellows and local developers, students learn programming concepts in JavaScript before moving on to a skill track focused on either Java or C#.
Women+ (formerly CoderGirl) is LaunchCode's education program for everyone who identifies as female; students choose one of seven 24- or 45-week specialized skill tracks that lead to an apprenticeship job program.
Immersive CodeCamp is a 14-week, full-time course taking a deep dive into in-demand technologies and skills.
Liftoff is a career readiness and project course for pre-apprenticeship.
Discovery is a free, self-paced online program developed to introduce people to computer programming and help them determine if they want to pursue a career in tech.
LaunchCode also offers apprenticeships: full-time, paid positions with one of LaunchCode's hiring partners. During the apprenticeship, candidates work on a team of experienced developers and are paired with a mentor who invests in their growth. Candidates are carefully matched with hiring companies for compatibility in technical skills, soft skills, workplace fit, and drive.
In 2020, 60% of LaunchCode students identified as women or non-binary, 49% identified as people of color, 19% identified as LGBTQIA+, and 46% did not have a 4-year degree.
References
External links
LaunchCode website
Computer programming
Privately held companies of the United States
American educational websites |
https://en.wikipedia.org/wiki/Small%20stellated%20120-cell%20honeycomb | In the geometry of hyperbolic 4-space, the small stellated 120-cell honeycomb is one of four regular star-honeycombs. With Schläfli symbol {5/2,5,3,3}, it has three small stellated 120-cells around each face. It is dual to the pentagrammic-order 600-cell honeycomb.
It can be seen as a stellation of the 120-cell honeycomb, and is thus analogous to the three-dimensional small stellated dodecahedron {5/2,5} and four-dimensional small stellated 120-cell {5/2,5,3}. It has density 5.
See also
List of regular polytopes
References
Coxeter, Regular Polytopes, 3rd ed., Dover Publications, 1973. (Tables I and II: Regular polytopes and honeycombs, pp. 294–296)
Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999. (Chapter 10: Regular honeycombs in hyperbolic space, Summary tables II, III, IV, V, pp. 212–213)
Honeycombs (geometry)
5-polytopes |
https://en.wikipedia.org/wiki/Pentagrammic-order%20600-cell%20honeycomb | In the geometry of hyperbolic 4-space, the pentagrammic-order 600-cell honeycomb is one of four regular star-honeycombs. With Schläfli symbol {3,3,5,5/2}, it has five 600-cells around each face in a pentagrammic arrangement. It is dual to the small stellated 120-cell honeycomb. It can be considered the higher-dimensional analogue of the 4-dimensional icosahedral 120-cell and the 3-dimensional great dodecahedron. It is related to the order-5 icosahedral 120-cell honeycomb and great 120-cell honeycomb: the icosahedral 120-cells and great 120-cells in each honeycomb are replaced by the 600-cells that are their convex hulls, thus forming the pentagrammic-order 600-cell honeycomb.
This honeycomb can also be constructed by taking the order-5 5-cell honeycomb and replacing clusters of 600 5-cells meeting at a vertex with 600-cells. Each 5-cell belongs to five such clusters, and thus the pentagrammic-order 600-cell honeycomb has density 5.
See also
List of regular polytopes
References
Coxeter, Regular Polytopes, 3rd ed., Dover Publications, 1973. (Tables I and II: Regular polytopes and honeycombs, pp. 294–296)
Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999. (Chapter 10: Regular honeycombs in hyperbolic space, Summary tables II, III, IV, V, pp. 212–213)
Honeycombs (geometry)
5-polytopes |
https://en.wikipedia.org/wiki/Order-5%20icosahedral%20120-cell%20honeycomb | In the geometry of hyperbolic 4-space, the order-5 icosahedral 120-cell honeycomb is one of four regular star-honeycombs. With Schläfli symbol {3,5,5/2,5}, it has five icosahedral 120-cells around each face. It is dual to the great 120-cell honeycomb.
It can be constructed by replacing the great dodecahedral cells of the great 120-cell honeycomb with their icosahedral convex hulls, thus replacing the great 120-cells with icosahedral 120-cells. It is thus analogous to the four-dimensional icosahedral 120-cell. It has density 10.
See also
List of regular polytopes
References
Coxeter, Regular Polytopes, 3rd ed., Dover Publications, 1973. (Tables I and II: Regular polytopes and honeycombs, pp. 294–296)
Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999. (Chapter 10: Regular honeycombs in hyperbolic space, Summary tables II, III, IV, V, pp. 212–213)
Honeycombs (geometry)
5-polytopes |
https://en.wikipedia.org/wiki/Great%20120-cell%20honeycomb | In the geometry of hyperbolic 4-space, the great 120-cell honeycomb is one of four regular star-honeycombs. With Schläfli symbol {5,5/2,5,3}, it has three great 120-cells around each face. It is dual to the order-5 icosahedral 120-cell honeycomb.
It can be seen as a greatening of the 120-cell honeycomb, and is thus analogous to the three-dimensional great dodecahedron {5,5/2} and four-dimensional great 120-cell {5,5/2,5}. It has density 10.
See also
List of regular polytopes
References
Coxeter, Regular Polytopes, 3rd ed., Dover Publications, 1973. (Tables I and II: Regular polytopes and honeycombs, pp. 294–296)
Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999. (Chapter 10: Regular honeycombs in hyperbolic space, Summary tables II, III, IV, V, pp. 212–213)
Honeycombs (geometry)
5-polytopes |
https://en.wikipedia.org/wiki/Chemogenetics | Chemogenetics is the process by which macromolecules can be engineered to interact with previously unrecognized small molecules. Chemogenetics as a term was originally coined to describe the observed effects of mutations on chalcone isomerase activity on substrate specificities in the flowers of Dianthus caryophyllus. This method is very similar to optogenetics; however, it uses chemically engineered molecules and ligands instead of light and light-sensitive channels known as opsins.
In recent research projects, chemogenetics has been widely used to understand the relationship between brain activity and behavior. Prior to chemogenetics, researchers used methods such as transcranial magnetic stimulation and deep brain stimulation to study the relationship between neuronal activity and behavior.
Comparison to optogenetics
Optogenetics and chemogenetics are the more recent and popular methods used to study this relationship. Both of these methods target specific brain circuits and cell populations to influence cell activity. However, they use different procedures to accomplish this task. Optogenetics uses light-sensitive channels and pumps that are virally introduced into neurons. The activity of cells bearing these channels can then be manipulated with light. Chemogenetics, on the other hand, uses chemically engineered receptors, and exogenous molecules specific for those receptors, to affect the activity of those cells. The engineered macromolecules used to design these receptors include nucleic acid hybrids, kinases, a variety of metabolic enzymes, and G protein-coupled receptors such as DREADDs.
DREADDs (designer receptors exclusively activated by designer drugs) are the most common G protein-coupled receptors used in chemogenetics. These receptors are activated solely by the drug of interest (an otherwise inert molecule) and influence physiological and neural processes that take place within and outside of the central nervous system.
Chemogenetics has recently been favored over optogenetics, and it avoids some of the challenges of optogenetic |
https://en.wikipedia.org/wiki/Acceptance%20test-driven%20development | Acceptance test–driven development (ATDD) is a development methodology based on communication between the business customers, the developers, and the testers. ATDD encompasses many of the same practices as specification by example (SBE), behavior-driven development (BDD), example-driven development (EDD), and support-driven development also called story test–driven development (SDD). All these processes aid developers and testers in understanding the customer's needs prior to implementation and allow customers to be able to converse in their own domain language.
ATDD is closely related to test-driven development (TDD). It differs in its emphasis on developer-tester-business customer collaboration. ATDD encompasses acceptance testing, but highlights writing acceptance tests before developers begin coding.
Overview
Acceptance tests are from the user's point of view – the external view of the system. They examine externally visible effects, such as specifying the correct output of a system given a particular input. Acceptance tests can verify how the state of something changes, such as an order that goes from "paid" to "shipped". They also can check the interactions with interfaces of other systems, such as shared databases or web services. In general, they are implementation independent, although automation of them may not be.
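As a sketch (the Order class and test names here are invented for illustration, not drawn from any specific ATDD tool), the "paid" to "shipped" example above might be written as executable acceptance tests, with a stub standing in for the production code that will later satisfy them:

```python
# Stub implementation; in ATDD the tests below would be agreed on with the
# business customer first, and this code written afterwards to make them pass.
class Order:
    def __init__(self):
        self.state = "new"

    def pay(self):
        self.state = "paid"

    def ship(self):
        if self.state != "paid":
            raise RuntimeError("cannot ship an unpaid order")
        self.state = "shipped"

# Acceptance tests: phrased in business-domain terms, checking only
# externally visible state transitions, not implementation details.
def test_paid_order_can_be_shipped():
    order = Order()
    order.pay()
    assert order.state == "paid"
    order.ship()
    assert order.state == "shipped"

def test_unpaid_order_cannot_be_shipped():
    order = Order()
    try:
        order.ship()
        raised = False
    except RuntimeError:
        raised = True
    assert raised

test_paid_order_can_be_shipped()
test_unpaid_order_cannot_be_shipped()
```

In practice such tests are often expressed in a shared tabular or given/when/then format so the customer can read and co-author them; the assertions above capture the same contract in plain code.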
Creation
Acceptance tests are created when the requirements are analyzed and prior to coding. They can be developed collaboratively by requirement requester (product owner, business analyst, customer representative, etc.), developer, and tester. Developers implement the system using the acceptance tests. Failing tests provide quick feedback that the requirements are not being met. The tests are specified in business domain terms. The terms then form a ubiquitous language that is shared between the customers, developers, and testers. Tests and requirements are interrelated. A requirement that lacks a test may not be implemented pro |
https://en.wikipedia.org/wiki/Super-resolution%20optical%20fluctuation%20imaging | Super-resolution optical fluctuation imaging (SOFI) is a post-processing method for the calculation of super-resolved images from recorded image time series that is based on the temporal correlations of independently fluctuating fluorescent emitters.
SOFI has been developed for super-resolution imaging of biological specimens that are labelled with independently fluctuating fluorescent emitters (organic dyes, fluorescent proteins). In comparison to other super-resolution microscopy techniques such as STORM or PALM, which rely on single-molecule localization and hence allow only one active molecule per diffraction-limited area (DLA) and timepoint, SOFI requires neither controlled photoswitching and/or photoactivation nor long imaging times. Nevertheless, it still requires fluorophores that cycle through two distinguishable states, either real on/off states or states of different fluorescence intensity. In mathematical terms, SOFI imaging relies on the calculation of cumulants, for which two distinct approaches exist: an image can be calculated via auto-cumulants, which by definition rely only on the information of each pixel itself, or, in an improved variant, via cross-cumulants, which also utilize the information of different pixels. Both methods can increase the final image resolution significantly, although the cumulant calculation has its limitations. SOFI is able to increase the resolution in all three dimensions.
Principle
Like other super-resolution methods, SOFI is based on recording an image time series on a CCD or CMOS camera. In contrast to other methods, the recorded time series can be substantially shorter, since a precise localization of emitters is not required and therefore a larger quantity of activated fluorophores per diffraction-limited area is allowed. The pixel values of a SOFI image of the n-th order are calculated from the values of the pixel time series in the form of an n-th ord |
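As a simplified illustration of the cumulant idea, the second-order auto-cumulant of a single pixel's time series reduces to its temporal variance, which vanishes for a constant background but is large for a blinking emitter (a minimal stdlib-only sketch, not a full SOFI implementation):

```python
import random

# Second-order SOFI auto-cumulant of one pixel: the temporal variance
# <(F(t) - <F>)^2>.  Independently fluctuating emitters produce a large
# variance; constant (non-fluctuating) background is suppressed to zero.
def second_order_sofi(series):
    mean = sum(series) / len(series)
    return sum((v - mean) ** 2 for v in series) / len(series)

random.seed(0)
frames = 2000
# Simulated on/off emitter with ~50% duty cycle, versus a steady background pixel.
blinking = [100 if random.random() < 0.5 else 0 for _ in range(frames)]
background = [50] * frames

assert second_order_sofi(background) == 0.0   # steady signal: cumulant vanishes
assert second_order_sofi(blinking) > 1000     # blinking emitter: variance ~2500
```

Computing this value for every pixel of the recorded stack yields the second-order SOFI image; higher orders use higher cumulants of the same time series.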
https://en.wikipedia.org/wiki/Web%20application%20firewall | A web application firewall (WAF) is a specific form of application firewall that filters, monitors, and blocks HTTP traffic to and from a web service. By inspecting HTTP traffic, it can prevent attacks exploiting a web application's known vulnerabilities, such as SQL injection, cross-site scripting (XSS), file inclusion, and improper system configuration.
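A toy sketch of such traffic inspection (illustrative only; real WAFs such as ModSecurity use large, curated rule sets and far more robust parsing) might scan a decoded query string against a few signature patterns:

```python
import re
from urllib.parse import unquote

# Minimal illustrative filter, NOT a production WAF: block requests whose
# decoded query string matches obvious attack signatures.
RULES = [
    re.compile(r"(\bunion\b.*\bselect\b|\bor\b\s+1\s*=\s*1)", re.IGNORECASE),  # SQL injection
    re.compile(r"<\s*script", re.IGNORECASE),                                  # cross-site scripting
    re.compile(r"\.\./"),                                                      # path traversal
]

def inspect(query_string):
    # Percent-decode first, since attacks are routinely URL-encoded.
    decoded = unquote(query_string)
    return "block" if any(r.search(decoded) for r in RULES) else "allow"

assert inspect("id=42") == "allow"
assert inspect("id=1%20OR%201=1") == "block"
assert inspect("q=%3Cscript%3Ealert(1)%3C/script%3E") == "block"
```

Signature matching like this is easy to evade and prone to false positives, which is why production WAFs layer normalization, anomaly scoring, and regularly updated rule sets on top of the basic idea shown here.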
History
Dedicated web application firewalls entered the market in the late 1990s during a time when web server attacks were becoming more prevalent.
An early version of WAF was developed by Perfecto Technologies with its AppShield product, which focused on the e-commerce market and protected against illegal web page character entries. Other early WAF products, from Kavado and Gilian technologies, were available in the market at the same time, trying to solve the increasing amount of attacks on web applications in the late 90s. In 2002, the open source project ModSecurity was formed in order to make WAF technology more accessible. They finalized a core rule set for protecting web applications, based on OASIS Web Application Security Technical Committee’s (WAS TC) vulnerability work. In 2003, they expanded and standardized rules through the Open Web Application Security Project’s (OWASP) Top 10 List, an annual ranking for web security vulnerabilities. This list would become the industry standard for web application security compliance.
Since then, the market has continued to grow and evolve, especially focusing on credit card fraud prevention. With the development of the Payment Card Industry Data Security Standard (PCI DSS), a standardization of control over cardholder data, security has become more regulated in this sector. According to CISO Magazine, the WAF market was expected to grow to $5.48 billion by 2022.
Description
A web application firewall is a special type of application firewall that applies specifically to web applications. It is deployed in front of web applications and analyzes bi-directio |
https://en.wikipedia.org/wiki/Nalini%20Anantharaman | Nalini Anantharaman (born 26 February 1976) is a French mathematician who has won major prizes including the Henri Poincaré Prize in 2012.
Life
Nalini Florence Anantharaman was born in Paris in 1976 to two mathematicians; her father and mother are professors at the University of Orléans. She entered the École Normale Supérieure in 1994.
She completed her PhD in Paris under the supervision of François Ledrappier in 2000 at Université Pierre et Marie Curie (Paris 6).
She became a full Professor, at the University of Paris-Sud, Orsay in 2009 following time out at the University of California in Berkeley in the year before as a Visiting Miller professor. From January to June 2013 she was in Princeton at the Institute for Advanced Study. She is now a Professor at Université de Strasbourg.
Recognition
In 2012 she won the Henri Poincaré Prize for mathematical physics, which she shared with Freeman Dyson, Barry Simon and fellow French mathematician Sylvia Serfaty. Anantharaman was cited for her work in "quantum chaos, dynamical systems and Schrödinger equation, including a remarkable advance in the problem of quantum unique ergodicity". In 2011 she won the Salem Prize, which is awarded for work related to Fourier series. She also received a prize from the French Academy of Sciences in 2011. In 2015, Anantharaman was elected a member of the Academia Europaea. She was an invited plenary speaker at the 2018 International Congress of Mathematicians.
In 2018, for her work related to “Quantum Chaos”, Anantharaman won the Infosys Prize (in Mathematical Sciences category), one of the highest monetary awards in India that recognize excellence in science and research. In 2020 she received the Nemmers Prize in Mathematics.
Selected writings
References
1976 births
Living people
École Normale Supérieure alumni
Scientists from Paris
20th-century French mathematicians
French women mathematicians
French people of Indian descent
Dynamical systems theorists
21st-century French |
https://en.wikipedia.org/wiki/CxProcess | CxProcess is the trademark of an image processing technology used in Minolta and Konica Minolta digital cameras.
Image processing in a camera converts the raw image data from a CCD image sensor into the format that is stored on the memory card. This processing is one of the bottlenecks in the speed of digital cameras.
Between 2001 and 2006, CxProcess was used in various Minolta and Konica Minolta digital compact cameras, bridge cameras and DSLRs. To distinguish the image processing algorithms from the image processor itself, the image processor was named SUPHEED (for superior image and speed) from 2003. It can be seen as the predecessor of Sony's Bionz image processor, following Sony's takeover of Konica Minolta's camera business in 2006.
CxProcess was originally introduced with the Minolta Dimage 5 in 2001. SUPHEED was introduced with the Minolta Dimage A1 in 2003, which was also the first to implement CxProcess II.
Cameras such as the Minolta Dimage 7 series (Dimage 7, 7i, 7Hi), A1 and Konica Minolta Dimage A2 were using a MegaChips (MCL) DSC-2 MA07163 series of 32-bit RISC processors with MIPS R3000 core.
The cameras were running under Integrated Systems' (ISI) operating system pSOSystem/MIPS (pSOS+/MIPS V2.5.4, pREPC+/MIPS V2.5.2, pHILE+/MIPS FA V4.0.2, pNA+/MIPS V4.0.5).
The CxProcess III was implemented in the Konica Minolta Maxxum 7D (2004) and 5D (2005) utilizing SUPHEED II, a MegaChips DSC-3H MA07168 running under MiSPO's NORTi/MIPS, an RTOS following the µITRON standard.
While no longer named CxProcess III on SUPHEED II, this was also implemented in the Sony Alpha 100 (2006), utilizing a MegaChips MA07169.
Sony introduced their Bionz image processor in 2007 that was originally based on this same technology.
See also
Expeed – Nikon
DIGIC – Canon
References
Minolta
Konica Minolta
Camera firmware
Image processors |
https://en.wikipedia.org/wiki/Mem%20%28computing%29 | In computational complexity theory, computing efficiency, combinatorial optimization, supercomputing, computational cost (algorithmic efficiency) and other computational metrics, the mem is a measurement unit for the number of memory accesses used or needed by a process, function, instruction set, algorithm or data structure.
Example usage, when discussing processing time of a search tree node, for finding 10 × 10 Latin squares: "A typical node of the search tree probably requires about 75 mems (memory accesses) for processing, to check validity. Therefore the total running time on a modern computer would be roughly the time needed to perform mems." (Donald Knuth, 2011, The Art of Computer Programming, Volume 4A, p. 6).
Reducing mems as a speed and efficiency enhancement is not a linear benefit, as it may trade off against increases in the cost of ordinary operations.
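Counting mems can be made concrete by instrumenting an array so that every indexed read is tallied. The wrapper class and the two search routines below are hypothetical examples written for this sketch, not taken from Knuth's analysis.

```python
class CountingArray:
    """List wrapper that tallies the memory accesses ("mems") it serves."""
    def __init__(self, data):
        self._data = list(data)
        self.mems = 0

    def __getitem__(self, i):
        self.mems += 1          # one mem per indexed read
        return self._data[i]

    def __len__(self):
        return len(self._data)

def linear_search(arr, target):
    for i in range(len(arr)):
        if arr[i] == target:
            return i
    return -1

def binary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        v = arr[mid]            # cache the read: one mem per probe
        if v == target:
            return mid
        if v < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

a = CountingArray(range(1024))
linear_search(a, 1023)
b = CountingArray(range(1024))
binary_search(b, 1023)
print(a.mems, b.mems)           # 1024 vs 11 mems for the same lookup
```

Comparing algorithms by mems rather than wall-clock time makes the measurement independent of a particular machine's memory hierarchy, which is why Knuth favors it.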
PFOR compression
This optimization technique also is called PForDelta
Although lossless compression methods like Rice, Golomb and PFOR are most often associated with signal processing codecs, the ability to optimize binary integers also makes them relevant for reducing mems relative to ordinary operations. (See Golomb coding for details.)
See also
CAS latency
Clock signal
Clock rate
Computer performance
Instructions per second
Memoization
References
Breaking the Wall of the Quantum Computing Hype - MemComputing, Inc.
Analysis of algorithms
Computer performance
Software optimization |
https://en.wikipedia.org/wiki/Tox%20%28protocol%29 | Tox is a peer-to-peer instant-messaging and video-calling protocol that offers end-to-end encryption. The stated goal of the project is to provide secure yet easily accessible communication for everyone. A reference implementation of the protocol is published as free and open-source software under the terms of the GNU GPL-3.0-or-later.
History
Inception
The idea of developing a secure peer-to-peer messenger, which would later become Tox, emerged on the anonymous imageboard 4chan amid allegations that Skype had provided the NSA with access to its infrastructure and encryption just before it was bought by Microsoft.
The initial commit to GitHub was pushed on June 23, 2013, by a user named irungentoo. Unofficial community builds became available as early as August 23, 2013, with the first official builds made available in October 2013. On July 12, 2014, Tox entered an alpha stage of development, and a redesigned download page was created for the occasion.
Project's fork and Rust implementation
Sometime during 2016, the original reference implementation saw a steady decline in development activity, with the last known commit dated October 2018. This caused the project to split: those interested in continuing the development created a new fork of the Tox core called "c-toxcore" around the end of September 2016.
Currently c-toxcore is being developed by a collective known as the TokTok Project. They describe their mission as "to promote universal freedom of expression and to preserve unrestricted information exchange". Their current goals are to continue slow iterative development of the existing core implementation, along with in-parallel development of a new reference implementation in Rust.
Initially, the Rust implementation of the protocol library was split in two halves, one handling most of the grunt work of communication with the network, and another one responsible specifically for bootstrap node operation. In December 2022 those were merged, |
https://en.wikipedia.org/wiki/Rees%20matrix%20semigroup | In mathematics, the Rees matrix semigroups are a special class of semigroups introduced by David Rees in 1940. They are of fundamental importance in semigroup theory because they are used to classify certain classes of simple semigroups.
Definition
Let S be a semigroup, I and Λ non-empty sets and P a matrix indexed by Λ and I with entries pλ,i taken from S.
Then the Rees matrix semigroup M(S; I, Λ; P) is the set I×S×Λ together with the multiplication
(i, s, λ)(j, t, μ) = (i, s pλ,j t, μ).
Rees matrix semigroups are an important technique for building new semigroups out of old ones.
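The sandwich multiplication can be checked computationally. In this sketch S is taken to be the integers modulo 6 under multiplication, and the sandwich matrix P is an arbitrary choice; both are assumptions made for illustration only.

```python
from itertools import product

MOD = 6
S = range(MOD)                       # S = Z/6Z under multiplication (assumed example)
I, Lam = range(2), range(2)          # index sets I and Lambda
P = [[1, 2],                         # P[lam][i]: sandwich matrix with entries in S
     [3, 5]]

def mult(x, y):
    """(i, s, lam)(j, t, mu) = (i, s * p[lam][j] * t, mu)."""
    (i, s, lam), (j, t, mu) = x, y
    return (i, (s * P[lam][j] * t) % MOD, mu)

elems = list(product(I, S, Lam))     # the underlying set I x S x Lambda
assoc = all(mult(mult(x, y), z) == mult(x, mult(y, z))
            for x, y, z in product(elems, repeat=3))
print(assoc)  # True: associativity is inherited from associativity in S
```

The brute-force check passes for any choice of P, since both sides of the associativity law reduce to (i, s pλ,j t pμ,k u, ν), using only associativity in S.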
Rees' theorem
In his 1940 paper Rees proved the following theorem characterising completely simple semigroups:
That is, every completely simple semigroup is isomorphic to a semigroup of the form M(G; I, Λ; P) for some group G. Moreover, Rees proved that if G is a group and G0 is the semigroup obtained from G by attaching a zero element, then M(G0; I, Λ; P) is a regular semigroup if and only if every row and column of the matrix P contains an element that is not 0. If such an M(G0; I, Λ; P) is regular, then it is also completely 0-simple.
See also
Semigroup
Completely simple semigroup
David Rees (mathematician)
References
.
.
Semigroup theory |
https://en.wikipedia.org/wiki/ANGLE%20%28software%29 | ANGLE (Almost Native Graphics Layer Engine) is an open source, cross-platform graphics engine abstraction layer developed by Google. ANGLE translates OpenGL ES 2/3 calls to DirectX 9, DirectX 11, OpenGL or Vulkan API calls. It is effectively a portable version of OpenGL, limited by the OpenGL ES standard.
The API is mainly designed to bring high-performance OpenGL compatibility to Windows and to web browsers such as Chromium by translating OpenGL calls to Direct3D, which has much better driver support on Windows systems. On Windows, there are two backend renderers for ANGLE: the older one uses Direct3D 9.0c, while the newer one uses Direct3D 11.
ANGLE is currently used by Google Chrome (it is embedded in the Blink browser engine), Firefox, Edge, WebKit, and the Qt Framework. The engine is also used by Windows 10 for compatibility with apps ported from Android. Throughout 2019, Apple engineers contributed a Metal API backend to ANGLE so that it could run on Apple's native graphics API.
ANGLE is distributed under a BSD license.
History
The project started as a way for Google to bring full hardware acceleration for WebGL to Windows without relying on OpenGL graphics drivers. Google initially released the program under the BSD license.
The current production version (2.1.x) implements OpenGL ES 2.0, 3.0, 3.1 and EGL 1.5, claiming to pass the conformance tests for each. Work on the then-future OpenGL ES 3.0 support was started for the newer Direct3D 11 backend.
The capability to use ANGLE in a Windows Store app was added in 2014. Microsoft contributed support for lower feature levels to the project. Supporting CoreWindow and SwapChainPanel in ANGLE's EGL allows applications to run on Windows 8.1, Windows Phone 8.1, and later.
Level of OpenGL ES support via backing renderers
Software utilizing ANGLE
ANGLE is currently used in a number of programs and software.
Chromium and Google Chrome. Chrome uses ANGLE not only for WebGL, but also for its implementatio |
https://en.wikipedia.org/wiki/Single-channel%20architecture | In computer networking, single-channel architecture (SCA) is the design of a wireless network in such a way that the wireless client sees a single point of access to the network. This design utilizes a centralized controller to decide which access point (AP) will be used to communicate with a client device. This method allows the network to maintain a higher level of control over the communication medium than does multiple-channel architecture, which allows client devices to determine which APs to communicate with.
Principles
Single-channel architecture is based on a principle of "virtual cells". All APs joined to a virtual cell use the same wireless channel and identify themselves with the same basic service set identifier (BSSID, i.e. a MAC address). The APs in a cell are managed by a centralized Wireless LAN controller (WLC) that coordinates the APs such that APs/transmissions do not interfere with one another. From a client's point of view, a virtual cell appears as a single AP.
Multiple virtual cells can co-exist, with each virtual cell having its own BSSID and channel. This topology effectively simulates a multiple-channel architecture and can be used to reduce channel congestion in environments with high AP density and overlapping signal range. For example, in a classroom with two cells, clients can be directed to associate with one or the other cell, leaving more bandwidth available to the clients on each channel.
Benefits
The biggest advantage of a single-channel architecture is that there is a zero handoff time for roaming clients. In multiple-channel architecture, as a client device travels around the physical location of the network, it will change which AP it is associated with. Since each AP in a multiple-channel architecture has its own BSSID, a client needs to re-authenticate itself every time it associates with a new AP. In comparison, in a single-channel architecture, since the client only sees one AP, it is up to the central controller to dec |
https://en.wikipedia.org/wiki/Multiple-channel%20architecture | In computer networking, multiple-channel architecture (MCA) is the design of a wireless network in such a way that the client sees multiple points of access to the wireless network. MCA allows wireless clients to choose which access points (APs) to communicate with for access to the network, in contrast to single-channel architecture, which gives more control to the centralized network devices such as the wireless LAN controller.
MCA is the most commonly used network architecture, as it is the most intuitive way to solve the problem of co-channel interference (although it does not eliminate the problem).
See also
Single-channel architecture (SCA)
References
Wireless networking standards |
https://en.wikipedia.org/wiki/Chemical%20shift%20index | The chemical shift index or CSI is a widely employed technique in protein nuclear magnetic resonance spectroscopy that can be used to display and identify the location (i.e. start and end) as well as the type of protein secondary structure (beta strands, helices and random coil regions) found in proteins using only backbone chemical shift data. The technique was invented by David S. Wishart in 1992 for analyzing 1Hα chemical shifts and then later extended by him in 1994 to incorporate 13C backbone shifts. The original CSI method makes use of the fact that 1Hα chemical shifts of amino acid residues in helices tend to be shifted upfield (i.e. towards the right side of an NMR spectrum) relative to their random coil values and downfield (i.e. towards the left side of an NMR spectrum) in beta strands. Similar upfield and downfield trends are also detectable in backbone 13C chemical shifts.
Implementation
The CSI is a graph-based technique that essentially employs an amino acid-specific digital filter to convert every assigned backbone chemical shift value into a simple three-state (-1, 0, +1) index. This approach generates a more easily understood and much more visually pleasing graph of protein chemical shift values. In particular, if the upfield 1Hα chemical shift (relative to an amino acid-specific random coil value) of a certain residue is > 0.1 ppm, then that amino acid residue is assigned a value of -1. Similarly, if the downfield 1Hα chemical shift of a certain amino acid residue is > 0.1 ppm then that residue is assigned a value of +1. If an amino acid residue's chemical shift is not shifted downfield or upfield by a sufficient amount (i.e. <0.1 ppm), it is given a value of 0. When this 3-state index is plotted as a bar graph over the full length of the protein sequence, simple inspection can allow one to identify beta strands (clusters of +1 values), alpha helices (clusters of -1 values), and random coil segments (clusters of 0 values). A list |
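The three-state digital filter described above can be sketched in a few lines. The 0.1 ppm threshold follows the text; the sample secondary shifts (observed minus random-coil values) are hypothetical numbers chosen for illustration.

```python
def csi(secondary_shifts, threshold=0.1):
    """Map 1Ha secondary chemical shifts (ppm) to the three-state CSI.

    upfield   (delta < -threshold) -> -1  (helix-like)
    downfield (delta > +threshold) -> +1  (strand-like)
    otherwise                      ->  0  (coil)
    """
    out = []
    for d in secondary_shifts:
        if d < -threshold:
            out.append(-1)
        elif d > threshold:
            out.append(+1)
        else:
            out.append(0)
    return out

# Hypothetical residue-by-residue secondary shifts along a short sequence:
shifts = [-0.3, -0.25, -0.2, -0.15, 0.05, 0.02, 0.3, 0.4, 0.35]
print(csi(shifts))  # [-1, -1, -1, -1, 0, 0, 1, 1, 1]
```

Plotted as a bar graph over the sequence, the initial run of -1 values would be read as a helix, the trailing run of +1 values as a strand, and the zeros between them as coil.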
https://en.wikipedia.org/wiki/Razor%20%28configuration%20management%29 | Razor is an integrated suite software configuration management system from Visible Systems, which provides process management, issue/problem tracking, version control, and release management.
Razor provides a framework for managing software development processes, including support for agile and waterfall methodologies. It includes a built-in issue tracking system that allows users to log and track bugs, defects, and other issues that arise during the software development process. It also includes a version control system that allows users to track changes to their code over time and collaborate with other team members. In addition, Razor has a release management system that allows users to manage the release process for their software, including tracking the status of different releases and managing the distribution of software to customers and other stakeholders.
Razor runs on Windows NT, Unix, Linux, or Motif environments. It was developed as an integrated product to support integrated development environments such as IBM VisualAge, Microsoft Visual Studio, Microsoft .NET, Rational Rose, and PowerBuilder.
References
Bug and issue tracking software
Version control systems |
https://en.wikipedia.org/wiki/Cotriple%20homology | In algebra, given a category C with a cotriple, the n-th cotriple homology of an object X in C with coefficients in a functor E is the n-th homotopy group of E applied to the augmented simplicial object induced from X by the cotriple. The term "homology" is used because in the abelian case, by the Dold–Kan correspondence, the homotopy groups are the homology of the corresponding chain complex.
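For concreteness, the standard construction can be sketched as follows; the notation (⊥ for the cotriple arising from an adjunction F ⊣ U with counit ε) is an assumption of this sketch rather than taken from the article.

```latex
% Given an adjunction F \dashv U with counit \varepsilon, the cotriple is
\[
  \bot = FU, \qquad \varepsilon : \bot \Rightarrow \mathrm{Id}.
\]
% Iterating \bot on X gives an augmented simplicial object with
\[
  X_n = \bot^{\,n+1} X, \qquad
  d_i = \bot^{\,i}\,\varepsilon\,\bot^{\,n-i} : X_n \to X_{n-1},
\]
% and the cotriple homology of X with coefficients in E is
\[
  H_n(X; E) \;=\; \pi_n\, E\bigl(\bot^{\bullet+1} X\bigr).
\]
```

In the abelian case, the last line equals the homology of the normalized chain complex of E(⊥^{•+1}X), by the Dold–Kan correspondence mentioned above.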
Example: Let N be a left module over a ring R and let . Let F be the left adjoint of the forgetful functor from the category of left R-modules to Set; i.e., the free module functor. Then defines a cotriple and the n-th cotriple homology of is the n-th left derived functor of E evaluated at M; i.e., .
Example (algebraic K-theory): Let us write GL for the functor . As before, defines a cotriple on the category of rings with F free ring functor and U forgetful. For a ring R, one has:
where on the left is the n-th K-group of R. This example is an instance of nonabelian homological algebra.
Notes
References
Further reading
Who Threw a Free Algebra in My Free Algebra?, a blog post.
Adjoint functors
Category theory
Homotopy theory |
https://en.wikipedia.org/wiki/Switchyard%20reactor | In an electric power transmission grid system, switchyard reactors are large inductors installed at substations to help stabilize the power system.
For transmission lines, the space between the overhead line and ground forms a capacitance in parallel with the transmission line, which causes the voltage to increase with distance along the line. To offset the capacitive effect of the transmission line and to regulate the voltage and reactive power of the power system, reactors are connected either at the line terminals or at the midpoint, thereby improving the voltage profile of the transmission line.
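To make the capacitive effect concrete, here is a back-of-the-envelope sketch of line charging and shunt-reactor sizing. All figures (voltage level, line length, per-km capacitance, compensation fraction) are assumed typical-order values, not from the article.

```python
import math

# Assumed illustrative figures: a 400 kV, 300 km overhead line with
# roughly 13 nF/km of shunt capacitance per phase, 50 Hz system.
V_LL = 400e3              # line-to-line voltage, volts
length_km = 300.0
c_per_km = 13e-9          # farads per km per phase
f = 50.0                  # hertz

C = c_per_km * length_km
Qc = V_LL**2 * 2 * math.pi * f * C        # three-phase charging power, var
print(f"line charging: {Qc/1e6:.0f} MVAr")

# A shunt reactor is commonly sized to absorb a large fraction of the
# charging power; 60% compensation is an assumption for this sketch.
Q_reactor = 0.6 * Qc
X_reactor = V_LL**2 / Q_reactor           # required reactance of the bank, ohms
print(f"reactor: {Q_reactor/1e6:.0f} MVAr, X = {X_reactor:.0f} ohms")
```

The three-phase charging power V_LL² ωC is what drives the voltage rise on a lightly loaded line; the shunt reactor absorbs part of it so the receiving-end voltage stays within limits.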
In large systems with many generators connected in parallel, it may be necessary to use a series reactor to prevent excessively large current flow during a short circuit; this protects transmission line conductors and switching apparatus from damage due to high currents and forces produced during a short circuit.
A shunt reactor is connected in parallel with a transmission line or other load. A series reactor is connected between a load and source.
Bus reactors
A bus reactor is an air-core or oil-filled inductor connected between two buses or two sections of the same bus to limit voltage transients on either bus. It is installed on a bus to maintain system voltage when the load on the bus changes. It adds inductance to the system to offset the capacitance of the line.
Line reactors
A line reactor is placed in line at the point of use or just after a transformer to maintain a stable amperage to the user. When a line is disconnected from the system, the line reactor is also disconnected from the system. Line reactors are often used to compensate line capacitance, mitigate voltage transients due to switching, and to limit fault currents, especially in case of underground transmission lines.
A bus reactor and a line reactor are interchangeable as long as they are rated for the same voltage; the choice between them depends on the substation's physical layout and bus configuration.
Shunt reactor |
https://en.wikipedia.org/wiki/Resource%20and%20Energy%20Economics | Resource and Energy Economics is a quarterly peer-reviewed academic journal covering energy economics and environmental economics published by Elsevier. It was established in 1978 as Resources and Energy and obtained its current title in 1993. The editors-in-chief are R.D. Horan (Michigan State University) and D. van Soest (Tilburg University). The journal was founded by University of Chicago economist George S. Tolley, and Tolley continues to serve as an honorary editor.
Abstracting and indexing
The journal is abstracted and indexed in ABI/Inform, Engineering Index, Geosystems, INSPEC, Journal of Economic Literature, RePEc, Scopus, Current Contents/Social & Behavioral Sciences, and the Social Sciences Citation Index. According to the Journal Citation Reports, the journal has a 2012 impact factor of 1.495.
See also
The Energy Journal
Energy Economics
References
External links
Quarterly journals
Elsevier academic journals
Energy economics
Economics journals
Energy and fuel journals
Academic journals established in 1978
English-language journals
Hybrid open access journals |
https://en.wikipedia.org/wiki/List%20of%20text%20mining%20software | Text mining computer programs are available from many commercial and open source companies and sources.
Commercial
Angoss – Angoss Text Analytics provides entity and theme extraction, topic categorization, sentiment analysis and document summarization capabilities via the embedded
AUTINDEX – is a commercial text mining software package based on sophisticated linguistics by IAI (Institute for Applied Information Sciences), Saarbrücken.
DigitalMR – social media listening & text+image analytics tool for market research.
DiscoverText - online tools for archiving, searching and sorting web-based text from sources, including social media and public comments from a wide range of sources.
FICO Score – leading provider of analytics.
General Sentiment – Social Intelligence platform that uses natural language processing to discover affinities between the fans of brands with the fans of traditional television shows in social media. Stand alone text analytics to capture social knowledge base on billions of topics stored to 2004.
IBM LanguageWare – the IBM suite for text analytics (tools and Runtime).
IBM SPSS – provider of Modeler Premium (previously called IBM SPSS Modeler and IBM SPSS Text Analytics), which contains advanced NLP-based text analysis capabilities (multi-lingual sentiment, event and fact extraction), that can be used in conjunction with Predictive Modeling. Text Analytics for Surveys provides the ability to categorize survey responses using NLP-based capabilities for further analysis or reporting.
Inxight – provider of text analytics, search, and unstructured visualization technologies. (Inxight was bought by Business Objects that was bought by SAP AG in 2008).
Language Computer Corporation – text extraction and analysis tools, available in multiple languages.
Lexalytics – provider of a text analytics engine used in Social Media Monitoring, Voice of Customer, Survey Analysis, and other applications. Salience Engine. The software provides the uniq |
https://en.wikipedia.org/wiki/Benzyl%20cinnamate | Benzyl cinnamate is the chemical compound which is the ester derived from cinnamic acid and benzyl alcohol.
Natural occurrence
Benzyl cinnamate occurs in Balsam of Peru and Tolu balsam, in Sumatra and Penang benzoin, and as the main constituent of copaiba balsam. It is used as an ingredient in the medicated cream product Sudocrem.
Synthesis
Benzyl cinnamate can be prepared by heating benzyl chloride and excess sodium cinnamate in water to 100–115 °C or by heating sodium cinnamate with an excess of benzyl chloride in the presence of diethylamine.
Uses
Benzyl cinnamate is used in heavy oriental perfumes and as a fixative. It is used as a flavoring agent.
It is used pharmaceutically as an antibacterial and antifungal.
References
External links
Benzyl cinnamate at National Library of Medicine's Toxicology Data Network
Cinnamate esters
Perfume ingredients
Flavors
Benzyl esters |
https://en.wikipedia.org/wiki/Valleytronics | Valleytronics (from valley and electronics) is an experimental area in semiconductors that exploits local extrema ("valleys") in the electronic band structure. Certain semiconductors have multiple "valleys" in the electronic band structure of the first Brillouin zone, and are known as multivalley semiconductors. Valleytronics is the technology of control over the valley degree of freedom, a local maximum/minimum on the valence/conduction band, of such multivalley semiconductors.
Details
The term was coined in analogy to spintronics. While in spintronics the internal degree of freedom of spin is harnessed to store, manipulate and read out bits of information, the proposal for valleytronics is to perform similar tasks using the multiple extrema of the band structure, so that the information of 0s and 1s would be stored as different discrete values of the crystal momentum.
Valleytronics may also refer to other forms of quantum manipulation of valleys in semiconductors, including quantum computation with valley-based qubits, valley blockade and other forms of quantum electronics. The first experimental evidence of the theoretically predicted valley blockade (which completes the set with Coulomb charge blockade and Pauli spin blockade) was observed in a single-atom-doped silicon transistor.
Several theoretical proposals and experiments were performed in a variety of systems, such as graphene, few-layer phosphorene, some transition metal dichalcogenide monolayers, diamond, bismuth, silicon, carbon nanotubes, aluminium arsenide and silicene.
References
External links
Matthew Francis: "Experiments hint at a new type of electronics: valleytronics" at Ars Technica
Source of the above: Zeng, H., Dai, J., Yao, W., et al. "Valley polarization in MoS2 monolayers by optical pumping". Nature Nanotechnology, 7 490–493 (August 2012). .
Quantum mechanics
Semiconductors |
https://en.wikipedia.org/wiki/Tango%20%28platform%29 | Tango (formerly named Project Tango, while in testing) was an augmented reality computing platform, developed and authored by the Advanced Technology and Projects (ATAP), a skunkworks division of Google. It used computer vision to enable mobile devices, such as smartphones and tablets, to detect their position relative to the world around them without using GPS or other external signals. This allowed application developers to create user experiences that include indoor navigation, 3D mapping, physical space measurement, environmental recognition, augmented reality, and windows into a virtual world.
The first product to emerge from ATAP, Tango was developed by a team led by computer scientist Johnny Lee, a core contributor to Microsoft's Kinect. In an interview in June 2015, Lee said, "We're developing the hardware and software technologies to help everything and everyone understand precisely where they are, anywhere."
Google produced two devices to demonstrate the Tango technology: the Peanut phone and the Yellowstone 7-inch tablet. More than 3,000 of these devices had been sold as of June 2015, chiefly to researchers and software developers interested in building applications for the platform. In the summer of 2015, Qualcomm and Intel both announced that they were developing Tango reference devices as models for device manufacturers who use their mobile chipsets.
At CES, in January 2016, Google announced a partnership with Lenovo to release a consumer smartphone during the summer of 2016 to feature Tango technology marketed at consumers, noting a less than $500 price-point and a small form factor below 6.5 inches. At the same time, both companies also announced an application incubator to get applications developed to be on the device on launch.
On 15 December 2017, Google announced that they would be ending support for Tango on March 1, 2018, in favor of ARCore.
Overview
Tango was different from other contemporary 3D-sensing computer vision products, in th |
https://en.wikipedia.org/wiki/Product%20change%20notification | A product change notification (PCN) is a document issued by a manufacturer to inform customers about a change to a mass-produced product or its manufacturing process. In the semi-conductor industry, the JEDEC standard J-STD-046 describes the requirements for product change notifications and examples of types of changes that should be notified.
See also
End-of-life (product)
Last order date
References
Product management
Software release |
https://en.wikipedia.org/wiki/Last%20order%20date | Last order date (LOD) is the date before which customers can buy a product. After this date, its mainstream support has been ended. This is part of the product lifecycle, as specified in JEDEC standards.
See also
Product change notification
End-of-life (product)
End of life announcement
References
Product lifecycle management
Software release |
https://en.wikipedia.org/wiki/Enchanting%20%28programming%20language%29 | Enchanting is a free and open-source cross-platform educational programming language designed to program Lego Mindstorms NXT robots. It is primarily developed by the Southern Alberta Robotics Enthusiasts group in the province of Alberta, Canada, and runs on Mac OS X, Windows, and experimentally on Linux devices. Version 0.2 of Enchanting has been available since 2013.
Technology
Its predecessor, the 1998 Robotics Invention System, was developed by the Scratch developer team led by Mitch Resnick at MIT Media Lab. Based on BYOB, which is developed by the University of California, Berkeley, the current version of Enchanting runs on Windows XP, Windows Vista, Windows 7 and Windows 8 (but not Windows 8 RT); on Mac OS X it runs on version 10.4 and newer; and on Linux it runs on Ubuntu version 10.10.
Educational resources, use and events
It has been used in secondary-to-tertiary computer science program at Monash University in Australia, where an interactive PDF book for use on computer or iPad, titled Robotics with Enchanting and LEGO® NXT is available for free download. Most recent SABRE Games, organized in 2013 by Southern Alberta Robotics Enthusiasts group, consisted of three disciplines: Tug Of War, where two robots are tied together with a string and each tries to pull its opponent over the center line; Sumo, where two robots are placed in a sumo ring and each tries to find and push its opponent out without going out of the ring itself; and Parade, where robots follow a line trying not to crash into the robot in front.
References
External links
Home page, enchanting.robotclub.ab.ca, Canada
Robotics with Enchanting and LEGO® NXT: A Project Based Introduction to Programming, Australia
Educational programming languages
Robot programming languages |
https://en.wikipedia.org/wiki/Annihilation%20%28VanderMeer%20novel%29 | Annihilation is a 2014 novel by Jeff VanderMeer. It is the first in a series of three books called the Southern Reach Trilogy. The book describes a team of four women (a biologist, an anthropologist, a psychologist, and a surveyor) who set out into an area known as Area X. The area is abandoned and cut off from the rest of civilization. They are the 12th expedition; the previous expeditions have been fraught with disappearances, suicides, aggressive cancers, and mental trauma. The novel won the 2014 Nebula Award for Best Novel and the 2014 Shirley Jackson Award for best novel.
A film based on the novel, starring Natalie Portman, was released by Paramount Pictures on February 23, 2018.
Background
The inspiration for Annihilation and the Southern Reach Trilogy was a hike through St. Marks National Wildlife Refuge in northwestern Florida. Many of the animals and vegetation that VanderMeer has seen on this hike over the past 17 years appear in the novel. He has said that someday he hopes to do a "Weird Nature" anthology as well.
In March 2014, as part of a piece on VanderMeer and Annihilation, VanderMeer visited the St. Marks Lighthouse that inspired one of the settings in Annihilation.
Plot summary
A team of four women (a fifth having abandoned the team before entering) crosses the border into an uninhabited area known as "Area X", an unspecified coastal location that has been closed to the public for three decades. The group comprises the 12th expedition into Area X and consists of a biologist, an anthropologist, a psychologist, and a surveyor, none of whom are ever identified by name. The story is told through the biologist's field journal, written near the end of the expedition. It is revealed that the biologist's husband was part of the preceding 11th expedition, from which he had returned unexpectedly, showing up in their kitchen without any recollection of how he got there. The other members of the 11th expedition had shown up similarly, and a few months l |
https://en.wikipedia.org/wiki/Quotient%20type | In the field of type theory in computer science, a quotient type is a data type that respects a user-defined equality relation. A quotient type defines an equivalence relation on elements of the type: for example, we might say that two values of the type Person are equivalent if they have the same name; formally, p1 == p2 if p1.name == p2.name. In type theories that allow quotient types, an additional requirement is made that all operations must respect the equivalence between elements. For example, if f is a function on values of type Person, it must be the case that for any two Persons p1 and p2, if p1 == p2 then f(p1) == f(p2).
Quotient types are part of a general class of types known as algebraic data types. In the early 1980s, quotient types were defined and implemented as part of the Nuprl proof assistant, in work led by Robert L. Constable and others. Quotient types have been studied in the context of Martin-Löf type theory, dependent type theory, higher-order logic, and homotopy type theory.
Definition
To define a quotient type, one typically provides a data type together with an equivalence relation on that type, for example, Person // ==, where == is a user-defined equality relation. The elements of the quotient type are equivalence classes of elements of the original type.
Quotient types can be used to define modular arithmetic. For example, if Integer is a data type of integers, a relation ~ can be defined by saying that x ~ y if the difference x − y is even. We then form the type of integers modulo 2:
Integer // ~
The operations on integers, + and −, can be proven to be well-defined on the new quotient type.
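To make the idea concrete, the modular-arithmetic quotient above can be sketched in plain Python (the Quotient class and mod2 normalizer below are hypothetical illustrations, not part of any proof assistant's API): each value is collapsed to a canonical representative, so equality on the quotient, and any function that reads only the representative, is automatically well-defined.

```python
class Quotient:
    """Value of a quotient type, represented by a canonical representative."""

    def __init__(self, value, normalize):
        self.repr = normalize(value)  # collapse to the canonical representative

    def __eq__(self, other):
        return isinstance(other, Quotient) and self.repr == other.repr

    def __hash__(self):
        return hash(self.repr)


def mod2(n):
    """Normalizer for Integer // ~, where x ~ y iff x - y is even."""
    return n % 2


# 3 ~ 7 (their difference, 4, is even), so the two values are identified.
assert Quotient(3, mod2) == Quotient(7, mod2)
assert Quotient(2, mod2) != Quotient(5, mod2)


def parity_name(q):
    """Well-defined on the quotient: it only inspects the representative."""
    return "even" if q.repr == 0 else "odd"
```

A function such as parity_name respects the equivalence by construction, which is an informal analogue of the well-definedness obligation described above.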
Variations
In type theories that lack quotient types, setoids (sets explicitly equipped with an equivalence relation) are often used instead. However, unlike with setoids, many type theories may require a formal proof that any function defined on a quotient type is well-defined.
Properties
Quot |
https://en.wikipedia.org/wiki/Cosheaf | In topology, a branch of mathematics, a cosheaf with values in an ∞-category C that admits colimits is a functor F from the category of open subsets of a topological space X (more precisely its nerve) to C such that
(1) The value of F on the empty set, F(∅), is the initial object.
(2) For any increasing sequence U₁ ⊆ U₂ ⊆ ⋯ of open subsets with union U, the canonical map colim F(Uᵢ) → F(U) is an equivalence.
(3) F(U ∪ V) is the pushout of F(U ∩ V) → F(U) and F(U ∩ V) → F(V).
The basic example is U ↦ C∗(U; A), where the right-hand side is the singular chain complex of U with coefficients in an abelian group A.
Example: If f is a continuous map, then is a cosheaf.
See also
sheaf (mathematics)
Notes
References
Algebraic topology
Category theory
Sheaf theory |
https://en.wikipedia.org/wiki/Surespot | Surespot was a free open-source instant messaging application for Android and iOS with a focus on privacy and security. It was shut down on July 31, 2022.
Features
The application supported the sending of text, pictures, audio messages (in the past only after an in-app purchase), and emoji icons. It also supported the deletion of messages from the receiving device and allowed user blocking. There was no support for group messages or for sending files other than photos. Surespot provided offline backup via iTunes (PC or Mac) on the iOS version, or to local device storage on the Android version.
For secure communication, Surespot used end-to-end encryption by default. 256-bit AES-GCM encryption was used, with keys created with 521-bit ECDH.
App users could use multiple identities, for instance for private or business use.
Surespot was donationware.
Reception
As of November 4, 2014, Surespot had a score of 5 out of 7 points on the Electronic Frontier Foundation secure messaging scorecard. It had received points for having communications encrypted in transit, having communications encrypted with keys the provider doesn't have access to (end-to-end encryption), making it possible for users to independently verify their correspondent's identities, having its code open to independent review (open-source), and for having its security design well-documented. It was missing points because past communications were not secured if the encryption keys were stolen (no forward secrecy) and because there had not been a recent independent security audit.
Controversy
In May 2015, Channel 4 News published an investigation in which they alleged that "at least 115 ISIS-linked people" appeared to have used Surespot between November 2014 and May 2015. In June 2015, a Surespot user wrote a blog post about how the Surespot developers had stopped responding to his repeated questions regarding "governmental demands for information", leading to the user alleging that the Surespot develope |
https://en.wikipedia.org/wiki/Order-7%20heptagrammic%20tiling | In geometry, the order-7 heptagrammic tiling is a tiling of the hyperbolic plane by overlapping heptagrams.
Description
This tiling is a regular star-tiling, and has Schläfli symbol of {7/2,7}. The heptagrams forming the tiling are of type {7/2}. The overlapping heptagrams subdivide the hyperbolic plane into isosceles triangles, 14 of which form each heptagram.
Each point of the hyperbolic plane that does not lie on a heptagram edge belongs to the central heptagon of one heptagram, and is in one of the points of exactly one other heptagram. The winding number of each heptagram around its points is one, and the winding number around the central heptagon is two, so adding these two numbers together, each point of the plane is surrounded three times; that is, the density of the tiling is 3.
In the Euclidean plane, a heptagram of type {7/2} would have angles of 3π/7 at its vertices, but in the hyperbolic plane heptagrams can have the sharper vertex angle 2π/7 that is needed to make exactly seven other heptagrams meet up at the center of each heptagram of the tiling.
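As a supplementary note (standard star-polygon geometry, not spelled out in the article itself), the vertex angles follow from the interior-angle formula for a star polygon {p/q}:

```latex
\theta_{\{p/q\}} = \pi\left(1 - \frac{2q}{p}\right),
\qquad
\theta_{\{7/2\}} = \pi\left(1 - \frac{4}{7}\right) = \frac{3\pi}{7}.
```

In the hyperbolic plane the angle can be made smaller, and the tiling requires seven heptagrams to fit around each meeting point, forcing 7θ = 2π, that is, θ = 2π/7.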
Related tilings
It has the same vertex arrangement as the regular order-7 triangular tiling, {3,7}. The full set of edges coincides with the edges of a heptakis heptagonal tiling. The valence-6 vertices in this tiling are false vertices in the heptagrammic one, caused by crossed edges.
It is related to a Kepler-Poinsot polyhedron, the small stellated dodecahedron, {5/2,5}, which is a polyhedron and a density-3 regular star-tiling on the sphere:
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
See also
External links
Hyperbolic tilings
Isogonal tilings
Isohedral tilings
Regular tilings
Heptagrammic tilings
Order-7 tilings |
https://en.wikipedia.org/wiki/Heptagrammic-order%20heptagonal%20tiling | In geometry, the heptagrammic-order heptagonal tiling is a regular star-tiling of the hyperbolic plane. It has Schläfli symbol of {7,7/2}. The vertex figures are heptagrams of type {7/2}. The heptagonal faces overlap with density 3.
Related tilings
It has the same vertex arrangement as the regular order-7 triangular tiling, {3,7}. The full set of edges coincides with the edges of a heptakis heptagonal tiling.
It is related to a Kepler-Poinsot polyhedron, the great dodecahedron, {5,5/2}, which is a polyhedron and a density-3 regular star-tiling on the sphere (resembling a regular icosahedron in this state, similarly to how this tessellation resembles the order-7 triangular tiling):
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Heptagonal tilings
Hyperbolic tilings
Isogonal tilings
Isohedral tilings
Regular tilings
Heptagrammic-order tilings |
https://en.wikipedia.org/wiki/List%20of%20cosmological%20computation%20software | This list of cosmological computation software catalogs the tools and programs used by scientists in cosmological research.
In the past few decades, accelerating technological evolution has profoundly enhanced astronomical instrumentation, enabling more precise observations and expanding the breadth and depth of data collection by several orders of magnitude. Simultaneously, the exponential growth in computational power has enabled computer simulations that reveal details with unprecedented resolution and accuracy. Many advanced methods and computational software packages are developed every year for performing computer simulations of the cosmos and for analyzing data from both cosmological experiments and simulations. This software is widely used by researchers across the globe, in many fields and topics of cosmology.
The computational software used in cosmology can be classified into the following major classes:
Cosmological Boltzmann codes: These codes are used for calculating the theoretical power spectrum for a given set of cosmological parameters, whether from the standard ΛCDM model or its derivatives. Some of the most widely used CMB Boltzmann codes are CMBFAST, CAMB, CMBEASY, CLASS, and CMBAns.
Cosmological parameter estimators: Parameter estimation codes are used for calculating the best-fit parameters from observational data. Ready-to-use codes available for this purpose include CosmoMC, AnalyzeThis, and SCoPE.
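As a toy illustration of the parameter-estimation step, the sketch below fits one parameter to synthetic data by minimizing chi-square over a grid. This is a generic, assumed example; it is not the algorithm of CosmoMC, AnalyzeThis, or SCoPE, which use Markov chain Monte Carlo and related sampling methods.

```python
import random

def model(x, a):
    """Hypothetical one-parameter model, standing in for a power spectrum."""
    return a * x * x

random.seed(0)
true_a, sigma = 2.0, 0.5
xs = [0.1 * i for i in range(1, 50)]
# Synthetic "observed" data: the model evaluated at true_a plus Gaussian noise.
data = [model(x, true_a) + random.gauss(0.0, sigma) for x in xs]

def chi_square(a):
    """Goodness of fit of parameter value a against the observed data."""
    return sum((d - model(x, a)) ** 2 / sigma ** 2 for x, d in zip(xs, data))

# Brute-force grid search for the best-fit parameter (real codes sample the
# posterior with MCMC instead of scanning a grid).
grid = [1.0 + 0.01 * k for k in range(201)]  # candidate values in [1.0, 3.0]
best_a = min(grid, key=chi_square)
```

With this much synthetic data the recovered best_a lands close to the true value of 2.0.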
Newtonian cosmological simulation codes
GADGET
GADGET, short for "GAlaxies with Dark matter and Gas intEracT", is a code written in C++ for cosmological N-body/smoothed-particle hydrodynamics (SPH) simulations on massively parallel computers with distributed memory. Its first version was developed by the German astrophysicist Volker Springel and was published in 2000. It was followed by two more official public versions, with GADGET-2 released in 2005 and GADGET-4 released in 2020, whi
https://en.wikipedia.org/wiki/PragmaDev%20Studio | PragmaDev Studio is a modeling and testing software tool introduced by PragmaDev in 2002, dedicated to the specification of communicating systems. It was initially called Real Time Developer Studio, or RTDS. Its primary objective was to support the SDL-RT modeling technology. Since V5.0, launched on October 7, 2015, RTDS has been called PragmaDev Studio, and it is organized in four independent modules: Specifier, Developer, Tester, and Tracer. V5.1, launched on November 29, 2016, introduced a freemium licensing model.
Features
Specification and Description Language
The Specification and Description Language (SDL) is a modeling language standardized by ITU-T to describe communicating systems. SDL is graphical but contains an action language with a semantics of execution, making SDL models executable. SDL is considered formal because it is complete and non-ambiguous. SDL-RT is a variant of SDL in which the action language is replaced by C or C++ instructions. SDL-RT is considered semi-formal because it mixes SDL with code. ITU-T has standardized a UML profile based on SDL, making any SDL tool, by extension, a sort of UML tool.
Simulation
PragmaDev Specifier embeds an SDL simulator that behaves like a model debugger. It is possible to set breakpoints graphically and to view variables and pending timers. During execution, a live trace is generated based on the Message Sequence Chart ITU-T standard.
Code generation (compiler)
PragmaDev Studio can generate C or C++ code from an SDL model, and PragmaDev Developer can generate C or C++ code from an SDL-RT model. The generated code can be adapted to any real-time operating system or scheduler. The tool offers a number of integrations with debuggers such as gdb, so that users feel they are still debugging the model, not the generated code.
Model checking
PragmaDev Studio can export the SDL model to different formats such as IF, FIACRE, or XLIA in order to verify the model in third party tools such as IFx from Verimag, TINA from LAAS, or |
https://en.wikipedia.org/wiki/Varenye | Varenye is a popular whole-fruit preserve, widespread in Eastern Europe (Russia, Ukraine, Belarus) as well as the Baltic region. It is made by cooking berries, other fruits, or more rarely nuts, vegetables, or flowers, in sugar syrup. In some traditional recipes, other sweeteners such as honey or treacle are used instead of or in addition to sugar.
Varenye is similar to jam except the fruits are not macerated, and no gelling agent is added. It is characterized by a thick but transparent syrup having the natural colour of the fruits.
Etymology, translations, and cultural references
Varenye is an old Slavic word which is used in East Slavic languages in a more general sense to refer to any type of sweet fruit preserve. The word shares common etymological roots with the verbs denoting cooking, boiling, brewing, or stewing.
In literary translations, especially of children's books, into Russian, the term is often used to replace less-common loanwords, such as jam, confiture or marmalade. Examples are the translations of Alice's Adventures in Wonderland, Harry Potter, The Adventures of Tom Sawyer, and the animated movies about Karlsson-on-the-Roof.
The same is true when translating from Russian. For instance, the making of raspberry varenye is described in Leo Tolstoy's novel Anna Karenina (VI-2). In her classic translation, Constance Garnett refers to the activity as "jam-making".
In the popular Soviet children's book A Tale About a War Secret, About the Boy Nipper-Pipper, and His Word of Honour by Arkady Gaidar, the antihero Little Baddun betrays his friends for "a barrel of varenye and a basket of biscuits" (again, in the English translation, jam is used instead of varenye). This phrase became an idiomatic expression for betrayal or selling out in Russian, similar to thirty pieces of silver.
Preparation
The making of varenye requires a careful balance between cooking, or sometimes steeping in the hot sugar mixture for just enough time to allow |
https://en.wikipedia.org/wiki/HHVM | HipHop Virtual Machine (HHVM) is an open-source virtual machine based on just-in-time (JIT) compilation that serves as an execution engine for the Hack programming language. By using the principle of JIT compilation, Hack code is first transformed into intermediate HipHop bytecode (HHBC), which is then dynamically translated into x86-64 machine code, optimized, and natively executed. This contrasts with PHP's usual interpreted execution, in which the Zend Engine transforms PHP source code into opcodes that serve as a form of bytecode, and executes the opcodes directly on the Zend Engine's virtual CPU.
HHVM is developed by Meta, with the project's source code hosted on GitHub; it is licensed under the terms of the PHP License and Zend License.
Overview
HHVM was created as the successor to the HipHop for PHP (HPHPc) PHP execution engine, a PHP-to-C++ transpiler also created by Facebook. Based on the experience gained and aiming to solve the issues introduced by HPHPc, Meta decided in early 2010 to create a JIT-based PHP virtual machine. Issues associated with HPHPc included reaching a plateau for further performance improvements, a fundamental inability to support all features of the PHP language, and difficulties arising from specific time- and resource-consuming development and deployment processes. In Q1 2013, the production version of the facebook.com website stopped using HPHPc and switched to HHVM.
Following the JIT compilation principle, HHVM first converts the executed code into an intermediate language, the high-level bytecode HHBC. HHBC is a bytecode format created specifically for HHVM, appropriate for consumption by both interpreters and just-in-time compilers. Next, HHVM dynamically ("just-in-time") translates the HHBC into x86-64 machine code, optimized through dynamic analysis of the translated bytecode. Finally, it executes the x86-64 machine code. As a result, HHVM has certain similarities to the virtual machines used by other programm |
https://en.wikipedia.org/wiki/Carl%20S.%20Herz | Carl Samuel Herz (10 April 1930 – 1 May 1995) was an American-Canadian mathematician, specializing in harmonic analysis. His name is attached to the Herz–Schur multiplier. He held professorships at Cornell University and McGill University, where he was Peter Redpath Professor of Mathematics at the time of his death.
Education and career
Herz received his bachelor's degree from Cornell University in 1950 and continued on as a mathematics graduate student at Princeton University. There he received a Ph.D. under the supervision of Salomon Bochner in 1953 with the dissertation "Bessel Functions of Matrix Argument". According to Tom H. Koornwinder, Herz's dissertation (published in the Annals of Mathematics in May 1955) "was a pioneering paper in the field of special functions in several variables associated with Lie groups and with root systems." Herz returned to Cornell as an instructor, rising in rank to assistant professor in 1955, associate professor in 1958, and full professor in 1963. He remained at Cornell until 1969. During the academic year 1969–1970 he worked at Brandeis University and then in 1970 joined the faculty of McGill University as full professor, where he remained until his death in 1995. During the academic year 1962–1963 Herz was a Sloan Fellow at Université de Paris-Sud at Orsay, where he established close ties with mathematicians there that led to frequent academic visits at Orsay of a month or two each year. In the academic years 1957–1958 and 1976–1977 he was a visiting scholar at the Institute for Advanced Study.
Herz did mathematical research on spectral synthesis, positive-definite functions, Fourier transforms on convex sets, potential theory, Hp, and BMO. According to Nicholas Varopoulos, Herz made contributions "to the theory of symmetric spaces, Lie groups and the heat kernel on these; among other things he succeeded in classifying all faithful representations of Lie groups by contact transformations of a compact manifold."
In 1978 he |
https://en.wikipedia.org/wiki/Dead%20letter%20queue | In message queueing a dead letter queue (DLQ) is a service implementation to store messages that the messaging system cannot or should not deliver. Although implementation-specific, messages can be routed to the DLQ for the following reasons:
The message is sent to a queue that does not exist.
The maximum queue length is exceeded.
The message exceeds the size limit.
The message expires because it reached its TTL (time to live).
The message is rejected by another queue exchange.
The message has been read and rejected too many times.
Routing these messages to a dead letter queue enables analysis of common fault patterns and potential software problems. If a message consumer receives a message that it considers invalid, it can instead forward it to an Invalid Message Channel, allowing a separation between application-level faults and delivery failures.
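The redelivery-then-dead-letter flow can be sketched with in-memory queues. This is an illustrative sketch only; the MAX_ATTEMPTS limit and the deliver handler are assumptions, not the API of any of the brokers listed below.

```python
import queue

MAX_ATTEMPTS = 3                      # assumed redelivery limit
main_q = queue.Queue()
dead_letter_q = queue.Queue()

def deliver(message):
    """Hypothetical consumer that rejects non-string payloads."""
    if not isinstance(message["body"], str):
        raise ValueError("invalid payload")

def consume_once():
    msg = main_q.get()
    try:
        deliver(msg)
    except ValueError:
        msg["attempts"] = msg.get("attempts", 0) + 1
        if msg["attempts"] >= MAX_ATTEMPTS:
            dead_letter_q.put(msg)    # rejected too many times: dead-letter it
        else:
            main_q.put(msg)           # requeue for another delivery attempt

# A poison message cycles through the main queue until it is dead-lettered.
main_q.put({"body": b"not-a-string"})
for _ in range(MAX_ATTEMPTS):
    consume_once()
```

Real brokers implement the same idea declaratively, for example by configuring a dead-letter target and a maximum delivery count on the source queue.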
Queueing systems that incorporate dead letter queues include Amazon EventBridge, Amazon Simple Queue Service, Apache ActiveMQ, Google Cloud Pub/Sub, HornetQ, Microsoft Message Queuing, Microsoft Azure Event Grid and Azure Service Bus, WebSphere MQ, Solace PubSub+, Rabbit MQ, Apache Kafka and Apache Pulsar.
See also
Poison message
Dead letter mail
References
Inter-process communication
Events (computing) |
https://en.wikipedia.org/wiki/List%20of%20birds%20by%20flight%20heights | This is a list of birds by flight height.
Birds by flight height
See also
Organisms at high altitude
List of birds by flight speed
References
Flight heights
Highest flight |
https://en.wikipedia.org/wiki/Solemya | Solemya is a genus of saltwater clams, marine bivalve mollusks in the family Solemyidae, the awning clams. Solemya is the type genus of the family Solemyidae.
Description
The shell valves of species in this genus are fragile and subcylindrical in shape; there are no hinge teeth. The shell has a persistent thin periostracum which extends beyond the valve margins, hence the common name "awning clams".
These clams have chemosynthetic bacterial symbionts that produce their food. The bacteria live within their gill cells, and produce energy by oxidizing hydrogen sulfide, which they then use to fix carbon dioxide via the Calvin cycle. This symbiosis has been best-studied in the Atlantic species S. velum and the Pacific species S. reidi.
Species
Species within the genus Solemya include:
Solemya africana
Solemya atacama
Solemya australis
Solemya borealis
Solemya elarraichensis
Solemya flava
Solemya moretonensis
Solemya notialis
Solemya occidentalis
Solemya panamensis
Solemya parkinsonii
Solemya pervernicosa
Solemya pusilla
Solemya reidi
Solemya tagiri
Solemya terraereginae
Solemya velesiana
Solemya velum
Solemya winkworthi
References
Solemyidae
Bivalve genera
Taxa named by Jean-Baptiste Lamarck
Chemosynthetic symbiosis |
https://en.wikipedia.org/wiki/Mailpile | Mailpile is a free and open-source email client with a focus on privacy and usability. It is a webmail client, albeit one run from the user's computer: a downloaded program launched as a local website.
Features
In the default setup of the program, the user is given a public and a private PGP key, for the purposes of (respectively) receiving encrypted email and then decrypting it. Mailpile uses PGP and stores all locally generated files in encrypted form on disk. The client takes an opportunistic approach to encryption, finding other users that support it and integrating this into the process of sending email.
The program preloads much of its email data into RAM to accelerate search. While search results remain fast even with large amounts of email, the growing store of email data gradually slows down the program's start-up time. This behavior is likely to change in the planned Mailpile version 2.
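The trade-off described here, memory and load time in exchange for fast queries, is typical of in-RAM search indexes. A minimal sketch follows; the data structures and function names are illustrative assumptions, not Mailpile's actual implementation.

```python
from collections import defaultdict

# Inverted index held in RAM: term -> set of message ids containing it.
index = defaultdict(set)

def add_message(msg_id, text):
    """Index a message; loading many messages makes start-up slower."""
    for term in text.lower().split():
        index[term].add(msg_id)

def search(term):
    """A query is a single dictionary lookup, hence fast at any volume."""
    return sorted(index.get(term.lower(), set()))

add_message(1, "PGP keys for encrypted mail")
add_message(2, "meeting notes")
add_message(3, "rotate PGP keys")
```

Every indexed message enlarges the structure that must be rebuilt or loaded at start-up, which is the slowdown the paragraph above describes.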
History
Mailpile started out as a search engine in 2011.
Crowdfunding
The project gained recognition following an Indiegogo crowdfunding campaign, raising $163,192 between August and September 2013. In the middle of the campaign, PayPal froze a large portion of the raised funds, and subsequently released them after Mailpile took the issue to the public on blogs and social media platforms including Twitter.
Releases
Alpha
The first publicly tagged release, 0.1.0, from January 2014 included an original typeface (also by the name of "Mailpile"), UI feedback for encryption and signatures, a custom search engine, integrated spam-filtering support, and localization into around 30 languages.
Alpha II
July 2014. This release introduced encrypted storage of logs, partial native IMAP support, and additional ways for the spam-filtering engine to auto-classify e-mail. The graphical interface was revamped, and a wizard was introduced to help users with account setup.
Beta
Mailpile released a beta version in September 2014.
Beta II
January 2015
|
https://en.wikipedia.org/wiki/Microautophagy | Microautophagy is one of the three common forms of autophagic pathway, but unlike macroautophagy and chaperone-mediated autophagy, it is mediated by direct engulfment of the cytoplasmic cargo (in mammals by lysosomal action, in plants and fungi by vacuolar action). Cytoplasmic material is trapped in the lysosome/vacuole by a random process of membrane invagination.
The microautophagic pathway is especially important for the survival of cells under conditions of starvation or nitrogen deprivation, or after treatment with rapamycin. Microautophagy is generally a non-selective process, but there are three special cases of a selective microautophagic pathway: micropexophagy, piecemeal microautophagy of the nucleus, and micromitophagy, all of which are activated only under specific conditions.
Functions of microautophagy
Microautophagy, together with macroautophagy, is necessary for nutrient recycling under starvation. By degrading lipids incorporated into vesicles, microautophagy also regulates the composition of the lysosomal/vacuolar membrane.
The microautophagic pathway also functions as one of the mechanisms of glycogen delivery into the lysosomes.
This autophagic pathway engulfs multivesicular bodies formed after endocytosis; therefore, it plays a role in membrane protein turnover.
Microautophagy is also connected with organellar size maintenance, composition of biological membranes, cell survival under nitrogen restriction, and the transition pathway from starvation-induced growth arrest to logarithmic growth.
Non-selective microautophagy
The non-selective microautophagic process can be dissected into five distinct steps. The majority of experiments were done on yeast (vacuolar invaginations), but the molecular principles seem to be more general.
Membrane invagination and autophagic tubes formation
Invagination is a constitutive process, but its frequency is dramatically increased during periods of starvation. It is a tubular process by which the autophagic tube is formed.
Formation of the autop |
https://en.wikipedia.org/wiki/Global%20analysis | In mathematics, global analysis, also called analysis on manifolds, is the study of the global and topological properties of differential equations on manifolds and vector bundles. Global analysis uses techniques in infinite-dimensional manifold theory and topological spaces of mappings to classify behaviors of differential equations, particularly nonlinear differential equations. These spaces can include singularities and hence catastrophe theory is a part of global analysis. Optimization problems, such as finding geodesics on Riemannian manifolds, can be solved using differential equations, so that the calculus of variations overlaps with global analysis. Global analysis finds application in physics in the study of dynamical systems and topological quantum field theory.
Journals
Annals of Global Analysis and Geometry
The Journal of Geometric Analysis
See also
Atiyah–Singer index theorem
Geometric analysis
Lie groupoid
Pseudogroup
Morse theory
Structural stability
Harmonic map
References
Further reading
Mathematics 241A: Introduction to Global Analysis
Fields of mathematical analysis
Manifolds |
https://en.wikipedia.org/wiki/Museum%20of%20the%20City%20of%20San%20Francisco | The Museum of the City of San Francisco is a nonprofit museum containing a collection of historic artifacts related to San Francisco. It was founded by Gladys Hansen, who was the city archivist of San Francisco. The executive director is Richard Hansen, Gladys's son.
History
The Museum of the City of San Francisco was founded in 1991 by Gladys Hansen, who had recently retired as the city archivist of San Francisco. It was recognized as the official historical museum of San Francisco by the Board of Supervisors in 1995. The museum had a small exhibit space at The Cannery (a former Del Monte fruit-canning plant that is now a shopping center) until 2000, when it lost its lease. It then had temporary exhibits at Pier 45 (near Fisherman's Wharf) and at San Francisco City Hall.
In February 2002, the Museum of the City of San Francisco merged with the San Francisco Historical Society to create the San Francisco Museum and Historical Society. San Francisco municipal government recognized the newly merged organization as the official historical museum of San Francisco, since it was the successor to the Museum of the City of San Francisco. One of the purposes of the merger of the two organizations was to put together a single proposal to renovate and operate the Old San Francisco Mint as a history museum, which ultimately did not succeed.
Notwithstanding the merger, the Museum of the City of San Francisco's website, operated directly by Gladys Hansen, remained independent and in 2003 renamed itself the Virtual Museum of the City of San Francisco. Hansen's personal research collection of artifacts from the 1906 San Francisco earthquake also remained in her possession. In 2013, it started partnering with the Bethlehem Shipyard Museum on exhibits, and it displayed some of its artifacts in the San Francisco History Museum, near Union Square.
In 2019, the Virtual Museum of the City of San Francisco dropped "Virtual" from its name and reverted to its original name, after the |
https://en.wikipedia.org/wiki/A%20Death%20of%20Honor | A Death of Honor is a science fiction mystery novel by American author Joe Clifford Faust. It was published in 1987 by Del Rey Books.
Plot summary
The novel is set in a crumbling 21st-century America. D. A. Payne, a bioengineer, is the prime suspect when a dead woman turns up in his apartment. He takes on the task of clearing himself, but what he uncovers changes his life.
Background
According to the author, A Death of Honor was originally envisioned as a collaboration between various authors. Having written the first chapter while working on his novel Desperate Measures, Faust decided to finish the book himself when the collaboration stalled. It was his first novel to be published.
Award nominations
1988 – Locus Poll Award, Best First Novel nominee
References
1987 American novels
1987 science fiction novels
American science fiction novels
American mystery novels
Debut science fiction novels
Biological engineering
Dystopian novels
1987 debut novels |
https://en.wikipedia.org/wiki/List%20of%20pickled%20foods | This is a list of pickled foods. Many various types of foods are pickled to preserve them and add flavor. Some of these foods also qualify as fermented foods.
Pickled foods
A
B
C
Champoy – Myrica rubra pickled in salt, sugar, and vinegar from the Philippines
D
E
Encurtido – a pickled vegetable appetizer, side dish and condiment in the Mesoamerican region
F
G
– sometimes referred to as dilly beans
H
J
K
L
M
N
O
Onions
P
– also referred to as pickled pork
Pickled carrot – a carrot that has been pickled in a brine, vinegar, or other solution and left to ferment for a period of time
R
S
T
Turnip – a Lebanese pickle with beetroot for red coloring.
U
W
Z
See also
References
External links
Pickled
Food preservation
Condiments |
https://en.wikipedia.org/wiki/Idiopathic%20granulomatous%20hepatitis | Idiopathic granulomatous hepatitis is a rare medical condition characterised by granulomas in the liver, recurrent fever, myalgia, and fatigue. The condition is not a true hepatitis, and some experts believe it is a variant of sarcoidosis.
References
Ailments of unknown cause
Abdominal pain
Rare diseases
Monocyte- and macrophage-related cutaneous conditions
Autoimmune diseases |
https://en.wikipedia.org/wiki/Android%20TV | Android TV is a smart TV operating system based on Android and developed by Google for television sets, digital media players, set-top boxes, and soundbars. A successor to Google TV, it features a user interface designed around content discovery and voice search, content aggregation from various media apps and services, and integration with other recent Google technologies such as Assistant, Cast, and Knowledge Graph.
The platform was first unveiled in June 2014, and was first made available on the Nexus Player that November. The platform has been adopted as smart TV middleware by companies such as Sony and Sharp, while Android TV products have also been adopted as set-top boxes by a number of IPTV television providers.
A special edition, called Android TV "Operator Tier", is provided to pay television and other service operators that implement Android TV on the device they provide to their subscribers to access media content. In this edition, the operator can customize the home screen and services on the device. This certification streamlines UI management, app launches, and content delivery, empowering operators to deliver unique experiences on Android TV.
History
Android TV was first announced at Google I/O in June 2014, as a successor to the commercially unsuccessful Google TV. The Verge characterized it as being more in line with other digital media player platforms, but leveraging Google's Knowledge Graph project; Chromecast compatibility; a larger emphasis on search; closer ties to the Android ecosystem (including Google Play Store and integration with other Android families such as Android Wear); and native support for video games, Bluetooth gamepads, and the Google Play Games framework. Some attendees received the platform's development kit, the ADT-1; The Information reported that the ADT-1 was based on a scrapped "Nexus TV" launch device that was being developed internally by Google. Google unveiled the first Android TV device, the Nexus Player develo |
https://en.wikipedia.org/wiki/Richard%20Gibbs%20%28biologist%29 | Richard Alexander Gibbs is an Australian geneticist. He is currently the Wofford Cain Chair and Professor of Molecular and Human Genetics at Baylor College of Medicine in Houston, Texas.
In 1996, he founded the Human Genome Sequencing Center at BCM, which was one of five worldwide sites selected to complete the final phase of the Human Genome Project.
References
External links
Richard A. Gibbs, Ph.D. (Department of Molecular and Human Genetics, Baylor College of Medicine)
1950s births
Living people
Australian geneticists
Companions of the Order of Australia
University of Melbourne alumni
Australian expatriates in the United States
Human Genome Project scientists |
https://en.wikipedia.org/wiki/Mining%20pool | In the context of cryptocurrency mining, a mining pool is the pooling of resources by miners, who share their processing power over a network, to split the reward equally, according to the amount of work they contributed to the probability of finding a block. A "share" is awarded to members of the mining pool who present a valid partial proof-of-work. Mining in pools began when the difficulty for mining increased to the point where it could take centuries for slower miners to generate a block. The solution to this problem was for miners to pool their resources so they could generate blocks more quickly and therefore receive a portion of the block reward on a consistent basis, rather than randomly once every few years.
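The idea of a share as a valid partial proof-of-work can be illustrated with a toy sketch. The targets, header, and nonce range below are purely hypothetical choices for illustration; real pools use Bitcoin's 256-bit difficulty targets and full block headers:

```python
import hashlib

# Toy targets (hypothetical): a hash below BLOCK_TARGET solves a block;
# a hash below the much easier SHARE_TARGET earns the miner a share.
BLOCK_TARGET = 2**256 // 1_000_000   # roughly 1 in a million attempts
SHARE_TARGET = 2**256 // 1_000       # roughly 1 in a thousand attempts

def hash_attempt(header: bytes, nonce: int) -> int:
    """Double SHA-256 of header+nonce, read as a big integer (Bitcoin-style)."""
    data = header + nonce.to_bytes(8, "big")
    return int.from_bytes(
        hashlib.sha256(hashlib.sha256(data).digest()).digest(), "big")

def classify(header: bytes, nonce: int) -> str:
    h = hash_attempt(header, nonce)
    if h < BLOCK_TARGET:
        return "block"   # a full block solution is automatically also a share
    if h < SHARE_TARGET:
        return "share"   # partial proof-of-work: credited, but not a block
    return "miss"

# A miner scans nonces and submits everything at least as good as a share.
shares = [n for n in range(50_000) if classify(b"toy-header", n) != "miss"]
print(f"{len(shares)} shares found in 50000 attempts")
```

Because the share target is a thousand times easier than the block target, shares arrive steadily and measure each miner's contributed work, which is what the pool uses to split the reward.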
History
November 2010: Slush launched, becoming the first mining pool.
2011–2013: The era of deepbit, which at its peak held up to 45% of the network hashrate.
2013–2014: Following the introduction of ASICs, and after deepbit failed to support the newer Stratum protocol, GHash.IO replaced deepbit as the largest pool.
2014–2015: F2Pool, which launched in May 2013, overtook GHash.IO to become the largest mining pool.
2016–2018: Rise of Bitmain and its AntPool. Bitmain also controls a few other smaller pools like BTC.com and ViaBTC.
2019–2020: The launch of Poolin. Poolin and F2Pool each held about 15% of the network hashrate at this time period, with smaller pools following.
2020: Binance launches a mining pool, following Huobi and OKEx. Luxor launches a US-based mining pool.
2022: Cruxpool launches the first French mining pool, and PEGA Pool launches the first mining pool focused on being eco-friendly. At the end of summer 2023, however, PEGA Pool announced the closure of its mining operation.
Mining pool share
The share is the principal concept of mining pool operation. A share is a potential block solution: it may be an actual block solution, but it is not necessarily one. For example, suppose a block solution is a number that ends with 10 zeros an |
https://en.wikipedia.org/wiki/Symbiotic%20fermentation | Symbiotic fermentation is a form of fermentation in which multiple organisms (yeasts, acetic acid bacteria, lactic acid bacteria and others) interact in symbiosis in order to produce the desired product. For example, a yeast may produce ethanol, which is then consumed by an acetic acid bacterium. It was described early on as the fermentation of sugars following saccharification in a mixed fermentation process.
History
The earliest mention of the term can be found in a lecture given by Dr. Allan Macfadyen of the Jenner Institute of Preventative Medicine in 1902. Dr. Macfadyen described symbiotic fermentation by noting "a close relationship between the organisms at work, the action of one aiding or modifying the action of the other, whilst both members are more active as a result of the partnership." Fermentative microorganisms have a long history, as seen in the kefir and kumis fermentations of milk by nomadic tribes in Russia, as well as in Japanese koji fermentation (see Aspergillus oryzae).
In 1927, Dr. Aldo Castellani defined symbiotic fermentation as "two microorganisms neither of which alone produces fermentation with gas in certain carbohydrates, may do so when living in symbiosis or when artificially mixed." He based this definition on the observation that ordinary baker's yeast consisted of two or more microorganisms: Saccharomyces and Bacilli. He performed experiments to show that when two different Bacilli species were grown in culture together with maltose as the sugar, gas was produced as a result of symbiotic fermentation. Dr. Castellani also described symbiotic fermentation as a method to distinguish between Bacillus dysentariae Shiga (now Shigella dysentariae Shiga) and B. dysentariae Flexner (now Shigella flexneri) by fermenting each of them with Bacillus morgani (now Morganella morganii) in mannitol. The culture with Flexner would always produce gas and acid, while the culture with Shiga only produced acid. To summarize, one bacterium performs acid fermentat |
https://en.wikipedia.org/wiki/Point-to-point%20encryption | Point-to-point encryption (P2PE) is a standard established by the PCI Security Standards Council. Payment solutions that offer similar encryption but do not meet the P2PE standard are referred to as end-to-end encryption (E2EE) solutions. The objective of P2PE and E2EE is to provide a payment security solution that instantaneously converts confidential payment card (credit and debit card) data and information into indecipherable code at the time the card is swiped, in order to prevent hacking and fraud. It is designed to maximize the security of payment card transactions in an increasingly complex regulatory environment.
The standard
The P2PE Standard defines the requirements that a "solution" must meet in order to be accepted as a PCI-validated P2PE solution. A "solution" is a complete set of hardware, software, gateway, decryption, device handling, etc. Only "solutions" can be validated; individual pieces of hardware such as card readers cannot be validated. It is also a common mistake to refer to P2PE validated solutions as "certified"; there is no such certification.
The determination of whether or not a solution meets the P2PE standard is the responsibility of a P2PE Qualified Security Assessor (P2PE-QSA). P2PE-QSA companies are independent third-party companies who employ assessors that have met the PCI Security Standards Council's requirements for education and experience, and have passed the requisite exam. The PCI Security Standards Council does not validate solutions.
How it works
As a payment card is swiped through a card reading device, referred to as a point of interaction (POI) device, at the merchant location or point of sale, the device immediately encrypts the card information. A device that is part of a PCI-validated P2PE solution uses an algorithmic calculation to encrypt the confidential payment card data. From the POI, the encrypted, indecipherable codes are sent to the payment gateway or processor for decryption. The keys for encrypti |
https://en.wikipedia.org/wiki/9LV | 9LV is a Naval Combat Management System (CMS) from the Swedish company Saab. The 9LV was established when Philips Teleindustri AB (1975 renamed Philips Elektronikindustrier AB), a subsidiary of Philips of the Netherlands, was selected as the supplier of the torpedo and dual purpose gun fire control system including a radar fire control director for the Royal Swedish Navy Norrköping-class torpedo boats.
Prior to the Norrköping class, Philips provided torpedo fire control to and Plejad-class torpedo boats, as well as to and s, and also anti-submarine fire control for the s and . However, the name 9LV was not established until the air defence fire control and radar fire control director was introduced; LV is the Swedish abbreviation for "luftvärn", i.e. air defence.
9LV is currently used on several classes of naval combatants, including the Australian s, the Swedish s, the Canadian s and the Australian ships. It will be used on the Norwegian Coast Guard’s new Jan Mayen-class vessels.
Name etymology
The heritage of the name 9LV can be traced back to the late 1960s and Philips Teleindustri (Järfälla, SE). Philips gave the number nine to the product from Sweden.
LV is the Swedish abbreviation for Ground Based Air Defence (GBAD), in Swedish "Luftvärn".
Often, a specific ship class 9LV configuration is identified by a three digit number, consisting of one complexity digit and a two digit serial or country code. The first digit is defined by the following table.
Naval Combat Management introduction
A Naval Combat Management System (CMS) is the computer system that connects a naval ship's sensors, weapons, data links, support measures and other equipment to the officers and staff performing the tasks in combat through the cycle of the OODA loop. Typical functions include sensor control, sensor data fusion, threat evaluation, weapons assignment, weapons control etc.
9LV Mk1 (1968–1977)
To meet the requirements of the Royal Swedish Navy weapon control system on |
https://en.wikipedia.org/wiki/Quarter%20order-6%20square%20tiling | In geometry, the quarter order-6 square tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of q{4,6}. It is constructed from *3232 orbifold notation, and can be seen as a half symmetry of *443 and *662, and quarter symmetry of *642.
Images
Projections centered on a vertex, triangle and hexagon:
Related polyhedra and tiling
See also
Square tiling
Tilings of regular polygons
List of uniform planar tilings
List of regular polytopes
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Order-6 tilings
Square tilings
Uniform tilings |
https://en.wikipedia.org/wiki/Regular%20scheme | In algebraic geometry, a regular scheme is a locally Noetherian scheme whose local rings are regular everywhere. Every smooth scheme is regular, and every regular scheme of finite type over a perfect field is smooth.
For an example of a regular scheme that is not smooth, see Geometrically regular ring#Examples.
See also
Étale morphism
Dimension of an algebraic variety
Glossary of scheme theory
Smooth completion
References
Algebraic geometry
Scheme theory |
https://en.wikipedia.org/wiki/Direct%20clustering%20algorithm | Direct clustering algorithm (DCA) is a methodology for identification of cellular manufacturing structure within an existing manufacturing shop. The DCA was introduced in 1982 by H.M. Chan and D.A. Milner. The algorithm restructures the existing machine/component (product) matrix of a shop by switching the rows and columns in such a way that the resulting matrix shows component families (groups) with corresponding machine groups. See Group technology. The algorithm can be executed manually, but it was already suitable for the computers of its time.
Procedure
The cellular manufacturing structure consists of several machine groups (production cells) in which corresponding product groups (products with similar technology) are exclusively manufactured. To identify a possible cellular manufacturing structure within an existing manufacturing shop, the DCA methodology roughly provides the following procedure:
Setting up a matrix in which one dimension represents machines, the other products. Every intersection where a product requires a machine is filled with "1"; all others are filled with "0".
The positions of the columns and the order of the rows are then changed. The algorithm provides the rules for switching columns and rows so as to concentrate the matrix cells containing "1" into several groups.
The resulting matrix shows groups of products with corresponding machines aligned by the matrix diagonal.
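Since the exact Chan–Milner switching rules are not reproduced in this text, the sketch below uses a simplified rank-order-style reordering in the same spirit: rows and columns are repeatedly sorted by reading their 0/1 pattern as a binary number. The function name and sample matrix are illustrative only:

```python
def reorder(matrix):
    """Iteratively sort rows, then columns, by reading their 0/1 pattern
    as a binary number, until the arrangement stabilises. This is a
    simplified stand-in for the DCA row/column switching rules."""
    rows = list(range(len(matrix)))
    cols = list(range(len(matrix[0])))

    def row_key(r):  # row pattern read in the current column order
        return sum(matrix[r][c] << i for i, c in enumerate(reversed(cols)))

    def col_key(c):  # column pattern read in the current row order
        return sum(matrix[r][c] << i for i, r in enumerate(reversed(rows)))

    for _ in range(100):  # safety bound; usually converges in a few passes
        new_rows = sorted(rows, key=row_key, reverse=True)
        new_cols = sorted(cols, key=col_key, reverse=True)
        if new_rows == rows and new_cols == cols:
            break
        rows, cols = new_rows, new_cols
    return [[matrix[r][c] for c in cols] for r in rows]

# Machines (rows) x products (columns); "1" = product requires the machine.
# Two latent cells are hidden by the initial row/column ordering.
machine_product = [
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
]
clustered = reorder(machine_product)
for row in clustered:
    print(row)  # block-diagonal groups emerge
```

On this sample, the reordering produces two clean blocks along the diagonal, corresponding to two production cells with their product families.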
The experience
The DCA methodology would give a perfect result in the ideal case where there are no overlapping machines or products between the groups. In most real cases, such overlaps present a further challenge for users of the methodology. "Formation of Machine Cells/ Part Families in Cellular Manufacturing Systems Using an ART-Modified Single Linkage Clustering Approach – A Comparative Study" by M. Murugan and V. Selladurai compares DCA with some other methodologies of the same purpose.
References
External links
Saving Time With Quick R |
https://en.wikipedia.org/wiki/Turbonomic | Turbonomic is a resource-simulation software company headquartered in Boston, MA and owned by IBM. The company was originally named VMTurbo.
Reception
In 2011, Gartner named Turbonomic as a Cool Vendor in Cloud Management. In 2016, Turbonomic was listed as the top product for Virtualization Management in a report by IDG and IT Central Station. Turbonomic has made five appearances to the Inc. 5000, and made the Forbes Cloud 100 four times. In 2020, Fast Company named Turbonomic to their Best Workplaces for Innovators List. The company was also deemed a Vendor to Watch in AIOps by Enterprise Management Associates (EMA) for its combination of abstraction, analytics, and automation engine that continually assures performance of a customer’s applications.
History
The company formed partnerships with Cisco and IBM, entering into OEM agreements to bring Application Resource Management to a larger customer base.
Since its founding in 2008, Turbonomic raised more than $250M from venture capital firms including Bain Capital Ventures and Highland Capital Partners.
The company's product was updated in 2017 for use with cloud computing platforms.
The company was originally named VMTurbo and changed its name to Turbonomic in August 2017.
Turbonomic acquired ParkMyCloud and SevOne in 2019.
IBM acquired Turbonomic on June 17, 2021.
The company's product simulates supply and demand forces in order to efficiently allocate resources such as computing, database, memory and storage.
References
Further reading
External links
2008 establishments in Massachusetts
Business software companies
Cloud computing providers
Virtualization software
Companies based in Boston
American companies established in 2008
Software companies established in 2008
Software companies based in Massachusetts
Defunct software companies of the United States
2021 mergers and acquisitions
IBM acquisitions
IBM subsidiaries |
https://en.wikipedia.org/wiki/IEEE%20Cloud%20Computing | IEEE Cloud Computing is a global initiative launched by IEEE to promote cloud computing, big data and related technologies, and to provide expertise and resources to individuals and enterprises involved in cloud computing.
History
In 2010, the Institute of Electrical and Electronics Engineers (IEEE) sponsored two cloud computing–specific conferences: IEEE CLOUD and IEEE CloudCom. With the success of the two conferences, IEEE Senior Member and IEEE Computer Society past president Steve Diamond began urging the organization to take an active role in the development of cloud computing standards.
In April 2011, with the support of the IEEE Future Directions Committee and funding from the IEEE New Initiatives Committee, IEEE Cloud Computing was launched. The initiative was designed to follow a multi-year plan and includes a focus across multiple tracks: conferences, education, publications, standards, Intercloud Testbed, web portal, marketing, and public relations.
As part of the initiative's launch, two new cloud computing standards development projects were approved: IEEE P2301, Draft Guide for Cloud Portability and Interoperability Profile, and IEEE P2302, Draft Standard for Intercloud Interoperability and Federation (SIIF). With a growing need for greater cloud computing interoperability and federation, IEEE Cloud Computing focused its development activities and resources behind IEEE P2302 standard.
Current work
IEEE Cloud Computing continues to pursue efforts to provide cloud computing standards, advancement of cloud computing technologies, and to educate users on the benefits of cloud computing. As part of this ongoing effort, it offers a variety of activities, products, and services, including the IEEE Cloud Computing portal, conferences and events, continuing education courses, publications, standards, and the IEEE Intercloud Testbed platform for testing cloud computing interoperability.
IEEE Cloud Computing web portal
The IEEE Cloud Computing portal ser |
https://en.wikipedia.org/wiki/Tandy%20Graphics%20Adapter | Tandy Graphics Adapter (TGA, also Tandy graphics) is a computer display standard for the Tandy 1000 series of IBM PC compatibles, which has compatibility with the video subsystem of the IBM PCjr but became a standard in its own right.
PCjr graphics
The Tandy 1000 series began in 1984 as a clone of the IBM PCjr, offering support for existing PCjr software. As a result, its graphics subsystem is largely compatible.
The PCjr, released in 1983, has a graphics subsystem built around IBM's Video Gate Array (not to be confused with the later Video Graphics Array) and an MC6845 CRTC, and it extends the capabilities of the Color Graphics Adapter (CGA), increasing the number of colors in each screen mode. CGA's 2-color mode can be displayed with four colors, and its 4-color mode can be displayed with all 16 colors.
Since the Tandy 1000 was much more successful than PCjr, their shared hardware capabilities became more associated with the Tandy brand than with IBM.
While there is no specific name for the Tandy graphics subsystem (Tandy's documentation calls it the "Video System Logic"), common parlance referred to it as TGA. Where not otherwise stated, information in this article that describes the TGA also applies to the PCjr video subsystem.
While EGA would eventually deliver a superset of TGA graphics on IBM compatibles, software written for TGA is not compatible with EGA cards.
Output capabilities
Tandy Video I / PCjr
Tandy 1000 systems before the Tandy 1000 SL, and the PCjr, have this type of video. It offers several CGA-compatible modes and enhanced modes.
CGA compatible modes:
in 4 colors from a 16 color (4-bit RGBI) hardware palette. Pixel aspect ratio of 1:1.2.
in 2 colors from 16. Pixel aspect ratio of 1:2.4
with pixel font text mode (effective resolution of )
with pixel font text mode (effective resolution of )
Both text modes could themselves be set to display in monochrome, or in 16 colors.
In addition to the CGA modes, it offers:
160×200 with |
https://en.wikipedia.org/wiki/Gordonia%20%28bacterium%29 | Gordonia is a genus of gram-positive, aerobic, catalase-positive bacteria in the Actinomycetota, closely related to the Rhodococcus, Mycobacterium, Skermania, and Nocardia genera. Gordonia bacteria are also motile and non-sporulating. Gordonia is from the same lineage that includes Mycobacterium tuberculosis.
The genus was discovered by Tsukamura in 1971 and named after American bacteriologist Ruth Gordon. Many species are often found in the soil, while other species have been isolated from aquatic environments. Gordonia species are rarely known to cause infections in humans.
Some pathogenic instances of Gordonia have been reported to cause skin and soft tissue infections, including bacteremia and cutaneous infections. Though infections are generally treated with antibiotics, surgical procedures are sometimes used to contain infections. Some investigations have found that 28 °C is the ideal temperature for the growth of Gordonia bacteria. Gordonia species often have high G-C base pair contents in DNA, ranging from 63% to 69%. G-C base pair content levels are generally positively correlated with melting temperature.
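As a quick illustration of the G-C figures above, G-C content is simply the fraction of G and C bases in a DNA sequence. The fragment below is made up for the example, not a real Gordonia sequence:

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

# Hypothetical GC-rich fragment; real Gordonia genomes run roughly 63-69% G-C.
fragment = "GGCACGCTGGCCGACGGCATCGAGGCGCTGACC"
print(f"G-C content: {gc_content(fragment):.1%}")
```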
Some species of Gordonia, such as Gordonia rubripertincta, produce colonies that have a bright orange or orange-red color.
Some strains of Gordonia have recently garnered interest in the biotechnology industry due to their ability to degrade environmental pollutants.
Cases of Pathogenicity
Gordonia bronchialis has occasionally shown pathogenicity, infecting sternal wounds from surgery. However, since G. bronchialis infections can present with minimal and mild symptoms, few reports of G. bronchialis infections have been documented.
Gordonia can infect immunocompetent and immunocompromised individuals.
Environmental Uses
Gordonia species are able to degrade various environmental pollutants, toxins, and other natural compounds that cannot regularly be biodegraded. Two common materials, natural and synthetic isoprene rubber (cis-1,4-polyisoprene) |
https://en.wikipedia.org/wiki/Monomial%20ideal | In abstract algebra, a monomial ideal is an ideal generated by monomials in a multivariate polynomial ring over a field.
A toric ideal is an ideal generated by differences of monomials (provided the ideal is a prime ideal). An affine or projective algebraic variety defined by a toric ideal or a homogeneous toric ideal is an affine or projective toric variety, possibly non-normal.
Definitions and Properties
Let be a field and be the polynomial ring over with n variables .
A monomial in is a product for an n-tuple of nonnegative integers.
The following three conditions are equivalent for an ideal :
is generated by monomials,
If , then , provided that is nonzero.
is torus fixed, i.e., given , then is fixed under the action for all .
We say that is a monomial ideal if it satisfies any of these equivalent conditions.
Given a monomial ideal , is in if and only if every monomial term of is a multiple of one of the .
Proof:
Suppose and that is in . Then , for some .
For all , we can express each as the sum of monomials, so that can be written as a sum of multiples of the . Hence, will be a sum of multiples of monomial terms for at least one of the .
Conversely, let and let each monomial term in be a multiple of one of the in . Then each monomial term in can be factored from each monomial in . Hence is of the form for some , as a result .
The following illustrates an example of monomial and polynomial ideals.
Let then the polynomial is in , since each term is a multiple of an element in , i.e., they can be rewritten as and both in . However, if , then this polynomial is not in , since its terms are not multiples of elements in .
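Because the generators and polynomials are elided in the text above, here is a hypothetical concrete version of the divisibility test, representing each monomial by its tuple of exponents:

```python
from itertools import zip_longest

def divides(m, n):
    """Monomial x^m divides x^n iff m_i <= n_i for every variable."""
    return all(a <= b for a, b in zip_longest(m, n, fillvalue=0))

def in_monomial_ideal(poly_monomials, generators):
    """A polynomial lies in the monomial ideal generated by `generators`
    iff every one of its monomial terms is divisible by some generator."""
    return all(any(divides(g, t) for g in generators) for t in poly_monomials)

# Hypothetical ideal I = <x^2, y^3> in k[x, y]; monomials as (deg_x, deg_y).
I = [(2, 0), (0, 3)]

# f = x^3*y + x*y^4: both terms are multiples of a generator, so f is in I.
assert in_monomial_ideal([(3, 1), (1, 4)], I)

# g = x*y + x*y^4: the term x*y is divisible by neither x^2 nor y^3.
assert not in_monomial_ideal([(1, 1), (1, 4)], I)
```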
Monomial Ideals and Young Diagrams
A monomial ideal can be interpreted as a Young diagram. Suppose , then can be interpreted in terms of the minimal monomials generators as , where and . The minimal monomial generators of can be seen as the inner corners of the Young diagram. The minimal gener |
https://en.wikipedia.org/wiki/CodePeer | CodePeer is a static analysis tool, which identifies constructs that are likely to lead to run-time errors such as buffer overflows, and it flags legal but suspect code, typical of logic errors in Ada programs. All Ada run-time checks are exhaustively verified by CodePeer, using a variant of abstract interpretation. In October 2014, CodePeer was qualified for use in safety-critical contexts as a sound tool for identifying possible run-time errors. CodePeer also produces detailed as-built documentation of each subprogram, including pre- and post-conditions, to help with code review and to ease locating potential bugs and vulnerabilities early.
CodePeer is produced by AdaCore, a computer software company with North American headquarters in New York City and European headquarters in Paris.
See also
Abstract interpretation
Static code analysis
Software testing
Software Security Assurance
List of tools for static code analysis
References
External links
CodePeer product description
AdaCore web site
CodePeer qualification news release
AdaCore's CodePeer developed in partnership with SofCheck
Why is static analysis a challenge? - interview with Michael Friess
Tucker Taft, "Advanced static analysis meets contract-based programming", 2013.
Embedded Computing Design, "Making static analysis a part of code review", 2009.
Static program analysis tools
Java platform software
Development software companies |
https://en.wikipedia.org/wiki/Central%20Institute%20of%20Brackishwater%20Aquaculture | Central Institute of Brackishwater Aquaculture (CIBA) is one of the research institutes under Indian Council of Agricultural Research (ICAR), New Delhi to serve as the nodal agency for catering to the needs of the brackishwater aquaculture research in India. The institute is headquartered at Santhome High Road, Raja Annamalai Puram, Chennai with a research centre at Kakdwip in West Bengal and an experimental field station at Muttukadu, roughly 30 km to the south of Chennai. The institute works under the Ministry of Agriculture, India.
Service profile
The Central Institute of Brackishwater Aquaculture assists small aqua farmers in optimising their finfish and shrimp farming by providing suitable modern technologies. It also offers study courses and research facilities for students, farmers and entrepreneurs. CIBA regularly conducts farmers' meets, training programmes, exhibitions, workshops, and brainstorming sessions.
Mandate of CIBA
CIBA was formed under the Indian Council of Agricultural Research with a mandate to:
conduct research for development of techno-economically viable and sustainable culture systems for finfish and shellfish in brackish water.
act as a repository of information on brackish water fishery resources with a systematic database
undertake transfer of technology through training, education and extension programmes
provide consultancy services
Divisions
The organization has the following research divisions for the development of marine science
Crustacean Culture Division (CCD)
The Division deals with research on crustaceans such as:
Maturation and seed production
Nursery rearing and culture
Area development and impact assessment
Molecular approach in disease diagnostics and species identification
Finfish Culture Division (FCD)
FCD deals with brackish water fish such as Asian Seabass, Mullets, King fish and Milk fish with special emphasis on domestication, production under controlled conditions, spawning, breeding, maturation and broodstoc |
https://en.wikipedia.org/wiki/3-subset%20meet-in-the-middle%20attack | The 3-subset meet-in-the-middle (hereafter shortened MITM) attack is a variant of the generic meet-in-the-middle attack, which is used in cryptology for hash and block cipher cryptanalysis. The 3-subset variant opens up the possibility to apply MITM attacks on ciphers, where it is not trivial to divide the keybits into two independent key-spaces, as required by the MITM attack.
The 3-subset variant relaxes the restriction for the key-spaces to be independent, by moving the intersecting parts of the keyspaces into a subset, which contains the keybits common between the two key-spaces.
History
The original MITM attack was first suggested in an article by Diffie and Hellman in 1977, where they discussed the cryptanalytic properties of DES. They argued that the key size of DES was too small, and that reapplying DES multiple times with different keys could be a solution; however, they advised against double-DES and suggested triple-DES as a minimum, due to MITM attacks. Double-DES is very susceptible to a MITM attack, as DES can easily be split into two subciphers (the first and second DES encryption) with keys independent of one another, thus allowing for a basic MITM attack that reduces the computational complexity from to .
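The double-encryption weakness can be demonstrated on a toy 16-bit cipher (purely hypothetical, not DES; the cipher and key sizes are chosen only for illustration). Brute force over both keys would cost about 2^32 trials, while meeting in the middle costs about 2 × 2^16 encryptions plus a table lookup:

```python
M16 = 1 << 16
A = 40503                      # odd constant, hence invertible mod 2**16
A_INV = pow(A, -1, M16)        # modular inverse (Python 3.8+)

def enc(x, k):                 # toy single encryption
    return ((x ^ k) * A) % M16

def dec(y, k):                 # its inverse
    return (y * A_INV % M16) ^ k

def double_enc(x, k1, k2):
    return enc(enc(x, k1), k2)

def mitm(plain, cipher):
    """Tabulate E_k1(plain) for every k1, then for every k2 check whether
    D_k2(cipher) hits the table; matching pairs are key candidates."""
    forward = {}
    for k1 in range(M16):
        forward.setdefault(enc(plain, k1), []).append(k1)
    return [(k1, k2)
            for k2 in range(M16)
            for k1 in forward.get(dec(cipher, k2), [])]

secret = (0x1234, 0xBEEF)
p1, c1 = 0x0042, double_enc(0x0042, *secret)
candidates = mitm(p1, c1)      # key-reducing phase: secret is among these

# Key-verification phase: a second plaintext/ciphertext pair weeds out
# the false positives left by the first pair.
p2, c2 = 0x3141, double_enc(0x3141, *secret)
survivors = [kp for kp in candidates if double_enc(p2, *kp) == c2]
assert secret in survivors
```

Note that the first pair alone leaves many candidates, which is exactly why the attack needs the key-verification phase described below.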
Many variations have emerged since Diffie and Hellman suggested MITM attacks. These variations either make MITM attacks more effective or allow them to be used in situations where the basic variant cannot. The 3-subset variant was shown by Bogdanov and Rechberger in 2011, and has demonstrated its use in the cryptanalysis of ciphers such as the lightweight block-cipher family KTANTAN.
Procedure
As with general MITM attacks, the attack is split into two phases: a key-reducing phase and a key-verification phase. In the first phase, the domain of key candidates is reduced by applying the MITM attack. In the second phase, the found key candidates are tested on another plain-/ciphertext pair to filter away the wrong key(s).
Key- |
https://en.wikipedia.org/wiki/Partial-matching%20meet-in-the-middle%20attack | Partial-matching is a technique that can be used with a MITM attack. Partial-matching is where the intermediate values of the MITM attack, and , computed from the plaintext and ciphertext, are matched on only a few select bits, instead of on the complete state.
Uses
A limitation with MITM attacks is the number of intermediate values that needs to be stored. In order to compare the intermediate values and , all 's need to be computed and stored first, before each computed can be compared against them.
If the two subciphers identified by the MITM attack both have a sufficiently large subkey, then an infeasible number of intermediate values needs to be stored.
While there are techniques, such as cycle detection algorithms, that allow one to perform a MITM attack without storing either all values of or , these techniques require that the subciphers of the MITM attack are symmetric.
Thus it is a solution that allows one to perform a MITM attack in a situation where the subkeys are of a cardinality just large enough to make the number of temporary values that need to be stored infeasible.
While this allows one to store more temporary values, its use is still limited, as it only allows one to perform a MITM attack on a subcipher whose subkey is a few bits larger. As an example: if only 1/8 of the intermediate value is stored, then the subkey needs to be only 3 bits larger before the same amount of memory is required anyway, since
A feature of partial-matching that is in most cases far more useful is the ability to compare intermediate values computed at different rounds of the attacked cipher. If the diffusion in each round of the cipher is low enough, it may be possible over a span of rounds to find bits in the intermediate states that have not changed with a probability of 1. These bits in the intermediate states can still be compared.
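A minimal sketch of the matching step itself (hypothetical 16-bit intermediate states; which bits are safe to match depends on the cipher's diffusion): the forward table keys on only the low 4 bits, so the table entries shrink, but unrelated states collide on the matched bits and produce extra key candidates for the verification phase.

```python
MATCH_BITS = 4
MASK = (1 << MATCH_BITS) - 1

def partial_match(forward_values, backward_values):
    """forward_values/backward_values map key guesses to 16-bit intermediate
    states; return key pairs whose states agree on the matched low bits."""
    table = {}
    for k1, v in forward_values.items():
        table.setdefault(v & MASK, []).append(k1)   # store only matched bits
    return [(k1, k2)
            for k2, u in backward_values.items()
            for k1 in table.get(u & MASK, [])]

# Toy data: only (1, 10) agrees on the full state, but (2, 11) collides
# in the low 4 bits (0x1234 and 0xFF34 share the low nibble 0x4), so it
# becomes a false positive that key verification must later remove.
fwd = {1: 0x00A7, 2: 0x1234}
bwd = {10: 0x00A7, 11: 0xFF34}
print(partial_match(fwd, bwd))  # [(1, 10), (2, 11)]
```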
The disadvantage of both of these uses is that there will be more false positives for key candidates, which need to be t |
https://en.wikipedia.org/wiki/ULLtraDIMM | The ULLtraDIMM is a solid state storage device from SanDisk that connects flash storage directly onto the DDR3 memory bus. Unlike traditional PCIe Flash Storage devices, the ULLtraDIMM is plugged directly into an industry standard RDIMM memory bus slot in a server.
This design and connection location provides deterministic (consistent) known latency to enable applications to be streamlined for improved performance.
The ULLtraDIMM is compatible with the JEDEC MO-269 DDR3 RDIMM specification.
The ULLtraDIMM supports both 1.35 V and 1.5 V operation at 800–1333 MHz, and 1.5 V operation at 1600 MHz DDR3 transfer rates. DDR3 ECC bits are used to verify the integrity of the data being sent across the memory bus. The ULLtraDIMM will verify that correct ECC is received and, if there are errors, the device driver will re-run the transfer. The CPU treats the ECC from the ULLtraDIMM in the same manner as ECC from a memory DIMM: single-symbol errors are corrected. The DDR3 ECC bits are not stored in the flash array. A separate ECC scheme is used for protecting data in the flash array. Memory interleaving of standard RAM is not affected by the presence of ULLtraDIMMs.
UEFI/BIOS updates are required to properly recognize an ULLtraDIMM in the system as a block device and not halt the bootstrap sequence.
References
Computer peripherals
Computer storage devices
File system management
Non-volatile memory
Solid-state computer storage
Solid-state computer storage media |