https://en.wikipedia.org/wiki/Multifractal%20system | A multifractal system is a generalization of a fractal system in which a single exponent (the fractal dimension) is not enough to describe its dynamics; instead, a continuous spectrum of exponents (the so-called singularity spectrum) is needed.
Multifractal systems are common in nature. They include the length of coastlines, mountain topography, fully developed turbulence, real-world scenes, heartbeat dynamics, human gait and activity, human brain activity, and natural luminosity time series. Models have been proposed in various contexts ranging from turbulence in fluid dynamics to internet traffic, finance, image modeling, texture synthesis, meteorology, geophysics and more. The origin of multifractality in sequential (time series) data has been attributed to mathematical convergence effects related to the central limit theorem that have as foci of convergence the family of statistical distributions known as the Tweedie exponential dispersion models, as well as the geometric Tweedie models. The first convergence effect yields monofractal sequences, and the second convergence effect is responsible for variation in the fractal dimension of the monofractal sequences.
Multifractal analysis is used to investigate datasets, often in conjunction with other methods of fractal and lacunarity analysis. The technique entails distorting datasets extracted from patterns to generate multifractal spectra that illustrate how scaling varies over the dataset. Multifractal analysis has been used to decipher the generating rules and functionalities of complex networks. Multifractal analysis techniques have been applied in a variety of practical situations, such as predicting earthquakes and interpreting medical images.
Definition
In a multifractal system, the behavior around any point x is described by a local power law:

s(x + a) − s(x) ~ a^h(x)

The exponent h(x) is called the singularity exponent, as it describes the local degree of singularity or regularity around the point x.
The ensemble formed by all th |
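The scaling analysis described above can be sketched numerically. The following illustrative example (the binomial cascade measure and the choice p = 0.7 are assumptions for demonstration, not from the text) estimates the mass exponents tau(q), whose Legendre transform gives the singularity spectrum:

```python
import numpy as np

def binomial_cascade(levels, p=0.7):
    """Binomial multiplicative measure on 2**levels cells (a standard
    toy multifractal; the parameter p = 0.7 is an arbitrary choice)."""
    mu = np.array([1.0])
    for _ in range(levels):
        mu = np.concatenate([mu * p, mu * (1 - p)])
    return mu

def mass_exponents(mu, qs):
    """tau(q) from slopes of log Z(q, eps) against log eps, where
    Z(q, eps) = sum over boxes of size eps of (box mass)**q."""
    levels = int(np.log2(len(mu)))
    taus = []
    for q in qs:
        log_eps, log_z = [], []
        for lev in range(1, levels + 1):
            boxes = mu.reshape(2**lev, -1).sum(axis=1)  # coarse-grain
            log_eps.append(-lev * np.log(2.0))
            log_z.append(np.log(np.sum(boxes**q)))
        taus.append(np.polyfit(log_eps, log_z, 1)[0])
    return np.array(taus)

mu = binomial_cascade(12)
tau = mass_exponents(mu, np.array([0.0, 1.0, 2.0]))
# A single exponent would make tau(q) linear in q; curvature of tau(q)
# is the signature of multifractality.
```

For this cascade tau(q) = −log2(p^q + (1 − p)^q), so tau(1) = 0 always and tau(0) = −1 in one dimension; the deviation of tau(2) from linear interpolation between them shows the measure is multifractal.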
https://en.wikipedia.org/wiki/Berkeley%20Timesharing%20System | The Berkeley Timesharing System was a pioneering time-sharing operating system implemented between 1964 and 1967 at the University of California, Berkeley. It was designed as part of Project Genie and marketed by Scientific Data Systems for the SDS 940 computer system.
It was the first commercial time-sharing system which allowed general-purpose user programming, including machine language.
History
In the mid-1960s, most computers used batch processing: one user at a time with no interactivity. A few pioneering systems such as the Atlas Supervisor at the University of Manchester, Compatible Time-Sharing System at MIT, and the Dartmouth Time Sharing System at Dartmouth College required large expensive machines.
Implementation started in 1964 with the arrival of the SDS 930 which was modified slightly, and an operating system was written from scratch.
Students who worked on the Berkeley Timesharing System included undergraduates Chuck Thacker and L. Peter Deutsch and doctoral student Butler Lampson.
The heart of the system was the Monitor (roughly what is now usually called a kernel) and the Executive (roughly what is now usually called a command-line interface).
When the system was working, Max Palevsky, founder of Scientific Data Systems, was at first not interested in selling it as a product. He thought timesharing had no commercial demand. However, as other customers expressed interest, it was put on the SDS pricelist as an expensive variant of the 930.
By November 1967 it was being sold commercially as the SDS 940.
By August 1968 a version 2.0 was announced that was just called the "SDS 940 Time-Sharing System".
Other timesharing systems were generally one-of-a-kind systems, or limited to a single application (such as teaching Dartmouth BASIC). The 940 was the first to allow for general-purpose programming, and sold about 60 units: not large by today's standards, but it was a significant part of SDS' revenues.
One customer was Bolt, Beranek and Newman. The TENEX op |
https://en.wikipedia.org/wiki/The%20Structure%20and%20Distribution%20of%20Coral%20Reefs | The Structure and Distribution of Coral Reefs, Being the first part of the geology of the voyage of the Beagle, under the command of Capt. Fitzroy, R.N. during the years 1832 to 1836, was published in 1842 as Charles Darwin's first monograph, and set out his theory of the formation of coral reefs and atolls. He conceived of the idea during the voyage of the Beagle while still in South America, before he had seen a coral island, and wrote it out as HMS Beagle crossed the Pacific Ocean, completing his draft by November 1835. At the time there was great scientific interest in the way that coral reefs formed, and Captain Robert FitzRoy's orders from the Admiralty included the investigation of an atoll as an important scientific aim of the voyage. FitzRoy chose to survey the Keeling Islands in the Indian Ocean. The results supported Darwin's theory that the various types of coral reefs and atolls could be explained by uplift and subsidence of vast areas of the Earth's crust under the oceans.
The book was the first volume of three Darwin wrote about the geology he had investigated during the voyage, and was widely recognised as a major scientific work that presented his deductions from all the available observations on this large subject. In 1853, Darwin was awarded the Royal Society's Royal Medal for the monograph and for his work on barnacles. Darwin's theory that coral reefs formed as the islands and surrounding areas of crust subsided has been supported by modern investigations, and is no longer disputed, while the cause of the subsidence and uplift of areas of crust has continued to be a subject of discussion.
Theory of coral atoll formation
When the Beagle set out in 1831, the formation of coral atolls was a scientific puzzle. Advance notice of her sailing, given in the Athenaeum of 24 December, described investigation of this topic as "the most interesting part of the Beagle'''s survey" with the prospect of "many points for investigation of a scientific nature b |
https://en.wikipedia.org/wiki/OwlCrate | OwlCrate is a web-based subscription service business specializing in monthly boxes shipped out internationally by mail, themed around books and book collecting. While OwlCrate is largely popular with book reviewers on social media, the service ships books to anybody within select available countries who places an order. Subscription boxes are largely themed around the fiction genres of science fiction and fantasy, with boxes designed both for young children and adolescent readers. Subscription boxes typically contain an exclusive (rare design) book cover with an author autograph, and a variety of surprise items associated with the book, including coffee mugs, t-shirts, pillow cases, lip balm, stickers, pinback buttons, jewellery and other paraphernalia.
History
OwlCrate was officially launched in February 2015 after months of development by its founders, Robert Madden and Korrina Ede. It began as a small home-based business, with the first book title included in the OwlCrate shipments being V. E. Schwab's fantasy novel A Darker Shade of Magic. Since 2015, OwlCrate has reportedly amassed thousands of subscribers, many of whom include Goodreads book reviewers, social media gurus, authors and book collectors.
OwlCrate orders most of its non-book stock from independent artists, and continues to sell remaining stock from previous monthly boxes at a reduced price until the stock eventually runs out. One of the biggest assets that initially drew fans to the subscription service was that the OwlCrate versions of shipped books always have an exclusive cover and are autographed by the author, with most of these authors being traditionally published by large publishing houses. According to a thesis study done through the Faculty of Communication, Art and Technology at Simon Fraser University, "exclusive covers vary book to book — some books will have a slight change in colour or font, some will have significant changes, and some will have a totally different |
https://en.wikipedia.org/wiki/Ted%20Janssen | Theo Willem Jan Marie Janssen (13 August 1936 – 29 September 2017), better known as Ted Janssen, was a Dutch physicist and Full Professor of Theoretical Physics at the Radboud University Nijmegen. Together with Pim de Wolff and Aloysio Janner, he was one of the founding fathers of the N-dimensional superspace approach in crystal structure analysis for the description of quasiperiodic crystals and modulated structures. For this work he received the Aminoff Prize of the Royal Swedish Academy of Sciences (together with de Wolff and Janner) in 1988 and the Ewald Prize of the International Union of Crystallography (with Janner) in 2014. These achievements were the fruit of his unique talent, combining a deep knowledge of physics with a rigorous mathematical approach. Their theoretical description of the structure and symmetry of incommensurate crystals using higher-dimensional superspace groups also covered the quasicrystals that were discovered in 1982 by Dan Shechtman, who received the Nobel Prize in Chemistry in 2011; the Royal Swedish Academy of Sciences explicitly mentioned their work on that occasion.
Early life and education
Ted Janssen was born on August 13, 1936, in Vught, near 's-Hertogenbosch in the Netherlands. Already as a young boy he was fascinated by the sciences. He built radios, set up a chemistry lab in the attic of his parental home, was an avid bird watcher and he built his own telescopes. He remembered high school as ‘not very inspiring’ and he passed all exams without much effort, but viewed it as a time that truly formed him. Instead of spending time on homework he studied the history and philosophy of science and was very interested in astronomy and astrophysics.
During his high school years he also developed a deep appreciation of literature and music. Later he added the visual arts, ballet, and architecture to that list. The enjoyment of the arts was vital to Ted. He called it essential components of life. He started playing the piano, harpsichord and c |
https://en.wikipedia.org/wiki/Data%20center%20network%20architectures | A data center is a pool of resources (computational, storage, network) interconnected using a communication network. A data center network (DCN) holds a pivotal role in a data center, as it interconnects all of the data center resources together. DCNs need to be scalable and efficient to connect tens or even hundreds of thousands of servers to handle the growing demands of cloud computing. Today's data centers are constrained by the interconnection network.
Types of data center network topologies
Data center networks can be divided into multiple separate categories.
Fixed topology
Tree-based
Basic tree
Clos network
VL2
Fat-tree
Al-Fares et al.
Portland
Hedera
Recursive
DCell
BCube
MDCube
FiConn
Flexible topology
Fully optical
OSA (Optical switching architecture)
Hybrid
c-Through
Helios
Types of data center network architectures
Three-tier
The legacy three-tier DCN architecture follows a multi-rooted tree network topology composed of three layers of network switches, namely the access, aggregate, and core layers. The servers in the lowest layer are connected directly to one of the access layer switches. The aggregate layer switches interconnect multiple access layer switches, and all of the aggregate layer switches are connected to each other by core layer switches. Core layer switches are also responsible for connecting the data center to the Internet. Three-tier is the most common network architecture used in data centers. However, the three-tier architecture is unable to handle the growing demands of cloud computing: the higher layers of the three-tier DCN are highly oversubscribed, and scalability is another major issue. Major problems faced by the three-tier architecture include scalability, fault tolerance, energy efficiency, and cross-sectional bandwidth. The architecture also uses enterprise-level network devices at the higher layers of the topology, which are very expensive and power-hungry.
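The oversubscription mentioned above can be made concrete with a small calculation; the switch port counts below are hypothetical, chosen only to illustrate the ratio:

```python
def oversubscription(down_ports, down_gbps, up_ports, up_gbps):
    """Ratio of host-facing capacity to upstream capacity at one switch.
    A value above 1 means the uplinks cannot carry all hosts at full rate."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# A hypothetical access switch: 48 x 1 GbE server ports, 2 x 10 GbE uplinks,
# giving a 2.4:1 oversubscription ratio at the access-to-aggregate boundary.
ratio = oversubscription(48, 1, 2, 10)
```

Ratios compound layer by layer, which is why the higher tiers of a three-tier design end up the most oversubscribed.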
Fat tree
The |
https://en.wikipedia.org/wiki/Out-of-band%20agreement | In the exchange of information over a communication channel, an out-of-band agreement is an agreement or understanding between the communicating parties that is not included in any message sent over the channel but which is relevant for the interpretation of such messages.
By extension, in a client–server or provider-requester setting, an out-of-band agreement is an agreement or understanding that governs the semantics of the request/response interface but which is not part of the formal or contractual description of the interface specification itself.
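The idea can be illustrated with a hypothetical request/response interface (the temperature API below is invented for illustration): the message format is fully specified, yet correct interpretation still depends on an agreement never sent over the channel.

```python
import json

# The wire format carries a bare number: nothing in the message itself
# says whether the value is Celsius or Fahrenheit. That shared
# understanding is the out-of-band agreement between the parties.
def serve_temperature():
    return json.dumps({"temperature": 21.5})   # unit: Celsius, by agreement

def client_read_fahrenheit(msg):
    value = json.loads(msg)["temperature"]
    # The client relies on the out-of-band agreement that value is Celsius.
    return value * 9 / 5 + 32

fahrenheit = client_read_fahrenheit(serve_temperature())
```

A client holding a different understanding of the unit would parse the message successfully yet misinterpret it, which is exactly the failure mode out-of-band agreements create.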
See also
API
Contract
Out-of-band
Off-balance-sheet
External links
SakaiProject definition
Computer networking |
https://en.wikipedia.org/wiki/Supermembranes | Supermembranes are hypothesized objects that live in the 11-dimensional theory called M-Theory and should also exist in 11-dimensional supergravity. Supermembranes are a generalisation of superstrings to another dimension. Supermembranes are 2-dimensional surfaces. For example, they can be spherical or shaped like a torus. As in superstring theory the vibrations of the supermembranes correspond to different particles. Supermembranes also exhibit a symmetry called supersymmetry without which the vibrations would only correspond to bosons and not fermions.
Energy
The energy of a classical supermembrane is given by its surface area. One consequence of this is that there is no difference between one and two membranes, since two membranes can be connected by a long, one-dimensional string of zero area. Hence, the idea of 'membrane number' has no meaning. A second consequence is that, unlike strings, a supermembrane's vibrations can represent several particles at once; in technical terms, it is already 'second-quantized'. All the particles in the Universe can be thought of as arising as vibrations of a single membrane.
Spectrum
When going from the classical theory to the quantum theory of supermembranes, it is found that they can only exist in 11 dimensions, just as superstrings can only exist in 10 dimensions. On examining the energy spectrum (the allowed frequencies at which the object can vibrate), it was found that it can take only discrete values corresponding to the masses of different particles.
It has been shown:
The energy spectrum for the classical bosonic membrane is continuous.
The energy spectrum for the quantum bosonic membrane is discrete.
The energy spectrum for the quantum supermembrane is continuous.
At first the discovery that the spectrum was continuous was thought to mean the theory didn't make sense. But it was realised that it meant that supermembranes actually correspond to multiple particles. (The continuous degrees of freedom corresponding |
https://en.wikipedia.org/wiki/Andreaea%20rupestris | Andreaea rupestris is a species of moss in the class Andreaeopsida, whose members are commonly referred to as the "lantern mosses" due to the appearance of their dehisced sporangia. It is typically found on smooth, acidic, exposed rock in the Northern Hemisphere. It exhibits the common features of the genus Andreaea, such as being acrocarpous, having dark pigmentation, lacking a seta, and bearing 4 lines of dehiscence in its mature sporangia, but can be further identified upon careful examination of its gametophytic leaves, which have an ovate base and a blunter apex compared to other similar species.
Taxonomy and classification
Andreaea rupestris is in the genus Andreaea, which has around 100 different species.
It may be difficult to differentiate A. rupestris from some other species in its genus as it does bear some similar characteristics to other species. Some species which may be mistaken for A. rupestris are:
A. rothii, which has a similar habitat to A. rupestris but its leaves are nerved, and they are falcate-secund in both moist and dry conditions (A. rupestris is only falcate-secund in moist conditions).
A. mutabilis, which has a similar appearance but has yellow leaf bases, which are more widely spread apart.
A. alpestris and A. sinuosa, which can only be differentiated from A. rupestris using a microscope.
A. megistospora, which has a similar habitat to A. rupestris and can only be differentiated by the size of its spores, and its nerved leaves.
Description
The appearance of Andreaea rupestris is dark in colour, varying from dark red, brown, or green to black depending on its life stage. It grows in patches of dense, cushion-like tufts up to 2–3 cm high and has imbricate leaves in dry conditions. In moist conditions, the leaves may be falcate-secund (curved to one side), though this does not always hold true. Unlike some other mosses, A. rupestris has biseriate rhizoids which aid in attaching the gametophyte to the substrate.
Gametophyte
The gametophyte leaves have |
https://en.wikipedia.org/wiki/Inertial%20wave | Inertial waves, also known as inertial oscillations, are a type of mechanical wave possible in rotating fluids. Unlike surface gravity waves commonly seen at the beach or in the bathtub, inertial waves flow through the interior of the fluid, not at the surface. Like any other kind of wave, an inertial wave is caused by a restoring force and characterized by its wavelength and frequency. Because the restoring force for inertial waves is the Coriolis force, their wavelengths and frequencies are related in a peculiar way. Inertial waves are transverse. Most commonly they are observed in atmospheres, oceans, lakes, and laboratory experiments. Rossby waves, geostrophic currents, and geostrophic winds are examples of inertial waves. Inertial waves are also likely to exist in the molten core of the rotating Earth.
Restoring force
Inertial waves are restored to equilibrium by the Coriolis force, a result of rotation. To be precise, the Coriolis force arises (along with the centrifugal force) in a rotating frame to account for the fact that such a frame is always accelerating. Inertial waves, therefore, cannot exist without rotation. More complicated than tension on a string, the Coriolis force acts at a 90° angle to the direction of motion, and its strength depends on the rotation rate of the fluid. These two properties lead to the peculiar characteristics of inertial waves.
Characteristics
Inertial waves are possible only when a fluid is rotating, and exist in the bulk of the fluid, not at its surface. Like light waves, inertial waves are transverse, which means that their vibrations occur perpendicular to the direction of wave travel. One peculiar geometrical characteristic of inertial waves is that their phase velocity, which describes the movement of the crests and troughs of the wave, is perpendicular to their group velocity, which is a measure of the propagation of energy.
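The perpendicularity of phase and group velocity can be verified numerically. The sketch below assumes the standard inertial-wave dispersion relation ω(k) = 2Ω k_z/|k| (a textbook form, not stated in this excerpt) and checks the geometry for an arbitrary wave vector:

```python
import numpy as np

# Dispersion relation for inertial waves in a fluid rotating at rate Omega
# (an assumed standard form): omega(k) = 2 * Omega * k_z / |k|
Omega = 1.0

def omega(k):
    return 2.0 * Omega * k[2] / np.linalg.norm(k)

k = np.array([1.0, 2.0, 3.0])   # an arbitrary wave vector

# Group velocity c_g = grad_k omega, via central finite differences.
eps = 1e-6
c_g = np.array([(omega(k + eps * e) - omega(k - eps * e)) / (2 * eps)
                for e in np.eye(3)])

# The phase velocity points along k; its dot product with c_g vanishes,
# i.e. crests move perpendicular to the direction energy propagates.
cos_angle = np.dot(c_g, k) / (np.linalg.norm(c_g) * np.linalg.norm(k))
```

Because ω depends only on the direction of k, not its magnitude, the gradient of ω is always perpendicular to k, reproducing the peculiar geometry described above.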
Whereas a sound wave or an electromagnetic wave of any frequency is possible, inertial wa |
https://en.wikipedia.org/wiki/Proportional%20control | Proportional control, in engineering and process control, is a type of linear feedback control system in which a correction is applied to the controlled variable, and the size of the correction is proportional to the difference between the desired value (setpoint, SP) and the measured value (process variable, PV). Two classic mechanical examples are the toilet bowl float proportioning valve and the fly-ball governor.
The proportional control concept is more complex than an on–off control system such as a bi-metallic domestic thermostat, but simpler than a proportional–integral–derivative (PID) control system used in something like an automobile cruise control. On–off control will work where the overall system has a relatively long response time, but can result in instability if the system being controlled has a rapid response time. Proportional control overcomes this by modulating the output to the controlling device, such as a control valve at a level which avoids instability, but applies correction as fast as practicable by applying the optimum quantity of proportional gain.
A drawback of proportional control is that it cannot eliminate the residual SP − PV error in processes with compensation e.g. temperature control, as it requires an error to generate a proportional output. To overcome this the PI controller was devised, which uses a proportional term (P) to remove the gross error, and an integral term (I) to eliminate the residual offset error by integrating the error over time to produce an "I" component for the controller output.
Theory
In the proportional control algorithm, the controller output is proportional to the error signal, which is the difference between the setpoint and the process variable. In other words, the output of a proportional controller is the multiplication product of the error signal and the proportional gain.
This can be mathematically expressed as

P_out = K_p e(t) + p_0

where
p_0 : Controller output with zero error.
P_out : Output of the proportional control |
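A minimal numerical sketch of proportional-only control (the first-order plant and all coefficients are illustrative assumptions, not from the text) shows both the basic algorithm and the residual steady-state offset discussed earlier:

```python
def simulate_p_control(kp, setpoint=50.0, steps=5000, dt=0.01):
    """First-order plant dPV/dt = -a*PV + b*u under pure P control.
    A hypothetical plant; the coefficients a and b are illustrative."""
    a, b = 1.0, 1.0
    pv = 0.0
    for _ in range(steps):
        error = setpoint - pv
        u = kp * error          # controller output, with zero-error bias p_0 = 0
        pv += dt * (-a * pv + b * u)
    return setpoint - pv        # residual steady-state error

# Higher gain shrinks the offset but cannot remove it entirely:
# at steady state the error settles to setpoint * a / (a + b * kp).
offset_low = simulate_p_control(kp=2.0)
offset_high = simulate_p_control(kp=20.0)
```

This is exactly the behaviour that motivates adding an integral term: any nonzero output requires a nonzero error, so a pure P controller always leaves an offset.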
https://en.wikipedia.org/wiki/Intraspecific%20competition | Intraspecific competition is an interaction in population ecology, whereby members of the same species compete for limited resources. This leads to a reduction in fitness for both individuals, but the more fit individual survives and is able to reproduce.
By contrast, interspecific competition occurs when members of different species compete for a shared resource. Members of the same species have rather similar requirements for resources, whereas different species have a smaller contested resource overlap, resulting in intraspecific competition generally being a stronger force than interspecific competition.
Individuals can compete for food, water, space, light, mates, or any other resource which is required for survival or reproduction. The resource must be limited for competition to occur; if every member of the species can obtain a sufficient amount of every resource then individuals do not compete and the population grows exponentially. Prolonged exponential growth is rare in nature because resources are finite and so not every individual in a population can survive, leading to intraspecific competition for the scarce resources.
When resources are limited, an increase in population size reduces the quantity of resources available for each individual, reducing the per capita fitness in the population. As a result, the growth rate of a population slows as intraspecific competition becomes more intense, making it a negatively density dependent process. The falling population growth rate as population increases can be modelled effectively with the logistic growth model. The rate of change of population density eventually falls to zero, the point ecologists have termed the carrying capacity (K). However, a population can only grow to a very limited size within an environment. The carrying capacity, denoted K, of an environment is the maximum number of individuals of a species that the environment can sustain and support over a longer period of time. The r |
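The logistic growth model mentioned above can be sketched in a few lines (the parameter values are illustrative assumptions): growth is near-exponential when the population is far below K and slows to zero as intraspecific competition intensifies near K.

```python
def logistic_growth(n0, r, k, steps, dt=0.01):
    """Euler integration of dN/dt = r * N * (1 - N/K): the (1 - N/K)
    factor encodes intraspecific competition for limited resources."""
    n = n0
    for _ in range(steps):
        n += dt * r * n * (1 - n / k)
    return n

# Starting far below the carrying capacity K = 1000, the population
# approaches K asymptotically rather than growing without bound.
final = logistic_growth(n0=10, r=0.5, k=1000, steps=5000)
```

Setting K to infinity recovers pure exponential growth, which the text notes is rare in nature precisely because resources are finite.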
https://en.wikipedia.org/wiki/Lego%20Mindstorms%20EV3 | LEGO Mindstorms EV3 (stylized: LEGO MINDSTORMS EV3) is the third generation robotics kit in LEGO's Mindstorms line. It is the successor to the second generation LEGO Mindstorms NXT kit. The "EV" designation refers to the "evolution" of the Mindstorms product line. "3" refers to the fact that it is the third generation of computer modules: the first was the RCX and the second was the NXT. It was officially announced on January 4, 2013, and was released in stores on September 1, 2013. The education edition was released on August 1, 2013. There are many competitions using this set, including the FIRST LEGO League Challenge and the World Robot Olympiad, sponsored by LEGO.
After an announcement in October 2022, The Lego Group officially discontinued Lego Mindstorms at the end of 2022.
Overview
The biggest change from the LEGO Mindstorms NXT and NXT 2.0 to the EV3 is the technological advances in the programmable brick. The main processor of the NXT was an ARM7 microcontroller, whereas the EV3 has a more powerful ARM9 CPU running Linux. A USB connector and a Micro SD slot (up to 32 GB) are new to the EV3. It comes with plans to build 5 different robots: EV3RSTORM, GRIPP3R, R3PTAR, SPIK3R, and TRACK3R. LEGO has also released instructions online to build 12 additional projects: ROBODOZ3R, BANNER PRINT3R, EV3MEG, BOBB3E, MR-B3AM, RAC3 TRUCK, KRAZ3, EV3D4, EL3CTRIC GUITAR, DINOR3X, WACK3M, and EV3GAME. It uses a program called LEGO Mindstorms EV3 Home Edition, which is based on LabVIEW, to write code using blocks instead of lines. However, it can also be programmed on the brick itself, with programs saved locally. MicroPython support has also been added recently.
The EV3 Home (31313) set consists of: 1 EV3 programmable brick, 2 Large Motors, 1 Medium Motor, 1 Touch Sensor, 1 Color Sensor, 1 Infrared Sensor, 1 Remote Control, cables, USB cable, and 585 TECHNIC elements.
The Education EV3 Core Set (45544) set consists of: 1 EV3 programmable brick, 2 Large Motors, 1 Medium Motor, 2 Touch Sensors, |
https://en.wikipedia.org/wiki/Tolerant%20sequence | In mathematical logic, a tolerant sequence is a sequence T_1, ..., T_n of formal theories such that there are consistent extensions S_1, ..., S_n of these theories with each S_{i+1} interpretable in S_i. Tolerance naturally generalizes from sequences of theories to trees of theories. Weak interpretability can be shown to be a special, binary case of tolerance.
This concept, together with its dual concept of cotolerance, was introduced by Japaridze in 1992, who also proved that, for Peano arithmetic and any stronger theories with effective axiomatizations, tolerance is equivalent to Π1-consistency.
See also
Interpretability
Cointerpretability
Interpretability logic |
https://en.wikipedia.org/wiki/Crystal%20structure | In crystallography, crystal structure is a description of the ordered arrangement of atoms, ions, or molecules in a crystalline material. Ordered structures occur from the intrinsic nature of the constituent particles to form symmetric patterns that repeat along the principal directions of three-dimensional space in matter.
The smallest group of particles in the material that constitutes this repeating pattern is the unit cell of the structure. The unit cell completely reflects the symmetry and structure of the entire crystal, which is built up by repetitive translation of the unit cell along its principal axes. The translation vectors define the nodes of the Bravais lattice.
The lengths of the principal axes, or edges, of the unit cell and the angles between them are the lattice constants, also called lattice parameters or cell parameters. The symmetry properties of the crystal are described by the concept of space groups. All possible symmetric arrangements of particles in three-dimensional space may be described by the 230 space groups.
The crystal structure and symmetry play a critical role in determining many physical properties, such as cleavage, electronic band structure, and optical transparency.
Unit cell
Crystal structure is described in terms of the geometry of arrangement of particles in the unit cells. The unit cell is defined as the smallest repeating unit having the full symmetry of the crystal structure. The geometry of the unit cell is defined as a parallelepiped, providing six lattice parameters taken as the lengths of the cell edges (a, b, c) and the angles between them (α, β, γ). The positions of particles inside the unit cell are described by the fractional coordinates (xi, yi, zi) along the cell edges, measured from a reference point. It is thus only necessary to report the coordinates of a smallest asymmetric subset of particles, called the crystallographic asymmetric unit. The asymmetric unit may be chosen so that it occupies the smalle |
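The fractional-coordinate convention described above implies a standard transformation to Cartesian coordinates built from the six lattice parameters; a sketch (the function name and the example cell are illustrative):

```python
import numpy as np

def frac_to_cartesian(frac, a, b, c, alpha, beta, gamma):
    """Convert fractional coordinates (x_i, y_i, z_i) to Cartesian ones,
    given cell edge lengths a, b, c and angles alpha, beta, gamma (deg)."""
    al, be, ga = np.radians([alpha, beta, gamma])
    # Normalized volume of the parallelepiped spanned by the cell edges.
    v = np.sqrt(1 - np.cos(al)**2 - np.cos(be)**2 - np.cos(ga)**2
                + 2 * np.cos(al) * np.cos(be) * np.cos(ga))
    # Standard crystallographic orthogonalization matrix.
    m = np.array([
        [a, b * np.cos(ga), c * np.cos(be)],
        [0, b * np.sin(ga),
         c * (np.cos(al) - np.cos(be) * np.cos(ga)) / np.sin(ga)],
        [0, 0, c * v / np.sin(ga)],
    ])
    return m @ np.asarray(frac)

# The body centre (1/2, 1/2, 1/2) of a cubic cell with a = 4.0 sits at (2, 2, 2).
xyz = frac_to_cartesian([0.5, 0.5, 0.5], 4.0, 4.0, 4.0, 90, 90, 90)
```

For orthogonal cells the matrix reduces to a diagonal of the edge lengths; the off-diagonal terms only appear when the cell angles differ from 90°.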
https://en.wikipedia.org/wiki/Social%20stress | Social stress is stress that stems from one's relationships with others and from the social environment in general. Based on the appraisal theory of emotion, stress arises when a person evaluates a situation as personally relevant and perceives that they do not have the resources to cope or handle the specific situation.
The activation of social stress does not necessarily have to be linked to a specific event; the mere idea that the event may occur can trigger it. This means that any element that takes a person out of their personal and intimate environment could become a stressful experience, one that can leave them feeling socially incompetent.
There are three main categories of social stressors. Life events are defined as abrupt, severe life changes that require an individual to adapt quickly (ex. sexual assault, sudden injury). Chronic strains are defined as persistent events which require an individual to make adaptations over an extended period of time (ex. divorce, unemployment). Daily hassles are defined as minor events that occur, which require adaptation throughout the day (ex. bad traffic, disagreements). When stress becomes chronic, one experiences emotional, behavioral, and physiological changes that can put one under greater risk for developing a mental disorder and physical illness.
Humans are social beings by nature, as they typically have a fundamental need and desire to maintain positive social relationships. Thus, they usually find maintaining positive social ties to be beneficial. Social relationships can offer nurturance, foster feelings of social inclusion, and lead to reproductive success. Anything that disrupts or threatens to disrupt their relationships with others can result in social stress. This can include low social status in society or in particular groups, giving a speech, interviewing with potential employers, caring for a child or spouse with a chronic illness, meeting new people at a party, the threat of or actual |
https://en.wikipedia.org/wiki/Tifama | Tifama is a monotypic moth genus in the family Notodontidae (the prominent moths) erected by Francis Walker in 1855. Its only species, Tifama chera, was first described by Dru Drury in 1773. The species is known from Suriname and Brazil.
Description
Upperside: antennae setaceous (bristly). Head, thorax, and abdomen greyish russet. Wings grey-ash coloured, the anterior having a dark brown irregular line running near the posterior and external edges to the anterior near the tips. Posterior wings immaculate. Underside: the same colours as the upper, without any marks. Margins of the wings entire. Wingspan nearly 2⅜ inches (60 mm). |
https://en.wikipedia.org/wiki/Amiga%20music%20software | This article deals with music software created for the Amiga line of computers, covering the AmigaOS operating system and its derivatives AROS and MorphOS; it is a split of the main article Amiga software.
See also the related articles Amiga productivity software, Amiga programming languages, Amiga Internet and communications software, and Amiga support and maintenance software for other information regarding software that runs on the Amiga.
Noteworthy Amiga music software
Samplitude by SEK'D (Studio fuer Elektronische Klangerzeugung Dresden), Instant Music, DMCS (DeLuxe Music) 1 and 2, Music-X, TigerCub, Dr. T's KCS, Dr. T's Midi Recording Studio, Bars and Pipes (from Blue Ribbon Soundworks, a firm which was bought by Microsoft and is now part of its group; Bars and Pipes' internal structure later inspired the audio streaming data passing of the DirectX libraries), AEGIS Audio Master, Pro Sound Designer, AEGIS Sonix, Audio Sculpture, Audition 4 from SunRize Industries, SuperJAM!, HD-Rec, Audio Evolution, the RockBEAT drum machine, and various MIDI sequencing programs by Gajits Music Software.
Audio Digitizers Software
Together with the well-known Dr. T's Midi Recording Studio, Pro Sound Designer, Sonix, SoundFX, Audition 4, HD-Rec, and Audio Evolution, there was also much Amiga software to drive digitizers, such as: the GVP DSS8 Plus 8-bit audio sampler/digitizer; the Sunrize AD512 and AD516 professional 12- and 16-bit DSP sound cards, which included Studio-16 as standard software; the Soundstage professional 20-bit DSP expansion sound card; the Aura 12-bit sound sampler, which connects to the PCMCIA port of the Amiga 600 and Amiga 1200 models; and the Concierto 16-bit sound card, an optional module for the Picasso IV graphics card.
Sound design / SoftSynth
Synthia, FMSynth by Christian Stiens (inspired by Yamaha's FM-operating DX Series), Assampler, SoundFX (a.k.a. SFX), WaveTracer, S.A.M. Sample-Synthesizer and Gajits' CM-Panion and 4D Companion p |
https://en.wikipedia.org/wiki/PTQ%20implant | PTQ implant is a type of bio-compatible injectable bulking agent used in urinary and fecal incontinence. The material is a type of silicone, and is injected into the desired area to bulk out the tissues and reduce incontinence symptoms. |
https://en.wikipedia.org/wiki/Law%20of%20comparative%20judgment | The law of comparative judgment was conceived by L. L. Thurstone. In modern-day terminology, it is more aptly described as a model that is used to obtain measurements from any process of pairwise comparison. Examples of such processes are the comparisons of perceived intensity of physical stimuli, such as the weights of objects, and comparisons of the extremity of an attitude expressed within statements, such as statements about capital punishment. The measurements represent how we perceive entities, rather than measurements of actual physical properties. This kind of measurement is the focus of psychometrics and psychophysics.
In somewhat more technical terms, the law of comparative judgment is a mathematical representation of a discriminal process, which is any process in which a comparison is made between pairs of a collection of entities with respect to magnitudes of an attribute, trait, attitude, and so on. The theoretical basis for the model is closely related to item response theory and the theory underlying the Rasch model, which are used in psychology and education to analyse data from questionnaires and tests.
Background
Thurstone published a paper on the law of comparative judgment in 1927. In this paper he introduced the underlying concept of a psychological continuum for a particular 'project in measurement' involving the comparison between a series of stimuli, such as weights and handwriting specimens, in pairs. He soon extended the domain of application of the law of comparative judgment to things that have no obvious physical counterpart, such as attitudes and values (Thurstone, 1929). For example, in one experiment, people compared statements about capital punishment to judge which of each pair expressed a stronger positive (or negative) attitude.
The essential idea behind Thurstone's process and model is that it can be used to scale a collection of stimuli based on simple comparisons between stimuli two at a time: that is, based on a series of |
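Under the commonly used Case V simplification of Thurstone's model, scale values can be recovered from a matrix of pairwise-choice proportions by averaging their probit (inverse normal) transforms. A minimal sketch, where the proportion matrix is invented purely for illustration:

```python
from statistics import NormalDist

# P[i][j] = proportion of judges who rated stimulus i above stimulus j
# (hypothetical data; rows/columns index three stimuli).
P = [
    [0.50, 0.70, 0.90],
    [0.30, 0.50, 0.80],
    [0.10, 0.20, 0.50],
]

z = NormalDist().inv_cdf  # probit transform

# Case V solution: each scale value is the row mean of the z-transformed
# proportions; only differences between scale values are meaningful.
scale = [sum(z(p) for p in row) / len(row) for row in P]
print([round(s, 2) for s in scale])
```

The stimulus most often judged "stronger" receives the highest scale value; since the origin of the psychological continuum is arbitrary, the values are centered around zero.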
https://en.wikipedia.org/wiki/Kaede%20%28protein%29 | Kaede is a photoactivatable fluorescent protein originating naturally from a stony coral, Trachyphyllia geoffroyi. Its name means "maple" in Japanese. Upon irradiation with ultraviolet light (350–400 nm), Kaede undergoes irreversible photoconversion from green fluorescence to red fluorescence.
Kaede is a homotetrameric protein with a size of 116 kDa. The tetrameric structure was deduced because its primary structure is only 28 kDa. This tetramerization possibly gives Kaede a low tendency to form aggregates when fused to other proteins.
Discovery
The photoconversion property of the Kaede protein was discovered serendipitously and first reported by Ando et al. in Proceedings of the National Academy of Sciences. An aliquot of Kaede protein was found to emit red fluorescence after being left on the bench and exposed to sunlight. Subsequent verification revealed that Kaede, originally green fluorescent, is photoconverted to a red fluorescent form by exposure to UV light. It was then named Kaede.
Properties
The photoconversion property of Kaede is conferred by the tripeptide His-Tyr-Gly, which acts as a green chromophore that can be converted to red. Once Kaede is synthesized, a chromophore, 4-(p-hydroxybenzylidene)-5-imidazolinone, derived from the tripeptide mediates green fluorescence. When exposed to UV, the Kaede protein undergoes an unconventional cleavage between the amide nitrogen and the α-carbon (Cα) at His62 via a formal β-elimination reaction. A double bond then forms between the Cα and Cβ of His62, extending the π-conjugation to the imidazole ring of His62. A new chromophore, 2-[(1E)-2-(5-imidazolyl)ethenyl]-4-(p-hydroxybenzylidene)-5-imidazolinone, is formed with the red-emitting property.
The cleavage of the tripeptide was analysed by SDS-PAGE. Unconverted green Kaede shows a single band at 28 kDa, whereas two bands at 18 kDa and 10 kDa are observed for converted red Kaede, indicating |
https://en.wikipedia.org/wiki/Remote%20work | Remote work (also called telecommuting, telework, work from home, hybrid work, and other terms) is the practice of working from one's home or another space rather than from an office.
The practice began at a small scale in the 1970s, when technology was developed that linked satellite offices to downtown mainframes through dumb terminals using telephone lines as a network bridge. It became more common in the 1990s and 2000s, facilitated by internet technologies such as collaborative software on cloud computing and conference calling via videotelephony. In 2020, workplace hazard controls for COVID-19 catalyzed a rapid transition to remote work for white-collar workers around the world, which largely persisted even after restrictions were lifted.
Proponents of remote work argue that it reduces costs associated with maintaining an office, grants employees autonomy and flexibility that improves their motivation and job satisfaction, eliminates environmental harms from commuting, allows employers to draw from a more geographically diverse pool of applicants, and allows employees to relocate to a place they would prefer to live.
Opponents of remote work argue that remote telecommunications technology has been unable to replicate the advantages of face-to-face interaction, that employees may be more easily distracted and may struggle to maintain separation between work and non-work spheres without the physical separation, and that the reduced social interaction may lead to feelings of isolation.
Terminology
The term "remote work" became popular during the COVID-19 pandemic that forced the majority of office and knowledge workers to work from home. Prior to that, the practice of working full days from home, or somewhere nearer to home than the office, was largely known as telecommuting.
The term telework has been commonly used as a synonym for telecommuting, but the 1973 originator of both words, Jack Nilles, intended the latter to mean any substitution of technology |
https://en.wikipedia.org/wiki/Cough%20center | The cough center is a region of the brain which controls coughing. The cough center is located in the medulla oblongata in the brainstem. Cough suppressants focus their action on the cough center.
Structure
The exact location and functionality of the cough center has remained somewhat elusive: while Johannes Peter Müller observed in 1838 that the medulla coordinates the cough reflex, investigating it has been slow because the usual anaesthetics for experimental animals were morphine or opiates, drugs which strongly inhibit cough. In addition, the center likely overlaps with the respiratory rhythm generator networks. It is hence not so much a specific area, but a function within the respiration and reflex networks of the brainstem.
Cough receptors project to relay neurones in the solitary nucleus, which project to other parts of the respiratory networks. In particular, the pre-Bötzinger complex may act as a pattern generator for the cough response. Parts of the caudal medullary raphe nucleus (nucleus raphe obscurus and nucleus raphe magnus) are known to be essential for the cough response. Other systems that may be involved in pattern generation and regulation are the pontine respiratory group, the lateral tegmental field and the deep cerebellar nuclei. Successful joint models of medullary systems coordinating breathing, coughing and swallowing have been constructed on this basis.
Coughing can occur or be inhibited as a voluntary action, suggesting control from higher systems in the brain. Functional brain imaging of voluntary, suppressed, and induced coughing show that a number of cortical areas can get involved and may be important even for non-voluntary coughing. In contrast, voluntary coughing does not seem to activate medullary systems. |
https://en.wikipedia.org/wiki/Cardboard%20Heroes | Cardboard Heroes is a line of miniatures published in 1980 by Steve Jackson Games. An extension, Cardboard Heroes Champions Set 3: Enemies, was published in 1984.
Contents
Denis Loubet designed Cardboard Heroes (1980), a set of 40 full-color 25mm cardboard figures for use in fantasy roleplaying games, published by Steve Jackson Games (SJG).
Cardboard Heroes Champions Set 3: Enemies includes cardboard miniatures for 36 villains taken from Enemies I and Enemies II, adventure scenarios published in Space Gamer and Champions adventures published by Hero Games, and from the front of the Champions box.
Reception
Martin Feldman reviewed Cardboard Heroes in The Space Gamer No. 38. Feldman commented that "they are beautiful, they are inexpensive, and if you like them and have no objection to cardboard, they are certainly worthwhile."
Craig Sheeley reviewed Cardboard Heroes Champions Set 3: Enemies in The Space Gamer No. 73. Sheeley commented that "Enemies is a great set. Champions, being a movement game, needs representative counters for the heroes and villains, and the three dozen in the first Cardboard Heroes pack weren't enough. If you play Champions, or any superhero game using markers, this set is a must."
Reviews
Different Worlds #22 (July, 1982)
See also
List of lines of miniatures |
https://en.wikipedia.org/wiki/PAC-1 | PAC-1 (first procaspase activating compound) is a synthesized chemical compound that selectively induces apoptosis in cancerous cells. It was granted orphan drug status by the FDA in 2016.
History
PAC-1 was discovered in Professor Paul Hergenrother's laboratory at the University of Illinois at Urbana–Champaign during a process that screened many chemicals for anti-tumor potential. This molecule, when delivered to cancer cells, signals the cells to self-destruct by activating an "executioner" protein, procaspase-3. The activated executioner protein then begins a cascade of events that destroys the machinery of the cell. In 2011, Vanquish Oncology Inc. was founded to move PAC-1 forward to a human clinical trial. In 2013, Vanquish announced a multimillion-dollar angel investment in the company. In 2015, a phase I clinical trial of PAC-1 opened for enrollment of cancer patients. In 2016, PAC-1 was granted Orphan Drug Designation for the treatment of glioblastoma by the FDA, and in late 2017 a phase Ib trial of PAC-1 plus temozolomide began for patients with recurrent glioblastoma or anaplastic astrocytoma.
Mechanism of action
In cells, the executioner protein, caspase-3, is stored in its inactive form, procaspase-3. This way, the cell can quickly undergo apoptosis by activating the protein that is already there. This inactive form is called a zymogen. Procaspase-3 is known to be inhibited by low levels of zinc. PAC-1 activates procaspase-3 by chelating zinc, thus relieving the zinc-mediated inhibition. This allows procaspase-3 to be an active enzyme, which can then cleave another molecule of procaspase-3 to give active caspase-3. Caspase-3 can further activate other molecules of procaspase-3 in the cell, causing an exponential increase in caspase-3 concentration. PAC-1 facilitates this process and causes the cell to undergo apoptosis quickly.
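The feedback loop described above can be caricatured in a few lines. This is a toy iteration with made-up numbers, not a kinetic model of caspase activation, but it shows why autocatalytic conversion produces a rapid, roughly exponential rise in active enzyme until the zymogen pool is exhausted:

```python
# Toy sketch of the autocatalytic cascade (all numbers invented):
# each unit of active caspase-3 converts more procaspase-3, so the
# active pool grows roughly exponentially, then saturates.
pro, active = 1000.0, 1.0   # arbitrary units; PAC-1 provides the seed
k = 0.002                   # invented per-step conversion rate

trajectory = [active]
for _ in range(25):
    converted = min(pro, k * active * pro)  # cannot convert more than remains
    pro -= converted
    active += converted
    trajectory.append(active)

print(f"active caspase-3 after 25 steps: {trajectory[-1]:.0f}")
```

With these parameters the active pool roughly triples per early step and the entire zymogen pool is consumed within about ten iterations, mirroring the "exponential increase" described in the text.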
This direct procaspase-3 activation mode-of-action for PAC-1 has been confirmed b |
https://en.wikipedia.org/wiki/Intestinal%20arteries | The intestinal arteries arise from the convex side of the superior mesenteric artery. They are usually from twelve to fifteen in number, and are distributed to the jejunum and ileum.
Nomenclature
The term "intestinal arteries" can be confusing, because these arteries only serve a small portion of the intestines.
They do not supply any of the large intestine. The large intestine is primarily supplied by the right colic artery, middle colic artery, and left colic artery.
They do not supply the duodenum of the small intestine. The duodenum is primarily supplied by the inferior pancreaticoduodenal artery and superior pancreaticoduodenal artery.
For clarity, some sources prefer instead using the more specific terms ileal arteries and jejunal arteries.
Path
They run nearly parallel with one another between the layers of the mesentery, each vessel dividing into two branches, which unite with adjacent branches, forming a series of arches (arterial arcades), the convexities of which are directed toward the intestine.
From this first set of arches branches arise, which unite with similar branches from above and below and thus a second series of arches is formed; from the lower branches of the artery, a third, a fourth, or even a fifth series of arches may be formed, diminishing in size the nearer they approach the intestine.
In the short, upper part of the mesentery only one set of arches exists, but as the depth of the mesentery increases, second, third, fourth, or even fifth groups are developed.
The differences between the ileal arteries and the jejunal arteries can be summarized as follows:
From the terminal arches numerous small straight vessels (vasa recta) arise which encircle the intestine, upon which they are distributed, ramifying between its coats.
From the intestinal arteries small branches are given off to the lymph glands and other structures between the layers of the mesentery.
Additional images |
https://en.wikipedia.org/wiki/Fastest | Fastest is a model-based testing tool that works with specifications written in the Z notation. The tool implements the Test Template Framework (TTF) proposed by Phil Stocks and David Carrington.
Usage
Fastest presents a command-line user interface. The user first needs to load a Z specification written in LaTeX format conforming to the ISO standard. Then, the user has to enter a list of the operations to test as well as the testing tactics to apply to each of them. In a third step Fastest generates the testing tree of each operation. After testing trees have been generated, users can browse them and their test classes and, more importantly, prune any test class either automatically or manually. Once testing trees have been pruned, users can instruct Fastest to find one abstract test case for each leaf in each testing tree.
Testing tactics supported by Fastest
Currently, Fastest supports the following testing tactics:
Disjunctive Normal Form (DNF). It is the only testing tactic applied by default (regardless of whether or not the user has added other testing tactics) and the first one to be applied.
Standard partitions (SP). The user can add, modify and delete standard partitions for any predefined Z mathematical operator by simply editing a text file.
Free Types (FT)
In Set Extension (ISE)
Proper Subset of Set Extension (PSSE)
Subset of Set Extension (SSE)
Pruning testing trees in Fastest
Fastest provides two ways of pruning testing trees:
Automatic pruning.
To prune a testing tree, Fastest analyzes the predicate of each leaf to determine whether the predicate is a contradiction. Since this problem is undecidable, the tool implements a best-effort algorithm that can be improved by users. The most important aspect of the algorithm is a library of so-called elimination theorems, each of which represents a family of contradictions. This library can be extended by users by simply editing a text file. Elimination theorems are conjunctions of parametric Z at
https://en.wikipedia.org/wiki/Filum%20terminale | The filum terminale ("terminal thread") is a delicate strand of fibrous tissue, about 20 cm in length, extending inferiorly from the apex of the conus medullaris to attach onto the coccyx. The filum terminale acts to anchor the spinal cord and spinal meninges inferiorly.
The upper portion of the filum terminale is formed by spinal pia mater within a dilated dural sac, while the lower portion is formed by both pia and dura mater (with the outer dural layer closely adhering to the inner pial component).
Anatomy
The proximal/superior part - the filum terminale internum or pial part of the terminal filum - measures 15 cm in length and extends as far as the inferior border of the second sacral vertebra (S2) (the inferior limit of the dural sac). It is composed of vestiges of neural tissue, connective tissue, and neuroglial tissue lined by pia mater. It is contained within a tubular sheath of the dura mater and is surrounded by the nerves of the cauda equina (from which it can be easily distinguished by its bluish-white color).
The inferior/distal part - the filum terminale externum, dural part of terminal filum, or coccygeal ligament - is formed as the filum terminale internum reaches the inferior extremity of the dural sac; henceforth, the filum terminale becomes invested by a layer of dura mater.
The filum terminale ultimately terminates inferiorly by attaching to the dorsum of the coccyx at the first coccygeal segment, blending with the coccygeal periosteum.
Relations
The filum terminale is situated centrally amid the spinal nerve roots of the cauda equina (but is not itself a part of the cauda equina).
The inferior-most spinal nerve, the coccygeal nerve, leaves the spinal cord at the level of the conus medullaris and exits through its intervertebral foramen, superior to the filum terminale. However, adhering to the outer surface of the filum terminale are a few strands of nerve fibres which probably represent rudimentary second and third coccyge |
https://en.wikipedia.org/wiki/Yumkaaxvirus | Yumkaaxvirus is a genus of viruses in the family Ahmunviridae and order Maximonvirales. It includes one species: Yumkaaxvirus pescaderoense. Yumkaaxvirus pescaderoense is a dsDNA virus whose hosts are archaea.
Name
The order name, Maximonvirales, is named after the Mayan god Maximon, a god of travelers, merchants, medicine men and women, mischief, and fertility.
The genus name, Yumkaaxvirus, is named after the Mayan god Yum Kaax, the god of the woods, the wild nature, and the hunt.
The family name, Ahmunviridae, is named after the Mayan god Ah Mun, the god of agriculture.
The species name, pescaderoense, is named after the Pescadero Basin. |
https://en.wikipedia.org/wiki/Erodium%20moschatum | Erodium moschatum is a species of flowering plant in the geranium family known by the common names musk stork's-bill and whitestem filaree. This is a weedy annual or biennial herb which is native to much of Eurasia and North Africa but can be found on most continents where it is an introduced species. The young plant starts with a flat rosette of compound leaves, each leaf up to 15 centimeters long with many oval-shaped highly lobed and toothed leaflets along a central vein which is hairy, white, and stemlike. The plant grows to a maximum of about half a meter in height with plentiful fuzzy green foliage. The small flowers have five sepals behind five purple or lavender petals, each petal just over a centimeter long. The filaree fruit has a small, glandular body with a long green style up to 4 centimeters in length.
Like Erodium cicutarium, the species is edible. |
https://en.wikipedia.org/wiki/Comet%20%28experiment%29 | COMET (Coherent Muon to Electron Transition) is a nuclear physics experiment in J-PARC, Tokai, Japan. In contrast to the usual muon decay to an electron and neutrino, COMET seeks to look for neutrinoless muon to electron conversion, where the electron flies away with an energy of 104.8 MeV. Muon to electron conversion is not forbidden in the Standard Model but the branching ratio is about considering neutrino oscillations. In beyond the Standard Model approaches the muon to electron conversion process can be as high as e.g. via the supersymmetric .
COMET will be using a new beamline connecting the J-PARC main ring and the J-PARC Nuclear and particle Physics Experimental Hall (NP hall).
The current spokesperson is Kuno Yoshitaka, alongside project manager Mihara Satoshi. The collaboration consists of universities from 15 countries.
See also
Mu2e experiment |
https://en.wikipedia.org/wiki/General%20communication%20channel | The general communication channel (GCC), defined in ITU-T G.709, is an in-band side channel used to carry transmission management and signaling information within optical transport network elements.
Two types of GCC are available:
GCC0 – two bytes within the OTUk overhead. GCC0 is terminated at every 3R (re-shaping, re-timing, re-amplification) point and is used to carry GMPLS signaling protocol and/or management information.
GCC1/GCC2 – four bytes (two bytes each) within the ODUk overhead. These bytes are used for client end-to-end information and should not be touched by OTN equipment.
In contrast to SONET/SDH, where the data communication channel (DCC) has a constant data rate, the GCC data rate depends on the OTN line rate. For example, the GCC0 data rate in the case of OTU1 is ~333 kbit/s, while for OTU2 it is ~1.3 Mbit/s.
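These figures can be sanity-checked with a back-of-envelope calculation, assuming the standard OTU frame of 4 rows × 4080 byte columns and the nominal OTUk line-rate multipliers from G.709; this is a sketch of the arithmetic, not a normative computation:

```python
# GCC0 occupies 2 overhead bytes in every OTU frame, so its data rate
# scales with the OTUk frame rate (frames per second at the line rate).
FRAME_BITS = 4 * 4080 * 8            # 130,560 bits per OTU frame
GCC0_BITS = 2 * 8                    # two GCC0 bytes per frame

line_rate = {                        # nominal OTUk line rates, bit/s
    "OTU1": 255 / 238 * 2_488_320_000,
    "OTU2": 255 / 237 * 9_953_280_000,
}

for name, rate in line_rate.items():
    frames_per_second = rate / FRAME_BITS
    gcc0 = frames_per_second * GCC0_BITS
    print(f"{name}: GCC0 ≈ {gcc0 / 1e6:.2f} Mbit/s")
```

This yields roughly 0.33 Mbit/s for OTU1 and 1.31 Mbit/s for OTU2, in line with the figures quoted above.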
Computer networking
Optical Transport Network |
https://en.wikipedia.org/wiki/List%20of%20video%20game%20museums | This list of video game museums shows video game museums around the world.
Video game museums
Online video game museums
See also
List of museums
Video game
List of computer museums |
https://en.wikipedia.org/wiki/Steven%20J.%20Dick | Steven J. Dick (born October 24, 1949, Evansville, Indiana) is an American astronomer, author, and historian of science most noted for his work in the field of astrobiology. Dick served as the chief historian for the National Aeronautics and Space Administration from 2003 to 2009 and as the Baruch S. Blumberg NASA/Library of Congress Chair in Astrobiology from 2013 to 2014. Before that, he was an astronomer and historian of science at the United States Naval Observatory in Washington, DC, from 1979 to 2003.
Career
Steven J. Dick received a Bachelor of Science in astrophysics from Indiana University in 1971. In 1977, he earned a Master of Arts and a Ph.D. in the history and philosophy of science. For 24 years, Dick worked as an astronomer and historian of science for the United States Naval Observatory in Washington, D.C., including three years at the Naval Observatory's Southern Hemisphere station in New Zealand. There he was part of a team using transit telescopes and astrographs to chart the northern and southern skies. During this time, he also wrote the history of the Observatory, the first national observatory of the United States, published as Sky and Ocean Joined: The U. S. Naval Observatory, 1830-2000.
In 2003, he was named the Chief Historian for the National Aeronautics and Space Administration (NASA). During his years at NASA, Dick wrote on the importance of exploration to society, commissioned numerous histories of spaceflight, and edited several volumes on the societal impact of space flight and on the occasion of the 50th anniversaries of NASA and the space age.
Dick served as Chairman of the Historical Astronomy Division of the American Astronomical Society (1993–1994), as President of the History of Astronomy Commission of the International Astronomical Union (1997-2000) and as President of the Philosophical Society of Washington. He is on the editorial board for the Journal for the History of Astronomy and the Journal of Astronomical History and |
https://en.wikipedia.org/wiki/Transcriptome-wide%20association%20study | Transcriptome-wide association study (TWAS) is a genetic methodology that can be used to compare the genetic components of gene expression and the genetic components of a trait to determine if an association is present between the two. TWAS are useful for the identification and prioritization of candidate causal genes in candidate gene analysis following genome-wide association studies. A TWAS looks at the RNA products of a specific tissue and gives researchers the ability to examine the genes being expressed as well as their expression levels, which vary by tissue type. TWAS are valuable and flexible bioinformatics tools that look at associations between the expression of genes and complex traits and diseases. By looking at the association between gene expression and the trait expressed, genetic regulatory mechanisms can be investigated for the role that they play in the development of specific traits and diseases.
Transcriptome Analysis
A transcriptome is the sum of all RNA transcripts present in a given cell, tissue, or organ of an organism. Transcriptomes include both mRNA, which functions as an intermediate in the central dogma, and noncoding RNAs that may play other roles in protein synthesis. The central dogma describes how DNA gives rise to proteins through transcription and translation. RNAs are present in a cell at varied concentrations, play various roles outside the central dogma, and can be identified based on length and function. It is through functional elements that the transcriptional and translational activities of genes are regulated. Transcriptome analysis is beneficial for obtaining information about all RNAs present and can provide valuable insight into genetic mechanisms that are tissue-specific. The transcriptome was first investigated in the 1990s in an experiment performed to identify a partial transcriptome of the human brain. Researchers were able to identify 609
https://en.wikipedia.org/wiki/American%20Journal%20of%20Translational%20Research | The American Journal of Translational Research is an open-access medical journal published by e-Century Publishing Corporation. The journal covers translational research of medical science and the relevant biomedical research areas. It was established in 2009. The editor-in-chief is Wen-Hwa Lee (University of California, Irvine). The journal's main focus is original clinical and experimental research articles, but it also publishes review articles, editorials, hypotheses, letters to the editor, and meeting reports.
Abstracting and indexing
The journal is abstracted and indexed in:
According to Journal Citation Reports, the journal has a 2021 impact factor of 4.060. |
https://en.wikipedia.org/wiki/Stereoselectivity | In chemistry, stereoselectivity is the property of a chemical reaction in which a single reactant forms an unequal mixture of stereoisomers during a non-stereospecific creation of a new stereocenter or during a non-stereospecific transformation of a pre-existing one. The selectivity arises from differences in steric and electronic effects in the mechanistic pathways leading to the different products. Stereoselectivity can vary in degree but it can never be total since the activation energy difference between the two pathways is finite: both products are at least possible and merely differ in amount. However, in favorable cases, the minor stereoisomer may not be detectable by the analytic methods used.
An enantioselective reaction is one in which one enantiomer is formed in preference to the other, in a reaction that creates an optically active product from an achiral starting material, using either a chiral catalyst, an enzyme or a chiral reagent. The degree of selectivity is measured by the enantiomeric excess. An important variant is kinetic resolution, in which a pre-existing chiral center undergoes reaction with a chiral catalyst, an enzyme or a chiral reagent such that one enantiomer reacts faster than the other and leaves behind the less reactive enantiomer, or in which a pre-existing chiral center influences the reactivity of a reaction center elsewhere in the same molecule.
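The enantiomeric excess mentioned above is a simple normalized difference between the amounts of the two enantiomers; a minimal helper with hypothetical numbers:

```python
def enantiomeric_excess(major: float, minor: float) -> float:
    """Return ee in percent: 100 * (major - minor) / (major + minor)."""
    return 100.0 * (major - minor) / (major + minor)

# A 90:10 mixture of enantiomers corresponds to 80% ee; a racemic
# (50:50) mixture has 0% ee, and a single pure enantiomer has 100% ee.
print(enantiomeric_excess(90.0, 10.0))  # → 80.0
```

The diastereomeric excess discussed below is computed in exactly the same way, with the two quantities being the amounts of the major and minor diastereomers.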
A diastereoselective reaction is one in which one diastereomer is formed in preference to another (or in which a subset of all possible diastereomers dominates the product mixture), establishing a preferred relative stereochemistry. In this case, either two or more chiral centers are formed at once such that one relative stereochemistry is favored, or a pre-existing chiral center (which need not be optically pure) biases the stereochemical outcome during the creation of another. The degree of relative selectivity is measured by the diastereomeric excess.
Stereoconvergence can be consi |
https://en.wikipedia.org/wiki/Biosafety%20Clearing-House | The Biosafety Clearing-House is an international mechanism that exchanges information about the movement of genetically modified organisms, established under the Cartagena Protocol on Biosafety. It assists Parties (i.e. governments that have ratified the Protocol) to implement the protocol’s provisions and to facilitate sharing of information on, and experience with, living modified organisms (also known as genetically modified organisms, GMOs). It further assists Parties and other stakeholders to make informed decisions regarding the importation or release of GMOs.
The Biosafety Clearing-House Central Portal is accessible through the Web. The BCH is a distributed system, and information in it is owned and updated by the users themselves through an authenticated system to ensure timeliness and accuracy.
Mandate
Article 20, paragraph 1 of the Cartagena Protocol on Biosafety established the BCH as part of the clearing-house mechanism of the Convention on Biological Diversity, in order to:
(a) Facilitate the exchange of scientific, technical, environmental and legal information on, and experience with, living modified organisms; and
(b) Assist Parties to implement the Protocol, taking into account the special needs of developing country Parties, in particular the least developed and small island developing States among them, and countries with economies in transition as well as countries that are centres of origin and centres of genetic diversity.
First use in international law
The BCH differs from other similar mechanisms established under other international legal agreements because it is in fact essential for the successful implementation of its parent body, the Protocol. It was the first Internet-based information-exchange mechanism created that must be used to fulfil certain international legal obligations - not only do Parties to the Protocol have a legal obligation to provide certain types of information to the BCH within defined time-frames, but certa |
https://en.wikipedia.org/wiki/Pickled%20pepper | A pickled pepper is a Capsicum pepper preserved by pickling, which usually involves submersion in a brine of vinegar and salted water with herbs and spices, often including peppercorns, coriander, dill, and bay leaf.
Common pickled peppers are the banana pepper, the Cubanelle, the bell pepper, sweet and hot cherry peppers, the Hungarian wax pepper, the Greek pepper, the serrano pepper, and the jalapeño. They are often found in supermarkets alongside pickled cucumbers.
Pickled sliced jalapeños are also used frequently for topping nachos and other Mexican dishes. These peppers are a common ingredient used by sandwich shops such as Quiznos, Subway, and Wawa. Pickled peppers are found throughout the world, such as the Italian peperoncini sott'aceto and Indonesia's pickled bird's eye chili, besides the already-mentioned American and Latin American usages.
The flavored brine of hot yellow peppers is commonly used as a condiment in Southern cooking in the United States.
Information
To achieve the best results and minimize the risk of botulism, only fresh, blemish-free peppers should be used, along with vinegar of at least 5% acidity; the acidic taste can be reduced by adding sugar. While larger peppers are sliced before pickling, smaller peppers are often placed into the pickling solution whole; however, they still require slits so that the vinegar can penetrate the pepper. To avoid botulism, it is recommended that pickled pepper products be processed in boiling water if they are to be stored at room temperature; improperly processed peppers led to the largest outbreak of botulism in U.S. history.
As with pickled cucumbers, there are multiple ways of pickling peppers. The most common is as above, pickling in an acidic brine and canned; next is quick-pickled or refrigerator pickling, which skips the canning step and requires the peppers to be stored in the refrigerator as mentioned above. For lacto-fermented pickled peppers, vinegar is omitted from the salty b |
https://en.wikipedia.org/wiki/Kind%20%28type%20theory%29 | In the area of mathematical logic and computer science known as type theory, a kind is the type of a type constructor or, less commonly, the type of a higher-order type operator. A kind system is essentially a simply typed lambda calculus "one level up", endowed with a primitive type, denoted * and called "type", which is the kind of any data type which does not need any type parameters.
A kind is sometimes confusingly described as the "type of a (data) type", but it is actually more of an arity specifier. Syntactically, it is natural to consider polymorphic types to be type constructors, thus non-polymorphic types to be nullary type constructors. But all nullary constructors, thus all monomorphic types, have the same, simplest kind; namely *.
Since higher-order type operators are uncommon in programming languages, in most programming practice, kinds are used to distinguish between data types and the types of constructors which are used to implement parametric polymorphism. Kinds appear, either explicitly or implicitly, in languages whose type systems account for parametric polymorphism in a programmatically accessible way, such as C++, Haskell and Scala.
Examples
*, pronounced "type", is the kind of all data types seen as nullary type constructors, and also called proper types in this context. This normally includes function types in functional programming languages.
* -> * is the kind of a unary type constructor, e.g. of a list type constructor.
* -> * -> * is the kind of a binary type constructor (via currying), e.g. of a pair type constructor, and also that of a function type constructor (not to be confused with the result of its application, which itself is a function type, thus of kind *)
(* -> *) -> * is the kind of a higher-order type operator from unary type constructors to proper types.
Kinds in Haskell
(Note: Haskell documentation uses the same arrow for both function types and kinds.)
The kind system of Haskell 98 includes exactly two kinds:
*, pronounced "type" is the kind o |
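The kind rules listed in the examples can be sketched as a toy kind checker in Python (a model of the rules, not Haskell itself; the names STAR, arrow, and apply_kind are illustrative):

```python
# Toy kind checker: kinds are "*" or nested ("k1", "->", "k2") tuples.
STAR = "*"  # the kind of proper types (nullary type constructors)

def arrow(k1, k2):
    """Kind of a type constructor from k1 to k2, e.g. arrow(STAR, STAR) is * -> *."""
    return (k1, "->", k2)

def apply_kind(f_kind, arg_kind):
    """Kind of applying a constructor of kind f_kind to an argument of kind arg_kind."""
    if f_kind == STAR:
        raise TypeError("a proper type (*) takes no type parameters")
    k1, _, k2 = f_kind
    if k1 != arg_kind:
        raise TypeError(f"expected argument of kind {k1}, got {arg_kind}")
    return k2

LIST = arrow(STAR, STAR)               # list constructor: * -> *
PAIR = arrow(STAR, arrow(STAR, STAR))  # pair constructor, curried: * -> * -> *

print(apply_kind(LIST, STAR))                    # applying [] to Int yields kind *
print(apply_kind(PAIR, STAR))                    # partial application: kind * -> *
print(apply_kind(apply_kind(PAIR, STAR), STAR))  # full application: kind *
```

The partial application of the pair constructor illustrates the currying described above: applying a `* -> * -> *` constructor to one proper type leaves a unary constructor of kind `* -> *`.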
https://en.wikipedia.org/wiki/1%2C1-Dimethyldiborane | 1,1-Dimethyldiborane is the organoboron compound with the formula (CH3)2B(μ-H)2BH2. A pair of related 1,2-dimethyldiboranes are also known. It is a colorless gas that ignites in air.
Formation
The methylboranes were first prepared by H. I. Schlesinger and A. O. Walker in the 1930s. Methylboranes are formed by the reaction of diborane with trimethylborane. This reaction produces four products with differing degrees of methyl substitution on diborane: 1-methyldiborane, 1,1-dimethyldiborane, 1,1,2-trimethyldiborane, and 1,1,2,2-tetramethyldiborane.
Tetramethyl lead reacts with diborane in a 1,2-dimethoxyethane solvent at room temperature to make a range of methyl substituted diboranes, ending up at trimethylborane, but including 1,1-dimethyldiborane, and trimethyldiborane. The other outputs of the reaction are hydrogen gas and lead metal.
Other methods to form methyldiboranes include heating trimethylborane with hydrogen. Alternatively, trimethylborane reacts with borohydride salts in the presence of hydrogen chloride, aluminium chloride, or boron trichloride. If the borohydride is sodium borohydride, then methane is a side product; if the metal is lithium, no methane is produced. Dimethylchloroborane and methyldichloroborane are also produced as gaseous products.
When Cp2Zr(CH3)2 reacts with borane dissolved in tetrahydrofuran, a borohydro group inserts into the zirconium carbon bond, and methyl diboranes are produced.
In ether dimethylcalcium reacts with diborane to produce dimethyldiborane and calcium borohydride:
Ca(CH3)2 + 2 B2H6 → Ca(BH4)2 + B2H4(CH3)2
On standing, 1,2-dimethyldiborane slowly converts to 1,1-dimethyldiborane.
Gas chromatography can be used to determine the amounts of the methyl boranes in a mixture. The order they elute are diborane, monomethyldiborane, trimethylborane, 1,1-dimethyldiborane, 1,2-dimethyldiborane, trimethyldiborane, and finally tetramethyldiborane.
Selected properties
1,1-Dimethyldiborane has a dipole mom |
https://en.wikipedia.org/wiki/Case-ready%20meat | Case-ready meat, retail-ready meat, or pre-packaged meat refers to fresh meat that is processed and packaged at a central facility and delivered to the store ready to be put directly into the meat case.
Background
Traditionally, most meat was shipped as primal cuts from the slaughterhouse to the butcher. Meat was then cut to commonly used cuts and packaged at the store or was custom cut for consumers.
Case-ready meat is cut and packaged at central regional facilities and sent to retail stores ready for placement in refrigerated display cases. Local butchering, cutting, trimming, and overwrapping the meat at retail stores is greatly reduced.
Advantages of the centralized master-packager preparation include: efficiency of centralized operations, tight quality control, close control of sanitization, specialized packaging, etc.
Packaging
Centralized cutting and processing of meats has the potential of reducing the shelf life of the cuts. Specialized packaging is needed to regain and even extend that shelf life.
Packaging includes tray, absorbent pad, specialty plastic films, etc.
Oxygen scavengers and modified atmosphere packaging are used to keep the products visually appealing and consumer safe.
Distribution
Control of temperature during the distribution cold chain is critical to meat quality and safety.
See also
Food industry
Food science
Vacuum packaging
Skin pack |
https://en.wikipedia.org/wiki/Code%2039 | Code 39 (also known as Alpha39, Code 3 of 9, Code 3/9, Type 39, USS Code 39, or USD-3) is a variable length, discrete barcode symbology defined in ISO/IEC 16388:2007.
The Code 39 specification defines 43 characters, consisting of uppercase letters (A through Z), numeric digits (0 through 9) and a number of special characters (-, ., $, /, +, %, and space). An additional character (denoted '*') is used for both start and stop delimiters. Each character is composed of nine elements: five bars and four spaces. Three of the nine elements in each character are wide (binary value 1), and six elements are narrow (binary value 0). The width ratio between narrow and wide is not critical, and may be chosen between 1:2 and 1:3.
The barcode itself does not contain a check digit (in contrast to—for instance—Code 128), but it can be considered self-checking on the grounds that a single erroneously interpreted bar cannot generate another valid character. Possibly the most serious drawback of Code 39 is its low data density: It requires more space to encode data in Code 39 than, for example, in Code 128. This means that very small goods cannot be labeled with a Code 39 based barcode. However, Code 39 is still used by some postal services (although the Universal Postal Union recommends using Code 128 in all cases), and can be decoded with virtually any barcode reader. One advantage of Code 39 is that since there is no need to generate a check digit, it can easily be integrated into an existing printing system by adding a barcode font to the system or printer and then printing the raw data in that font.
Code 39 was developed by Dr. David Allais and Ray Stevens of Intermec in 1974. Their original design included two wide bars and one wide space in each character, resulting in 40 possible characters. Setting aside one of these characters as a start and stop pattern left 39 characters, which was the origin of the name Code 39. Four punctuation characters were later added, using no wi |
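The structural invariant described above (nine elements per character, five bars and four spaces, exactly three of them wide) is what makes the symbology self-checking. A minimal sketch of a validator; the sample pattern is hypothetical, not an entry from the real Code 39 character table:

```python
# Validate the 3-of-9 structure of a Code 39 character pattern.
# Elements alternate bar/space (five bars, four spaces); widths: 0 = narrow, 1 = wide.

def is_valid_code39_element_pattern(widths):
    if len(widths) != 9:                      # nine elements per character
        return False
    if any(w not in (0, 1) for w in widths):
        return False
    return sum(widths) == 3                   # exactly three wide, six narrow

def render(widths, narrow=1, wide=2):
    """Render as run lengths; the wide:narrow ratio may be anywhere from 2:1 to 3:1."""
    return [wide if w else narrow for w in widths]

sample = [0, 1, 0, 0, 1, 0, 1, 0, 0]   # hypothetical pattern, for illustration only
print(is_valid_code39_element_pattern(sample))  # True
print(render(sample))                           # [1, 2, 1, 1, 2, 1, 2, 1, 1]
```

Because every legal character has exactly three wide elements, a single misread element changes that count and cannot yield another valid character, which is the self-checking property mentioned above.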
https://en.wikipedia.org/wiki/Dextroscope | The Dextroscope is a medical equipment system that creates a virtual reality (VR) environment in which surgeons can plan neurosurgical and other surgical procedures.
The Dextroscope is designed to show a patient's 3D anatomical relationships and pathology in great detail. Although its main purpose is for planning surgery, the dextroscope has also proven useful in research in cardiology, radiology and medical education.
History
The Dextroscope started as a research project in the mid-90s at the Kent Ridge Digital Labs research institute (part of Singapore's Agency for Science, Technology and Research (A*STAR)). It was initially named the Virtual Workbench and was commercialized in 2000 by the company Volume Interactions Pte Ltd under the name Dextroscope. The Dextroscope was selected in 2021 by A*STAR as one of the 30 innovations and inventions that pushed scientific boundaries, made an economic impact or improved lives over its 30-year history (A*STAR@30: 30 Innovations and Inventions Over Three Decades).
The Dextroscope was designed to be a practical variation of Virtual Reality which introduced an alternative to the prevalent trend of full immersion of the 1990s. Instead of immersing the whole user into a virtual reality, it just immersed the neurosurgeon into the patient data.
Description
The Dextroscope allows its user to interact intuitively with a Virtual Patient. This Virtual Patient is composed of computer-generated 3D multi-modal images obtained from any DICOM tomographic data including CT, MRI, MRA, MRV, functional MRI and CTA, PET, SPECT and Tractography. The Dextroscope can work with any multi-modality combination, supporting polygonal meshes as well.
The surgeon sits at the Dextroscope 3D interaction console and manipulates the Virtual Patient using both hands, similar to real life. Using stereoscopic visualisations displayed via a mirror, the surgeon sees the Virtual Patient floating behind the mirror but within easy reach of the han |
https://en.wikipedia.org/wiki/List%20of%20Barbadian%20flags | This is a list of flags used in Barbados.
National flag
Governmental flags
Military flags
Historical flags
See also
Flag of the British Windward Islands
Flag of the West Indies Federation
National symbols of Barbados
External links
Flag of the prime minister of Barbados
Flag of the governor-general of Barbados
History of Barbados
Colonial symbols of Barbados
Barbados Air Force roundel
Flags
Flags
Barbados |
https://en.wikipedia.org/wiki/Duqqa | Duqqa, du'ah, do'a, or dukkah is an Egyptian and Middle Eastern condiment consisting of a mixture of herbs, nuts (usually hazelnut), and spices. It is typically used as a dip with bread or fresh vegetables for an hors d'œuvre. Pre-made versions of duqqa can be bought in the spice markets of Cairo, where they are sold in paper cones, with the simplest version being crushed mint, salt, and pepper. The packaged variety that is found in markets is composed of parched wheat flour mixed with cumin and caraway. In the Hejaz region it has been part of the regional cuisine for decades.
Etymology
The word is derived from the Arabic for "to pound", since the mixture of spices and nuts is pounded together after being dry roasted to a texture that is neither powdered nor paste-like. The actual composition of the spice mix can vary from family to family and vendor to vendor, though there are common ingredients, such as sesame, coriander, cumin, salt and black pepper. A 19th-century text lists marjoram, mint, zaatar and chickpeas as further ingredients that can be used in the mixture. A report from 1978 indicates that even further ingredients can be used, such as nigella, millet flour and dried cheese. Some modern variants include pine nuts, pumpkin seeds or sunflower seeds.
Internationally
Duqqa is now becoming popular in some countries outside Egypt. In the United States it has gained exposure through such TV shows as Top Chef, Chopped and Iron Chef America. In Australia, several companies now make it in a variety of flavours. It has become popular in the past ten years, probably due to recent Lebanese and Arabic immigration as well as television cooking shows such as SBS Food Network. It can be found in supermarkets, specialty stores and many farmers' markets.
See also
List of Middle Eastern dishes
List of African dishes
Charoset
Notes |
https://en.wikipedia.org/wiki/Shit%20Museum | The Shit Museum () is a museum in the province of Piacenza, in the north of Italy, and is reported to be the world's first museum dedicated to faeces. The museum opened on 5 May 2015, having been founded by agricultural businessman Gianantonio Locatelli and three associates.
History
The museum, set in a medieval castle in the village of Castelbosco, was created by a local dairy farmer whose herd of 2,500 (some reports say 3,500) cows produce of milk a day, which is used to make Grana Padano cheese. The cows also produce around of dung, which is transformed into methane, fertiliser for the fields, as well as raw material for plaster and bricks. The dung is used to generate power to run the operation. The museum has a symbiotic relationship with the farm and cheesemaking facility. It is an eccentric byproduct of the huge aggregation of cows and their prodigious output of cow manure.
Eco-friendly recycling is an important theme of the museum. That includes the reuse of farmyard manure, but the museum also features many artefacts on display, including a lump of fossilised dinosaur faeces, jars of faeces, art works inspired by human waste, ancient Roman medicinal cures that featured animal excrement, and a collection of dung beetles.
An even broader motif (and goal) is "transformation" in an engineering, philosophical, scatological, sociological, and practical sense. As the organization's website offers: "The idea for a new museum slowly took shape, emerging from manure to deal with the broader theme of transformation. The museum would be an agent of change which, through educational and research activities, the production of objects of everyday use and the gathering of artefacts and stories concerning excrement in the modern world and throughout history, was to dismantle cultural norms and prejudices."
Transforming the site took more than twenty years. It started with paint. The museum commissioned artists David Tremlett and Anne and Patrick Poirier to tra |
https://en.wikipedia.org/wiki/Neural%20accommodation | Neural accommodation or neuronal accommodation occurs when a neuron or muscle cell is depolarised by slowly rising current (ramp depolarisation) in vitro. The Hodgkin–Huxley model also shows accommodation. Sudden depolarisation of a nerve evokes a propagated action potential by activating voltage-gated fast sodium channels incorporated in the cell membrane, provided the depolarisation is strong enough to reach threshold. The open sodium channels allow more sodium ions to flow into the cell, resulting in further depolarisation, which subsequently opens even more sodium channels. At a certain moment this process becomes regenerative (a vicious cycle) and results in the rapid ascending phase of the action potential. In parallel with the depolarisation and sodium channel activation, the inactivation process of the sodium channels is also driven by depolarisation. Since inactivation is much slower than the activation process, during the regenerative phase of the action potential, inactivation is unable to prevent the "chain reaction"-like rapid increase in the membrane voltage.
During neuronal accommodation, the slowly rising depolarisation drives the activation and inactivation, as well as the potassium gates simultaneously and never evokes action potential. Failure to evoke action potential by ramp depolarisation of any strength had been a great puzzle until Hodgkin and Huxley created their physical model of action potential. Later in their life they received a Nobel Prize for their influential discoveries. Neuronal accommodation can be explained in two ways. "First, during the passage of a constant cathodal current through the membrane, the potassium conductance and the degree of inactivation will rise, both factors raising the threshold. Secondly, the steady state ionic current at all strengths of depolarization is outward, so that an applied cathodal current which rises sufficiently slowly will never evoke a regenerative response from the membrane, and excitation will not |
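The contrast described above can be reproduced with the Hodgkin–Huxley model itself. Below is a minimal Euler-integration sketch using the classic squid-axon parameters (voltages in mV relative to rest): a sudden current step evokes a full action potential, while the same amplitude reached by a slow ramp stays subthreshold because inactivation and potassium activation keep pace with the depolarisation. The step/ramp amplitudes and time window are illustrative choices, not values from the article.

```python
import math

# Classic Hodgkin-Huxley squid-axon parameters (V in mV relative to rest).
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
ENa, EK, EL = 115.0, -12.0, 10.6

def a_n(V): return 0.01 * (10 - V) / (math.exp((10 - V) / 10) - 1)
def b_n(V): return 0.125 * math.exp(-V / 80)
def a_m(V): return 0.1 * (25 - V) / (math.exp((25 - V) / 10) - 1)
def b_m(V): return 4.0 * math.exp(-V / 18)
def a_h(V): return 0.07 * math.exp(-V / 20)
def b_h(V): return 1.0 / (math.exp((30 - V) / 10) + 1)

def simulate(current, t_max=60.0, dt=0.01):
    """Forward-Euler HH integration; `current` maps time (ms) to uA/cm^2.
    Returns the peak membrane voltage reached."""
    V = 0.0
    n = a_n(V) / (a_n(V) + b_n(V))   # gates start at their resting steady state
    m = a_m(V) / (a_m(V) + b_m(V))
    h = a_h(V) / (a_h(V) + b_h(V))
    peak, t = V, 0.0
    while t < t_max:
        I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
        V += dt * (current(t) - I_ion) / C
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        peak = max(peak, V)
        t += dt
    return peak

step_peak = simulate(lambda t: 5.0 if t > 5 else 0.0)  # sudden depolarising step
ramp_peak = simulate(lambda t: 5.0 * t / 60.0)         # same amplitude via a slow ramp
print(step_peak)   # a full action potential, far above threshold
print(ramp_peak)   # accommodation: the slow ramp stays near rest
```

The step fires because activation outruns inactivation; the ramp does not, because the slowly rising current lets inactivation and potassium conductance rise in step, exactly the mechanism the paragraph above describes.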
https://en.wikipedia.org/wiki/Argument%20of%20periapsis | The argument of periapsis (also called argument of perifocus or argument of pericenter), symbolized as ω, is one of the orbital elements of an orbiting body. Parametrically, ω is the angle from the body's ascending node to its periapsis, measured in the direction of motion.
For specific types of orbits, terms such as argument of perihelion (for heliocentric orbits), argument of perigee (for geocentric orbits), argument of periastron (for orbits around stars), and so on, may be used (see apsis for more information).
An argument of periapsis of 0° means that the orbiting body will be at its closest approach to the central body at the same moment that it crosses the plane of reference from South to North. An argument of periapsis of 90° means that the orbiting body will reach periapsis at its northmost distance from the plane of reference.
Adding the argument of periapsis to the longitude of the ascending node gives the longitude of the periapsis. However, especially in discussions of binary stars and exoplanets, the terms "longitude of periapsis" or "longitude of periastron" are often used synonymously with "argument of periapsis".
Calculation
In astrodynamics the argument of periapsis ω can be calculated as follows:
ω = arccos( (n · e) / (|n| |e|) )
If ez < 0 then ω → 2π − ω.
where:
n is a vector pointing towards the ascending node (i.e. the z-component of n is zero),
e is the eccentricity vector (a vector pointing towards the periapsis).
In the case of equatorial orbits (which have no ascending node), the argument is strictly undefined. However, if the convention of setting the longitude of the ascending node Ω to 0 is followed, then the value of ω follows from the two-dimensional case:
ω = atan2(ey, ex)
If the orbit is clockwise (i.e. (r × v)z < 0) then ω → 2π − ω.
where:
ex and ey are the x- and y-components of the eccentricity vector e.
In the case of circular orbits it is often assumed that the periapsis is placed at the ascending node and therefore ω = 0. However, in the professional exoplanet com |
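The calculation above can be sketched directly with the standard library (a minimal implementation of the arccos formula and the two-dimensional equatorial case; vectors are plain 3-element lists, and the result is normalized to [0, 2π)):

```python
import math

def arg_periapsis(n, e):
    """omega = arccos(n.e / (|n||e|)); if e_z < 0 then omega -> 2*pi - omega.
    n: vector toward the ascending node, e: eccentricity vector."""
    dot = sum(a * b for a, b in zip(n, e))
    omega = math.acos(dot / (math.hypot(*n) * math.hypot(*e)))
    if e[2] < 0:                       # periapsis below the reference plane
        omega = 2 * math.pi - omega
    return omega

def arg_periapsis_equatorial(ex, ey, clockwise=False):
    """Two-dimensional case, with the longitude of the ascending node set to 0."""
    omega = math.atan2(ey, ex) % (2 * math.pi)
    if clockwise:                      # (r x v)_z < 0
        omega = 2 * math.pi - omega
    return omega

# Eccentricity vector along the node line: omega = 0.
print(arg_periapsis([1.0, 0.0, 0.0], [0.5, 0.0, 0.0]))  # 0.0
# Periapsis 90 degrees past the node, above the reference plane:
print(math.degrees(arg_periapsis([1.0, 0.0, 0.0], [0.0, 0.3, 0.4])))  # 90.0
```

The second example matches the geometric statement above: an argument of periapsis of 90° puts periapsis at the orbit's northmost point relative to the reference plane.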
https://en.wikipedia.org/wiki/ST200%20family | The ST200 is a family of very long instruction word (VLIW) processor cores based on technology jointly developed by Hewlett-Packard Laboratories and STMicroelectronics under the name Lx. The main application of the ST200 family is embedded media processing.
Lx architecture
The Lx architecture is closer to the original VLIW architecture defined by the Trace processor series from Multiflow than to the EPIC architectures exemplified by the IA-64. More precisely, the Lx is a symmetric clustered architecture, where clusters communicate through explicit send and receive instructions. Each cluster executes up to 4 instructions per cycle with a maximum of one control instruction (goto, jump, call, return), one memory instruction (load, store, pre-fetch), and two multiply instructions per cycle. All arithmetic instructions operate on integer values with operands belonging either to the general register file (64 × 32-bit) or to the branch register file (8 × 1-bit). General register $r0 always reads as zero, while general register $r63 is the link register. In order to eliminate some conditional branches, the Lx architecture also provides partial predication support in the form of conditional selection instructions. There is no division instruction, but a divide-step instruction is provided. All instructions are fully pipelined. The RAW latencies are single-cycle, except for the load, multiply, and compare-to-branch latencies. The WAR latencies are zero cycles and the WAW latencies are single-cycle.
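The per-cycle issue constraints described above (at most 4 instructions, with at most one control, one memory, and two multiply operations) can be captured by a small bundle validator. A sketch; the instruction class names are illustrative, not ST200 mnemonics:

```python
from collections import Counter

# Per-cycle issue limits for one Lx cluster, taken from the description above.
LIMITS = {"control": 1, "memory": 1, "multiply": 2}
MAX_ISSUE = 4

def valid_bundle(instr_classes):
    """Check a candidate issue bundle against the Lx cluster limits.
    instr_classes is a list like ["control", "memory", "multiply", "alu"]."""
    if len(instr_classes) > MAX_ISSUE:
        return False
    counts = Counter(instr_classes)
    return all(counts[cls] <= limit for cls, limit in LIMITS.items())

print(valid_bundle(["control", "memory", "multiply", "multiply"]))  # True
print(valid_bundle(["memory", "memory", "alu", "alu"]))             # False: two memory ops
```

A VLIW compiler's scheduler enforces exactly this kind of structural check when packing operations into a cycle, since the hardware performs no dynamic conflict resolution.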
The principal architects for the ST200 Lx implementation were Paolo Faraboschi (HPL, architecture) and Fred Homewood (STM, microarchitecture). Key members of the architecture and microarchitecture team included Geoffrey Brown (HPL co-lead), Giuseppe Desoli (HP), Gary Vondran (HP), Trefor Southwell (ST), Tony Jarvis (ST), and Alex Starr (ST).
The architecture was a true cross-company development; the teams were co-sited for the early part of the project, which lasted some two years.
ST200 c |
https://en.wikipedia.org/wiki/Hyperdeterminant | In algebra, the hyperdeterminant is a generalization of the determinant. Whereas a determinant is a scalar valued function defined on an n × n square matrix, a hyperdeterminant is defined on a multidimensional array of numbers or tensor. Like a determinant, the hyperdeterminant is a homogeneous polynomial with integer coefficients in the components of the tensor. Many other properties of determinants generalize in some way to hyperdeterminants, but unlike a determinant, the hyperdeterminant does not have a simple geometric interpretation in terms of volumes.
There are at least three definitions of hyperdeterminant. The first was discovered by Arthur Cayley in 1843 and presented to the Cambridge Philosophical Society. His paper is in two parts, and Cayley's first hyperdeterminant is covered in the second part. It is usually denoted by det0. The second Cayley hyperdeterminant originated in 1845 and is often denoted "Det". This definition is a discriminant for a singular point on a scalar valued multilinear map.
Cayley's first hyperdeterminant is defined only for hypercubes having an even number of dimensions (although variations exist in odd dimensions). Cayley's second hyperdeterminant is defined for a restricted range of hypermatrix formats (including the hypercubes of any dimensions). The third hyperdeterminant, most recently defined by Glynn, occurs only for fields of prime characteristic p. It is denoted by detp and acts on all hypercubes over such a field.
Only the first and third hyperdeterminants are "multiplicative," except for the second hyperdeterminant in the case of "boundary" formats. The first and third hyperdeterminants also have closed formulae as polynomials and therefore their degrees are known, whereas the second one does not appear to have a closed formula or degree in all cases that are known.
The notation for determinants can be extended to hyperdeterminants without change or ambiguity. Hence the hyperdeterminant of a hypermatrix A may be written usin |
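The smallest nontrivial case of Cayley's second hyperdeterminant is the 2 × 2 × 2 hypermatrix, where Det has a known closed formula of degree 4 in the eight entries. The sketch below implements that standard formula and checks the discriminant property on two simple hypermatrices (an all-ones rank-one hypermatrix, which is singular, and a "diagonal" one, which is not):

```python
def det_222(a):
    """Cayley's 2x2x2 hyperdeterminant Det; a[i][j][k] for i, j, k in {0, 1}."""
    A = {(i, j, k): a[i][j][k] for i in (0, 1) for j in (0, 1) for k in (0, 1)}
    # Squares of the four "diagonal" products.
    sq = (A[0,0,0]**2 * A[1,1,1]**2 + A[0,0,1]**2 * A[1,1,0]**2
          + A[0,1,0]**2 * A[1,0,1]**2 + A[1,0,0]**2 * A[0,1,1]**2)
    # The six pairwise products of those diagonal products.
    pairs = (A[0,0,0]*A[0,0,1]*A[1,1,0]*A[1,1,1]
           + A[0,0,0]*A[0,1,0]*A[1,0,1]*A[1,1,1]
           + A[0,0,0]*A[1,0,0]*A[0,1,1]*A[1,1,1]
           + A[0,0,1]*A[0,1,0]*A[1,0,1]*A[1,1,0]
           + A[0,0,1]*A[1,0,0]*A[0,1,1]*A[1,1,0]
           + A[0,1,0]*A[1,0,0]*A[0,1,1]*A[1,0,1])
    quad = A[0,0,0]*A[0,1,1]*A[1,0,1]*A[1,1,0] + A[0,0,1]*A[0,1,0]*A[1,0,0]*A[1,1,1]
    return sq - 2 * pairs + 4 * quad

# A rank-one (outer-product) hypermatrix is singular: Det = 0.
ones = [[[1, 1], [1, 1]], [[1, 1], [1, 1]]]
print(det_222(ones))  # 0
# "Diagonal" hypermatrix with a000 = a111 = 1 and all other entries 0: Det = 1.
diag = [[[1, 0], [0, 0]], [[0, 0], [0, 1]]]
print(det_222(diag))  # 1
```

Since the formula is a polynomial with integer coefficients in the entries, the computation is exact for integer hypermatrices, matching the general property stated above.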
https://en.wikipedia.org/wiki/Pterygopalatine%20ganglion | The pterygopalatine ganglion (aka Meckel's ganglion, nasal ganglion, or sphenopalatine ganglion) is a parasympathetic ganglion in the pterygopalatine fossa. It is one of four parasympathetic ganglia of the head and neck, (the others being the submandibular, otic, and ciliary ganglion).
It is largely innervated by the greater petrosal nerve (a branch of the facial nerve). Its postsynaptic axons project to the lacrimal glands and nasal mucosa.
The flow of blood to the nasal mucosa, in particular the venous plexus of the conchae, is regulated by the pterygopalatine ganglion and heats or cools the air in the nose.
Structure
The pterygopalatine ganglion (of Meckel), the largest of the parasympathetic ganglia associated with the branches of the maxillary nerve, is deeply placed in the pterygopalatine fossa, close to the sphenopalatine foramen. It is triangular or heart-shaped, of a reddish-gray color, and is situated just below the maxillary nerve as it crosses the fossa.
The pterygopalatine ganglion supplies the lacrimal gland, paranasal sinuses, glands of the mucosa of the nasal cavity and pharynx, the gingiva, and the mucous membrane and glands of the hard palate. It communicates anteriorly with the nasopalatine nerve.
Roots
It receives a sensory, a parasympathetic, and a sympathetic root.
Sensory root
Its sensory root is derived from two sphenopalatine branches of the maxillary nerve; their fibers, for the most part, pass directly into the palatine nerves; a few, however, enter the ganglion, constituting its sensory root.
Parasympathetic root
Its parasympathetic root is derived from the nervus intermedius (a part of the facial nerve) through the greater petrosal nerve.
In the pterygopalatine ganglion, the preganglionic parasympathetic fibers from the greater petrosal branch of the facial nerve synapse with neurons whose postganglionic axons, vasodilator, and secretory fibers are distributed with the deep branches of the trigeminal nerve to the mucous membrane |
https://en.wikipedia.org/wiki/The%20Library%20of%20Babel | "The Library of Babel" () is a short story by Argentine author and librarian Jorge Luis Borges (1899–1986), conceiving of a universe in the form of a vast library containing all possible 410-page books of a certain format and character set.
The story was originally published in Spanish in Borges' 1941 collection of stories El jardín de senderos que se bifurcan (The Garden of Forking Paths). That entire book was, in turn, included within his much-reprinted Ficciones (1944). Two English-language translations appeared approximately simultaneously in 1962, one by James E. Irby in a diverse collection of Borges's works titled Labyrinths and the other by Anthony Kerrigan as part of a collaborative translation of the entirety of Ficciones.
Plot
Borges' narrator describes how his universe consists of an enormous expanse of adjacent hexagonal rooms. In each room, there is an entrance on one wall, the bare necessities for human survival on another wall, and four walls of bookshelves. Though the order and content of the books are random and apparently completely meaningless, the inhabitants believe that the books contain every possible ordering of just 25 basic characters (22 letters, the period, the comma, and space). Though the vast majority of the books in this universe are pure gibberish, the library also must contain, somewhere, every coherent book ever written, or that might ever be written, and every possible permutation or slightly erroneous version of every one of those books. The narrator notes that the library must contain all useful information, including predictions of the future, biographies of any person, and translations of every book in all languages. Conversely, for many of the texts, some language could be devised that would make it readable with any of a vast number of different contents.
Despite—indeed, because of—this glut of information, all books are totally useless to the reader, leaving the librarians in a state of suicidal despair. This leads some |
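The story's format fixes the size of the library exactly. Assuming the page layout the story specifies (410 pages per book, 40 lines per page, and about 80 characters per line, the commonly cited figures), the count of distinct books is 25 raised to the number of character positions, a number whose sheer size is the point of the story:

```python
import math

SYMBOLS = 25                        # 22 letters, the period, the comma, and space
PAGES, LINES, CHARS = 410, 40, 80   # per-book format as given in the story
slots = PAGES * LINES * CHARS       # 1,312,000 character positions per book

# The library holds 25**slots distinct books. The integer itself is
# astronomically large, so we report its size in decimal digits via logarithms.
digits = math.floor(slots * math.log10(SYMBOLS)) + 1
print(slots)    # 1312000
print(digits)   # 1834098 -- the count 25**1312000 has about 1.83 million digits
```

For comparison, the number of atoms in the observable universe is usually estimated at around 10^80, a number of only 81 digits.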
https://en.wikipedia.org/wiki/Oscillistor | An oscillistor is a semiconductor device consisting of a semiconductor specimen placed in a magnetic field, in series with a resistor and a power supply. The device produces high-frequency oscillations which are very close to sinusoidal.
The basic principle of operation is the effect of spiral unsteadiness of electron-hole (p-n) plasmas.
See also
Electronic oscillator |
https://en.wikipedia.org/wiki/Amazon%20Machine%20Image | An Amazon Machine Image (AMI) is a special type of virtual appliance that is used to create a virtual machine within the Amazon Elastic Compute Cloud ("EC2"). It serves as the basic unit of deployment for services delivered using EC2.
Overview
Like all virtual appliances, the main component of an AMI is a read-only filesystem image that includes an operating system (e.g., Linux, Unix, or Windows) and any additional software required to deliver a service or a portion of it.
An AMI includes the following:
A template for the root volume for the instance (for example, an operating system, an application server, and applications)
Launch permissions that control which AWS accounts can use the AMI to launch instances
A block device mapping that specifies the volumes to attach to the instance when it's launched
The AMI filesystem is compressed, encrypted, signed, split into a series of 10 MB chunks and uploaded into Amazon S3 for storage. An XML manifest file stores information about the AMI, including name, version, architecture, default kernel id, decryption key and digests for all of the filesystem chunks.
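The bundling scheme described above (image split into 10 MB chunks, with a manifest recording a digest for every chunk) can be sketched as follows. This is a simplified illustration only: real AMI bundles are also compressed, encrypted, and signed, and the manifest is a specific XML document rather than a Python dict.

```python
import hashlib
import io

CHUNK_SIZE = 10 * 1024 * 1024   # 10 MB parts, as in the AMI bundle format

def bundle(image_bytes, name="example-image"):
    """Split an image into fixed-size chunks and build a manifest of
    per-chunk digests (sketch; omits compression, encryption, signing)."""
    stream = io.BytesIO(image_bytes)
    chunks, digests = [], []
    while True:
        part = stream.read(CHUNK_SIZE)
        if not part:
            break
        chunks.append(part)
        digests.append(hashlib.sha1(part).hexdigest())
    manifest = {"name": name, "size": len(image_bytes), "parts": digests}
    return chunks, manifest

chunks, manifest = bundle(b"\0" * (25 * 1024 * 1024))  # 25 MB dummy image
print(len(chunks))        # 3 chunks: 10 MB + 10 MB + 5 MB
print(manifest["size"])   # 26214400
```

On download, the per-chunk digests in the manifest let the tooling verify each part independently before the chunks are reassembled and decrypted.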
Current AMIs are available for hardware-virtualized machines (HVM), where the operating system is installed as it would be on real hardware. With the older, still-available paravirtualized machines (PV), an AMI did not include a kernel image, only a pointer to the default kernel id, which could be chosen from an approved list of safe kernels maintained by Amazon and its partners (e.g., Red Hat, Canonical, Microsoft). Users could choose kernels other than the default when booting a PV AMI.
Operating systems
When it launched in August 2006, the EC2 service offered Linux and later Sun Microsystems' OpenSolaris and Solaris Express Community Edition. In October 2008, EC2 added the Windows Server 2003 and Windows Server 2008 operating systems to the list of available operating systems. As of December 2010, it has also been reported to run FreeBSD; in Mar |
https://en.wikipedia.org/wiki/Manel%20Mu%C3%B1oz | Manel De Aguas Muñoz (born 10 October 1996 in Barcelona), known artistically as Manel De Aguas, is a Spanish cyborg artist and transpecies activist based in Barcelona, best known for developing and installing weather sensory fins in his head. The fins, formally known as 'Weather Fins', allow him to hear atmospheric pressure, humidity and temperature changes through implants at each side of his head. Depending on the changes he feels, he can predict weather changes as well as sense his current altitude.
De Aguas studied contemporary photography in Barcelona and became Cyborg Foundation's artist in residence in 2016. In 2017, he co-founded the Transpecies Society, an association that gives voice to people who do not identify as being 100% human and raises awareness on issues they face. The association, based in Barcelona, offers workshops specialized in the design and creation of new senses and organs.
De Aguas has shared his experience as a cyborg artist by performing and speaking in conferences and festivals in Germany, UK, Romania, Spain and The Netherlands among others. He also talks about it extensively in the Shaping Business Minds Through Art podcast.
See also |
https://en.wikipedia.org/wiki/Poisson%27s%20ratio | In materials science and solid mechanics, Poisson's ratio ν (nu) is a measure of the Poisson effect, the deformation (expansion or contraction) of a material in directions perpendicular to the specific direction of loading. The value of Poisson's ratio is the negative of the ratio of transverse strain to axial strain. For small values of these changes, ν is the amount of transversal elongation divided by the amount of axial compression. Most materials have Poisson's ratio values ranging between 0.0 and 0.5. For soft materials, such as rubber, where the bulk modulus is much higher than the shear modulus, Poisson's ratio is near 0.5. For open-cell polymer foams, Poisson's ratio is near zero, since the cells tend to collapse in compression. Many typical solids have Poisson's ratios in the range of 0.2–0.3. The ratio is named after the French mathematician and physicist Siméon Poisson.
Origin
Poisson's ratio is a measure of the Poisson effect, the phenomenon in which a material tends to expand in directions perpendicular to the direction of compression. Conversely, if the material is stretched rather than compressed, it usually tends to contract in the directions transverse to the direction of stretching. It is a common observation that when a rubber band is stretched, it becomes noticeably thinner. Again, the Poisson ratio will be the ratio of relative contraction to relative expansion and will have the same value as above. In certain rare cases, a material will actually shrink in the transverse direction when compressed (or expand when stretched), which will yield a negative value of the Poisson ratio.
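The defining ratio can be illustrated in a few lines of Python; the function name and the sample strain values below are ours, not from the article:

```python
def poisson_ratio(transverse_strain, axial_strain):
    """Poisson's ratio: nu = -(transverse strain) / (axial strain)."""
    return -transverse_strain / axial_strain

# A bar stretched by 1.0% axially that thins by 0.3% transversely
# (strains entered here in percent) gives nu = 0.3, typical of many solids:
nu = poisson_ratio(-0.3, 1.0)
print(nu)  # 0.3
```

For a stable, isotropic, linear elastic material the computed value should fall in the range −1.0 to +0.5.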
The Poisson's ratio of a stable, isotropic, linear elastic material must be between −1.0 and +0.5 because of the requirement for Young's modulus, the shear modulus and bulk modulus to have positive values. Most materials have Poisson's ratio values ranging between 0.0 and 0.5. A perfectly incompressible isotropic material deformed elastically at small strains would have a Poi |
https://en.wikipedia.org/wiki/Marsh%20Awards%20for%20Ornithology | The five Marsh Awards for Ornithology are among over 40 Marsh Awards issued in the United Kingdom by the Marsh Charitable Trust and the British Trust for Ornithology (BTO), in the field of ornithology.
The Marsh Award for Ornithology
Given:
The Marsh Local Ornithology Award
Given:
The Marsh Award for Innovative Ornithology
Introduced in 2012 to celebrate:
The Marsh Award for International Ornithology
Introduced in 2013 and awarded to:
The Marsh Award for Young Ornithologist
Introduced in 2015 and awarded to:
2022: Anna Webberley
See also
List of ornithology awards |
https://en.wikipedia.org/wiki/Realization%20%28systems%29 | In systems theory, a realization of a state space model is an implementation of a given input-output behavior. That is, given an input-output relationship, a realization is a quadruple of (time-varying) matrices [A(t), B(t), C(t), D(t)] such that
x'(t) = A(t)x(t) + B(t)u(t), y(t) = C(t)x(t) + D(t)u(t),
with u(t) and y(t) describing the input and output of the system at time t.
LTI System
For a linear time-invariant system specified by a transfer matrix, H(s), a realization is any quadruple of matrices (A, B, C, D) such that H(s) = C(sI - A)^(-1)B + D.
Canonical realizations
Any given transfer function which is strictly proper can easily be transferred into state-space by the following approach (this example is for a 4-dimensional, single-input, single-output system):
Given a transfer function, expand it to reveal all coefficients in both the numerator and denominator. This should result in the following form:
H(s) = (b3·s^3 + b2·s^2 + b1·s + b0) / (s^4 + a3·s^3 + a2·s^2 + a1·s + a0).
The coefficients can now be inserted directly into the state-space model by the following approach:
x'(t) = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [-a0, -a1, -a2, -a3]]·x(t) + [0, 0, 0, 1]^T·u(t),
y(t) = [b0, b1, b2, b3]·x(t).
This state-space realization is called controllable canonical form (also known as phase variable canonical form) because the resulting model is guaranteed to be controllable (i.e., because the control enters a chain of integrators, it has the ability to move every state).
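The construction of the controllable canonical form can be sketched as follows (a minimal NumPy-based illustration; the function name is ours, and a strictly proper SISO transfer function with a monic denominator is assumed):

```python
import numpy as np

def controllable_canonical(num, den):
    """Controllable canonical form (A, B, C, D) for a strictly proper SISO
    transfer function num(s)/den(s), den = [1, a_{n-1}, ..., a_0]."""
    den = np.asarray(den, dtype=float)
    den = den / den[0]                                   # force monic denominator
    n = len(den) - 1
    num = np.asarray(num, dtype=float)
    num = np.concatenate([np.zeros(n - len(num)), num])  # pad numerator to length n
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)                           # superdiagonal: chain of integrators
    A[-1, :] = -den[:0:-1]                               # last row: -a_0 ... -a_{n-1}
    B = np.zeros((n, 1)); B[-1, 0] = 1.0                 # input enters the last integrator
    C = num[::-1].reshape(1, n)                          # output taps: b_0 ... b_{n-1}
    D = np.zeros((1, 1))                                 # strictly proper: no feedthrough
    return A, B, C, D

# H(s) = 1 / (s^2 + 3s + 2)
A, B, C, D = controllable_canonical([1.0], [1.0, 3.0, 2.0])
```

For this example A = [[0, 1], [-2, -3]], B = [[0], [1]], C = [[1, 0]], matching the pattern described above.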
The transfer function coefficients can also be used to construct another type of canonical form:
x'(t) = [[0, 0, 0, -a0], [1, 0, 0, -a1], [0, 1, 0, -a2], [0, 0, 1, -a3]]·x(t) + [b0, b1, b2, b3]^T·u(t),
y(t) = [0, 0, 0, 1]·x(t).
This state-space realization is called observable canonical form because the resulting model is guaranteed to be observable (i.e., because the output exits from a chain of integrators, every state has an effect on the output).
General System
D = 0
If we have an input u(t), an output y(t), and a weighting pattern T(t, σ), then a realization is any triple of matrices (A(t), B(t), C(t)) such that T(t, σ) = C(t)·Φ(t, σ)·B(σ), where Φ(t, σ) is the state-transition matrix associated with the realization.
System identification
System identification techniques take the experimental data from a system and output a realization. Such techniques can utilize both input and output data (e.g. eigensystem realization algorithm) or can only include the output data (e.g. frequency domain decom |
https://en.wikipedia.org/wiki/Project%20Verona | Project Verona is an experimental research programming language developed by Microsoft and aimed at dealing with memory situations to make other programming languages safer.
The project is being supported by C# project manager Mads Torgersen and Microsoft Research Cambridge research software engineer Juliana Franco. Project Verona is also being aided by academics at Imperial College London. Unlike in Rust, where the ownership model is based on a single object, in Verona it is based on groups of objects.
According to Microsoft, the goal of the project is to create a safer platform for memory management.
Project Verona is open source released under MIT License and is under active development on GitHub.
Example
while_sum(x: List[U32]) : U32
{
var sum: U32 = 0;
let iter = x.values();
while { iter.has_value() }
{
// Read the current value of the iterator
let a = iter();
// Increments the iterator
next iter;
// This is the body of the for loop
sum = sum + a
}
sum
}
See also
List of programming language researchers
Go (programming language)
Rust (programming language)
Cyclone (programming language) |
https://en.wikipedia.org/wiki/George%20E.%20P.%20Box | George Edward Pelham Box (18 October 1919 – 28 March 2013) was a British statistician, who worked in the areas of quality control, time-series analysis, design of experiments, and Bayesian inference. He has been called "one of the great statistical minds of the 20th century".
Education and early life
He was born in Gravesend, Kent, England. Upon entering university he began to study chemistry, but was called up for service before finishing. During World War II, he performed experiments for the British Army exposing small animals to poison gas. To analyze the results of his experiments, he taught himself statistics from available texts. After the war, he enrolled at University College London and obtained a bachelor's degree in mathematics and statistics. He received a PhD from the University of London in 1953, under the supervision of Egon Pearson.
Career and research
From 1948 to 1956, Box worked as a statistician for Imperial Chemical Industries (ICI). While at ICI, he took a leave of absence for a year and served as a visiting professor at North Carolina State University at Raleigh. He later went to Princeton University where he served as Director of the Statistical Research Group.
In 1960, Box moved to the University of Wisconsin–Madison to create the Department of Statistics. In 1980, he was named Vilas Research Professor of Statistics, which is the highest honor given to a member of the University of Wisconsin-Madison faculty. Box and Bill Hunter co-founded the Center for Quality and Productivity Improvement at the University of Wisconsin–Madison in 1985. Box officially retired in 1992, becoming an emeritus professor.
Box published books including Statistics for Experimenters (2nd ed., 2005), Time Series Analysis: Forecasting and Control (4th ed., 2008, with Gwilym Jenkins and Gregory C. Reinsel) and Bayesian Inference in Statistical Analysis (1973, with George Tiao).
Awards and honours
Box served as president of the American Statistical Association in |
https://en.wikipedia.org/wiki/Winlink | Winlink, or formally, Winlink Global Radio Email (registered US Service Mark), also known as the Winlink 2000 Network, is a worldwide radio messaging system that uses amateur-band radio frequencies and government frequencies to provide radio interconnection services that include email with attachments, position reporting, weather bulletins, emergency and relief communications, and message relay. The system is built and administered by volunteers and is financially supported by the Amateur Radio Safety Foundation.
Network
Winlink networking started by providing interconnection services for amateur radio (also known as ham radio). It is well known for its central role in emergency and contingency communications worldwide. The system used to employ multiple central message servers around the world for redundancy, but in 2017–2018 upgraded to Amazon Web Services that provides a geographically-redundant cluster of virtual servers with dynamic load balancers and global content-distribution. Gateway stations have operated on sub-bands of HF since 2013 as the Winlink Hybrid Network, offering message forwarding and delivery through a mesh-like smart network whenever Internet connections are damaged or inoperable. During the late 1990s and late 2000s, it increasingly became what is now the standard network system for amateur radio email worldwide. Additionally, in response to the need for better disaster response communications in the mid to later part of the 2000s, the network was expanded to provide separate parallel radio email networking systems for MARS, UK Cadet, Austrian Red Cross, the US Department of Homeland Security SHARES HF Program, and other groups.
Amateur radio HF e-mail
Generally, e-mail communication over amateur radio in the 21st century is now considered normal and commonplace. E-mail via high frequency (HF) can be used nearly everywhere on the planet, and is made possible by connecting an HF single sideband (SSB) transceiver system to a computer, mod |
https://en.wikipedia.org/wiki/Pronunciation%20assessment | Automatic pronunciation assessment is the use of speech recognition to verify the correctness of pronounced speech, as distinguished from manual assessment by an instructor or proctor. Also called speech verification, pronunciation evaluation, and pronunciation scoring, the main application of this technology is computer-aided pronunciation teaching (CAPT) when combined with computer-aided instruction for computer-assisted language learning (CALL), speech remediation, or accent reduction. Pronunciation assessment does not determine unknown speech (as in dictation or automatic transcription) but instead, knowing the expected word(s) in advance, it attempts to verify the correctness of the learner's pronunciation and ideally their intelligibility to listeners, sometimes along with often inconsequential prosody such as intonation, pitch, tempo, rhythm, and stress. Pronunciation assessment is also used in reading tutoring, for example in products such as Microsoft Teams and from Amira Learning. Automatic pronunciation assessment can also be used to help diagnose and treat speech disorders such as apraxia.
The earliest work on pronunciation assessment avoided measuring genuine listener intelligibility, a shortcoming corrected in 2011 at the Toyohashi University of Technology, and included in the Versant high-stakes English fluency assessment from Pearson and mobile apps from 17zuoye Education & Technology, but still missing in 2023 products from Google Search, Microsoft, Educational Testing Service, Speechace, and ELSA. Assessing authentic listener intelligibility is essential for avoiding inaccuracies from accent bias, especially in high-stakes assessments; from words with multiple correct pronunciations; and from phoneme coding errors in machine-readable pronunciation dictionaries. In the Common European Framework of Reference for Languages (CEFR) assessment criteria for "overall phonological control", intelligibility outweighs formally correct pronunciation at all le |
https://en.wikipedia.org/wiki/Zion | Zion ( Ṣīyyōn, LXX , also variously transliterated Sion, Tzion, Tsion, Tsiyyon) is a placename in the Hebrew Bible used as a synonym for Jerusalem as well as for the Land of Israel as a whole.
The name is found in 2 Samuel (5:7), one of the books of the Hebrew Bible dated to before or close to the mid-6th century BCE. It originally referred to a specific hill in Jerusalem (Mount Zion), located to the south of Mount Moriah (the Temple Mount). According to the narrative of 2 Samuel 5, Mount Zion held the Jebusite fortress of the same name that was conquered by David and was renamed the City of David.
That specific hill ("mount") is one of the many squat hills that form Jerusalem, which also includes Mount Moriah (the Temple Mount), the Mount of Olives, etc. Over many centuries, until as recently as the Ottoman era, the city walls of Jerusalem were rebuilt many times in new locations, so that the particular hill known as Mount Zion is no longer inside the city wall, but its location is now just outside the portion of the Old City wall forming the southern boundary of the Jewish Quarter of the current Old City. Most of the original City of David itself is thus also outside the current city wall.
The term Tzion came to designate the area of Davidic Jerusalem where the fortress stood, and was used as well as synecdoche for the entire city of Jerusalem; and later, when Solomon's Temple was built on the adjacent Mount Moriah (which, as a result, came to be known as the Temple Mount) the meanings of the term Tzion were further extended by synecdoche to the additional meanings of the Temple itself, the hill upon which the Temple stood, the entire city of Jerusalem, the entire biblical Land of Israel, and "the World to Come", the Jewish understanding of the afterlife.
Etymology
The etymology of the word Zion (ṣiyôn) is uncertain.
Mentioned in the Old Testament in the Books of Samuel (2 Samuel 5:7) as the name of a Jebusite fortress conquered by David, its origin seems to |
https://en.wikipedia.org/wiki/Plasma%20display | A plasma display panel (PDP) is a type of flat panel display that uses small cells containing plasma: ionized gas that responds to electric fields. Plasma televisions were the first large (over 32 inches diagonal) flat panel displays to be released to the public.
Until about 2007, plasma displays were commonly used in large televisions. By 2013, they had lost nearly all market share due to competition from low-cost LCDs and more expensive but high-contrast OLED flat-panel displays. Manufacturing of plasma displays for the United States retail market ended in 2014, and manufacturing for the Chinese market ended in 2016. Plasma displays are obsolete, having been superseded in most if not all aspects by OLED displays.
General characteristics
Plasma displays are bright (1,000 lux or higher for the display module), have a wide color gamut, and can be produced in fairly large sizes—up to diagonally. They had a very low luminance "dark-room" black level compared with the lighter grey of the unilluminated parts of an LCD screen. (As plasma panels are locally lit and do not require a backlight, blacks are blacker on plasma displays and grayer on LCDs.) LED-backlit LCD televisions have been developed to reduce this distinction. The display panel itself is about thick, generally allowing the device's total thickness (including electronics) to be less than . Power consumption varies greatly with picture content, with bright scenes drawing significantly more power than darker ones – this is also true for CRTs as well as modern LCDs where LED backlight brightness is adjusted dynamically. The plasma that illuminates the screen can reach a temperature of at least . Typical power consumption is 400 watts for a screen. Most screens are set to "vivid" mode by default in the factory (which maximizes the brightness and raises the contrast so the image on the screen looks good under the extremely bright lights that are common in big box stores), which draws at least twice the power (around
https://en.wikipedia.org/wiki/Intertemporal%20choice | Intertemporal choice is the study of the relative value people assign to two or more payoffs at different points in time. This relationship is usually simplified to today and some future date. Intertemporal choice was introduced by John Rae in 1834 in the "Sociological Theory of Capital". Later, Eugen von Böhm-Bawerk in 1889 and Irving Fisher in 1930 elaborated on the model.
Fisher model
Assumptions of the model
consumer's income is constant
maximization of the utility
anything above the budget line is outside the model's explanation
investments are generators of savings
any property is indivisible and unchangeable
According to this model there are three types of consumption: past, present and future.
When making decisions between present and future consumption, the consumer takes his/her previous consumption into account.
This decision making is based on an indifference map with a negative slope, because consuming something today means it cannot be consumed in the future, and vice versa.
The revenue takes the form of an interest rate: nominal interest rate − inflation = real interest rate.
Denote
r: the interest rate
Y2: income in period 2, or the future income
Y1: income in period 1, or the present income
Then the maximum present consumption is: C1_max = Y1 + Y2 / (1 + r)
The maximum future consumption is: C2_max = Y1·(1 + r) + Y2
See also
Choice modelling
Decision theory
Discount function
Discounted utility
Intertemporal budget constraint
Keynes–Ramsey rule
Temporal discounting |
https://en.wikipedia.org/wiki/Loye%20and%20Alden%20Miller%20Research%20Award | The Loye and Alden Miller Research Award, now known as the AOS Miller Award, was established in 1993 by the Cooper Ornithological Society (COS) to recognize lifetime achievement in ornithological research. The namesakes were Loye H. Miller and his son Alden H. Miller, both of whom focused largely on ornithology.
Since the merger of the Cooper Ornithological Society with the American Ornithologists' Union to form the American Ornithological Society in 2016, the award has been presented by the latter.
Recipients of the award
Source: American Ornithological Society
Cooper Ornithological Society
1993 – George Bartholomew
1994 – Storrs Olson
1995 – Barbara De Wolfe
1996 – William Dawson
1997 – Robert Storer
1998 – Russell Balda
1999 – Gordon Orians
2000 – Ernst Mayr
2001 – Frank Pitelka
2002 – Richard Holmes
2003 – Peter and Rosemary Grant
2004 – Alexander Skutch
2005 – John Wiens
2006 – Robert Ricklefs
2007 – Robert Payne
2008 – Peter Marler
2009 – Frances James
2010 – Keith A. Hobson
2011 – Susan Haig
2012 – Thomas Martin
2013 – Trevor Price
2014 – Ellen Ketterson
2015 – Jerram Brown
2016 – Walter D. Koenig
American Ornithological Society
2017 – Carol Vleck
2018 – Janis Dickinson
2019 – A. Townsend Peterson
See also
List of ornithology awards |
https://en.wikipedia.org/wiki/Stella%20d%27Italia | The Stella d'Italia ("Star of Italy"), popularly known as Stellone d'Italia ("Great Star of Italy"), is a five-pointed white star, which has symbolized Italy for many centuries. It is the oldest national symbol of Italy, since it dates back to Graeco-Roman mythology when Venus, associated with the West as an evening star, was adopted to identify the Italian peninsula. From an allegorical point of view, the Stella d'Italia metaphorically represents the shining destiny of Italy.
In the early 16th century it began to be frequently associated with Italia turrita, the national personification of the Italian peninsula. The Stella d'Italia was adopted as part of the emblem of Italy in 1947, where it is superimposed on a steel cogwheel, all surrounded by an oak branch and an olive branch.
Symbolic value
From an allegorical point of view, the Star of Italy metaphorically represents the shining destiny of Italy. Its unifying value is equal to that of the flag of Italy. In 1947, the Stella d'Italia was inserted at the center of the emblem of Italy, which was designed by Paolo Paschetto and which is the iconic symbol identifying the Italian State.
The Italian Star is also recalled by some honors. The Italian Star is recalled by the Colonial Order of the Star of Italy, decoration of the Kingdom of Italy which was intended to celebrate the Italian Empire, as well as by the Order of the Star of Italian Solidarity, the first decoration established by Republican Italy, which was replaced in 2011 by the Order of the Star of Italy, second civil honorary title in importance of the Italian State.
The Star of Italy is also recalled by the stars worn on the collars of Italian military uniforms and appears on the figurehead of the Italian Navy. In the civil sphere, the Italian Star is the central symbol of the emblem of the Club Alpino Italiano.
History
From ancient Greece to the Roman era
The symbolism of a star associated with Italy first appeared in the writings of the ancient G |
https://en.wikipedia.org/wiki/G.%20Marius%20Clore | G. Marius Clore MAE, FRSC, FRS is a British-born, Anglo-American molecular biophysicist and structural biologist. He was born in London, U.K. and is a dual U.S./U.K. Citizen. He is a Member of the National Academy of Sciences, a Fellow of the Royal Society, a NIH Distinguished Investigator, and the Chief of the Molecular and Structural Biophysics Section in the Laboratory of Chemical Physics of the National Institute of Diabetes and Digestive and Kidney Diseases at the U.S. National Institutes of Health. He is known for his foundational work in three-dimensional protein and nucleic acid structure determination by biomolecular NMR spectroscopy, for advancing experimental approaches to the study of large macromolecules and their complexes by NMR, and for developing NMR-based methods to study rare conformational states in protein-nucleic acid and protein-protein recognition. Clore's discovery of previously undetectable, functionally significant, rare transient states of macromolecules has yielded fundamental new insights into the mechanisms of important biological processes, and in particular the significance of weak interactions and the mechanisms whereby the opposing constraints of speed and specificity are optimized. Further, Clore's work opens up a new era of pharmacology and drug design as it is now possible to target structures and conformations that have been heretofore unseen.
Biography
Clore received his undergraduate degree with first class honours in biochemistry from University College London in 1976 and medical degree from UCL Medical School in 1979. After completing house physician and house surgeon appointments at University College Hospital and St Charles' Hospital (part of the St. Mary's Hospital group), respectively, he was a member of the scientific staff of the Medical Research Council National Institute for Medical Research from 1980 to 1984. He received his PhD from the National Institute for Medical Research in Physical Biochemistry in 1982. |
https://en.wikipedia.org/wiki/Frank%20Benford | Frank Albert Benford Jr. (July 10, 1883 – December 4, 1948) was an American electrical engineer and physicist best known for rediscovering and generalizing Benford's Law, an earlier statistical statement by Simon Newcomb, about the occurrence of digits in lists of data.
Benford is also known for having devised, in 1937, an instrument for measuring the refractive index of glass. An expert in optical measurements, he published 109 papers in the fields of optics and mathematics and was granted 20 patents on optical devices.
Early life
He was born in Johnstown, Pennsylvania. His date of birth is given variously as May 29 or July 10, 1883. When he was six years old, his family home was destroyed by the Johnstown Flood.
Education
He graduated from the University of Michigan in 1910.
Career
Benford worked for General Electric, first in the Illuminating Engineering Laboratory for 18 years, then the Research Laboratory for 20 years until retiring in July 1948. He was working as a research physicist when he made the rediscovery of Benford's law, and spent years collecting data before publishing in 1938, citing more than 20,000 values from a diverse set of sources including statistics from baseball, atomic weights, the areas of rivers and numbering of articles in magazines.
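The law Benford documented assigns leading digit d the probability log10(1 + 1/d); a brief sketch (illustrative code, not from Benford's paper):

```python
import math

def benford_pmf(d):
    """Probability that a value's leading digit is d (d = 1..9)
    under Benford's law: log10(1 + 1/d)."""
    return math.log10(1 + 1 / d)

# The digit 1 leads about 30.1% of the time; 9 leads only about 4.6%:
pmf = {d: benford_pmf(d) for d in range(1, 10)}
```

Summing the nine probabilities gives exactly 1, since every positive number has some leading digit from 1 to 9.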
Death
He died suddenly at his home on December 4, 1948. |
https://en.wikipedia.org/wiki/PC1%20cipher | The Amazon Kindle e-book reader DRM system uses the PC1 cipher, also called the Kindle cipher or Pukall cipher 1.
History
The PC1 cipher was designed by Alexander Pukall in 1991.
Successors
The Caracachs cipher, formerly known as the PC3 cipher, was released in 2000.
PC4 was released in 2015. It is a block cipher specifically designed for DMR radio communication systems. It uses 253 rounds, and the key size can vary from 8 bits to 2112 bits. The block size is 49 bits, the exact size of an AMBE+ DMR voice frame. |
https://en.wikipedia.org/wiki/Disk%20staging | Disk staging is the use of disks as an additional, temporary stage of the backup process before finally storing the backup to tape. Backups stay on disk typically for a day or a week before being copied to tape in a background process and deleted afterwards.
The process of disk staging is controlled by the same software that performs actual backups, which is different from virtual tape library where intermediate disk usage is hidden from main backup software. Both techniques are known as D2D2T (disk-to-disk-to-tape).
Restoring data
Data is restored from disk if possible; if the data exists only on tape, it is restored directly from tape (there is no backward staging on restore).
Reasons
Reasons behind using D2D2T:
increase performance of small, random-access restores: disk has much faster random access than tape
increase overall backup/restore performance: although a disk and a tape drive have similar streaming throughput, disk throughput can easily be scaled by means of striping (while tape striping is a much less established technique)
increase utilization of tape drives: tape shoe-shining effect is eliminated when staging (note that it may still happen on tape restores)
See also
Backup
Virtual tape library |
https://en.wikipedia.org/wiki/Egg%20of%20Columbus%20%28tangram%20puzzle%29 | The Egg of Columbus (Ei des Columbus in German) is a dissection puzzle consisting of a flat egg-like shape divided into 9 or 10 pieces by straight cuts. The goal of the puzzle is to rearrange the pieces to form other specific shapes, such as animals (see below).
The earliest known examples were produced by German toy manufacturer Richter. Production was ceased in 1963, but renewed at the start of the 21st century.
Because the two pieces coloured turquoise in these diagrams lack bilateral symmetry, some shapes in which both pieces have the same chirality, as in two of the examples below, require one of them to be flipped over.
See also
Egg of Columbus |
https://en.wikipedia.org/wiki/Gudkov%27s%20conjecture | In real algebraic geometry, Gudkov's conjecture, also called Gudkov's congruence (named after Dmitry Gudkov), was a conjecture, and is now a theorem, which states that an M-curve of even degree 2k obeys the congruence
p − n ≡ k^2 (mod 8),
where p is the number of positive ovals and n the number of negative ovals of the M-curve. (Here, the term M-curve stands for "maximal curve"; it means a smooth algebraic curve over the reals whose number of connected components is g + 1, where g is the genus of the curve, the maximal number allowed by Harnack's inequality.)
The theorem was proved by the combined works of Vladimir Arnold and Vladimir Rokhlin.
See also
Hilbert's sixteenth problem
Tropical geometry |
https://en.wikipedia.org/wiki/Finite%20difference%20coefficient | In mathematics, to approximate a derivative to an arbitrary order of accuracy, it is possible to use finite differences. A finite difference can be central, forward or backward.
Central finite difference
This table contains the coefficients of the central differences, for several orders of accuracy and with uniform grid spacing:
For example, the third derivative with a second-order accuracy is
f'''(x_0) ≈ (-1/2·f_{-2} + f_{-1} - f_{1} + 1/2·f_{2}) / h^3,
where h represents a uniform grid spacing between each finite difference interval, and f_{n} = f(x_0 + n·h).
For the n-th derivative with accuracy p, there are 2·⌊(n+1)/2⌋ − 1 + p central coefficients a_{-P}, ..., a_{P} (with P = ⌊(n+1)/2⌋ − 1 + p/2). These are given by the solution of the linear equation system
Σ_{i=-P}^{P} i^m · a_i = n! · δ_{m,n},  for m = 0, 1, ..., 2P,
where the only non-zero value on the right hand side, n!, is in the n-th row.
An open source implementation for calculating finite difference coefficients of arbitrary derivatives and accuracy order in one dimension is available.
The theory of Lagrange polynomials provides explicit formulas for the finite difference coefficients. For the first six derivatives we have the following:
where are generalized harmonic numbers.
Forward finite difference
This table contains the coefficients of the forward differences, for several orders of accuracy and with uniform grid spacing:
For example, the first derivative with a third-order accuracy and the second derivative with a second-order accuracy are
f'(x_0) ≈ (-11/6·f_0 + 3·f_1 - 3/2·f_2 + 1/3·f_3) / h,
f''(x_0) ≈ (2·f_0 - 5·f_1 + 4·f_2 - f_3) / h^2,
while the corresponding backward approximations are given by
f'(x_0) ≈ (11/6·f_0 - 3·f_{-1} + 3/2·f_{-2} - 1/3·f_{-3}) / h,
f''(x_0) ≈ (2·f_0 - 5·f_{-1} + 4·f_{-2} - f_{-3}) / h^2.
Backward finite difference
To get the coefficients of the backward approximations from those of the forward ones, give all odd derivatives listed in the table in the previous section the opposite sign, whereas for even derivatives the signs stay the same.
The following table illustrates this:
Arbitrary stencil points
For a given arbitrary stencil of points s_1, ..., s_N of length N with the order of derivative d (d < N), the finite difference coefficients a_1, ..., a_N can be obtained by solving the linear equations
Σ_{j=1}^{N} s_j^i · a_j = d! · δ_{i,d},  for i = 0, 1, ..., N − 1,
where δ_{i,d} is the Kronecker delta, equal to one if i = d, and zero otherwise.
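A linear system of this kind is straightforward to solve numerically. The sketch below (illustrative NumPy code; the helper name is ours) builds the Vandermonde-type matrix from the stencil offsets and recovers the classic central-difference weights:

```python
import numpy as np
from math import factorial

def fd_coefficients(stencil, d):
    """Finite-difference weights for the d-th derivative on the given
    stencil offsets (in units of the grid spacing h).  Solves
    sum_j s_j**i * a_j = d! * delta(i, d) for i = 0..N-1; the result
    must be divided by h**d when applied to function samples."""
    s = np.asarray(stencil, dtype=float)
    N = len(s)
    M = np.vander(s, N, increasing=True).T   # M[i, j] = s_j**i
    rhs = np.zeros(N)
    rhs[d] = factorial(d)                    # only non-zero entry: row d
    return np.linalg.solve(M, rhs)

# Second-derivative weights on the stencil {-1, 0, 1}: 1, -2, 1 (times 1/h^2)
print(fd_coefficients([-1, 0, 1], 2))
```

The same call with d = 1 returns the familiar central first-derivative weights −1/2, 0, 1/2.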
Example, for , order of differen |
https://en.wikipedia.org/wiki/Surface%20states | Surface states are electronic states found at the surface of materials. They are formed due to the sharp transition from solid material that ends with a surface and are found only at the atom layers closest to the surface. The termination of a material with a surface leads to a change of the electronic band structure from the bulk material to the vacuum. In the weakened potential at the surface, new electronic states can be formed, so called surface states.
Origin at condensed matter interfaces
As stated by Bloch's theorem, eigenstates of the single-electron Schrödinger equation with a perfectly periodic potential, a crystal, are Bloch waves
Ψ_{nk}(r) = e^(ik·r) · u_{nk}(r).
Here u_{nk}(r) is a function with the same periodicity as the crystal, n is the band index and k is the wave number. The allowed wave numbers for a given potential are found by applying the usual Born–von Karman cyclic boundary conditions. The termination of a crystal, i.e. the formation of a surface, obviously causes deviation from perfect periodicity. Consequently, if the cyclic boundary conditions are abandoned in the direction normal to the surface, the behavior of electrons will deviate from the behavior in the bulk, and some modifications of the electronic structure have to be expected.
A simplified model of the crystal potential in one dimension can be sketched as shown in Figure 1. In the crystal, the potential has the periodicity, a, of the lattice while close to the surface it has to somehow attain the value of the vacuum level. The step potential (solid line) shown in Figure 1 is an oversimplification which is mostly convenient for simple model calculations. At a real surface the potential is influenced by image charges and the formation of surface dipoles and it rather looks as indicated by the dashed line.
Given the potential in Figure 1, it can be shown that the one-dimensional single-electron Schrödinger equation gives two qualitatively different types of solutions.
The first type of states (see figure 2) extends into |
https://en.wikipedia.org/wiki/%CE%9C-law%20algorithm | The μ-law algorithm (sometimes written mu-law, often approximated as u-law) is a companding algorithm, primarily used in 8-bit PCM digital telecommunication systems in North America and Japan. It is one of the two companding algorithms in the G.711 standard from ITU-T, the other being the similar A-law. A-law is used in regions where digital telecommunication signals are carried on E-1 circuits, e.g. Europe.
The terms PCMU, G711u or G711MU are used for G711 μ-law.
Companding algorithms reduce the dynamic range of an audio signal. In analog systems, this can increase the signal-to-noise ratio (SNR) achieved during transmission; in the digital domain, it can reduce the quantization error (hence increasing the signal-to-quantization-noise ratio). These SNR increases can be traded instead for reduced bandwidth for equivalent SNR.
At the cost of a reduced peak SNR, it can be mathematically shown that μ-law's non-linear quantization effectively increases dynamic range by 33 dB, or about 5.5 bits, over a linearly quantized signal; hence 13.5 bits (which rounds up to 14 bits) is the most resolution required for an input digital signal to be compressed for 8-bit μ-law.
Algorithm types
The μ-law algorithm may be described in an analog form and in a quantized digital form.
Continuous
For a given input x with −1 ≤ x ≤ 1, the equation for μ-law encoding is F(x) = sgn(x) · ln(1 + μ|x|) / ln(1 + μ),
where μ = 255 (8 bits) in the North American and Japanese standards, and sgn(x) is the sign function. It is important to note that the range of this function is −1 to 1.
μ-law expansion is then given by the inverse equation: F⁻¹(y) = sgn(y) · ((1 + μ)^|y| − 1) / μ, for −1 ≤ y ≤ 1.
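The continuous companding pair described above can be sketched directly. This is a minimal illustration assuming μ = 255; the function names are my own, not from any standard library:

```python
import math

MU = 255  # mu in the North American and Japanese standards

def mu_law_encode(x: float) -> float:
    """Compress x in [-1, 1] using the continuous mu-law curve."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_decode(y: float) -> float:
    """Expand y in [-1, 1]; the inverse of mu_law_encode."""
    return math.copysign(((1 + MU) ** abs(y) - 1) / MU, y)

# Expansion inverts compression
x = 0.25
assert abs(mu_law_decode(mu_law_encode(x)) - x) < 1e-12

# Quiet signals are boosted toward full scale, which is what
# reduces quantization error after 8-bit quantization
assert mu_law_encode(0.01) > 0.2
```

Quantizing the encoded value to 8 bits and expanding on playback yields the SNR benefit discussed above.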
Discrete
The discrete form is defined in ITU-T Recommendation G.711.
G.711 is unclear about how to code the values at the limit of a range (e.g. whether +31 codes to 0xEF or 0xF0).
However, G.191 provides example code in the C language for a μ-law encoder. The difference between the positive and negative ranges, e.g. the negative range corresponding to +30 to +1 is −31 to −2. This is accounted for by the use of 1's complement (simple bit inversi |
https://en.wikipedia.org/wiki/PERQ | The PERQ, also referred to as the Three Rivers PERQ or ICL PERQ, was a pioneering workstation computer produced in the late 1970s through the early 1980s. In June 1979, the company took its very first order from the UK's Rutherford Appleton Laboratory and the computer was officially launched in August 1979 at SIGGRAPH in Chicago. It was the first commercially produced personal workstation with a Graphical User Interface. The design was heavily influenced by the original workstation computer, the Xerox Alto, which was never commercially produced. The origin of the name "PERQ" was chosen both as an acronym of "Pascal Engine that Runs Quicker," and to evoke the word perquisite commonly called perks, that is employee additional benefits.
The workstation was conceived by six former Carnegie Mellon University alumni and employees, Brian S. Rosen, James R. Teter, William H. Broadley, J. Stanley Kriz, Raj Reddy and Paul G. Newbury, who formed the startup Three Rivers Computer Corporation (3RCC) in 1974. Brian Rosen also worked at Xerox PARC on the Dolphin workstation. As a result of interest from the UK Science Research Council (later, the Science and Engineering Research Council), 3RCC entered into a relationship with the British computer company International Computers Limited (ICL) in 1981 for European distribution, and later co-development and manufacturing. The PERQ was used in a number of academic research projects in the UK during the 1980s. 3RCC was renamed PERQ System Corporation in 1984. It went out of business in 1986, largely due to competition from other workstation manufacturers such as Sun Microsystems, Apollo Computer and Silicon Graphics.
Hardware
Processor
The PERQ CPU was a microcoded discrete logic design, rather than a microprocessor. It was based around 74S181 bit-slice ALUs and an Am2910 microcode sequencer. The PERQ CPU was unusual in having 20-bit wide registers and a writable control store (WCS), allowing the microcode to be redefined. The CPU |
https://en.wikipedia.org/wiki/Spam%20and%20Open%20Relay%20Blocking%20System | SORBS ("Spam and Open Relay Blocking System") is a list of e-mail servers suspected of sending or relaying spam (a DNS Blackhole List). It has been augmented with complementary lists that include various other classes of hosts, allowing for customized email rejection by its users.
History
The SORBS DNSbl project was created in November 2001. It was maintained as a private list until 6 January 2002 when the DNSbl was officially launched to the public. The list consisted of 78,000 proxy relays and rapidly grew to over 3,000,000 alleged compromised spam relays.
In November 2009 SORBS was acquired by GFI Software, to enhance their mail filtering solutions.
In July 2011 SORBS was re-sold to Proofpoint, Inc.
DUHL
SORBS adds IP ranges that belong to dialup modem pools, dynamically allocated wireless and DSL connections, and DHCP LAN ranges, using reverse DNS PTR records, WHOIS records, and sometimes submissions from the ISPs themselves. This list is called the DUHL, or Dynamic User and Host List. SORBS does not automatically rescan DUHL-listed hosts for updated rDNS, so to remove an IP address from the DUHL the user or ISP has to request a delisting or rescan. However, when SORBS scans other blocks in the region of existing listings and the scan covers listed netspace, it automatically removes any netspace whose records mark it as static.
Matthew Sullivan of SORBS proposed in an Internet Draft that generic reverse DNS addresses include purposing tokens such as static or dynamic, abbreviations thereof, and more. That naming scheme would have allowed end users to classify IP addresses without the need to rely on third party lists, such as the SORBS DUHL. The Internet Draft has since expired. Generally it is considered more appropriate for ISPs to simply block outgoing traffic to port 25 if they wish to prevent users from sending email directly, rather than specifying it in the reverse DNS record for the IP.
SORBS' dynamic IP list originally came from Dynablock but has been developed independen |
https://en.wikipedia.org/wiki/Fake%20projective%20space | In mathematics, a fake projective space is a complex algebraic variety that has the same Betti numbers as some projective space, but is not isomorphic to it.
There are exactly 50 fake projective planes. Four examples of fake projective 4-folds have been found, and it has been shown that no arithmetic examples exist in dimensions other than 2 and 4.
https://en.wikipedia.org/wiki/Pauli%20group | In physics and mathematics, the Pauli group on 1 qubit is the 16-element matrix group consisting of the 2 × 2 identity matrix and all of the Pauli matrices
,
together with the products of these matrices with the factors and :
.
The Pauli group is generated by the Pauli matrices, and like them it is named after Wolfgang Pauli.
The Pauli group on n qubits, P_n, is the group generated by the operators described above applied to each of the n qubits in the tensor product Hilbert space (C^2)^⊗n.
As an abstract group, P_1 is the central product of a cyclic group of order 4 and the dihedral group of order 8.
The Pauli group is a representation of the gamma group in three-dimensional Euclidean space. It is not isomorphic to the gamma group; it is less free, in that its chiral element satisfies XYZ = iI, whereas there is no such relationship for the gamma group.
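The 16 elements of the 1-qubit group can be enumerated by closing the set of Pauli matrices under multiplication. A small NumPy sketch (the variable names and closure routine are my own, not from any quantum library):

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def key(m):
    """Hashable fingerprint of a matrix, robust to float noise."""
    return tuple(np.round(m, 9).flatten().tolist())

# Breadth-first closure of the generators {X, Y, Z} under multiplication
elements = {key(I): I}
frontier = [X, Y, Z]
while frontier:
    m = frontier.pop()
    if key(m) not in elements:
        elements[key(m)] = m
        frontier.extend(g @ m for g in (X, Y, Z))

# {+1, -1, +i, -i} x {I, X, Y, Z} gives 16 elements
assert len(elements) == 16
# The phases arise from products of the generators, e.g. X @ Y = iZ
assert np.allclose(X @ Y, 1j * Z)
```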
https://en.wikipedia.org/wiki/SingldOut | SingldOut was an online dating platform that claimed to use genetic testing to identify potential relationship matches. The company marketed its intention to bridge the gap between digital networking and biological compatibility. The service used the professional networking site LinkedIn as well as DNA testing company Instant Chemistry. Jana successfully wrote, orchestrated and implemented a highly targeted Public Relations campaign that resulted in national and international media acclaim valued at over $2M in earned media exposure. Jana Bayad launched the site in July 2014. It was shut down in November 2015.
Methodology
Upon registering with SingldOut, members were sent a DNA test kit. To assess the biological compatibility of its members, SingldOut claimed to examine immune system genes, saying that they play a role in attraction, as well as serotonin transporter genes, claiming these play a role in determining how someone might react in certain situations. Results from the DNA test were then posted on the user's profile and compared with the results of other users.
The company credited genetic testing with the ability to “identify up to 40 percent of the chemistry of attraction between two people.” |
https://en.wikipedia.org/wiki/Multiverse%20Computing | Multiverse Computing is a Spanish quantum computing software company headquartered in San Sebastian, Spain, with offices in Paris, Munich, London, Toronto and Sherbrooke, Canada. The Spanish startup applies quantum and quantum-inspired algorithms to problems in energy, logistics, manufacturing, mobility, life sciences, finance, cybersecurity, chemistry and aerospace.
Its flagship product, Singularity, is an industrial quantum and quantum-inspired software platform focused on solving real-world challenges for large enterprises. Among other features, Singularity’s user interface incorporates tools such as Microsoft Excel plug-ins that allow use of the platform’s core algorithms without prior knowledge of quantum computing.
History
Multiverse was co-founded in 2019 by Enrique Lizaso, Román Orús, Alfonso Rubio and Sam Mugel. Lizaso, treasurer and member of the governing board of the European Quantum Industry Consortium, and Orús, Ikerbasque Research Professor at DIPC and former Marie Curie Fellow at the Max Planck Institute of Quantum Optics, were chatting on WhatsApp when the idea for a quantum computing company for finance was born.
In 2021, the company announced €12.5 million in funding from the European Innovation Council Accelerator program.
In April 2022, the company partnered with the Bank of Canada to explore how quantum computing can provide new insights into economic problems via simulation of cryptocurrency adoption. This research made Canada the first G7 country to explore the model of complex networks and cryptocurrencies through quantum computing.
That July, Multiverse partnered with Bosch to integrate quantum algorithms into digital twin simulation workflow to scale simulations more efficiently and improve the accuracy of defect detection.
Later that year, BASF used Multiverse’s Singularity to develop models for foreign exchange trading optimization between U.S. and EU currency to improve profits.
Additional organizations exploring Multiverse’s Sing |
https://en.wikipedia.org/wiki/Nestedness | Nestedness is a measure of structure in an ecological system, usually applied to species-sites systems (describing the distribution of species across locations), or species-species interaction networks (describing the interactions between species, usually as bipartite networks such as hosts-parasites, plants-pollinators, etc.).
A system (usually represented as a matrix) is said to be nested when the elements that have a few items in them (locations with few species, species with few interactions) have a subset of the items of elements with more items. Imagine a series of islands that are ordered by their distance from the mainland. If the mainland has all species, the first island has a subset of mainland's species, the second island has a subset of the first island's species, and so forth, then this system is perfectly nested.
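The island example can be checked directly on a presence-absence matrix. This is a simplified sketch of my own (rows are sites, columns are species; real analyses use the measures described in the article):

```python
def is_perfectly_nested(matrix):
    """True if each poorer site's species are a subset of every richer site's."""
    rows = sorted(matrix, key=sum, reverse=True)  # richest site first
    species_sets = [{j for j, present in enumerate(row) if present}
                    for row in rows]
    return all(species_sets[i + 1] <= species_sets[i]
               for i in range(len(species_sets) - 1))

islands = [
    [1, 1, 1, 1],  # mainland: all species
    [1, 1, 1, 0],  # first island: subset of the mainland's species
    [1, 0, 1, 0],  # second island: subset of the first island's species
]
assert is_perfectly_nested(islands)

# Two sites with overlapping but non-nested species sets
assert not is_perfectly_nested([[1, 1, 0], [0, 1, 1]])
```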
Measures of nestedness
One measure of nestedness is a system's 'temperature', offered by Atmar and Patterson in 1993. It reflects how fixed the order of species extinctions in the system would be (or, conversely, the order in which the system would be colonized). The 'colder' the system, the more fixed the order of extinction; in a warmer system, extinctions take a more random order. Temperatures range from 0° (coldest, the order absolutely fixed) to 100° (absolutely random).
For various reasons, the Nestedness Temperature Calculator is not mathematically satisfactory (it has no unique solution and is not conservative enough). The program BINMATNEST, available from the authors on request and from the Journal of Biogeography, corrects these deficits. ANINHADO additionally handles large matrices and the processing of large numbers of randomized matrices, and implements several null models to estimate the significance of nestedness.
Bastolla et al. introduced a simple measure of nestedness based on the number of common neighbours for each pair of nodes. They argue that this can help reduce the effective competition betwe |
https://en.wikipedia.org/wiki/List%20of%20theorems%20called%20fundamental | In mathematics, a fundamental theorem is a theorem which is considered to be central and conceptually important for some topic. For example, the fundamental theorem of calculus gives the relationship between differential calculus and integral calculus. The names are mostly traditional, so that for example the fundamental theorem of arithmetic is basic to what would now be called number theory. Some of these are classification theorems of objects which are mainly dealt with in the field. For instance, the fundamental theorem of curves describe classification of regular curves in space up to translation and rotation.
Likewise, the mathematical literature sometimes refers to the fundamental lemma of a field. The term lemma is conventionally used to denote a proven proposition which is used as a stepping stone to a larger result, rather than as a useful statement in-and-of itself.
Fundamental theorems of mathematical topics
Fundamental theorem of algebra
Fundamental theorem of algebraic K-theory
Fundamental theorem of arithmetic
Fundamental theorem of Boolean algebra
Fundamental theorem of calculus
Fundamental theorem of calculus for line integrals
Fundamental theorem of curves
Fundamental theorem of cyclic groups
Fundamental theorem of dynamical systems
Fundamental theorem of equivalence relations
Fundamental theorem of exterior calculus
Fundamental theorem of finitely generated abelian groups
Fundamental theorem of finitely generated modules over a principal ideal domain
Fundamental theorem of finite distributive lattices
Fundamental theorem of Galois theory
Fundamental theorem of geometric calculus
Fundamental theorem on homomorphisms
Fundamental theorem of ideal theory in number fields
Fundamental theorem of Lebesgue integral calculus
Fundamental theorem of linear algebra
Fundamental theorem of linear programming
Fundamental theorem of noncommutative algebra
Fundamental theorem of projective geometry
Fundamental theorem of random fields
Fu |
https://en.wikipedia.org/wiki/American%20Oil%20Chemists%27%20Society | The American Oil Chemists' Society (AOCS) is an international professional organization based in Urbana, Illinois dedicated to providing the support network for those involved with the science and technology related to fats, oils, surfactants, and other related materials.
Founded in 1909, AOCS has approximately 2,000 members in 90 countries who are active in a total of ten divisions and six sections, of which only one of the sections is within the United States.
History
The AOCS was started in May 1909 under the name Society of Cotton Products Analysts, as a group that promoted recommended methods for chemical processes focused on the cottonseed industry. In 1920, the name was changed to American Oil Chemists' Society (National Academy of Sciences - National Research Council (1961), Scientific and Technological Societies of the United States and Canada, 7th ed., Washington, D.C.). In 1976, AOCS hosted the first World Conference on Oilseed and Vegetable Oils Processing Technologies in Amsterdam, presided over by the AOCS president Frank White.
According to the official AOCS site, "the mission of AOCS is to provide high standards of quality among those with a professional interest in the science and technology of fats, oils, surfactants, and related materials and is continually fulfilled by AOCS Technical Services. Its esteemed products and services help professionals maintain excellence in their industry".
Technical (Laboratory) Services
AOCS Technical Services has been facilitating global trade and laboratory integrity through its products, programs, and services since 1909.
Official Methods and Recommended Practices of the AOCS, 6th Edition
AOCS methods are used in hundreds of laboratories on six continents. The 6th Edition contains more than 400 fat-, oil-, and lipid-related methods critical for processing and trading.
Laboratory Proficiency Program (LPP)
The AOCS LPP is the world's |
https://en.wikipedia.org/wiki/Euler%20force | In classical mechanics, the Euler force is the fictitious tangential force
that appears when a non-uniformly rotating reference frame is used for analysis of motion and there is variation in the angular velocity of the reference frame's axes. The Euler acceleration (named for Leonhard Euler), also known as azimuthal acceleration or transverse acceleration is that part of the absolute acceleration that is caused by the variation in the angular velocity of the reference frame.
Intuitive example
The Euler force will be felt by a person riding a merry-go-round. As the ride starts, the Euler force will be the apparent force pushing the person to the back of the horse; and as the ride comes to a stop, it will be the apparent force pushing the person towards the front of the horse. A person on a horse close to the perimeter of the merry-go-round will perceive a greater apparent force than a person on a horse closer to the axis of rotation.
Mathematical description
The direction and magnitude of the Euler acceleration is given, in the rotating reference frame, by: a_Euler = −(dω/dt) × r,
where ω is the angular velocity of rotation of the reference frame and r is the vector position of the point in the reference frame. The Euler force on an object of mass m in the rotating reference frame is then F_Euler = m·a_Euler = −m (dω/dt) × r.
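Assuming the standard expression F_Euler = −m (dω/dt) × r, the merry-go-round example can be checked numerically (a sketch of my own; the mass and geometry are illustrative):

```python
import numpy as np

def euler_force(m, domega_dt, r):
    """Euler force F = -m (dw/dt x r) in the rotating frame."""
    return -m * np.cross(domega_dt, r)

m = 70.0                           # rider mass in kg
alpha = np.array([0.0, 0.0, 0.5])  # ride spinning up about z, rad/s^2
r = np.array([2.0, 0.0, 0.0])      # rider 2 m from the axis

F = euler_force(m, alpha, r)
# As the ride speeds up, the rider at +x moves in the +y direction,
# so the apparent force pushes backward (-y), toward the back of the horse.
assert np.allclose(F, [0.0, -70.0, 0.0])

# A rider twice as far from the axis feels twice the apparent force
assert np.allclose(euler_force(m, alpha, 2 * r), 2 * F)
```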
See also
Fictitious force
Coriolis effect
Centrifugal force
Rotating reference frame
Angular acceleration
Notes and references
Fictitious forces
Rotation |
https://en.wikipedia.org/wiki/Vitelline%20envelope | The insect vitelline envelope is the outer proteinaceous layer outside the oocyte and egg. The vitelline envelope, not being a cellular structure, is commonly referred to as a membrane. However, this is a technical misnomer as the structure is composed of protein and is not a cellular component. It varies in thickness between different insects and even varies at different parts of the egg. It lies inside the outer shell of the egg, which is commonly referred to as the chorion.
The presence of the vitelline membrane defines the embryo's boundaries. It is a critical structural element required to resist the forces of morphogenesis and the mechanical pressures experienced during egg-laying.
Before egg activation, the vitelline membrane is permeable to water, ions, and small molecules. Egg activation is stimulated by mechanical deformation associated with traversing the narrow channel in the oviduct and requires the presence of Ca2+. During egg activation, the vitelline membrane proteins are crosslinked via disulfide remodeling; the structure rigidifies and becomes impermeable to water but remains gas permeable. This process is hypothesized to have been selected to prevent polyspermy. The vitelline membrane is composed primarily of four glycoproteins, collectively referred to as vitelline membrane proteins (VMPs). This class of proteins contains a conserved "VM domain": (CX7CX8C). VMPs are secreted during stages 9–10 of oogenesis and accumulate as vitelline bodies in the extracellular space; these bodies fuse to form a continuous layer at the end of stage 10. This layer thins as the oocyte grows, reaching a final thickness of ~0.4 μm.
Upon egg activation, peroxidase-mediated crosslinking occurs in the vitelline membrane resulting in a disulfide-linked network. After crosslinking, the envelope is impermeable to additional sperm, water, and other large molecules but remains permeable to gas exchange. Spatial information and developmental patterning are |
https://en.wikipedia.org/wiki/TomP2P | TomP2P is a distributed hash table which provides a decentralized key-value infrastructure for distributed applications. Each peer has a table that can be configured either to be disk-based or memory-based to store its values.
Overview and key concept
TomP2P stores key-value pairs in a distributed hash table. To find the peers and store the data in the distributed hash table, TomP2P uses an iterative routing approach. The underlying protocol for all communication with other peers uses stateless request-reply messaging. Since TomP2P uses non-blocking communication, a future object is required to keep track of results that are not yet available. This key concept is used for all communication in TomP2P (iterative routing and DHT operations, such as storing a value on multiple peers) and is also exposed in the API. Thus, an operation such as get(...) or put(...) returns immediately, and the user of the API can either block until the operation completes or add a listener that is notified when it does.
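TomP2P itself is a Java library; the future-based pattern it exposes can be illustrated generically in Python. This is a hypothetical in-memory stand-in for a DHT put, not TomP2P's actual API:

```python
from concurrent.futures import ThreadPoolExecutor
import time

store = {}  # stand-in for the distributed hash table

def slow_put(key, value):
    time.sleep(0.05)  # simulated network round-trip to remote peers
    store[key] = value
    return True

executor = ThreadPoolExecutor(max_workers=2)

# put(...) returns immediately; the future tracks the pending result
future = executor.submit(slow_put, "greeting", "hello")

# Option 1: register a listener that fires when the operation completes
future.add_done_callback(lambda f: print("stored:", f.result()))

# Option 2: block until the operation completes
assert future.result() is True
assert store["greeting"] == "hello"
executor.shutdown()
```

The same future is usable both ways, which mirrors the choice TomP2P offers between blocking and listener-based completion handling.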
Features
Java 6 DHT implementation with non-blocking IO (java.nio) and a binary protocol
XOR-based iterative routing with an ID space of 160-bit as in Kademlia
Data replication and best effort data protection
Distributed tracker and Mesh-based distributed tracker (B-Tracker)
NAT traversal via UPNP and NAT-PMP
See also
Chord (DHT)
Kademlia
Pastry (DHT) |
https://en.wikipedia.org/wiki/Genomes%20OnLine%20Database | The Genomes OnLine Database (GOLD) is a web-based resource for comprehensive information regarding genome and metagenome sequencing projects, and their associated metadata, around the world. Since 2011, the GOLD database has been run by the DOE Joint Genome Institute
The GOLD database was created in 1997; the first version of the database contained information for 350 sequencing projects, of which 48 had been completely sequenced with their analyses published.
GOLD v.5 was released on 28 May 2014. As of that release, the GOLD database contained information for 67,879 genome sequencing projects, of which 7,210 had been completed.
In order to facilitate comparative analysis between the information in GOLD and other databases (for example, GenBank and the EMBL), GOLD supports the minimum information standards metadata specifications recommended by the Genomic Standards Consortium, in particular, the MIxS (Minimum Information about any (x) Sequence) specification. GOLD also allows the annotation of genomes or metagenomes using the DOE JGI Integrated Microbial Genomes System and has links to the BioMed Central journal Standards in Genomic Sciences, allowing (meta)genomic data to be published.
See also
MicrobesOnline |
https://en.wikipedia.org/wiki/OptimJ | OptimJ is an extension for Java with language support for writing optimization models and abstractions for bulk data processing. The extensions and the proprietary product implementing the extensions were developed by Ateji which went out of business in September 2011.
OptimJ aims at providing a clear and concise algebraic notation for optimization modeling, removing compatibility barriers between optimization modeling and application programming tools, and bringing software engineering techniques such as object-orientation and modern IDE support to optimization experts.
OptimJ models are directly compatible with Java source code, existing Java libraries such as database access, Excel connection or graphical interfaces. OptimJ is compatible with development tools such as Eclipse, CVS, JUnit or JavaDoc. OptimJ is available free with the following solvers: lp_solve, glpk, LP or MPS file formats and also supports the following commercial solvers: MOSEK, IBM ILOG CPLEX Optimization Studio.
Language concepts
OptimJ combines concepts from object-oriented imperative languages with concepts from algebraic modeling languages for optimization problems. Here we will review the optimization concepts added to Java, starting with a concrete example.
The example of map coloring
The goal of a map coloring problem is to color a map so that regions sharing a common border have different colors. It can be expressed in OptimJ as follows.
package examples;
// a simple model for the map-coloring problem
public model SimpleColoring solver lpsolve
{
  // maximum number of colors
  int nbColors = 4;
  // decision variables hold the color of each country
  var int belgium in 1 .. nbColors;
  var int denmark in 1 .. nbColors;
  var int germany in 1 .. nbColors;
  // neighbouring countries must have a different color
  constraints {
    belgium != germany;
    germany != denmark;
  }
  // a main entry point to test our model
  public static void main(String[] args)
  {
// instant |
https://en.wikipedia.org/wiki/American%20Institute%20of%20Biological%20Sciences | The American Institute of Biological Sciences (AIBS) is a nonprofit scientific public charitable organization. The organization's mission is to promote the use of science to inform decision-making and advance biology for the benefit of science and society.
Overview
AIBS serves as a society of societies. AIBS has over 115 member organizations and is headquartered in Herndon, VA. Its staff work to achieve its mission by publishing the peer-reviewed journal BioScience, providing peer review and advisory support services for funding organizations, providing professional development for scientists and students, advocating for science policy and educating the public about biology. AIBS works with like-minded organizations, funding agencies, and nonprofit and for-profit entities to promote the use of science to inform decision-making.
AIBS is governed by a Board of Directors and a Council of representatives of its member organizations.
Background and history
AIBS was established in 1947 as a part of the National Academy of Sciences. The overarching goal was to unify the individuals and organizations that collectively represent the biological sciences, so that the community could address matters of common concern. In the 1950s, AIBS became an independent, member-governed, nonprofit 501(c)(3) public charity scientific organization. The organization continues to communicate biology to the scientific community, funders, policymakers, and other groups interested in exploring cross-cutting ideas in the biological sciences.
Strategic priorities
AIBS works toward overarching outcomes through three strategic priorities:
Scientific Peer Advisory and Review Services for research proposals and programs sponsored by funding organizations (including the federal government, state agencies, private research foundations, and other non-governmental organizations), along with educating the community about the science of peer review.
Publications and Communica |
https://en.wikipedia.org/wiki/Infinity%20symbol | The infinity symbol () is a mathematical symbol representing the concept of infinity. This symbol is also called a lemniscate, after the lemniscate curves of a similar shape studied in algebraic geometry, or "lazy eight", in the terminology of livestock branding.
This symbol was first used mathematically by John Wallis in the 17th century, although it has a longer history of other uses. In mathematics, it often refers to infinite processes (potential infinity) rather than infinite values (actual infinity). It has other related technical meanings, such as indicating long-lasting (acid-free) paper in bookbinding, and has been used for its symbolic value of the infinite in modern mysticism and literature. It is a common element of graphic design, for instance in corporate logos as well as in older designs such as the Métis flag.
Both the infinity symbol itself and several variations of the symbol are available in various character encodings.
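For example, Unicode assigns the symbol code point U+221E, which a quick Python check confirms:

```python
import unicodedata

# The infinity sign is code point U+221E, named INFINITY
assert ord("∞") == 0x221E
assert unicodedata.name("∞") == "INFINITY"

# Its UTF-8 encoding takes three bytes
assert "∞".encode("utf-8") == b"\xe2\x88\x9e"
```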
History
The lemniscate has been a common decorative motif since ancient times; for instance it is commonly seen on Viking Age combs.
The English mathematician John Wallis is credited with introducing the infinity symbol with its mathematical meaning in 1655, in his De sectionibus conicis. Wallis did not explain his choice of this symbol. It has been conjectured to be a variant form of a Roman numeral, but which Roman numeral is unclear. One theory proposes that the infinity symbol was based on the numeral for 100 million, which resembled the same symbol enclosed within a rectangular frame. Another proposes instead that it was based on the notation CIↃ used to represent 1,000. Instead of a Roman numeral, it may alternatively be derived from a variant of the lower-case form of omega (ω), the last letter in the Greek alphabet.
Perhaps in some cases because of typographic limitations, other symbols resembling the infinity sign have been used for the same meaning. Leonhard Euler used an open letterform more closely resembling a reflected and sidewa |
https://en.wikipedia.org/wiki/Creamed%20corn | Creamed corn (which is also known by other names, such as cream-style sweet corn) is a type of creamed vegetable dish made by combining pieces of whole sweetcorn with a soupy liquid of milky residue from immature pulped corn kernels scraped from the cob. Originating in Native American cuisine, it is now most commonly eaten in the Midwestern and Southern United States, as well as being used in the French Canadian dish pâté chinois ('Chinese pie': a dish like shepherd's pie). It is a soupy version of sweetcorn, and unlike other preparations of sweetcorn, creamed corn is partially puréed, releasing the liquid contents of the kernels.
Additional ingredients
Canned creamed corn does not usually contain any cream, but some homemade versions may include milk or cream. Sugar and starch may also be added. Commercial, store-bought canned preparations may contain tapioca starch as a thickener.
Gallery
See also
Corn soup
Corn stew
Corn chowder
Cream soup
Grits
List of maize dishes
List of soups |
https://en.wikipedia.org/wiki/Fundamental%20thermodynamic%20relation | In thermodynamics, the fundamental thermodynamic relation are four fundamental equations which demonstrate how four important thermodynamic quantities depend on variables that can be controlled and measured experimentally. Thus, they are essentially equations of state, and using the fundamental equations, experimental data can be used to determine sought-after quantities like G (Gibbs free energy) or H (enthalpy). The relation is generally expressed as a microscopic change in internal energy in terms of microscopic changes in entropy, and volume for a closed system in thermal equilibrium in the following way.
Here, U is internal energy, T is absolute temperature, S is entropy, P is pressure, and V is volume.
This is only one expression of the fundamental thermodynamic relation. It may be expressed in other ways, using different variables (e.g. using thermodynamic potentials). For example, the fundamental relation may be expressed in terms of the enthalpy H as dH = T dS + V dP,
in terms of the Helmholtz free energy F as dF = −S dT − P dV,
and in terms of the Gibbs free energy G as dG = −S dT + V dP.
The first and second laws of thermodynamics
The first law of thermodynamics states that: dU = δQ − δW,
where δQ and δW are infinitesimal amounts of heat supplied to the system by its surroundings and work done by the system on its surroundings, respectively.
According to the second law of thermodynamics we have for a reversible process: δQ = T dS.
Hence, by substituting this into the first law, we have: dU = T dS − δW.
Letting δW = P dV be the reversible pressure-volume work done by the system on its surroundings, we have: dU = T dS − P dV.
This equation has been derived in the case of reversible changes. However, since U, S, and V are thermodynamic state functions that depend only on the initial and final states of a thermodynamic process, the above relation holds also for non-reversible changes. If the composition, i.e. the amounts of the chemical components, in a system of uniform temperature and pressure can also change, e.g. due to a chemical reaction, the fundamental thermodynam
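The relation dU = T dS − P dV can be sanity-checked numerically for a concrete equation of state. Here one mole of a monatomic ideal gas is assumed (a sketch of my own, not from the article):

```python
import math

R = 8.314  # gas constant, J/(mol K); one mole of monatomic ideal gas

def U(T):    return 1.5 * R * T                            # internal energy
def S(T, V): return R * (math.log(V) + 1.5 * math.log(T))  # entropy, up to a constant
def P(T, V): return R * T / V                              # ideal-gas pressure

T, V = 300.0, 0.01       # initial state: 300 K, 10 L
dT, dV = 1e-6, 1e-9      # small perturbation

dU = U(T + dT) - U(T)
dS = S(T + dT, V + dV) - S(T, V)

# dU should equal T dS - P dV to first order in the perturbation
assert math.isclose(dU, T * dS - P(T, V) * dV, rel_tol=1e-4)
```

The additive constant in S cancels in the difference, so it does not affect the check.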
https://en.wikipedia.org/wiki/Software%20calculator | A software calculator is a calculator that has been implemented as a computer program, rather than as a physical hardware device.
They are among the simpler interactive software tools and, as such, provide operations for the user to select one at a time. They can be used to perform any process that consists of a sequence of steps, each of which applies one of these operations, and they have no purpose other than such processes, because the operations are the sole, or at least the primary, features of the calculator, rather than secondary features that support other functionality not normally known simply as calculation.
As calculators rather than computers, they usually have a small set of relatively simple operations, perform short processes that are not compute-intensive, and do not accept large amounts of input data or produce many results.
Platforms
Software calculators are available for many different platforms, and they can be:
A program for, or included with an operating system.
A program implemented as server or client-side scripting (such as JavaScript) within a web page.
Embedded in a calculator watch.
Complex software may also provide calculator-like dialogs, sometimes with full calculator functionality, for entering data into the system.
History
Early years
Computers as we know them today first emerged in the 1940s and 1950s. The software that they ran was naturally used to perform calculations, but it was specially designed for a substantial application that was not limited to simple calculations. For example, the LEO computer was designed to run business application software such as payroll.
Software specifically to perform calculations as its main purpose was first written in the 1960s, and the first software package for general calculations to obtain widespread use was released in 1978. This was VisiCalc and it was called an interactive visible calculator, but it was actually a spreadsheet, and these are now not normally kno |
https://en.wikipedia.org/wiki/Design%20sprint | A design sprint is a time-constrained, five-phase process that uses design thinking to reduce the risk of bringing a new product, service, or feature to market. The process aims to help teams clearly define goals, validate assumptions, and decide on a product roadmap before starting development. It seeks to address strategic issues using interdisciplinary expertise, rapid prototyping, and usability testing. This design process is similar to sprints in an Agile development cycle.
How it started
The concept of mixing Agile and design thinking has multiple origins.
The most popular was developed by a multi-disciplinary team working out of Google Ventures. The initial iterations of the approach were created by Jake Knapp, and popularised by a series of blog articles outlining the approach and reporting on its successes within Google. As it gained industry recognition, the approach was further refined and added to by other Google staff including Braden Kowitz, Michael Margolis, John Zeratsky and Daniel Burka.
It was later described in a book published by Google Ventures called Sprint.
Possible uses
Claimed uses of the approach include:
Launching a new product or service.
Extending an existing experience to a new platform.
Revising the user experience and/or UI design of an existing MVP.
Adding new features and functionality to a digital product.
Identifying opportunities to improve a product (e.g., a high rate of cart abandonment).
Identifying opportunities to improve a service.
Supporting organizations in their transformation towards new technologies (e.g., AI).
Phases
The creators of the design sprint approach recommend preparing by picking the proper team, environment, materials, and tools, and then working through six key 'ingredients'.
Understand: Discover the business opportunity, the audience, the competition, the value proposition, and define metrics of success.
Diverge: Explore, develop and iterate creative ways of solving the pr |
https://en.wikipedia.org/wiki/SOASTA | SOASTA, Inc. is an American subsidiary of Akamai Technologies that provides services to test websites and web applications.
It is a provider of cloud-based testing services, and created a browser-based website testing product. Website tests include load testing, software performance testing, functional testing and user interface testing. SOASTA provides cloud website testing with its product CloudTest, which simulates thousands of users visiting a website simultaneously
using the Amazon Elastic Compute Cloud (EC2) service. SOASTA allows customers to use predefined tests or create customized tests to automatically test their web applications.
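The core idea of load testing, simulating many users visiting a service concurrently and aggregating response times, can be illustrated generically. This is not SOASTA/CloudTest code; the "service" below is a stub that sleeps to mimic server latency, and all names are invented for the sketch.

```python
# Generic load-testing illustration: N simulated users issue requests
# concurrently; per-request latencies are collected and summarized.
import random
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for an HTTP request; sleeps to mimic server latency."""
    latency = random.uniform(0.001, 0.005)  # seconds
    time.sleep(latency)
    return latency

def load_test(num_users):
    """Run num_users simulated visitors concurrently; return latency stats."""
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        latencies = list(pool.map(lambda _: fake_request(), range(num_users)))
    return {
        "requests": len(latencies),
        "mean_s": sum(latencies) / len(latencies),
        "max_s": max(latencies),
    }

stats = load_test(50)
print(stats["requests"])  # 50 simulated users completed
```

A real cloud-testing service scales the same idea far beyond one machine by provisioning load generators (e.g., EC2 instances) so that thousands of concurrent users can be simulated.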
The SOASTA platform further enables digital businesses to gain continuous performance insight into real user experiences on mobile and web devices in real time and at scale, using real user monitoring technology coupled with its mPulse product. SOASTA serves mobile application testing needs through its product TouchTest, which provides functional mobile app testing automation for multitouch, gesture-based mobile applications.
History
SOASTA was founded by Ken Gardner and is based in Mountain View, California. Tom Lounibos has been CEO since September 2006.
In September 2008, SOASTA raised US$6.4 million in financing from Formative Ventures, Canaan Partners, and The Entrepreneur's Fund.
In December 2008, SOASTA announced an alliance with SAVVIS to provide SAVVIS customers with SOASTA's cloud testing services.
In the first half of 2010, SOASTA was selected by The Wall Street Journal as a "Top 50" venture-backed company, by AlwaysOn as an OnDemand Top 100 winner, and by Red Herring as a Top 100 North America tech startup.
In October 2012, SOASTA became a Professional Services partner with ProtoTest.
In April 2017, Akamai completed a purchase of SOASTA in an all-cash transaction. |
https://en.wikipedia.org/wiki/Vanadium%20cycle | The global vanadium cycle is controlled by physical and chemical processes that drive the exchange of vanadium between its two main reservoirs: the upper continental crust and the ocean. Anthropogenic processes such as coal and petroleum production release vanadium to the atmosphere.
Sources
Natural sources
Vanadium is a trace metal that is relatively abundant in the Earth's upper crust (~100 parts per million). Vanadium is mobilized from minerals through weathering and transported to the ocean. Vanadium can enter the atmosphere through wind erosion and volcanic emissions, and remains there until it is removed by precipitation.
Anthropogenic sources
Human activity has increased vanadium emissions to the atmosphere. Vanadium is abundant in fossil fuels because it is incorporated into porphyrins during organic matter degradation. Pollution from coal and petroleum processing releases significant vanadium to the atmosphere. Vanadium is also mined and used for industrial purposes, including steel reinforcement, electronics, and batteries.
Sink
Vanadium is removed from the ocean by burial in marine sediments and by incorporation into iron oxides at hydrothermal vents.
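The source/sink picture above is a reservoir-and-flux (box) model, and the standard quantity derived from such a model is a residence time: reservoir mass divided by removal flux at steady state. The sketch below shows only that calculation; the numbers are placeholders chosen for illustration and are NOT measured values for the vanadium cycle.

```python
# Steady-state box-model sketch: mean residence time of an element in a
# reservoir is tau = reservoir mass / removal flux. Numbers are placeholders,
# not measured vanadium data.

def residence_time(reservoir_mass, removal_flux):
    """Mean time an atom spends in a reservoir at steady state (same
    time unit as the flux denominator)."""
    return reservoir_mass / removal_flux

# Hypothetical units: mass in teragrams (Tg), flux in Tg per year.
ocean_v_mass = 100.0   # placeholder ocean vanadium inventory
removal_flux = 0.02    # placeholder burial + hydrothermal removal flux

tau_years = residence_time(ocean_v_mass, removal_flux)
print(tau_years)  # 5000.0 years under these placeholder numbers
```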
Biological processes
Biological processes play a relatively minor role in the global vanadium cycle. Vanadium bromoperoxidase is present in some marine bacteria and algae. Vanadium can also take the place of molybdenum in alternative nitrogenases. |
https://en.wikipedia.org/wiki/Leonard%20Bosack | Leonard X. Bosack (born 1952) is a co-founder of Cisco Systems, an American-based multinational corporation that designs and sells consumer electronics, networking and communications technology, and services. His net worth is approximately $200 million. He was awarded the Computer Entrepreneur Award in 2009 for co-founding Cisco Systems and pioneering and advancing the commercialization of routing technology and the profound changes this technology enabled in the computer industry.
He is largely responsible for pioneering the widespread commercialization of local area network (LAN) technology to connect geographically disparate computers over a multiprotocol router system, an unheard-of technology at the time. In 1990, Cisco's management fired co-founder Sandy Lerner, and Bosack resigned. He later served as CEO of XKL LLC, a privately funded engineering company that explores and develops optical networks for data communications.
Background
Born in Pennsylvania in 1952 to a Polish Catholic family, Bosack graduated from La Salle College High School in 1969. In 1973, he graduated from the University of Pennsylvania School of Engineering and Applied Science and joined Digital Equipment Corporation (DEC) as a hardware engineer. In 1979, he was accepted into Stanford University, where he began to study computer science. During his time at Stanford, he served as a support engineer on a 1981 project to connect all of Stanford's mainframes, minis, LISP machines, and Altos.
His contribution was to work on the network router that allowed the computer network under his management to share data from the Computer Science Lab with the Business School's network. He met his wife Sandra Lerner at Stanford, where she was the manager of the Business School lab, and the couple married in 1980. Together in 1984, they started Cisco in Menlo Park.
Cisco
In 1984, Bosack co-founded Cisco Systems with his then partner (and now ex-wife) Sandy Lerner. Their |