| source | text |
|---|---|
https://en.wikipedia.org/wiki/Dynamite%20Dan | Dynamite Dan is a platform game written by Rod Bowkett for the ZX Spectrum and published by Mirrorsoft in 1985. It was ported to the Amstrad CPC, Commodore 64, and MSX.
A sequel, Dynamite Dan II, was released the following year.
Gameplay
The game begins as Dan lands his airship on top of the evil Dr Blitzen's hideout. The aim of the game is to find eight sticks of dynamite that are placed randomly around the playing area whilst avoiding perils such as moving monsters, drowning and falling from great heights. Once Dan has all eight sticks of dynamite, the player must make their way to the central safe, blow it open, steal the plans for the evil doctor's Death Ray and escape to the airship.
The playing area is one large building split into multiple screens that wrap around a central elevator. Each screen contains a number of moving monsters that are destroyed when the player walks into them, at the cost of one of Dan's lives. The only exceptions are Dr Blitzen and his assistant Donner (Donner and Blitzen), who are both located on the same screen as the safe and cannot be destroyed. Other perils to Dan's life include running out of energy (caused by not collecting enough food, falling from heights and being hit by laser beams). If Dan falls into the underground river that flows beneath the building, the game ends unless he has picked up oxygen, in which case he is sent back to the start of the game.
Once completed, the game provides a secret code to be deciphered and a telephone number to call with the answer. Although the number no longer works, the prize was a ride in the Mirrorsoft blimp.
The background music when choosing the game settings and waiting for the game to start is the third movement (Rondo Alla Turca) from Wolfgang Amadeus Mozart's Piano Sonata No. 11 in A major, K. 331, which had been used the previous year on some screens of the Commodore 64 conversion of Jet Set Willy |
https://en.wikipedia.org/wiki/Thermosynthesis | Thermosynthesis is a theoretical mechanism proposed by Anthonie Muller for biological use of the free energy in a temperature gradient to drive energetically uphill anabolic reactions. It makes use of this thermal gradient, or the dissipative structure of convection in this gradient, to drive a microscopic heat engine that performs condensation reactions. Thus negative entropy is generated. The components of the biological thermosynthesis machinery are proposed to be progenitors of today's ATP synthase, which functions according to the binding change mechanism, driven by chemiosmosis. Resembling primitive free-energy-generating physico-chemical processes based on temperature-dependent adsorption to inorganic materials such as clay, this simple type of energy conversion is proposed to have sustained the origin of life, including the emergence of the RNA World. For this RNA World it gives a model that describes the stepwise acquisition of the set of transfer RNAs that sustains the genetic code. The phylogenetic tree of extant transfer RNAs is consistent with this idea.
Thermosynthesis may still occur in some terrestrial and extraterrestrial environments. However, no organisms that use thermosynthesis as an energy source are currently known, although it might occur in environments where no light is available, such as in the subsurface ocean that may exist on the moon Europa. Thermosynthesis also permits a simple model for the origin of photosynthesis. It has moreover been used to explain the origin of animals by symbiogenesis of benthic sessile thermosynthesizers at hydrothermal vents during the Snowball Earths of the Precambrian. Preliminary experiments have begun in attempts to isolate thermosynthetic organisms.
Muller's Biothermosynthesis
The Dutch biochemist and physicist Anthonie Muller[1] has written many papers on thermosynthesis since 1983.
He defined thermosynthesis as:
"Biological heat engines working on thermal cycling."
also as:
"Th |
https://en.wikipedia.org/wiki/Alpha%20glucan | α-Glucans (alpha-glucans) are polysaccharides of D-glucose monomers
linked with glycosidic bonds of the alpha form. α-Glucans use cofactors in a cofactor site in order to activate a glucan phosphorylase enzyme. This enzyme catalyzes a reaction that transfers a glucosyl portion between orthophosphate and α-1,4-glucan. The position of the cofactors relative to the active sites on the enzyme is critical to the overall reaction rate; thus, any alteration of the cofactor site leads to disruption of the glucan binding site.
Alpha-glucan is also commonly found in bacteria, yeasts, plants, and insects. Whereas the main pathway of α-glucan synthesis is via glycosidic bonds of glucose monomers, α-glucan can also be synthesized via the maltosyl transferase GlgE and the branching enzyme GlgB. This alternative pathway is common in many bacteria, which use GlgB and GlgE together or the GlgE pathway exclusively for the biosynthesis of α-glucan. The GlgE pathway is especially prominent in actinomycetes, such as mycobacteria and streptomycetes. However, α-glucans in mycobacteria show a slight variation in the length of their linear chains, which indicates that the branching enzyme in mycobacteria makes shorter branches than in glycogen synthesis. In organisms that can utilize both classic glycogen synthesis and the GlgE pathway, only one GlgB enzyme is present, which indicates that the GlgB enzyme is shared between both pathways.
Other uses for α-glucan have been developed based on its availability in bacteria. Neisseria polysaccharea and other bacteria can catalyze the addition of glucose units to form α-1,4-glucan, liberating fructose in the process. More resistant starch was needed to help regulate carbohydrate metabolism. An α-glucan-coated starch molecule produced using Neisseria polysaccharea improved some physicochemical properties compared with raw normal starch, especially the loading efficiency of bioactive molecules. Alpha- |
https://en.wikipedia.org/wiki/Bioimage%20informatics | Bioimage informatics is a subfield of bioinformatics and computational biology. It focuses on the use of computational techniques to analyze bioimages, especially cellular and molecular images, at large scale and high throughput. The goal is to obtain useful knowledge out of complicated and heterogeneous image and related metadata.
Automated microscopes can collect large numbers of images with minimal intervention. This has led to a data explosion that makes automated processing essential. Additionally, and surprisingly, for several of these tasks there is evidence that automated systems can perform better than humans. In addition, automated systems are unbiased, unlike human-based analysis, whose evaluation may (even unconsciously) be influenced by the desired outcome.
There has been an increasing focus on developing novel image processing, computer vision, data mining, database and visualization techniques to extract, compare, search and manage the biological knowledge in these data-intensive problems.
Data Modalities
Several data collection systems and platforms are used, which require different methods to be handled optimally.
Fluorescent Microscopy
Fluorescent microscopy allows the direct visualization of molecules at the subcellular level, in both live and fixed cells. Molecules of interest are marked with either green fluorescent protein (GFP), another fluorescent protein, or a fluorescently labeled antibody. Several types of microscope are regularly used: widefield, confocal, or two-photon. Most microscopy systems also support the collection of time series (movies).
In general, filters are used so that each dye is imaged separately (for example, a blue filter is used to image Hoechst, then rapidly switched to a green filter to image GFP). For viewing, the images are often displayed in false color, with each channel shown in a different color that may not be related to the original wavelengths used. In some cases, the |
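As an illustration of this false-color display (not part of the original article), the sketch below merges two hypothetical single-channel acquisitions into one RGB image, mapping the Hoechst channel to blue and the GFP channel to green; the arrays and color choices are made up for the example.

```python
# Minimal sketch of false-color channel merging for a two-channel
# fluorescence image (hypothetical data; not tied to any specific toolkit).
import numpy as np

def merge_false_color(hoechst: np.ndarray, gfp: np.ndarray) -> np.ndarray:
    """Map each acquired channel to an arbitrary display color (blue and green here)."""
    def normalize(channel):
        channel = channel.astype(float)
        rng = channel.max() - channel.min()
        return (channel - channel.min()) / rng if rng else channel * 0.0

    rgb = np.zeros(hoechst.shape + (3,))
    rgb[..., 2] = normalize(hoechst)  # Hoechst (nuclei) shown in blue
    rgb[..., 1] = normalize(gfp)      # GFP channel shown in green
    return rgb

# Example: two fake 64x64 single-channel acquisitions.
rgb_image = merge_false_color(np.random.rand(64, 64), np.random.rand(64, 64))
print(rgb_image.shape)  # (64, 64, 3)
```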
https://en.wikipedia.org/wiki/Trusted%20Data%20Format | The Trusted Data Format (TDF) is a data object encoding specification for the purposes of enabling data tagging and cryptographic security features. These features include assertion of data properties or tags, cryptographic binding and data encryption. The TDF is freely available, imposes no restrictions, requires no proprietary or patented technology, and is thus open for anyone to use.
Overview
The TDF Specification is based on a Trusted Data Object (TDO), which can be grouped together with others into a Trusted Data Collection (TDC). Each TDO consists of a data payload which can be associated with an unlimited number of metadata objects. The TDO supports the cryptographic binding of the metadata objects to the payload data object. In addition, both data and metadata objects can be associated with a block of encryption information, which is used by any TDF consumer to decrypt the associated data or metadata if it has been encrypted. A TDC allows additional metadata objects to apply to a set of TDOs.
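The sketch below only mirrors the structure described above; the class and field names are hypothetical and are not taken from the TDF specification, and the hash-based binding merely stands in for whatever binding mechanism the specification defines.

```python
# Illustrative sketch only: hypothetical names, not the actual TDF schema.
from dataclasses import dataclass, field
from typing import List, Optional
import hashlib

@dataclass
class EncryptionInfo:
    algorithm: str        # e.g. an AES variant (assumed for illustration)
    wrapped_key: bytes    # key material protected for the intended recipient

@dataclass
class TrustedDataObject:
    payload: bytes
    metadata: List[dict] = field(default_factory=list)
    encryption: Optional[EncryptionInfo] = None   # present only if payload is encrypted

    def binding_digest(self) -> str:
        """Bind the metadata objects to the payload via a joint hash (stand-in for the real binding)."""
        h = hashlib.sha256(self.payload)
        for tag in self.metadata:
            h.update(repr(sorted(tag.items())).encode())
        return h.hexdigest()

tdo = TrustedDataObject(payload=b"report", metadata=[{"classification": "public"}])
print(tdo.binding_digest())
```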
Implementations
The United States Intelligence Community maintains the IC-TDF, which includes government-specific tagging requirements on top of the core TDF capabilities mentioned above, in an XML Data Encoding Specification.
Virtru offers client-side email and file encryption based on the TDF.
The United States Department of Defense uses TDF to implement the Department of Defense Discovery Metadata Specification (DDMS). |
https://en.wikipedia.org/wiki/LOTS%20%28personality%20psychology%29 | LOTS is an acronym, suggested by Cattell in 1957 and later elaborated by Block, that provides a broad classification of data sources for personality psychology assessment. Each data source has its advantages and disadvantages. Research on personality commonly employs several data sources so as to better represent the pattern of an individual's distinctive features.
L-data refer to life-outcome data, such as age, education, income, student grades at school, and criminal and conviction records
O-data refer to observational data, such as observer ratings from friends and family
T-data refer to standardised and objective test measurements, such as scored tests, physiological responses, reaction times (RT), and the implicit association test (IAT)
S-data refer to self-reports, such as questionnaires, personality tests, and structured interviews |
https://en.wikipedia.org/wiki/Ascending%20colon | In the anatomy of humans and homologous primates, the ascending colon is the part of the colon located between the cecum and the transverse colon.
Characteristics and structure
The ascending colon is smaller in calibre than the cecum, from which it arises. It passes upward, opposite the colic valve, to the under surface of the right lobe of the liver, on the right of the gall-bladder, where it is lodged in a shallow depression, the colic impression; here it bends abruptly forward and to the left, forming the right colic flexure (hepatic), where it becomes the transverse colon.
It is retained in contact with the posterior wall of the abdomen by the peritoneum, which covers its anterior surface and sides, its posterior surface being connected by loose areolar tissue with the iliacus, quadratus lumborum, aponeurotic origin of transversus abdominis, and with the front of the lower and lateral part of the right kidney.
Sometimes the peritoneum completely invests it and forms a distinct but narrow mesocolon.
It is in relation, in front, with the convolutions of the ileum and the abdominal walls.
Parasympathetic innervation to the ascending colon is supplied by the vagus nerve. Sympathetic innervation is supplied by the thoracic splanchnic nerves.
Location
The ascending colon is on the right side of the body (barring any malformations). The term right colon is hypernymous to ascending colon in precise use; many casual mentions of the right colon chiefly concern the ascending colon.
Additional images
See also
Descending colon |
https://en.wikipedia.org/wiki/2010%20flash%20crash | The May 6, 2010, flash crash, also known as the crash of 2:45 or simply the flash crash, was a United States trillion-dollar flash crash (a type of stock market crash) which started at 2:32 p.m. EDT and lasted for approximately 36 minutes.
Overview
Stock indices, such as the S&P 500, Dow Jones Industrial Average and Nasdaq Composite, collapsed and rebounded very rapidly. The Dow Jones Industrial Average had its second biggest intraday point decline (from the opening) up to that point, plunging 998.5 points (about 9%), most within minutes, only to recover a large part of the loss. It was also the second-largest intraday point swing (difference between intraday high and intraday low) up to that point, at 1,010.14 points. The prices of stocks, stock index futures, options and exchange-traded funds (ETFs) were volatile, and trading volume spiked. A 2014 CFTC report described it as one of the most turbulent periods in the history of financial markets.
New regulations put in place following the 2010 flash crash proved to be inadequate to protect investors in the August 24, 2015, flash crash — "when the price of many ETFs appeared to come unhinged from their underlying value" — and ETFs were subsequently put under greater scrutiny by regulators and investors.
On April 21, 2015, nearly five years after the incident, the U.S. Department of Justice laid "22 criminal counts, including fraud and market manipulation" against Navinder Singh Sarao, a British Indian financial trader. The charges included the use of spoofing algorithms: just prior to the flash crash, he placed orders for thousands of E-mini S&P 500 stock index futures contracts which he planned on canceling later. These orders, amounting to about "$200 million worth of bets that the market would fall", were "replaced or modified 19,000 times" before they were canceled. Spoofing, layering, and front running are now banned.
The Commodity Futures Trading Commission (CFTC) investigation concluded that Sarao |
https://en.wikipedia.org/wiki/Dispose%20pattern | In object-oriented programming, the dispose pattern is a design pattern for resource management. In this pattern, a resource is held by an object and released by calling a conventional method – usually called close, dispose, free, or release, depending on the language – which releases any resources the object is holding onto. Many programming languages offer language constructs to avoid having to call the dispose method explicitly in common situations.
The dispose pattern is primarily used in languages whose runtime environments have automatic garbage collection (see motivation below).
Motivation
Wrapping resources in objects
Wrapping resources in objects is the object-oriented form of encapsulation, and underlies the dispose pattern.
Resources are typically represented by handles (abstract references), concretely usually integers, which are used to communicate with an external system that provides the resource. For example, files are provided by the operating system (specifically the file system), which in many systems represents open files with a file descriptor (an integer representing the file).
These handles can be used directly, by storing the value in a variable and passing it as an argument to functions that use the resource. However, it is frequently useful to abstract from the handle itself (for example, if different operating systems represent files differently) and to store additional auxiliary data with the handle, so handles can be stored as a field in a record, along with other data; if this is an opaque data type, then this provides information hiding and the user is abstracted from the actual representation.
For example, in C file input/output, files are represented by objects of the FILE type (confusingly called "file handles": these are a language-level abstraction), which stores an (operating system) handle to the file (such as a file descriptor), together with auxiliary information like I/O mode (reading, writing) and position in the stream. |
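A minimal sketch of the dispose pattern (not taken from any particular library), assuming a hypothetical external resource identified by an integer handle as in the discussion above; the conventional dispose method is called close here, and Python's with statement is the language construct that calls it automatically.

```python
# Sketch of the dispose pattern: an object wraps an opaque handle and exposes
# a close() method; the with statement guarantees close() is called.
class ManagedResource:
    def __init__(self, handle: int):
        self._handle = handle          # opaque handle to the (hypothetical) external resource
        self._closed = False

    def use(self) -> str:
        if self._closed:
            raise ValueError("resource already disposed")
        return f"working with handle {self._handle}"

    def close(self) -> None:
        """The conventional dispose method: release the underlying resource."""
        if not self._closed:
            # a real implementation would call into the external system here
            self._closed = True

    # Language construct support: the 'with' statement calls close() automatically.
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()

with ManagedResource(handle=42) as res:
    print(res.use())
# close() has run here, even if an exception was raised inside the block.
```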
https://en.wikipedia.org/wiki/Tetrasodium%20EDTA | Tetrasodium EDTA is the salt resulting from the neutralization of ethylenediaminetetraacetic acid with four equivalents of sodium hydroxide (or an equivalent sodium base). It is a white solid that is highly soluble in water. Commercial samples are often hydrated, e.g. Na4EDTA.4H2O. The properties of solutions produced from the anhydrous and hydrated forms are the same, provided they are at the same pH.
It is used as a source of the chelating agent EDTA4-. A 1% aqueous solution has a pH of approximately 11.3. When dissolved in neutral water, it converts partially to H2EDTA2-. Ethylenediaminetetraacetic acid is produced commercially via the intermediacy of tetrasodium EDTA.
Products
The substance is also known as Dissolvine E-39. It is a salt of edetic acid. It has been known at least since 1954. It is sometimes used as a chelating agent.
The assignee on 5% of patents at the USPTO containing the substance is the firm Procter and Gamble. It is used most notably in cosmetics and hair and skin care products.
The substance has been used to aid in formulation of a removal product for rust, corrosion, and scale from ferrous metal, copper, brass, and other surfaces.
At a concentration of 6%, it is the main active ingredient in some types of engine coolant system flushes. |
https://en.wikipedia.org/wiki/Norm%20%28artificial%20intelligence%29 | Norms can be considered from different perspectives in artificial intelligence to create computers and computer software that are capable of intelligent behaviour.
In artificial intelligence and law, legal norms are considered in computational tools to automatically reason upon them. In multi-agent systems (MAS), a branch of artificial intelligence (AI), a norm is a guide for the common conduct of agents, thereby easing their decision-making, coordination and organization.
Since most problems concerning regulation of the interaction of autonomous agents are linked to issues traditionally addressed by legal studies, and since law is the most pervasive and developed normative system, efforts to account for norms in artificial intelligence and law and in normative multi-agent systems often overlap.
Artificial intelligence and law
With the arrival of computer applications into the legal domain, and especially artificial intelligence applied to it, logic has been used as the major tool to formalize legal
reasoning and has been developed in many directions, ranging from deontic logics to formal systems of argumentation.
The knowledge base of legal reasoning systems usually includes legal norms (such as governmental regulations and contracts), and as a consequence, legal rules are the focus of knowledge representation and reasoning approaches to automatize and solve complex legal tasks. Legal norms are typically represented in a logic-based formalism, such as deontic logic.
Artificial intelligence and law applications using an explicit representation of norms range from checking the compliance of business processes and the automatic execution of smart contracts to legal expert systems advising people on legal matters.
Multi-agent systems
Norms in multi-agent systems may appear with different degrees of explicitness ranging from fully unambiguous written prescriptions to implicit unwritten norms or tacit emerging patterns. Computer scientists’ studies mirror this |
https://en.wikipedia.org/wiki/Glossary%20of%20machine%20vision | The following are common definitions related to the machine vision field.
General related fields
Machine vision
Computer vision
Image processing
Signal processing
0-9
1394. FireWire is Apple Inc.'s brand name for the IEEE 1394 interface. It is also known as i.Link (Sony's name) or IEEE 1394 (although the 1394 standard also defines a backplane interface). It is a personal computer (and digital audio/digital video) serial bus interface standard, offering high-speed communications and isochronous real-time data services.
1D. One-dimensional.
2D computer graphics. The computer-based generation of digital images—mostly from two-dimensional models (such as 2D geometric models, text, and digital images) and by techniques specific to them.
3D computer graphics. 3D computer graphics are different from 2D computer graphics in that a three-dimensional representation of geometric data is stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be for later display or for real-time viewing. Despite these differences, 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as lighting, and primarily 3D may use 2D rendering techniques.
3D scanner. This is a device that analyzes a real-world object or environment to collect data on its shape and possibly color. The collected data can then be used to construct digital, three dimensional models useful for a wide variety of applications.
A
Aberration. Optically, defocus refers to a translation along the optical axis away from the plane or surface of best focus. In general, defocus reduces the sharpness and contrast of the image. What should be sharp, high-contrast edges in a scene become gradual transiti |
https://en.wikipedia.org/wiki/IA-64 | IA-64 (Intel Itanium architecture) is the instruction set architecture (ISA) of the discontinued Itanium family of 64-bit Intel microprocessors. The basic ISA specification originated at Hewlett-Packard (HP), and was subsequently implemented by Intel in collaboration with HP. The first Itanium processor, codenamed Merced, was released in 2001.
The Itanium architecture is based on explicit instruction-level parallelism, in which the compiler decides which instructions to execute in parallel. This contrasts with superscalar architectures, which depend on the processor to manage instruction dependencies at runtime. In all Itanium models, up to and including Tukwila, cores execute up to six instructions per clock cycle.
In 2008, Itanium was the fourth-most deployed microprocessor architecture for enterprise-class systems, behind x86-64, Power ISA, and SPARC.
In 2019, Intel announced the discontinuation of the last of the CPUs supporting the IA-64 architecture.
History
Development
In 1989, HP became concerned that reduced instruction set computing (RISC) architectures were approaching a processing limit at one instruction per cycle. Both Intel and HP researchers had been exploring computer architecture options for future designs and separately began investigating a new concept known as very long instruction word (VLIW), which came out of research at Yale University in the early 1980s.
VLIW is a computer architecture concept (like RISC and CISC) in which a single instruction word contains multiple instructions encoded in one very long instruction word, so that the processor can execute multiple instructions in each clock cycle. Typical VLIW implementations rely heavily on sophisticated compilers to determine at compile time which instructions can be executed at the same time, to schedule these instructions properly for execution, and to help predict the direction of branch operations. The value of this approach is to do more useful work in fewer |
https://en.wikipedia.org/wiki/CFU-Baso | CFU-Baso is a colony forming unit that gives rise to basophils. Some sources use the term "CFU-Bas". |
https://en.wikipedia.org/wiki/Conserved%20quantity | A conserved quantity is a property or value that remains constant over time in a system even when changes occur in the system. In mathematics, a conserved quantity of a dynamical system is formally defined as a function of the dependent variables, the value of which remains constant along each trajectory of the system.
Not all systems have conserved quantities, and conserved quantities are not unique, since one can always produce another such quantity by applying a suitable function, such as adding a constant, to a conserved quantity.
Since many laws of physics express some kind of conservation, conserved quantities commonly exist in mathematical models of physical systems. For example, any classical mechanics model will have mechanical energy as a conserved quantity as long as the forces involved are conservative.
Differential equations
For a first order system of differential equations
$$\frac{d\mathbf{r}}{dt} = \mathbf{f}(\mathbf{r}, t),$$
where bold indicates vector quantities, a scalar-valued function $H(\mathbf{r})$ is a conserved quantity of the system if, for all time and initial conditions in some specific domain,
$$\frac{dH}{dt} = 0.$$
Note that by using the multivariate chain rule,
$$\frac{dH}{dt} = \nabla H \cdot \frac{d\mathbf{r}}{dt} = \nabla H \cdot \mathbf{f}(\mathbf{r}, t),$$
so that the definition may be written as
$$\nabla H \cdot \mathbf{f}(\mathbf{r}, t) = 0,$$
which contains information specific to the system and can be helpful in finding conserved quantities, or establishing whether or not a conserved quantity exists.
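As a worked illustration (not from the article), this condition can be checked symbolically; the sketch below, which assumes SymPy is available, verifies that the energy of a simple harmonic oscillator is conserved.

```python
# Sketch: verify that H(x, v) = v**2/2 + x**2/2 is a conserved quantity of the
# harmonic-oscillator system dx/dt = v, dv/dt = -x, by checking grad(H) . f == 0.
import sympy as sp

x, v = sp.symbols("x v")
f = sp.Matrix([v, -x])                    # right-hand side of the first-order system
H = v**2 / 2 + x**2 / 2                   # candidate conserved quantity
dH_dt = sp.Matrix([sp.diff(H, x), sp.diff(H, v)]).dot(f)
print(sp.simplify(dH_dt))                 # prints 0, so H is conserved
```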
Hamiltonian mechanics
For a system defined by the Hamiltonian $\mathcal{H}$, a function $f$ of the generalized coordinates $q$ and generalized momenta $p$ has time evolution
$$\frac{df}{dt} = \{f, \mathcal{H}\} + \frac{\partial f}{\partial t},$$
and hence is conserved if and only if $\{f, \mathcal{H}\} + \frac{\partial f}{\partial t} = 0$. Here $\{\cdot, \cdot\}$ denotes the Poisson bracket.
Lagrangian mechanics
Suppose a system is defined by the Lagrangian $L$ with generalized coordinates $q$. If $L$ has no explicit time dependence (so $\partial L / \partial t = 0$), then the energy $E$ defined by
$$E = \sum_i \dot{q}_i \frac{\partial L}{\partial \dot{q}_i} - L$$
is conserved.
Furthermore, if $\partial L / \partial q = 0$, then $q$ is said to be a cyclic coordinate and the generalized momentum $p$ defined by
$$p = \frac{\partial L}{\partial \dot{q}}$$
is conserved. This may be derived by using the Euler–Lagrange equations.
See also
Conservative system
Lyapunov function
Hamiltonian sy |
https://en.wikipedia.org/wiki/Nest%20box | A nest box, also spelled nestbox, is a man-made enclosure provided for animals to nest in. Nest boxes are most frequently utilized for birds, in which case they are also called birdhouses or a birdbox/bird box, but some mammals such as bats may also use them. Placing nestboxes or roosting boxes may also be used to help maintain populations of particular species in an area.
Nest boxes have been used since Roman times to capture birds for meat. The use of nest boxes for other purposes began in the mid-18th century, and the naturalist August von Berlepsch was the first to produce nest boxes on a commercial scale.
Nest boxes are receiving more attention because increasing industrialization, urban growth, modern construction methods, deforestation and other human activities since the mid-20th century have caused severe declines in birds' natural habitats, introducing hurdles to breeding. Nest boxes can help prevent bird extinction, as was shown in the case of scarlet macaws in the Peruvian Amazon.
Construction
General construction
Nest boxes are usually wooden, although the purple martin will nest in metal. Some boxes are made from a mixture of wood and concrete, called woodcrete. Ceramic and plastic nestboxes are not suitable.
Nest boxes should be made from untreated wood with an overhanging, sloped roof, a recessed floor, drainage and ventilation holes, a way to access the interior for monitoring and cleaning, and have no outside perches which could assist predators. Boxes may either have an entrance hole or be open-fronted. Some nest boxes can be highly decorated and complex, sometimes mimicking human houses or other structures. They may also contain nest box cameras so that use of, and activity within, the box can be monitored.
Bird nest box construction
The diameter of the opening in a nest-box has a very strong influence on the species of birds that will use the box. Many small birds select boxes with a hole only just large enough for an adult bird to pass through |
https://en.wikipedia.org/wiki/Bernard%20Price%20Memorial%20Lecture | The Bernard Price Memorial Lecture is the premier annual lecture of the South African Institute of Electrical Engineers. It is of general scientific or engineering interest and is given by an invited guest, often from overseas, at several of the major centres in South Africa. The main lecture and accompanying dinner are usually held at the University of the Witwatersrand, and it is also presented in the space of one week at other centres, typically Cape Town, Durban, East London and Port Elizabeth.
The Lecture is named in memory of the eminent electrical engineer Bernard Price. The first Lecture was held in 1951 and it has occurred as an annual event ever since.
Lecturers
1951 Basil Schonland
1952 A M Jacobs
1953 H J Van Eck
1954 J M Meek
1955 Frank Nabarro
1956 A L Hales
1957 P G Game
1958 Colin Cherry
1959 Thomas Allibone
1960 M G Say
1961 Willis Jackson
1963 W R Stevens
1964 William Pickering
1965 G H Rawcliffe
1966 Harold Bishop
1967 Eric Eastwood
1968 F J Lane
1969 A H Reeves
1970 Andrew R Cooper
1971 Herbert Haslegrave
1972 W J Bray
1973 R Noser
1974 D Kind
1975 L Kirchmayer
1976 S Jones
1977 J Johnson
1978 T G E Cockbain
1979 A R Hileman
1980 James Redmond
1981 L M Muntzing
1982 K F Raby
1983 R Isermann
1984 M N John
1985 J W L de Villiers
1986 Derek Roberts
1987 Wolfram Boeck
1988 Karl Gehring
1989 Leonard Sagan
1990 GKF Heyner
1991 P S Blythin
1992 P M Neches
1993 P Radley
1994 P R Rosen
1995 F P Sioshansi
1996 J Taylor
1997 M Chamia
1998 C Gellings
1999 M W Kennedy
2000 John Midwinter
2001 Pragasen Pillay
2002 Polina Bayvel
2003 Case Rijsdijk
2004 Frank Larkins
2005 Igor Aleksander
2006 Kevin Warwick
2007 Skip Hatfield
2008 Sami Solanki
2009 William Gruver
2010 Glenn Ricart
2011 Philippe Paelinck
2012 Nick Frydas
2013 Vint Cerf
2014 Ian Jandrell
2015 Saurabh Sinha
2016 Tshilidzi Marwala
2017 Fulufhelo Nelwamondo
2018 Ian Craig
2019 Robert Metcalfe
2020 Roger Price |
https://en.wikipedia.org/wiki/Food%20defense | Food defense is the protection of food products from intentional contamination or adulteration by biological, chemical, physical, or radiological agents introduced for the purpose of causing harm. It addresses additional concerns including physical, personnel and operational security.
Food defense is one of the four categories of the food protection risk matrix, which also includes: food safety, which is based on unintentional or environmental contamination that can cause harm; food fraud, which is based on intentional deception for economic gain; and food quality, which may also be affected by profit-driven behavior but without intention to cause harm.
Overarching these four categories is food security, which deals with individuals having access to enough food for an active, healthy life. Food protection is the umbrella term encompassing both food defense and food safety. These six terms are often conflated.
Along with protecting the food system, food defense also deals with prevention, protection, mitigation, response and recovery from intentional acts of adulteration.
History in the United States
1906: The Federal Meat Inspection Act places requirements on the slaughter, processing and labeling of meat and meat products, domestic and imported.
1938: The Federal Food, Drug and Cosmetic Act establishes definitions and regulation for the safety of food, drugs, and cosmetics.
1957: The Poultry Products Inspection Act requires the Food Safety and Inspection Service (FSIS) to inspect all domesticated birds meant for human consumption.
November 2002: The Homeland Security Act passed by Congress creates the Department of Homeland Security in response to the September 11 attacks.
December 2003: Homeland Security Presidential Directive 7 establishes a policy to identify and prioritize critical infrastructures. Food and Agriculture is identified as one of these infrastructures.
January 2004: The Homeland Security Presidential Directive 9 establishes policy to protect agri |
https://en.wikipedia.org/wiki/Pexels | Pexels is a provider of stock photography and stock footage. It was founded in Germany in 2014 and maintains a library with over 3.2 million free stock photos and videos.
History
Pexels was founded by twin brothers Ingo and Bruno Joseph in Fuldabrück, Hesse. The brothers started the platform in 2014 with around 800 photos. Daniel Frese has been part of the team since 2015. The graphic design platform Canva acquired Pexels in 2018.
Business model
Pexels provides media for online download, maintaining a library that contains over 3.2 million photos and videos and grows each month by roughly 200,000 files. The content is uploaded by users and reviewed manually. Using and downloading the media is free; the website generates income through advertisements for paid content databases. There is also a donation option for users, and while attribution of the content creator is not required, it is appreciated. Through the merger with Canva, Pexels' database is available in the Canva application.
Pexels is committed to providing a diverse database, for example by including LGBTQ+ stock content, and through a partnership with Nappy, a platform that focuses on POC content.
License
Like Pixabay, Pexels originally offered photos under the CC0 Creative Commons license. Today, Pexels no longer offers media under CC0 but has its own set of rules for the use of its photos and footage. Importantly, its license does not permit the user to sell unaltered copies of a photo or video or to resell the content on other stock platforms.
Staff
The Pexels staff consists of the three founders, who live in Berlin, Germany, and a team of 40 who are based in Germany, other parts of Europe, and North and South America. The company does not have headquarters; all staff work from their respective homes. Bruno and Ingo Joseph were CEOs until November 2018, when Clifford Obrecht, founder of Canva, became CEO of the company. Bruno Joseph was reinstated as CEO in July 2020.
External links
See als |
https://en.wikipedia.org/wiki/Quantum%20Artificial%20Intelligence%20Lab | The Quantum Artificial Intelligence Lab (also called the Quantum AI Lab or QuAIL) is a joint initiative of NASA, Universities Space Research Association, and Google (specifically, Google Research) whose goal is to pioneer research on how quantum computing might help with machine learning and other difficult computer science problems. The lab is hosted at NASA's Ames Research Center.
History
The Quantum AI Lab was announced by Google Research in a blog post on May 16, 2013. At the time of launch, the Lab was using the most advanced commercially available quantum computer, D-Wave Two from D-Wave Systems.
On October 10, 2013, Google released a short film describing the current state of the Quantum AI Lab.
On October 18, 2013, Google announced that it had incorporated quantum physics into Minecraft.
In January 2014, Google reported results comparing the performance of the D-Wave Two in the lab with that of classical computers. The results were ambiguous and provoked heated discussion on the Internet.
On 2 September 2014, it was announced that the Quantum AI Lab, in partnership with UC Santa Barbara, would be launching an initiative to create quantum information processors based on superconducting electronics.
On the 23rd of October 2019, the Quantum AI Lab announced in a paper that it had achieved quantum supremacy.
See also
Artificial intelligence
Glossary of artificial intelligence
Google Brain
Google X |
https://en.wikipedia.org/wiki/Index%20of%20evolutionary%20biology%20articles | This is a list of topics in evolutionary biology.
A
abiogenesis – adaptation – adaptive mutation – adaptive radiation – allele – allele frequency – allochronic speciation – allopatric speciation – altruism – anagenesis – anti-predator adaptation – applications of evolution – aposematism – Archaeopteryx – aquatic adaptation – artificial selection – atavism
B
Henry Walter Bates – biological organisation – Brassica oleracea – breed
C
Cambrian explosion – camouflage – Sean B. Carroll – catagenesis – gene-centered view of evolution – cephalization – Sergei Chetverikov – chronobiology – chronospecies – clade – cladistics – climatic adaptation – coalescent theory – co-evolution – co-operation – coefficient of relationship – common descent – convergent evolution – creation–evolution controversy – cultivar – conspecific song preference
D
Darwin (unit) – Charles Darwin – Darwinism – Darwin's finches – Richard Dawkins – directed mutagenesis – Directed evolution – directional selection – Theodosius Dobzhansky – dog breeding – domestication – domestication of the horse
E
E. coli long-term evolution experiment – ecological genetics – ecological selection – ecological speciation – Endless Forms Most Beautiful – endosymbiosis – error threshold (evolution) – evidence of common descent – evolution – evolutionary arms race – evolutionary capacitance
Evolution: of ageing – of the brain – of cetaceans – of complexity – of dinosaurs – of the eye – of fish – of the horse – of insects – of human intelligence – of mammalian auditory ossicles – of mammals – of monogamy – of sex – of sirenians – of tetrapods – of the wolf
evolutionary developmental biology – evolutionary dynamics – evolutionary game theory – evolutionary history of life – evolutionary history of plants – evolutionary medicine – evolutionary neuroscience – evolutionary psychology – evolutionary radiation – evolutionarily stable strategy – evolutionary taxonomy – evolutionary tree – evolvability – experimental evol |
https://en.wikipedia.org/wiki/List%20of%20sauces | The following is a list of notable culinary and prepared sauces used in cooking and food service.
General
(salsa roja)
– a velouté sauce flavored with tomato
– prepared using mushrooms and lemon
Prepared sauces
By type
Brown sauces
include:
Butter sauces
Beurre noisette
Emulsified sauces
(w/ chilli)
Fish sauces
Green sauces
See
Tomato sauces
Hot sauces
Pepper sauces
Mustard sauces
Chile pepper-tinged sauces
s include:
sauce
sauce
Meat-based sauces
Pink sauces
See Pink sauce
Sauces made of chopped fresh ingredients
Latin American Salsa cruda of various kinds
Sweet sauces
not liquid, but called a sauce nonetheless
White sauces
By region
Africa
Sauces in African cuisine include:
Asia
East Asian sauces
Sauces in East Asian cuisine include:
(Chinese; see umeboshi paste below for Japanese pickled plum sauce)
, or Japanese pickled plum sauce, a thick sauce from a fruit called a plum in English but which is closer to an apricot
Cooked sauces
– a way of cooking in Japan, a branch of sauces in North America
Southeast Asian sauces
Sauces in Southeast Asian cuisine include:
Caucasus
Sauces in Caucasian cuisine include:
Mediterranean
Sauces in Mediterranean cuisine include:
Middle East
Sauces in Middle Eastern cuisine include:
South America
Sauces in South American cuisine include:
By country
Argentina
Sauces in Argentine cuisine include:
Barbados
Sauces in the cuisine of Barbados include:
Belgium
Sauces in Belgian cuisine include:
Andalouse sauce – a mildly spiced sauce made from mayonnaise, tomatoes and peppers
Brasil sauce – mayonnaise with pureed pineapple, tomato and spices
– A "gypsy" sauce of tomatoes, paprika and chopped bell peppers, borrowed from Germany
Bolivia
Sauces in Bolivian cuisine include:
Brazil
Sauces in Brazilian cuisine include:
Canada
|
https://en.wikipedia.org/wiki/IBM%207040 | The IBM 7040 was a historic but short-lived model of transistor computer built in the 1960s.
History
It was announced by IBM in December 1961, but did not ship until April 1963. A later member of the IBM 700/7000 series of scientific computers, it was a scaled-down version of the IBM 7090 and was not fully compatible with it. Some 7090 features, including index registers, character instructions and floating point, were extra-cost options. It also featured a different input/output architecture, based on the IBM 1414 data synchronizer, allowing more modern IBM peripherals to be used. The 7044, a model designed to be compatible with the 7040 but with more performance, was announced at the same time.
Peter Fagg headed the development of the 7040 under executive Bob O. Evans.
A number of IBM 7040 and 7044 computers were shipped, but it was eventually made obsolete by the IBM System/360 family, announced in 1964. The schedule delays caused by IBM's multiple incompatible architectures provided motivation for the unified System/360 family.
The 7040 proved popular for use at universities, due to its comparatively low price. For example, one was installed in May 1965 at Columbia University.
One of the first in Canada was at the University of Waterloo, bought by professor J. Wesley Graham. A team of students was frustrated with the slow performance of the Fortran compiler. In the summer of 1965 they wrote the WATFOR compiler for their 7040, which became popular with many newly formed computer science departments.
IBM also offered the 7040 (or 7044) as an input-output processor attached to a 7090, in a configuration known as the 7090/7040 Direct Coupled System (DCS). Each computer was slightly modified to be able to interrupt the other.
IBM used similar numbers for a model of its eServer pSeries 690 RS/6000 architecture much later. The 7040-681, for example, was withdrawn in 2005.
See also
List of IBM products
IBM mainframe
History of IBM |
https://en.wikipedia.org/wiki/Flutemetamol%20%2818F%29 |
Flutemetamol (18F) (trade name Vizamyl, by GE Healthcare) is a PET scanning radiopharmaceutical containing the radionuclide fluorine-18, used as a diagnostic tool for Alzheimer's disease.
Adverse effects
Adverse effects of flutemetamol include headache, nausea, dizziness, flushing and increased blood pressure.
Mechanism of action
After the substance is given intravenously, it accumulates in beta amyloid plaques in the patient's brain, which thus become visible via positron emission tomography (PET).
Manufacturing and distribution
Flutemetamol (18F) can be produced within five to six hours. It then undergoes a quality check and is ready to be distributed immediately after. The product must be used within a certain time frame for maximum efficacy. Because of the limited time window, flutemetamol is not produced until an order has been placed.
Flutemetamol is typically administered intravenously in 1 to 10 mL doses. Average costs for PET scans without insurance coverage are around $3,000. Currently, Medicare does not cover the use of amyloid imaging agents except in clinical trials. Because of this, there is a limited market for flutemetamol.
History
Flutemetamol was first approved for use in the US by the Food and Drug Administration (FDA) in 2013 for intravenous use.
Clinical trials
Two clinical trials were conducted for flutemetamol (18F). The first compared PET scans of terminally ill patients with flutemetamol to post mortem standard-of-truth assessments of cerebral cortical neuritic plaque density. The second trial assessed intra-reader reproducibility of PET scans using flutemetamol.
Clinical trial 1
The 176 patients imaged in this trial had a median age of 82, with 57 of the patients being female. The initial flutemetamol PET scan resulted in 43 positive and 25 negative results for cerebral cortical amyloid status. 69 of the initial patients died within 13 months of the flutemetamol PET scan. The autopsy for 67 of those |
https://en.wikipedia.org/wiki/Scriptol | Scriptol is an object-oriented programming language that allows users to declare an XML document as a class. The language is universal and allows users to create dynamic web pages, as well as create scripts and binary applications. |
https://en.wikipedia.org/wiki/Computational%20lexicology | Computational lexicology is a branch of computational linguistics, which is concerned with the use of computers in the study of lexicon. It has been more narrowly described by some scholars (Amsler, 1980) as the use of computers in the study of machine-readable dictionaries. It is distinguished from computational lexicography, which more properly would be the use of computers in the construction of dictionaries, though some researchers have used computational lexicography as synonymous.
History
Computational lexicology emerged as a separate discipline within computational linguistics with the appearance of machine-readable dictionaries, starting with the creation of the machine-readable tapes of the Merriam-Webster Seventh Collegiate Dictionary and the Merriam-Webster New Pocket Dictionary in the 1960s by John Olney et al. at System Development Corporation. Today, computational lexicology is best known through the creation and applications of WordNet. As researchers' computational processing capabilities have increased over time, computational lexicology has come to be applied ubiquitously in text analysis. In 1987, Byrd, Calzolari and Chodorow, among others, developed computational tools for text analysis. In particular, a model was designed for coordinating the associations involving the senses of polysemous words.
Study of lexicon
Computational lexicology has contributed to the understanding of the content and limitations of print dictionaries for computational purposes (i.e. it clarified that the previous work of lexicography was not sufficient for the needs of computational linguistics). Through the work of computational lexicologists almost every portion of a print dictionary entry has been studied ranging from:
what constitutes a headword - used to generate spelling correction lists;
what variants and inflections the headword forms - used to empirically understand morphology;
how the headword is delimited into syllables;
how the headword is prono |
https://en.wikipedia.org/wiki/Expression%20Atlas | The Expression Atlas is a database maintained by the European Bioinformatics Institute that provides information on gene expression patterns from RNA-Seq and microarray studies, and on protein expression from proteomics studies. The Expression Atlas allows searches by gene, splice variant, protein attribute, disease, treatment or organism part (cell types/tissues). Individual genes or gene sets can be searched for. All datasets in Expression Atlas have their metadata manually curated and their data analysed through standardised analysis pipelines. There are two components to the Expression Atlas, the Baseline Atlas and the Differential Atlas:
Baseline Atlas
The Baseline Atlas provides information about which gene products are present (and at what abundance) under "normal" conditions. This component of the Expression Atlas consists of RNA-seq experiments from ArrayExpress repositories. It aims to answer questions such as:
Which genes are specifically expressed in kidney?
What is the expression pattern for gene SAA4 in normal tissues?
Differential Atlas
The Differential Atlas allows users to identify genes that are up- or down-regulated in different experimental conditions.
See also
Human Protein Atlas |
https://en.wikipedia.org/wiki/Reentrant%20superconductivity | In physics, reentrant superconductivity is an effect observed in systems that lie close to the boundary between ferromagnetic and superconducting order. By its very nature, (normal) superconductivity (condensation of electrons into the BCS ground state) cannot exist together with ferromagnetism (condensation of electrons into the same spin state, all pointing in the same direction). Reentrance occurs when, as a continuous parameter is changed, superconductivity is first observed, then destroyed by the ferromagnetic order, and later reappears.
An example is the changing of the thickness of the ferromagnetic layer in a bilayer of a superconductor and a ferromagnet. At a certain thickness superconductivity is destroyed by the Andreev reflected electrons in the ferromagnet, but if the thickness increases, this effect disappears again.
Other examples are materials with a Curie temperature below the superconducting transition temperature. On cooling, superconducting order appears first in the electron system. On further cooling, the ferromagnetic order energetically wins over the superconducting order in the electron system. At even lower temperatures superconductivity reenters, and a nonuniform magnetic order appears: there is ferromagnetic order on short length scales, but superconducting order on large length scales.
Examples
Uranium ditelluride (UTe2), a spin-triplet superconductor, discovered to be superconducting in 2018.
See also
Ferromagnetic superconductor
Further reading
Ferromagnetism and reentrant superconductivity 1998
Reentrant Superconductivity of CeRu2 1993
Reentrant superconductivity in Eu(Fe1−xIrx)2As2 2013 |
https://en.wikipedia.org/wiki/Visual%20angle | Visual angle is the angle a viewed object subtends at the eye, usually stated in degrees of arc.
It also is called the object's angular size.
The diagram on the right shows an observer's eye looking at a frontal extent (the vertical arrow) that has a linear size $S$, located at a distance $D$ from point $O$.
For present purposes, point $O$ can represent the eye's nodal points at about the center of the lens, and also represent the center of the eye's entrance pupil, which is only a few millimeters in front of the lens.
The three lines from object endpoint $A$ heading toward the eye indicate the bundle of light rays that pass through the cornea, pupil and lens to form an optical image of endpoint $A$ on the retina at point $a$.
The central line of the bundle represents the chief ray.
The same holds for object point $B$ and its retinal image at $b$.
The visual angle $V$ is the angle between the chief rays of $A$ and $B$.
Measuring and computing
The visual angle $V$ can be measured directly using a theodolite placed at point $O$.
Or, it can be calculated (in radians) using the formula $V = 2 \arctan\!\left(\frac{S}{2D}\right)$.
However, for visual angles smaller than about 10 degrees, this simpler formula provides very close approximations: $V \approx \frac{S}{D}$.
The retinal image and visual angle
As the above sketch shows, a real image of the object is formed on the retina between points $a$ and $b$. (See visual system). For small angles, the size of this retinal image is
$$R \approx n V,$$
where $n$ is the distance from the nodal points to the retina, about 17 mm.
Examples
If one looks at a one-centimeter object at a distance of one meter and a two-centimeter object at a distance of two meters, both subtend the same visual angle of about 0.01 rad or 0.57°. Thus they have the same retinal image size $R \approx nV = 17\ \text{mm} \times 0.01 = 0.17\ \text{mm}$.
That is just a bit larger than the retinal image size for the moon, which is about $R = 0.15\ \text{mm}$, because, with the moon's mean diameter of about $S = 3{,}474\ \text{km}$ and the Earth-to-Moon mean distance averaging $D = 384{,}400\ \text{km}$, the visual angle is $V \approx 0.009\ \text{rad}$.
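As a small illustration of these formulas (not part of the article), the sketch below computes the exact visual angle, its small-angle value, and the corresponding retinal image size for the two examples above; the 17 mm nodal-point-to-retina distance is the approximate value quoted earlier.

```python
# Sketch: exact visual angle V = 2*atan(S/(2D)) and small-angle retinal image R ~= n*V.
import math

NODAL_TO_RETINA_MM = 17.0  # approximate distance n from the nodal points to the retina

def visual_angle(size: float, distance: float) -> float:
    """Exact visual angle in radians for a frontal extent of the given size and distance."""
    return 2.0 * math.atan(size / (2.0 * distance))

def retinal_image_mm(angle_rad: float) -> float:
    """Small-angle retinal image size in millimetres."""
    return NODAL_TO_RETINA_MM * angle_rad

# 1 cm object at 1 m, and the Moon (diameter ~3,474 km at ~384,400 km):
print(visual_angle(0.01, 1.0))                           # ~0.01 rad
print(retinal_image_mm(visual_angle(0.01, 1.0)))         # ~0.17 mm
print(retinal_image_mm(visual_angle(3474.0, 384400.0)))  # ~0.15 mm
```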
Also, for some easy observations, if one holds one's index finger at arm's length, the width of the index |
https://en.wikipedia.org/wiki/PCF%20theory | PCF theory is the name of a mathematical theory, introduced by Saharon Shelah, that deals with the cofinality of the ultraproducts of ordered sets. It gives strong upper bounds on the cardinalities of power sets of singular cardinals, and has many more applications as well. The abbreviation "PCF" stands for "possible cofinalities".
Main definitions
If $A$ is an infinite set of regular cardinals and $D$ is an ultrafilter on $A$, then
we let $\operatorname{cf}\left(\prod A / D\right)$ denote the cofinality of the ordered set of functions $\prod A$,
where the ordering is defined as follows:
$f < g$ if $\{x \in A : f(x) < g(x)\} \in D$.
$\operatorname{pcf}(A)$ is the set of cofinalities that occur if we consider all ultrafilters on $A$, that is,
$$\operatorname{pcf}(A) = \left\{ \operatorname{cf}\left(\textstyle\prod A / D\right) : D \text{ is an ultrafilter on } A \right\}.$$
Main results
Obviously, $\operatorname{pcf}(A)$ consists of regular cardinals. Considering ultrafilters concentrated on elements of $A$, we get that
$A \subseteq \operatorname{pcf}(A)$. Shelah proved that if $|A| < \min(A)$, then $\operatorname{pcf}(A)$ has a largest element, and there are subsets $\{B_\theta : \theta \in \operatorname{pcf}(A)\}$ of $A$ such that for each ultrafilter $D$ on $A$, $\operatorname{cf}\left(\prod A / D\right)$ is the least element $\theta$ of $\operatorname{pcf}(A)$ such that $B_\theta \in D$. Consequently, $\left|\operatorname{pcf}(A)\right| \leq 2^{|A|}$.
Shelah also proved that if $A$ is an interval of regular cardinals (i.e., $A$ is the set of all regular cardinals between two cardinals), then $\operatorname{pcf}(A)$ is also an interval of regular cardinals and $\left|\operatorname{pcf}(A)\right| < |A|^{+4}$.
This implies the famous inequality
$$2^{\aleph_\omega} < \aleph_{\omega_4},$$
assuming that $\aleph_\omega$ is a strong limit.
If $\lambda$ is an infinite cardinal, then $J_{<\lambda}$ is the following ideal on $A$: $B \in J_{<\lambda}$ if $\operatorname{cf}\left(\prod A / D\right) < \lambda$ holds for every ultrafilter $D$ with $B \in D$. Then $J_{<\lambda}$ is the ideal generated by the sets $B_\theta$ with $\theta < \lambda$. There exist scales, i.e., for every $\lambda \in \operatorname{pcf}(A)$ there is a sequence of length $\lambda$ of elements of $\prod B_\lambda$ which is both increasing and cofinal mod $J_{<\lambda}$. This implies that the cofinality of $\prod A$ under pointwise dominance is $\max(\operatorname{pcf}(A))$.
Another consequence is that if λ is singular and no regular cardinal less than λ is Jónsson, then also λ+ is not Jónsson. In particular, there is a Jónsson algebra on ℵω+1, which settles an old conjecture.
Unsolved problems
The most notorious conjecture in pcf theory states that |pcf(A)|=|A| holds for every set A of regular cardinals with |A|<min(A). This would imply that if ℵω is stro |
https://en.wikipedia.org/wiki/Ring%20of%20sets | In mathematics, there are two different notions of a ring of sets, both referring to certain families of sets.
In order theory, a nonempty family of sets $\mathcal{R}$ is called a ring (of sets) if it is closed under union and intersection. That is, the following two statements are true for all sets $A$ and $B$:
$A, B \in \mathcal{R}$ implies $A \cup B \in \mathcal{R}$, and
$A, B \in \mathcal{R}$ implies $A \cap B \in \mathcal{R}$.
In measure theory, a nonempty family of sets $\mathcal{R}$ is called a ring (of sets) if it is closed under union and relative complement (set-theoretic difference). That is, the following two statements are true for all sets $A$ and $B$:
$A, B \in \mathcal{R}$ implies $A \cup B \in \mathcal{R}$, and
$A, B \in \mathcal{R}$ implies $A \setminus B \in \mathcal{R}$.
This implies that a ring in the measure-theoretic sense always contains the empty set. Furthermore, for all sets $A$ and $B$,
$$A \cap B = A \setminus (A \setminus B),$$
which shows that a family of sets closed under relative complement is also closed under intersection, so that a ring in the measure-theoretic sense is also a ring in the order-theoretic sense.
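As a small illustration (not part of the article), the identity above can be checked directly on Python sets; the family below is simply the power set of {1, 2}, written out explicitly, which is a ring in either sense.

```python
# Check closure under union and relative complement for a small family of sets,
# and verify that intersection is recovered as A - (A - B).
from itertools import product

family = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]

for A, B in product(family, repeat=2):
    assert A | B in family            # closed under union
    assert A - B in family            # closed under relative complement
    assert A & B == A - (A - B)       # intersection via two differences
print("family is a ring of sets in the measure-theoretic sense")
```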
Examples
If $X$ is any set, then the power set of $X$ (the family of all subsets of $X$) forms a ring of sets in either sense.
If $X$ is a partially ordered set, then its upper sets (the subsets of $X$ with the additional property that if $x$ belongs to an upper set $U$ and $x \leq y$, then $y$ must also belong to $U$) are closed under both intersections and unions. However, in general it will not be closed under differences of sets.
The open sets and closed sets of any topological space are closed under both unions and intersections.
On the real line $\mathbb{R}$, the family of sets consisting of the empty set and all finite unions of half-open intervals of the form $[a, b)$, with $a \leq b$, is a ring in the measure-theoretic sense.
If $T$ is any transformation defined on a space, then the sets that are mapped into themselves by $T$ are closed under both unions and intersections.
If two rings of sets are both defined on the same elements, then the sets that belong to both rings themselves form a ring of sets.
Related structures
A ring of sets in the order-theoretic sense forms a distributive lattice in which the intersection and union operations correspond to the |
https://en.wikipedia.org/wiki/Agent%20Vinod%20%281977%20film%29 | Agent Vinod is a 1977 Indian Hindi-language action spy film directed by Deepak Bahry. The film stars Mahendra Sandhu as a dashing Indian secret agent and Jagdeep as a comic sidekick. The movie turned out to be a surprise hit. Its title was reused for the 2012 Hindi film of the same name, which, however, is not a remake.
Story
The kidnapping of a prominent scientist, Ajay Saxena (Nazir Hussain) prompts the Chief of Secret Services (K.N. Singh) to assign flamboyant Agent Vinod (Mahendra Sandhu) to this case. While on this assignment, Vinod meets with Ajay's daughter, Anju (Asha Sachdev), who insists on assisting him. The duo are then further assisted by Chandu "James Bond" (Jagdeep) and his gypsy girlfriend (Jayshree T.). The two couples will soon have numerous challenges thrust on them, and will realize that their task is not only very difficult, but also life-threatening.
Cast
Mahendra Sandhu as Agent Vinod
Asha Sachdev as Anju Saxena
Rehana Sultan as Zarina
Jagdeep as Chandu alias James Bond
Iftekhar as Madanlal
Pinchoo Kapoor as Chacha of Agent Vinod
Nazir Hussain as Ashok Saxena (Anju's dad) (as Nazir Husain)
Ravindra Kapoor
K.N. Singh as Chief of Agent Vinod
Helen as Dancer - Lovelina
Jayshree T. as Gypsy Sardar's daughter
Leena Das as Leena (Chacha's assistant)
Viju Khote as Madanlal's henchman
Sharat Saxena as Madanlal's henchman
Birbal as Room Service Boy
Bhagwan Dada as Gypsy Sardar
V. Gopal as Hotel Manager
Sunder as Priest
Music |
https://en.wikipedia.org/wiki/Distributed%20version%20control | In software development, distributed version control (also known as distributed revision control) is a form of version control in which the complete codebase, including its full history, is mirrored on every developer's computer. Compared to centralized version control, this enables automatic management of branching and merging, speeds up most operations (except pushing and pulling), improves the ability to work offline, and does not rely on a single location for backups. Git, the world's most popular version control system, is a distributed version control system.
In 2010, software development author Joel Spolsky described distributed version control systems as "possibly the biggest advance in software development technology in the [past] ten years".
Distributed vs. centralized
Distributed version control systems (DVCS) use a peer-to-peer approach to version control, as opposed to the client–server approach of centralized systems. Distributed revision control synchronizes repositories by transferring patches from peer to peer. There is no single central version of the codebase; instead, each user has a working copy and the full change history.
Advantages of DVCS (compared with centralized systems) include:
Allows users to work productively when not connected to a network.
Common operations (such as commits, viewing history, and reverting changes) are faster for DVCS, because there is no need to communicate with a central server. With DVCS, communication is necessary only when sharing changes among other peers.
Allows private work, so users can use their changes even for early drafts they do not want to publish.
Working copies effectively function as remote backups, which avoids relying on one physical machine as a single point of failure.
Allows various development models to be used, such as using development branches or a Commander/Lieutenant model.
Permits centralized control of the "release version" of the project
On FOSS software projects it is much easi |
https://en.wikipedia.org/wiki/GLACIER%20%28refrigerator%29 | GLACIER (General Laboratory Active Cryogenic ISS Experiment Refrigerator) was designed and developed by University of Alabama at Birmingham (UAB) Center for Biophysical Sciences and Engineering (CBSE) for NASA Cold Stowage. Glacier was originally designed for use on board the Space Shuttle, but is now used for storing scientific samples on ISS in the EXpedite the PRocessing of Experiments to Space Station (EXPRESS) rack, and transporting samples to/from orbit via the SpaceX Dragon or Cygnus spacecraft. GLACIER is a double middeck locker equivalent payload designed to provide thermal control between +4 °C and -160 °C.
Development
In 2002 NASA began development of several spaceflight cold stowage systems to work in conjunction with the large (ISS Rack sized) ESA MELFI and Cryosystem freezers. One of these was for a system capable of rapidly freezing bagged irregularly shaped science samples to below -160°C in as fast as 1°C/min for a 100ml sample, being able to maintain a complement of frozen samples without electrical power for several hours, and to be of a compact double middeck locker format, to enable transfer between the ISS and the Space Shuttle Orbiter cabin for transport to/from orbit. The combination of these goals presented several significant technical challenges, and prompted NASA to implement a two-phase development approach. In the first phase, two competing designs were matured through Preliminary Design Review (PDR) and completed functional demonstration of key freezer components. In the second phase one of the designs was then developed through to the completed flight freezers. NASA awarded a contract to UAB CBSE to build the GLACIER freezer system in 2005. The first GLACIER freezers flew on STS-126 in 2008.
Description
GLACIER can use air or water to reject heat depending on the temperatures required for the scientific samples.
GLACIER can maintain temperatures from +4 to -95 °C using only air cooling, and can cool to -160 °C when connected to the |
https://en.wikipedia.org/wiki/Smoothing%20spline | Smoothing splines are function estimates obtained from a set of noisy observations y_i of the target f(x_i), chosen so as to balance a measure of goodness of fit to the observations with a derivative-based measure of the smoothness of the estimate. They provide a means for smoothing noisy data. The most familiar example is the cubic smoothing spline, but there are many other possibilities, including the case where x is a vector quantity.
Cubic spline definition
Let {(x_i, Y_i) : i = 1, …, n} be a set of observations, modeled by the relation Y_i = f(x_i) + ε_i, where the ε_i are independent, zero-mean random variables (usually assumed to have constant variance). The cubic smoothing spline estimate of the function f is defined to be the minimizer (over the class of twice differentiable functions) of
Σ_{i=1}^{n} (Y_i − f(x_i))² + λ ∫ f''(x)² dx.
Remarks:
λ ≥ 0 is a smoothing parameter, controlling the trade-off between fidelity to the data and roughness of the function estimate. This is often estimated by generalized cross-validation, or by restricted marginal likelihood (REML), which exploits the link between spline smoothing and Bayesian estimation (the smoothing penalty can be viewed as being induced by a prior on f).
The integral is often evaluated over the whole real line, although it is also possible to restrict the range to that of the x_i.
As λ → 0 (no smoothing), the smoothing spline converges to the interpolating spline.
As λ → ∞ (infinite smoothing), the roughness penalty becomes paramount and the estimate converges to a linear least squares estimate.
The roughness penalty based on the second derivative is the most common in modern statistics literature, although the method can easily be adapted to penalties based on other derivatives.
In early literature, with equally spaced, ordered x_i, second- or third-order differences were used in the penalty, rather than derivatives; a small numerical sketch of this discrete variant follows after these remarks.
The penalized sum of squares smoothing objective can be replaced by a penalized likelihood objective in which the sum of squares terms is replaced by another log-likelihood based measure of fidelity to the data. The sum of s |
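As a rough illustration of the discrete-difference variant mentioned above (a sketch only, not the full cubic-spline machinery: it assumes equally spaced observations and a squared second-difference penalty), the penalized least squares fit can be computed directly:

```python
import numpy as np

def difference_smoother(y, lam):
    """Penalized least squares with a squared second-difference penalty.

    Minimizes sum_i (y_i - f_i)^2 + lam * sum_i (f_{i-1} - 2 f_i + f_{i+1})^2
    for equally spaced observations, a discrete analogue of spline smoothing.
    """
    n = len(y)
    # Second-difference operator D of shape (n-2) x n.
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# Noisy samples of a smooth target; a larger lam gives a smoother estimate.
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + 0.2 * np.random.default_rng(0).normal(size=x.size)
f_hat = difference_smoother(y, lam=10.0)
```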
https://en.wikipedia.org/wiki/Levashovism | Levashovism is a doctrine and healing system of Rodnovery (Slavic neopaganism) that emerged in Russia, formulated by the physics theorist, occultist and psychic healer Nikolay Viktorovich Levashov (1961–2012), one of the most prominent leaders of Slavic Neopaganism after the collapse of the Soviet Union. The movement was incorporated in 2007 as the Russian Public Movement of Renaissance–Golden Age (Russian: Русское Общественное Движение "Возрождение. Золотой Век"; acronym: РОД ВЗВ, ROD VZV). Levashovite doctrine is based on a mathematical cosmology, a melting of science and spirituality which has been compared to a "Pythagorean" worldview, and is pronouncedly eschatological. Levashovism is influenced by Ynglism, especially sharing the latter's historiosophical narrative about the Slavic Aryan past of the Russians, and like Ynglism it has been formally rejected by mainstream Russian Rodnover organisations. The movement is present in many regions of Russia, as well as in Ukraine, Belarus, Romania, Moldova and Finland.
Overview
Nikolay V. Levashov was educated in advanced physics and quantum mechanics. He began to practise psychic healing in Russia in the 1980s, and in 1990–1991 he held seminars on the subject. In 1991 he moved to California, in the United States, where he lived until 2006 and where he wrote his main books. In 2006 he returned to Russia where in 2007 he founded the Russian Public Movement of Renaissance–Golden Age, formally incorporating the movement of his followers. A few months before dying, Levashov ran for the 2012 Russian presidential election.
Levashov claimed to be a bearer of genuine "Vedic" sacred knowledge of the "Slavic Aryans", and called on his followers to live in rational harmony with nature following the path of evolution represented by ancient Vedic culture. Levashovism is based on the Book of Veles and on the Slavo-Aryan Vedas first popularised by the Ynglist Church in the 1990s; Levashov reworked the teachings of these books into |
https://en.wikipedia.org/wiki/Sequential%20pattern%20mining | Sequential pattern mining is a topic of data mining concerned with finding statistically relevant patterns between data examples where the values are delivered in a sequence. It is usually presumed that the values are discrete, and thus time series mining is closely related, but usually considered a different activity. Sequential pattern mining is a special case of structured data mining.
There are several key traditional computational problems addressed within this field. These include building efficient databases and indexes for sequence information, extracting the frequently occurring patterns, comparing sequences for similarity, and recovering missing sequence members. In general, sequence mining problems can be classified as string mining which is typically based on string processing algorithms and itemset mining which is typically based on association rule learning. Local process models extend sequential pattern mining to more complex patterns that can include (exclusive) choices, loops, and concurrency constructs in addition to the sequential ordering construct.
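As a minimal, simplified illustration of frequent-pattern extraction (contiguous patterns over toy sequences only; real sequential pattern miners such as GSP or PrefixSpan also handle gaps and itemsets, which this sketch does not), one can count how often short subsequences occur across a collection of sequences:

```python
from collections import Counter

def frequent_contiguous_patterns(sequences, length=2, min_support=2):
    """Count contiguous patterns of a given length and keep the frequent ones.

    Support is the number of sequences that contain the pattern at least once.
    """
    support = Counter()
    for seq in sequences:
        seen = {tuple(seq[i:i + length]) for i in range(len(seq) - length + 1)}
        support.update(seen)
    return {p: c for p, c in support.items() if c >= min_support}

sequences = [list("ABCAB"), list("ABDAB"), list("CABD")]
print(frequent_contiguous_patterns(sequences, length=2, min_support=2))
# {('A', 'B'): 3, ('C', 'A'): 2, ('B', 'D'): 2}
```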
String mining
String mining typically deals with a limited alphabet for items that appear in a sequence, but the sequence itself may be typically very long. Examples of an alphabet can be those in the ASCII character set used in natural language text, nucleotide bases 'A', 'G', 'C' and 'T' in DNA sequences, or amino acids for protein sequences. In biology applications analysis of the arrangement of the alphabet in strings can be used to examine gene and protein sequences to determine their properties. Knowing the sequence of letters of a DNA or a protein is not an ultimate goal in itself. Rather, the major task is to understand the sequence, in terms of its structure and biological function. This is typically achieved first by identifying individual regions or structural units within each sequence and then assigning a function to each structural unit. In many cases this requires comparing a giv |
https://en.wikipedia.org/wiki/Pressure%20reference%20system | Pressure reference system (PRS) is an enhancement of the inertial reference system and attitude and heading reference system designed to provide position angle measurements which are stable in time and do not suffer from long-term drift caused by sensor imperfections. The measurement system uses the behavior of the International Standard Atmosphere, in which atmospheric pressure decreases with increasing altitude, and two pairs of measurement units. Each pair measures pressure at two different positions that are mechanically connected with a known distance between the units, e.g. the units are mounted at the tips of the wings. In horizontal flight, there is no pressure difference measured by the measurement system, which means the position angle is zero. When the airplane banks (to turn), the tips of the wings mutually change their positions, one going up and the other going down, and the pressure sensors in each unit measure different values which are translated into a position angle.
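To illustrate the principle (a simplified sketch with assumed values; it treats air density as constant and ignores dynamic-pressure effects that a real PRS must account for), the bank angle can be estimated from the static-pressure difference measured at the two wingtips:

```python
import math

RHO = 1.225   # air density near sea level, kg/m^3 (ISA)
G = 9.80665   # gravitational acceleration, m/s^2

def bank_angle_deg(delta_p, wingspan):
    """Estimate bank angle from the wingtip static-pressure difference.

    delta_p  : pressure difference between the two wingtip sensors, Pa
    wingspan : distance between the sensor units, m
    The pressure difference is converted to a height difference via the
    hydrostatic relation delta_h = delta_p / (rho * g).
    """
    delta_h = delta_p / (RHO * G)
    return math.degrees(math.asin(delta_h / wingspan))

# In level flight delta_p = 0, so the angle is zero; an imbalance indicates a bank.
print(bank_angle_deg(0.0, wingspan=10.0))    # 0.0 degrees
print(bank_angle_deg(12.0, wingspan=10.0))   # roughly 5.7 degrees
```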
Overview
The strapdown inertial navigation system uses double integration of the accelerations measured by an inertial measurement unit (IMU). This process sums the sensor outputs together with all the sensor and measurement errors. The precision and long-term stability of the INS depends on the quality of the sensors used within the IMU. Sensor quality can be evaluated by the Allan variance technique. A precise IMU uses laser gyroscopes and precise accelerometers, which are expensive. The INS is a self-contained system with no other inputs. The trend in modern navigation is to integrate signals from the IMU together with data provided by the Global Positioning System (GPS). This approach gives long-term stability to the INS output by suppressing the influence of sensor errors on the calculation of the airplane's position. The measurement system becomes an attitude and heading reference system, which can relax the requirements on sensor precision because the long-term stability is assured by GPS. |
https://en.wikipedia.org/wiki/Dimethyl%20dicarbonate | Dimethyl dicarbonate (DMDC) is a colorless liquid with a pungent odor at high concentrations at room temperature. It is primarily used as a beverage preservative, processing aid, or sterilant (INS No. 242), being highly active against typical beverage-spoiling microorganisms like yeast, bacteria, or mould.
Usage
Dimethyl dicarbonate is used to stabilize beverages by preventing microbial spoilage. It can be used in various non-alcoholic as well as alcoholic drinks like wine, cider, beer-mix beverages or hard seltzers. Beverage spoiling microbes are killed by methoxycarbonylation of proteins.
It acts by inhibiting enzymes involved in the microbial metabolism, e.g. acetate kinase and L-glutamic acid decarboxylase. It has also been proposed that DMDC inhibits the enzymes alcohol dehydrogenase and glyceraldehyde 3-phosphate dehydrogenase by causing the methoxycarbonylation of their histidine components.
In wine, it is often used to replace potassium sorbate, as it inactivates wine spoilage yeasts such as Brettanomyces. Once it has been added to beverages, the efficacy of the chemical is provided by the following reactions:
DMDC + water → methanol + carbon dioxide
DMDC + ethanol → ethyl methyl carbonate
DMDC + ammonia → methyl carbamate
DMDC + amino acid → carboxymethyl derivative
The application of DMDC is particularly useful when wine needs to be sterilized but cannot be sterile filtered, pasteurized, or sulfured. DMDC is also used to stabilize non-alcoholic beverages such as carbonated or non-carbonated juice beverages, isotonic sports beverages, iced teas and flavored waters. DMDC is produced by Lanxess under the trade name Velcorin®
DMDC is added before the filling of the beverage. It then breaks down into small amounts of methanol and carbon dioxide, which are both natural constituents of fruit and vegetable juices.
The EU Scientific Committee on Food, the FDA in the United States and the JECFA of the WHO have confirmed the safe use in beverages. The FDA approved i |
https://en.wikipedia.org/wiki/Mechanophilia | Mechanophilia (or mechaphilia) is a paraphilia involving a sexual attraction to machines such as bicycles, cars, helicopters, and airplanes.
Mechanophilia is treated as a crime in some nations with perpetrators being placed on a sex-offenders' register after prosecution. Motorcycles are often portrayed as sexualized fetish objects to those who desire them.
Incidents
In 2015, a man in Thailand was caught on CCTV masturbating on the front end of a Porsche.
In 2008, an American named Edward Smith admitted to 'having sex' with 1000 cars, and the helicopter used in the television show Airwolf.
Art, culture and design
Mechanophilia has been used to describe important works of the early modernists, including in the Eccentric Manifesto (1922), written by Leonid Trauberg, Sergei Yutkevich, Grigori Kozintsev and other members of the Factory of the Eccentric Actor, a modernist avant-garde movement that spanned Russian futurism and constructivism.
The term has entered into the realms of science fiction and popular fiction.
Scientifically, in Biophilia: The Human Bond with Other Species by Edward O. Wilson, Wilson is quoted describing mechanophilia, the love of machines, as "a special case of biophilia", whereas psychologists such as Erich Fromm would see it as a form of necrophilia.
Designers such as Francis Picabia and Filippo Tommaso Marinetti have been said to have exploited the sexual attraction of automobiles.
Culturally, critics have described it as "all-pervading" within contemporary Western society and that it seems to overwhelm our society and all too often our better judgment. Although not all such uses are sexual in intent, the terms are also used for specifically erotogenic fixation on machinery and taken to its extreme in hardcore pornography as Fucking Machines. This mainly involves women being sexually penetrated by machines for male consumption, which are seen as being the limits of current sexual biopolitics.
Arse Elektronika, an annual confe |
https://en.wikipedia.org/wiki/Rheophile | A rheophile is an animal that prefers to live in fast-moving water.
Examples of rheophilic animals
Insects
Many aquatic insects living in riffles require current to survive.
Epeorus sylvicola, a rheophilic mayfly species (Ephemeroptera)
Some African (Elattoneura) and Asian threadtail (Prodasineura) species
Birds
Dippers (Cinclus)
Grey wagtail (Motacilla cinerea) and mountain wagtail (Motacilla clara)
A few swifts often nest behind waterfalls, including American black swift (Cypseloides niger), giant swiftlet (Hydrochous gigas), great dusky swift (Cypseloides senex) and white-collared swift (Streptoprocne zonaris)
Some waterfowl, including African black duck (Anas sparsa), blue duck (Hymenolaimus malacorhynchos), Brazilian merganser (Mergus octosetaceus), bronze-winged duck (Speculanas specularis), harlequin duck (Histrionicus histrionicus), Salvadori's teal (Salvadorina waigiuensis) and torrent duck (Merganetta armata)
Fish
A very large number of rheophilic fish species are known and include members of at least 419 genera in 60 families. Examples include:
Many species in the family Balitoridae, also known as the hill stream loaches.
Many species in the family Loricariidae from South and Central America
Many Chiloglanis species, which are freshwater catfish from Africa
The family Gyrinocheilidae.
Rheophilic cichlid genera/species:
The Lamena group in the genus Paretroplus from Madagascar.
Oxylapia polli from Madagascar.
Retroculus species from the Amazon Basin and rivers in the Guianas in South America.
Steatocranus species from the Congo River Basin in Africa.
Teleocichla species from the Amazon Basin in South America.
Teleogramma species from the Congo River Basin in Africa.
Mylesinus, Myleus, Ossubtus, Tometes and Utiaritichthys, which are serrasalmids from tropical South America
The Danube streber (Zingel streber), family Percidae.
Molluscs
Ancylus fluviatilis
Aylacostoma species
Lymnaea ovata
Amphibians
Neurergus strauchii, a newt from Turkey
Pach |
https://en.wikipedia.org/wiki/Distributive%20polytope | In the geometry of convex polytopes, a distributive polytope is a convex polytope for which coordinatewise minima and maxima of pairs of points remain within the polytope. For example, this property is true of the unit cube, so the unit cube is a distributive polytope. It is called a distributive polytope because the coordinatewise minimum and coordinatewise maximum operations form the meet and join operations of a continuous distributive lattice on the points of the polytope.
Every face of a distributive polytope is itself a distributive polytope. The distributive polytopes all of whose vertex coordinates are 0 or 1 are exactly the order polytopes.
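As a small numerical sketch (an illustration using an assumed toy poset, not part of the original text), one can check that coordinatewise minima and maxima of points in an order polytope stay inside it:

```python
import numpy as np

# Order polytope of the chain 1 <= 2 <= 3: points x in [0,1]^3 with x1 <= x2 <= x3.
def in_order_polytope(x, tol=1e-12):
    return np.all(x >= -tol) and np.all(x <= 1 + tol) and np.all(np.diff(x) >= -tol)

rng = np.random.default_rng(1)
points = np.sort(rng.random((100, 3)), axis=1)   # sorted coordinates satisfy the chain
for _ in range(1000):
    i, j = rng.integers(0, len(points), size=2)
    lo = np.minimum(points[i], points[j])         # coordinatewise minimum (meet)
    hi = np.maximum(points[i], points[j])         # coordinatewise maximum (join)
    assert in_order_polytope(lo) and in_order_polytope(hi)
print("coordinatewise min and max stayed inside the order polytope")
```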
See also
Stable matching polytope, a convex polytope that defines a distributive lattice on its points in a different way |
https://en.wikipedia.org/wiki/K-trivial%20set | In mathematics, a set of natural numbers is called a K-trivial set if its initial segments viewed as binary strings are easy to describe: the prefix-free Kolmogorov complexity is as low as possible, close to that of a computable set. Solovay proved in 1975 that a set can be K-trivial without being computable.
The Schnorr–Levin theorem says that random sets have a high initial segment complexity. Thus the K-trivials are far from random. This is why these sets are studied in the field of algorithmic randomness, which is a subfield of Computability theory and related to algorithmic information theory in computer science.
At the same time, K-trivial sets are close to computable. For instance, they are all superlow, i.e. sets whose Turing jump is computable from the Halting problem, and form a Turing ideal, i.e. class of sets closed under Turing join and closed downward under Turing reduction.
Definition
Let K be the prefix-free Kolmogorov complexity, i.e. given a string x, K(x) is the least length of an input string that produces x under a prefix-free universal machine. Such a machine, intuitively, represents a universal programming language with the property that no valid program can be obtained as a proper extension of another valid program. For more background on K, see e.g. Chaitin's constant.
We say a set A of the natural numbers is K-trivial via a constant b ∈ ℕ if
K(A ↾ n) ≤ K(n) + b for every n, where A ↾ n denotes the string of the first n bits of A.
A set is K-trivial if it is K-trivial via some constant.
Brief history and development
In the early days of the development of K-triviality, attention was paid to separation of K-trivial sets and computable sets.
Chaitin in his 1976 paper mainly studied sets A such that there exists b ∈ ℕ with
C(A ↾ n) ≤ C(n) + b for all n, where C denotes the plain Kolmogorov complexity. These sets are known as C-trivial sets. Chaitin showed they coincide with the computable sets. He also showed that the K-trivials are computable in the halting problem. This class of sets is commonly known as the Δ⁰₂ sets in the arithmetical hierarchy.
|
https://en.wikipedia.org/wiki/1-Wire | 1-Wire is a wired half duplex serial bus designed by Dallas Semiconductor that provides low-speed (16.3 kbit/s) data communication and supply voltage over a single conductor.
1-Wire is similar in concept to I²C, but with lower data rates and longer range. It is typically used to communicate with small inexpensive devices such as digital thermometers and weather instruments. A network of 1-Wire devices with an associated master device is called a MicroLAN. The protocol is also used in small electronic keys known as a Dallas key or iButton.
One distinctive feature of the bus is the possibility of using only two conductors — data and ground. To accomplish this, 1-Wire devices integrate a small capacitor (~800pF) to store charge, which powers the device during periods when the data line is active.
Usage example
1-Wire devices are available in different packages: integrated circuits, a TO-92-style package (as typically used for transistors), and a portable form called an iButton or Dallas key, which is a small stainless-steel package that resembles a watch battery. Manufacturers also produce devices more complex than a single component that use the 1-Wire bus to communicate.
1-Wire devices can fit in different places in a system. It might be one of many components on a circuit board within a product. It also might be a single component within a device such as a temperature probe. It could be attached to a device being monitored. Some laboratory systems connect to 1-Wire devices using cables with modular connectors or CAT-5 cable. In such systems, RJ11 (6P2C or 6P4C modular plugs, commonly used for telephones) are popular.
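For example, on a Linux host with the w1-gpio and w1-therm kernel drivers loaded, a 1-Wire temperature sensor such as the DS18B20 appears under /sys/bus/w1/devices/ and can be read with a few lines of code (a sketch assuming that sysfs interface is present; device IDs and paths vary by system):

```python
from pathlib import Path

def read_ds18b20_celsius():
    """Read the first DS18B20 (family code 28) found on the Linux w1 bus."""
    device = next(Path("/sys/bus/w1/devices").glob("28-*"))
    lines = (device / "w1_slave").read_text().splitlines()
    if not lines[0].strip().endswith("YES"):        # CRC check reported by the driver
        raise IOError("1-Wire CRC check failed")
    millidegrees = int(lines[1].split("t=")[1])     # second line ends with e.g. "t=21562"
    return millidegrees / 1000.0

if __name__ == "__main__":
    print(f"{read_ds18b20_celsius():.3f} °C")
```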
Systems of sensors and actuators can be built by wiring together many 1-Wire components. Each 1-Wire component contains all of the logic needed to operate on the 1-Wire bus. Examples include temperature loggers, timers, voltage and current sensors, battery monitors, and memory. These can be connected to a PC using a bus converter. USB, RS-232 serial, and paralle |
https://en.wikipedia.org/wiki/Kutta%E2%80%93Joukowski%20theorem | The Kutta–Joukowski theorem is a fundamental theorem in aerodynamics used for the calculation of lift of an airfoil (and any two-dimensional body including circular cylinders) translating in a uniform fluid at a constant speed large enough so that the flow seen in the body-fixed frame is steady and unseparated. The theorem relates the lift generated by an airfoil to the speed of the airfoil through the fluid, the density of the fluid and the circulation around the airfoil. The circulation is defined as the line integral around a closed loop enclosing the airfoil of the component of the velocity of the fluid tangent to the loop. It is named after Martin Kutta and Nikolai Zhukovsky (or Joukowski) who first developed its key ideas in the early 20th century. Kutta–Joukowski theorem is an inviscid theory, but it is a good approximation for real viscous flow in typical aerodynamic applications.
Kutta–Joukowski theorem relates lift to circulation much like the Magnus effect relates side force (called Magnus force) to rotation. However, the circulation here is not induced by rotation of the airfoil. The fluid flow in the presence of the airfoil can be considered to be the superposition of a translational flow and a rotating flow. This rotating flow is induced by the effects of camber, angle of attack and the sharp trailing edge of the airfoil. It should not be confused with a vortex like a tornado encircling the airfoil. At a large distance from the airfoil, the rotating flow may be regarded as induced by a line vortex (with the rotating line perpendicular to the two-dimensional plane). In the derivation of the Kutta–Joukowski theorem the airfoil is usually mapped onto a circular cylinder. In many textbooks, the theorem is proved for a circular cylinder and the Joukowski airfoil, but it holds true for general airfoils.
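In symbols, the theorem gives the lift per unit span as L′ = ρ V Γ, where ρ is the fluid density, V the free-stream speed, and Γ the circulation. The following minimal sketch evaluates it for assumed, purely illustrative values:

```python
def lift_per_unit_span(rho, v_inf, circulation):
    """Kutta-Joukowski theorem: L' = rho * V_inf * Gamma (newtons per metre of span)."""
    return rho * v_inf * circulation

# Illustrative values only: sea-level air density, 50 m/s free stream, and an
# assumed circulation of 30 m^2/s around the airfoil section.
rho, v_inf, gamma = 1.225, 50.0, 30.0
print(f"L' = {lift_per_unit_span(rho, v_inf, gamma):.1f} N/m")   # about 1837.5 N/m
```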
Lift force formula
The theorem applies to two-dimensional flow around a fixed airfoil (or any shape of infinite span). The lift per unit span of the |
https://en.wikipedia.org/wiki/Stream%20metabolism | Stream metabolism, often referred to as aquatic ecosystem metabolism in both freshwater (lakes, rivers, wetlands, streams, reservoirs) and marine ecosystems, includes gross primary productivity (GPP) and ecosystem respiration (ER) and can be expressed as net ecosystem production (NEP = GPP - ER). Analogous to metabolism within an individual organism, stream metabolism represents how energy is created (primary production) and used (respiration) within an aquatic ecosystem. In heterotrophic ecosystems, GPP:ER is <1 (ecosystem using more energy than it is creating); in autotrophic ecosystems it is >1 (ecosystem creating more energy than it is using). Most streams are heterotrophic. A heterotrophic ecosystem often means that allochthonous (coming from outside the ecosystem) inputs of organic matter, such as leaves or debris fuel ecosystem respiration rates, resulting in respiration greater than production within the ecosystem. However, autochthonous (coming from within the ecosystem) pathways also remain important to metabolism in heterotrophic ecosystems. In an autotrophic ecosystem, conversely, primary production (by algae, macrophytes) exceeds respiration, meaning that ecosystem is producing more organic carbon than it is respiring.
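As a trivial worked example (with made-up rates purely for illustration), the trophic state follows directly from the two measured quantities:

```python
def net_ecosystem_production(gpp, er):
    """NEP = GPP - ER, with both rates in the same units (e.g. g O2 per m^2 per day)."""
    return gpp - er

def trophic_state(gpp, er):
    """Classify a reach as autotrophic (GPP:ER > 1) or heterotrophic (GPP:ER < 1)."""
    return "autotrophic" if gpp / er > 1 else "heterotrophic"

gpp, er = 3.2, 5.0                        # hypothetical daily rates for a shaded stream
print(net_ecosystem_production(gpp, er))  # -1.8: respiration exceeds production
print(trophic_state(gpp, er))             # heterotrophic, as most streams are
```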
Stream metabolism can be influenced by a variety of factors, including physical characteristics of the stream (slope, width, depth, and speed/volume of flow), biotic characteristics of the stream (abundance and diversity of organisms ranging from bacteria to fish), light and nutrient availability to fuel primary production, organic matter to fuel respiration, water chemistry and temperature, and natural or human-caused disturbance, such as dams, removal of riparian vegetation, nutrient pollution, wildfire or flooding.
Measuring stream metabolic state is important to understand how disturbance may change the available primary productivity, and whether and how that increase or decrease in NEP influences foodweb dynamics, allochthonous/a |
https://en.wikipedia.org/wiki/Synthetic%20radioisotope | A synthetic radioisotope is a radionuclide that is not found in nature: no natural process or mechanism exists which produces it, or it is so unstable that it decays away in a very short period of time. Examples include technetium-95 and promethium-146. Many of these are found in, and harvested from, spent nuclear fuel assemblies. Some must be manufactured in particle accelerators.
Production
Some synthetic radioisotopes are extracted from spent nuclear reactor fuel rods, which contain various fission products. For example, it is estimated that up to 1994, about 49,000 terabecquerels (78 metric ton) of technetium was produced in nuclear reactors, which is by far the dominant source of terrestrial technetium.
Some synthetic isotopes are produced in significant quantities by fission but are not yet being reclaimed. Other isotopes are manufactured by neutron irradiation of parent isotopes in a nuclear reactor (for example, Tc-97 can be made by neutron irradiation of Ru-96) or by bombarding parent isotopes with high energy particles from a particle accelerator.
Many isotopes are produced in cyclotrons, for example fluorine-18 and oxygen-15 which are widely used for positron emission tomography.
Uses
Most synthetic radioisotopes have a short half-life. Though a health hazard, radioactive materials have many medical and industrial uses.
Nuclear medicine
The field of nuclear medicine covers use of radioisotopes for diagnosis or treatment.
Diagnosis
Radioactive tracer compounds, radiopharmaceuticals, are used to observe the function of various organs and body systems. These compounds use a chemical tracer which is attracted to or concentrated by the activity which is being studied. That chemical tracer incorporates a short lived radioactive isotope, usually one which emits a gamma ray which is energetic enough to travel through the body and be captured outside by a gamma camera to map the concentrations. Gamma cameras and other similar detectors are highly efficient |
https://en.wikipedia.org/wiki/Solid-state%20electronics | Solid-state electronics are semiconductor electronics: electronic equipment that use semiconductor devices such as transistors, diodes and integrated circuits (ICs). The term is also used as an adjective for devices in which semiconductor electronics that have no moving parts replace devices with moving parts, such as the solid-state relay in which transistor switches are used in place of a moving-arm electromechanical relay, or the solid-state drive (SSD) a type of semiconductor memory used in computers to replace hard disk drives, which store data on a rotating disk.
History
The term "solid-state" became popular at the beginning of the semiconductor era in the 1960s to distinguish this new technology. A semiconductor device works by controlling an electric current consisting of electrons or holes moving within a solid crystalline piece of semiconducting material such as silicon, while the thermionic vacuum tubes it replaced worked by controlling a current of electrons or ions in a vacuum within a sealed tube.
Although the first solid-state electronic device was the cat's whisker detector, a crude semiconductor diode invented around 1904, solid-state electronics started with the invention of the transistor in 1947. Before that, all electronic equipment used vacuum tubes, because vacuum tubes were the only electronic components that could amplify—an essential capability in all electronics. The transistor, which was invented by John Bardeen and Walter Houser Brattain while working under William Shockley at Bell Laboratories in 1947, could also amplify, and replaced vacuum tubes. The first transistor Hi-Fi system was developed by engineers at GE and demonstrated at the University of Philadelphia in 1955. In terms of commercial production, The Fisher TR-1 was the first "All Transistor" preamplifier, which became available mid-1956. In 1961, a company named Transis-tronics released a solid-state amplifier, the TEC S-15.
The replacement of bulky, fragile, energy-h |
https://en.wikipedia.org/wiki/Moderne%20Algebra | Moderne Algebra is a two-volume German textbook on graduate abstract algebra by Bartel Leendert van der Waerden, originally based on lectures given by Emil Artin in 1926 and by Emmy Noether from 1924 to 1928. The English translation of 1949–1950 had the title Modern algebra, though a later, extensively revised edition in 1970 had the title Algebra.
The book was one of the first textbooks to use an abstract axiomatic approach to groups, rings, and fields, and was by far the most successful, becoming the standard reference for graduate algebra for several decades. It "had a tremendous impact, and is widely considered to be the major text on algebra in the twentieth century."
In 1975 van der Waerden described the sources he drew upon to write the book.
In 1997 Saunders Mac Lane recollected the book's influence:
Upon its publication it was soon clear that this was the way that algebra should be presented.
Its simple but austere style set the pattern for mathematical texts in other subjects, from Banach algebras to topological group theory.
[Van der Waerden's] two volumes on modern algebra ... dramatically changed the way algebra is now taught by providing a decisive example of a clear and perspicacious presentation. It is, in my view, the most influential text of algebra of the twentieth century.
Publication history
Moderne Algebra has a rather confusing publication history, because it went through many different editions, several of which were extensively rewritten with chapters and major topics added, deleted, or rearranged. In addition the new editions of first and second volumes were issued almost independently and at different times, and the numbering of the English editions does not correspond to the numbering of the German editions. In 1955 the title was changed from "Moderne Algebra" to "Algebra" following a suggestion of Brandt, with the result that the two volumes of the third German edition do not even have the same title.
For volume 1, the first German edition was published in 1930, the sec |
https://en.wikipedia.org/wiki/%C5%81ojasiewicz%20inequality | In real algebraic geometry, the Łojasiewicz inequality, named after Stanisław Łojasiewicz, gives an upper bound for the distance of a point to the nearest zero of a given real analytic function. Specifically, let ƒ : U → R be a real analytic function on an open set U in Rn, and let Z be the zero locus of ƒ. Assume that Z is not empty. Then for any compact set K in U, there exist positive constants α and C such that, for all x in K,
dist(x, Z)^α ≤ C |ƒ(x)|.
Here α can be large.
The following form of this inequality is often seen in more analytic contexts: with the same assumptions on ƒ, for every p ∈ U there is a possibly smaller open neighborhood W of p and constants θ ∈ (0,1) and c > 0 such that
|ƒ(x) − ƒ(p)|^θ ≤ c ‖∇ƒ(x)‖ for all x ∈ W.
A special case of the Łojasiewicz inequality, due to Polyak, is commonly used to prove linear convergence of gradient descent algorithms. |
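A minimal numerical sketch (assuming a strongly convex quadratic, for which the gradient-domination inequality holds with μ equal to the smallest eigenvalue of the Hessian) shows the linear convergence that the inequality guarantees for gradient descent:

```python
import numpy as np

# f(x) = 0.5 x^T A x with A positive definite satisfies the Polyak-Lojasiewicz
# condition  0.5 * ||grad f(x)||^2 >= mu * (f(x) - f*)  with mu = lambda_min(A).
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

L = np.linalg.eigvalsh(A).max()          # Lipschitz constant of the gradient
x = np.array([5.0, -3.0])
values = []
for _ in range(30):
    values.append(f(x))
    x = x - (1.0 / L) * grad(x)

ratios = [values[k + 1] / values[k] for k in range(len(values) - 1)]
print(ratios[:5])   # each ratio is below 1 and settles to a constant: linear convergence
```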
https://en.wikipedia.org/wiki/Phage%20r1t%20holin%20family | The Lactococcus lactis Phage r1t Holin (r1t Holin) Family (TC# 1.E.18) is a family of putative pore-forming proteins that typically range in size between about 65 and 95 amino acyl residues (aas) in length, although a few r1t holins have been found to be significantly larger (i.e., 168 aa, 4 TMS, uncharacterized holin of Rhodococcus opacus; TC# 1.E.18.1.9). Phage r1t holins exhibit between 2 and 4 transmembrane segments (TMSs), with the 4 TMS proteins resulting from an intragenic duplication of a 2 TMS region. A representative list of the proteins belonging to the r1t holin family can be found in the Transporter Classification Database.
Function and expression
The Lactococcus lactis phage r1t genome includes two adjacent genes, orf48 and orf49, which encode Orf48 (TC# 1.E.18.1.1; 75 aas) and a lysin Orf49 (270 aas), probably an N-acetyl-muramoyl-L-alanine amidase, respectively. Orf48 exhibits 2 putative hydrophobic transmembrane segments (TMSs) separated by a short β-turn region. It also has a hydrophobic N-terminus and a highly charged C-terminus. Orf48/Orf49 constitute the phage r1t lysis cassette. An essential role of Orf49 in cell lysis by Orf48 has been demonstrated.
Orf48 is homologous to the Gp4 holin of Mycobacterium phage Ms6 (TC# 1.E.18.1.2). Like most double-stranded (ds) DNA phages, mycobacteriophage Ms6 uses the holin-endolysin system to achieve lysis of its host. In addition to endolysin (lysA) and holin (hol) genes, Ms6 encodes three accessory lysis proteins. The lysis function of Gp1, encoded by the gp1 gene that lies immediately upstream of lysA, was revealed.
Catalão et al. observed Escherichia coli lysis after coexpression of LysA and Gp1 in the absence of the Ms6 holin. Gp1 does not belong to the holin class of proteins, but it shares several characteristics with molecular chaperones. The authors suggest that Gp1 interacts with LysA, and that this interaction is necessary for LysA delivery to its target. PhoA fusions showed that in Mycobacte |
https://en.wikipedia.org/wiki/Mary%20Tsingou | Mary Tsingou (married name: Mary Tsingou-Menzel; born October 14, 1928) is an American physicist and mathematician of Greek descent. She was one of the first programmers on the MANIAC computer at Los Alamos National Laboratory and is best known for having coded the celebrated computer experiment with Enrico Fermi, John Pasta, and Stanislaw Ulam which became an inspiration for the fields of chaos theory and scientific computing and was a turning point in soliton theory.
Life
Mary Tsingou was born in Milwaukee, Wisconsin, her Greek parents having moved to the US from Bulgaria. In the aftermath of the Great Depression, the family left the US to spend several years in Bulgaria. In 1940, they returned to the US, where Tsingou attended high school and college. She graduated in mathematics and education in 1951 from the University of Wisconsin. She then studied at the University of Michigan, receiving a master's degree in mathematics in 1955. In 1958, she married Joseph Menzel.
Career
Tsingou joined the T1 division of the Los Alamos National Laboratory, then transferred to the T7, where she became one of the first programmers on the MANIAC. Besides working on weapons, the group also studied fundamental physics. Following Fermi's suggestion to analyze numerically the predictions of a statistical model of solids, Tsingou came up with an algorithm to simulate the relaxation of energy in a model crystal, which she implemented on the MANIAC. The analysis became known in the computational physics community as the Fermi–Pasta–Ulam–Tsingou problem (FPUT), and Tsingou's contributions have since been recognised. The result was an important stepping stone for chaos theory.
After Fermi's death, James L. Tuck and Tsingou-Menzel repeated the original FPU results and provided strong indication that the nonlinear FPU problem might be integrable.
Tsingou-Menzel continued her computational career at Los Alamos. She was an early expert on Fortran. In the 1980s, she worked on calculations |
https://en.wikipedia.org/wiki/Interaction%20design%20pattern | Interaction design patterns are design patterns applied in the context of human-computer interaction, describing common designs for graphical user interfaces.
A design pattern is a formal way of documenting a solution to a common design problem. The idea was introduced by the architect Christopher Alexander for use in urban planning and building architecture and has been adapted for various other disciplines, including teaching and pedagogy, development organization and process, and software architecture and design.
Thus, interaction design patterns are a way to describe solutions to common usability or accessibility problems in a specific context. They document interaction models that make it easier for users to understand an interface and accomplish their tasks.
History
Patterns originated as an architectural concept by Christopher Alexander. Patterns are ways to describe best practices, explain good designs, and capture experience so that other people can reuse these solutions.
Design patterns in computer science are used by software engineers during the actual design process and when communicating designs to others. Design patterns gained popularity in computer science after the book Design Patterns: Elements of Reusable Object-Oriented Software was published. Since then a pattern community has emerged that specifies patterns for problem domains including architectural styles and object-oriented frameworks. The Pattern Languages of Programming Conference (annual, 1994—) proceedings includes many examples of domain-specific patterns.
Applying a pattern language approach to interaction design was first suggested in Norman and Draper's book User Centered System Design (1986). The Apple Computer's Macintosh Human Interface Guidelines also quotes Christopher Alexander's works in its recommended reading.
Libraries
Alexander envisioned a pattern language as a structured system in which the semantic relationships between the patterns create a whole that is greater |
https://en.wikipedia.org/wiki/Enzybiotics | Enzybiotics are an experimental antibacterial therapy first described by Nelson, Loomis, and Fischetti. The term is derived from a combination of the words “enzyme” and “antibiotics.” Enzymes have been extensively utilized for their antibacterial and antimicrobial properties. Proteolytic enzymes called endolysins have demonstrated particular effectiveness in combating a range of bacteria and are the basis for enzybiotic research. Endolysins are derived from bacteriophages and are highly efficient at lysing bacterial cells. Enzybiotics are being researched largely to address the issue of antibiotic resistance, which has allowed for the proliferation of drug-resistant pathogens posing great risk to animal and human health across the globe.
Classification
Mechanism
Endolysins are specialized enzymes derived from bacteriophages, viruses that infect bacterial cells in order to replicate within them. Because phages have coevolved with their bacterial hosts, the endolysin system is very efficient at degrading bacterial cell walls. Phages release endolysins from inside bacterial host cells that cleave the peptidoglycan bonds of the bacterial cell wall. Once the cell is lysed, the bacteriophage is able to release progeny virions into the environment which in turn infect more bacterial cells. In addition to degrading bacterial cell walls from within, endolysins are effective when applied externally and can lyse Gram-positive bacteria that lack an outer cell membrane. Enzybiotics utilize endolysins to combat pathogens, exploiting their ability to home in on specific bacterial cells, their nontoxicity toward eukaryotic cells, and a decreased risk of pathogen resistance because they target highly conserved peptidoglycan bonds. A rapid killing rate of bacteria has also been observed upon administration of endolysins due to the enzymatic mechanism, as have synergistic effects among different endolysins and in combination with antibiotics, improving treatment outcomes of bacteri |
https://en.wikipedia.org/wiki/Ofqual%20exam%20results%20algorithm | In 2020, Ofqual, the regulator of qualifications, exams and tests in England, produced a grades standardisation algorithm to combat grade inflation and moderate the teacher-predicted grades for A level and GCSE qualifications in that year, after examinations were cancelled as part of the response to the COVID-19 pandemic.
History
In late March 2020, Gavin Williamson, the secretary of state for education in Boris Johnson's Conservative government, instructed the head of Ofqual, Sally Collier, to "ensure, as far as is possible, that qualification standards are maintained and the distribution of grades follows a similar profile to that in previous years". On 31 March, he issued a ministerial direction under the Children and Learning Act 2009.
Then, in August, 82% of 'A level' grades were computed using an algorithm devised by Ofqual. More than 4.6 million GCSEs in England – about 97% of the total – were assigned solely by the algorithm. Teacher rankings were taken into consideration, but not the teacher-assessed grades submitted by schools and colleges.
On 25 August, Collier, who oversaw the development of Williamson's algorithm calculation, resigned from the post of chief regulator of Ofqual following mounting pressure.
Vocational qualifications
The algorithm was not applied to vocational and technical qualifications (VTQs), such as BTECs, which are assessed on coursework or as short modules are completed, and in some cases adapted assessments were held. Nevertheless, because of the high level of grade inflation resulting from Ofqual's decision not to apply the algorithm to A levels and GCSEs, Pearson Edexcel, the BTEC examiner, decided to cancel the release of BTEC results on 19 August, the day before they were due to be released, to allow them to be re-moderated in line with Ofqual's grade inflation.
The algorithm
Ofqual's Direct Centre Performance model is based on the record of each centre (school or college) in the subject being assessed. Details of the alg |
https://en.wikipedia.org/wiki/Seismic%20inverse%20Q%20filtering | Seismic inverse Q filtering is a data processing technology for enhancing the resolution of reflection seismology images. Q is the anelastic attenuation factor or the seismic quality factor, a measure of the energy loss as the seismic wave moves.
Basics
Seismic inverse Q-filtering employs a wave propagation reversal procedure that compensates for energy absorption and corrects wavelet distortion due to velocity dispersion. By compensating for amplitude attenuation with a visco-elastic attenuation model, seismic data can provide true relative-amplitude information for amplitude inversion and subsequent reservoir characterization. By correcting the phase distortion due to velocity dispersion, seismic data with enhanced vertical resolution can yield correct timings for lithological identification.
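As a highly simplified sketch of the amplitude part only (it applies the standard exponential gain exp(π f t / Q) in the frequency domain for a single fixed travel time, ignores the dispersion/phase correction, and omits the stabilization that practical schemes require), the basic idea can be written as:

```python
import numpy as np

def inverse_q_amplitude(trace, dt, q, travel_time):
    """Compensate amplitude loss exp(-pi * f * t / Q) for one fixed travel time.

    trace       : sampled seismic trace (1-D array)
    dt          : sample interval in seconds
    q           : seismic quality factor
    travel_time : two-way travel time in seconds used for the gain
    """
    spectrum = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(len(trace), d=dt)
    gain = np.exp(np.pi * freqs * travel_time / q)   # inverse of the attenuation
    return np.fft.irfft(spectrum * gain, n=len(trace))

# Hypothetical example: a 1 s trace sampled at 2 ms, Q = 80, gain for t = 0.8 s.
trace = np.random.default_rng(0).normal(size=500)
compensated = inverse_q_amplitude(trace, dt=0.002, q=80.0, travel_time=0.8)
```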
Following Wang's outline of the subject, inverse Q filtering can be introduced based on the 1-D one-way propagation wave equation. He introduces the following equation (1.1):
where U(r,w) is the plane wave of radial frequency w at travel distance r, k(w) is the wavenumber and i is the imaginary unit. Reflection seismograms record the reflection wave along the propagation path r from the source to the reflector and back to the surface. With this approach Wang assumes that the plane wave U(r,w) has already been attenuated by a Q filter over the travel distance r. This must be kept in mind when finding a solution of (1.1): the initial U(r,w) must either be created by a forward synthetic Q-filtering process or taken directly from seismic surface data, a point Wang introduces in chapter 5 of his book and one that must also be kept in mind when the inverse theory is developed. Equation (1.1) has an analytical solution given by
Kolsky's attenuation-dispersion model
The wavenumber k(w) is an important variable in the solution (1.2). To obtain a solution that can be applied to sei |
https://en.wikipedia.org/wiki/Navassa%20curly-tailed%20lizard | The Navassa curly-tailed lizard or Navassa curlytail lizard (Leiocephalus eremitus) is an extinct lizard species from the family of curly-tailed lizard (Leiocephalidae). It is known only from the holotype, a female specimen from which it was described in 1868. A possible second specimen which was collected by Rollo Beck in 1917 was instead identified as a Tiburon curly-tailed lizard (Leiocephalus melanochlorus) by herpetologist Richard Thomas in 1966.
Geographic range
Leiocephalus eremitus was endemic to Navassa Island.
Description
The size of the holotype is given as snout–vent length (SVL). The head and ventral scales are smooth. The dorsal scales are larger than the scales on the flanks and the ventral scales. The dorsum is dark gray with nine dark transverse bars. The tail is pale with transverse bars on the basal half and uniformly dark gray to black on the posterior half. Throat, breast, belly and the extremities are brown with pale-tipped scales.
Behavior and habitat
Navassa has xeric forest vegetation, but nothing specific is known about biology of this species. The reason for its extinction is also unknown, but predation by cats is a possible reason. |
https://en.wikipedia.org/wiki/Lebesgue%27s%20decomposition%20theorem | In mathematics, more precisely in measure theory, Lebesgue's decomposition theorem states that for every two σ-finite signed measures μ and ν on a measurable space there exist two σ-finite signed measures ν₀ and ν₁ such that:
ν = ν₀ + ν₁
ν₀ ≪ μ (that is, ν₀ is absolutely continuous with respect to μ)
ν₁ ⊥ μ (that is, ν₁ and μ are singular).
These two measures are uniquely determined by μ and ν.
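A concrete worked example may help fix ideas (an illustrative instance chosen here, not taken from the article; it assumes μ is Lebesgue measure on the real line):

```latex
\[
\begin{aligned}
&\text{Let } \mu \text{ be Lebesgue measure on } \mathbb{R}, \qquad
 \nu = \lambda|_{[0,1]} + \delta_0 .\\
&\text{Then } \nu_0 = \lambda|_{[0,1]} \ll \mu
 \text{ (with density } \mathbf{1}_{[0,1]}\text{)}, \qquad
 \nu_1 = \delta_0 \perp \mu
 \text{ (concentrated on the } \mu\text{-null set } \{0\}\text{)},\\
&\text{and } \nu = \nu_0 + \nu_1 \text{ is the unique Lebesgue decomposition of } \nu
 \text{ with respect to } \mu .
\end{aligned}
\]
```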
Refinement
Lebesgue's decomposition theorem can be refined in a number of ways.
First, the decomposition of a regular Borel measure ν on the real line can be refined:
ν = νcont + νsing + νpp, where
νcont is the absolutely continuous part
νsing is the singular continuous part
νpp is the pure point part (a discrete measure).
Second, absolutely continuous measures are classified by the Radon–Nikodym theorem, and discrete measures are easily understood. Hence (singular continuous measures aside), Lebesgue decomposition gives a very explicit description of measures. The Cantor measure (the probability measure on the real line whose cumulative distribution function is the Cantor function) is an example of a singular continuous measure.
Related concepts
Lévy–Itō decomposition
The analogous decomposition for a stochastic process is the Lévy–Itō decomposition: given a Lévy process X, it can be decomposed as a sum of three independent Lévy processes, X = X(1) + X(2) + X(3), where:
X(1) is a Brownian motion with drift, corresponding to the absolutely continuous part;
X(2) is a compound Poisson process, corresponding to the pure point part;
X(3) is a square integrable pure jump martingale that almost surely has a countable number of jumps on a finite interval, corresponding to the singular continuous part.
See also
Decomposition of spectrum
Hahn decomposition theorem and the corresponding Jordan decomposition theorem
Citations |
https://en.wikipedia.org/wiki/Marine%20microorganisms | Marine microorganisms are defined by their habitat as microorganisms living in a marine environment, that is, in the saltwater of a sea or ocean or the brackish water of a coastal estuary. A microorganism (or microbe) is any microscopic living organism or virus, that is too small to see with the unaided human eye without magnification. Microorganisms are very diverse. They can be single-celled or multicellular and include bacteria, archaea, viruses and most protozoa, as well as some fungi, algae, and animals, such as rotifers and copepods. Many macroscopic animals and plants have microscopic juvenile stages. Some microbiologists also classify viruses as microorganisms, but others consider these as non-living.
Marine microorganisms have been variously estimated to make up about 70%, or about 90%, of the biomass in the ocean. Taken together they form the marine microbiome. Over billions of years this microbiome has evolved many life styles and adaptations and come to participate in the global cycling of almost all chemical elements. Microorganisms are crucial to nutrient recycling in ecosystems as they act as decomposers. They are also responsible for nearly all photosynthesis that occurs in the ocean, as well as the cycling of carbon, nitrogen, phosphorus and other nutrients and trace elements. Marine microorganisms sequester large amounts of carbon and produce much of the world's oxygen.
A small proportion of marine microorganisms are pathogenic, causing disease and even death in marine plants and animals. However marine microorganisms recycle the major chemical elements, both producing and consuming about half of all organic matter generated on the planet every year. As inhabitants of the largest environment on Earth, microbial marine systems drive changes in every global system.
In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all life on the planet, including the marine microorganisms. Despite |
https://en.wikipedia.org/wiki/Somatotropin%20family | The Somatotropin family is a protein family whose titular representative is somatotropin, also known as growth hormone, a hormone that plays an important role in growth control. Other members include choriomammotropin (lactogen), its placental analogue; prolactin, which promotes lactation in the mammary gland, and placental prolactin-related proteins; proliferin and proliferin related protein; and somatolactin from various fishes. The 3D structure of bovine somatotropin has been predicted using a combination of heuristics and energy minimisation.
Human peptides from this family
CSH1; CSH2; CSHL1; GH1; GH2 (hGH-V); PRL; |
https://en.wikipedia.org/wiki/Leibniz%20formula%20for%20determinants | In algebra, the Leibniz formula, named in honor of Gottfried Leibniz, expresses the determinant of a square matrix in terms of permutations of the matrix elements. If A is an n × n matrix, where a_{i,j} is the entry in the i-th row and j-th column of A, the formula is
det(A) = Σ_{τ ∈ S_n} sgn(τ) Π_{i=1}^{n} a_{i,τ(i)},
where sgn is the sign function of permutations in the permutation group S_n, which returns +1 and −1 for even and odd permutations, respectively.
Another common notation used for the formula is in terms of the Levi-Civita symbol and makes use of the Einstein summation notation, where it becomes
det(A) = ε_{i_1 i_2 ⋯ i_n} a_{1,i_1} a_{2,i_2} ⋯ a_{n,i_n},
which may be more familiar to physicists.
Directly evaluating the Leibniz formula from the definition requires on the order of n! · n operations in general (a number of operations asymptotically proportional to n factorial), because n! is the number of order-n permutations. This is impractically difficult for even relatively small n. Instead, the determinant can be evaluated in O(n³) operations by forming the LU decomposition A = LU (typically via Gaussian elimination or similar methods), in which case det(A) = det(L) det(U) and the determinants of the triangular matrices L and U are simply the products of their diagonal entries. (In practical applications of numerical linear algebra, however, explicit computation of the determinant is rarely required.) See, for example, . The determinant can also be evaluated in fewer than O(n³) operations by reducing the problem to matrix multiplication, but most such algorithms are not practical.
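A direct transcription of the formula (a brute-force sketch, useful only for very small matrices given the factorial cost discussed above) makes the structure of the sum explicit:

```python
from itertools import permutations
from math import prod

def sign(perm):
    """Sign of a permutation given as a tuple of indices, computed by counting inversions."""
    inversions = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def leibniz_det(a):
    """Determinant via the Leibniz formula: a signed sum over all permutations."""
    n = len(a)
    return sum(sign(p) * prod(a[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

a = [[2, 0, 1],
     [1, 3, 2],
     [1, 1, 4]]
print(leibniz_det(a))   # 18, agreeing with cofactor expansion or numpy.linalg.det
```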
Formal statement and proof
Theorem.
There exists exactly one function F : K^{n×n} → K which is alternating multilinear w.r.t. columns and such that F(I) = 1, where I denotes the identity matrix.
Proof.
Uniqueness: Let F be such a function, and let A = (a_{i,j}) be an n × n matrix. Call A^j the j-th column of A, i.e. A^j = (a_{1,j}, …, a_{n,j})^T, so that F(A) = F(A^1, …, A^n).
Also, let E^i denote the i-th column vector of the identity matrix.
Now one writes each of the A^j's in terms of the E^i, i.e.
A^j = Σ_{i=1}^{n} a_{i,j} E^i.
As F is multilinear, one has
F(A) = F(Σ_{i_1=1}^{n} a_{i_1,1} E^{i_1}, …, Σ_{i_n=1}^{n} a_{i_n,n} E^{i_n}) = Σ_{1 ≤ i_1, …, i_n ≤ n} (Π_{j=1}^{n} a_{i_j,j}) F(E^{i_1}, …, E^{i_n}).
From alternation it follows that any term with repeated indices is zero. The sum can therefore be restricted to tuples with non-repeating indices, i.e. permutation |
https://en.wikipedia.org/wiki/Hughes%20Medal | The Hughes Medal is awarded by the Royal Society of London "in recognition of an original discovery in the physical sciences, particularly electricity and magnetism or their applications". Named after David E. Hughes, the medal is awarded with a gift of £1000. The medal was first awarded in 1902 to J. J. Thomson "for his numerous contributions to electric science, especially in reference to the phenomena of electric discharge in gases", and has since been awarded over one-hundred times. Unlike other Royal Society medals, the Hughes Medal has never been awarded to the same individual more than once.
The medal has on occasion been awarded to multiple people at a time; in 1938 it was won by John Cockcroft and Ernest Walton "for their discovery that nuclei could be disintegrated by artificially produced bombarding particles", in 1981 by Peter Higgs and Tom Kibble "for their international contributions about the spontaneous breaking of fundamental symmetries in elementary-particle theory", in 1982 by Drummond Matthews and Frederick Vine "for their elucidation of the magnetic properties of the ocean floors which subsequently led to the plate tectonic hypothesis" and in 1988 by Archibald Howie and M. J. Whelan "for their contributions to the theory of electron diffraction and microscopy, and its application to the study of lattice defects in crystals".
List of recipients
Source: Royal Society
See also
List of physics awards |
https://en.wikipedia.org/wiki/Triply%20periodic%20minimal%20surface | In differential geometry, a triply periodic minimal surface (TPMS) is a minimal surface in ℝ3 that is invariant under a rank-3 lattice of translations.
These surfaces have the symmetries of a crystallographic group. Numerous examples are known with cubic, tetragonal, rhombohedral, and orthorhombic symmetries. Monoclinic and triclinic examples are certain to exist, but have proven hard to parametrise.
TPMS are of relevance in natural science. TPMS have been observed as biological membranes, as block copolymers, equipotential surfaces in crystals etc. They have also been of interest in architecture, design and art.
Properties
Nearly all studied TPMS are free of self-intersections (i.e. embedded in ℝ3): from a mathematical standpoint they are the most interesting (since self-intersecting surfaces are trivially abundant).
All connected TPMS have genus ≥ 3, and in every lattice there exist orientable embedded TPMS of every genus ≥3.
Embedded TPMS are orientable and divide space into two disjoint sub-volumes (labyrinths). If they are congruent the surface is said to be a balance surface.
History
The first examples of TPMS were the surfaces described by Schwarz in 1865, followed by a surface described by his student E. R. Neovius in 1883.
In 1970 Alan Schoen came up with 12 new TPMS based on skeleton graphs spanning crystallographic cells.
While Schoen's surfaces became popular in natural science the construction did not lend itself to a mathematical existence proof and remained largely unknown in mathematics, until H. Karcher proved their existence in 1989.
Using conjugate surfaces many more surfaces were found. While Weierstrass representations are known for the simpler examples, they are not known for many surfaces. Instead methods from Discrete differential geometry are often used.
Families
The classification of TPMS is an open problem.
TPMS often come in families that can be continuously deformed into each other. Meeks found an explicit 5-parameter fa |
https://en.wikipedia.org/wiki/Stability%20%28algebraic%20geometry%29 | In mathematics, and especially algebraic geometry, stability is a notion which characterises when a geometric object, for example a point, an algebraic variety, a vector bundle, or a sheaf, has some desirable properties for the purpose of classifying them. The exact characterisation of what it means to be stable depends on the type of geometric object, but all such examples share the property of having a minimal amount of internal symmetry, that is such stable objects have few automorphisms. This is related to the concept of simplicity in mathematics, which measures when some mathematical object has few subobjects inside it (see for example simple groups, which have no non-trivial normal subgroups). In addition to stability, some objects may be described with terms such as semi-stable (having a small but not minimal amount of symmetry), polystable (being made out of stable objects), or unstable (having too much symmetry, the opposite of stable).
Background
In many areas of mathematics, and indeed within geometry itself, it is often very desirable to have highly symmetric objects, and these objects are often regarded as aesthetically pleasing. However, high amounts of symmetry are not desirable when one is attempting to classify geometric objects by constructing moduli spaces of them, because the symmetries of these objects cause the formation of singularities, and obstruct the existence of universal families.
The concept of stability was first introduced in its modern form by David Mumford in 1965 in the context of geometric invariant theory, a theory which explains how to take quotients of algebraic varieties by group actions, and obtain a quotient space that is still an algebraic variety, a so-called categorical quotient. However the ideas behind Mumford's work go back to the invariant theory of David Hilbert in 1893, and the fundamental concepts involved date back even to the work of Bernhard Riemann on constructing moduli spaces of Riemann surfaces. Since the |
https://en.wikipedia.org/wiki/Tomato%20effect | The tomato effect occurs when effective therapies for a condition are rejected, usually because they do not make sense in the context of the current understanding or theory of the disease in question. The name refers to the fact that tomatoes were rejected as a food source by most North Americans until the end of the 19th century, because the prevailing belief at the time was that they were poisonous.
A parallel concern is medical reversal which is new clinical information based on new clinical trials or understanding of a disease contradicting clinical practice. Medical reversal implies the original clinical practice failed to achieve success or had harms that outweighed benefits. That is contrasted with the phenomenon of replacement where a useful clinical practice is replaced by one that works better.
Examples
Tomatoes were becoming a staple food in Europe by the 1560s, but they were shunned in North America, where they were considered poisonous, until the 1820s. Similarly, willow tree bark extract, which provides relief of pain and fever, was long ignored, and it was not until the late 1800s, with the commercial production of salicylate (also known as Aspirin), that this treatment was prescribed to patients.
In 1753, it was established that scurvy can be treated with lemon juice. Despite this knowledge, scurvy was still attributed to an imbalance of the humors until the mid-1800s.
https://en.wikipedia.org/wiki/36%20%28number%29 | 36 (thirty-six) is the natural number following 35 and preceding 37.
In mathematics
36 is both the square of six and the eighth triangular number (the sum of the first eight positive integers), which makes 36 the first non-trivial square triangular number. Aside from being the smallest square triangular number other than 1, it is also the only triangular number (other than 1) whose square root is also a triangular number. 36 is also the eighth refactorable number, as it has exactly nine positive divisors and 9 is one of them; in fact, it is the smallest number with exactly nine divisors, and it is the 7th highly composite number. It is the sum of the fourth pair of twin primes (17 + 19), and the 18th Harshad number in decimal, as it is divisible by the sum of its digits (9).
It is the smallest number with exactly eight solutions (37, 57, 63, 74, 76, 108, 114, 126) to the Euler totient function equation $\varphi(x) = 36$. Adding up some subsets of its divisors (e.g., 6, 12, and 18) gives 36; hence, it is also the eighth semiperfect number.
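For readers who want to verify the divisor and totient claims above, a short standard-library Python check (illustrative only):

```python
from math import gcd


def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]


def phi(x):
    """Euler's totient, computed naively by counting residues coprime to x."""
    return sum(1 for k in range(1, x + 1) if gcd(k, x) == 1)


print(divisors(36))                                 # 9 divisors: 1, 2, 3, 4, 6, 9, 12, 18, 36
print([x for x in range(1, 200) if phi(x) == 36])   # [37, 57, 63, 74, 76, 108, 114, 126]
print(6 + 12 + 18)                                  # 36, one semiperfect decomposition
```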
This number is the sum of the cubes of the first three positive integers and also the product of the squares of the first three positive integers.
36 is the number of degrees in the interior angle of each tip of a regular pentagram.
The thirty-six officers problem is a mathematical puzzle with no solution.
The number of possible outcomes (not summed) in the roll of two distinct dice.
36 is the largest numeric base that some computer systems support because it exhausts the numerals, 0–9, and the letters, A-Z. See Base 36.
The truncated cube and the truncated octahedron are Archimedean solids with 36 edges.
The number of domino tilings of a 4×4 checkerboard is 36.
Since it is possible to find sequences of 36 consecutive integers such that each inner member shares a factor with either the first or the last member, 36 is an Erdős–Woods number.
The sum of the integers from 1 to 36 is 666 (see number of the beast).
Measurements
|
https://en.wikipedia.org/wiki/Energy%20%28signal%20processing%29 | In signal processing, the energy $E_s$ of a continuous-time signal $x(t)$ is defined as the area under the squared magnitude of the considered signal, i.e., mathematically
$$E_s = \int_{-\infty}^{\infty} |x(t)|^2 \, \mathrm{d}t.$$
The unit of $E_s$ will be (unit of signal)$^2$.
And the energy of a discrete-time signal $x(n)$ is defined mathematically as
$$E_s = \sum_{n=-\infty}^{\infty} |x(n)|^2.$$
Relationship to energy in physics
Energy in this context is not, strictly speaking, the same as the conventional notion of energy in physics and the other sciences. The two concepts are, however, closely related, and it is possible to convert from one to the other:
$$E = \frac{E_s}{Z} = \frac{1}{Z} \int_{-\infty}^{\infty} |x(t)|^2 \, \mathrm{d}t$$
where $Z$ represents the magnitude, in appropriate units of measure, of the load driven by the signal.
For example, if x(t) represents the potential (in volts) of an electrical signal propagating across a transmission line, then Z would represent the characteristic impedance (in ohms) of the transmission line. The units of measure for the signal energy would appear as volt2·seconds, which is not dimensionally correct for energy in the sense of the physical sciences. After dividing by Z, however, the dimensions of E would become volt2·seconds per ohm,
which is equivalent to joules, the SI unit for energy as defined in the physical sciences.
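As a hedged illustration of the definitions above (the signal, sample spacing, and load impedance below are arbitrary placeholders), a numpy sketch that approximates the signal energy, converts it to physical energy by dividing by an impedance, and checks the time/frequency energy balance:

```python
import numpy as np

dt = 1e-3                                    # sample spacing in seconds (assumed)
t = np.arange(0.0, 1.0, dt)
x = np.exp(-5.0 * t)                         # example signal, in volts (assumed)

# Discrete approximation of E_s = integral of |x(t)|^2 dt, in volt^2 * seconds.
signal_energy = np.sum(np.abs(x) ** 2) * dt

Z = 50.0                                     # hypothetical load impedance, in ohms
physical_energy = signal_energy / Z          # joules

# Parseval check: the same energy computed from the (approximate) spectrum.
X = np.fft.fft(x) * dt                       # approximate Fourier transform, volt * seconds
df = 1.0 / (len(x) * dt)
spectral_energy = np.sum(np.abs(X) ** 2) * df

print(signal_energy, physical_energy, spectral_energy)
```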
Spectral energy density
Similarly, the spectral energy density of signal $x(t)$ is
$$E_s(f) = |X(f)|^2$$
where $X(f)$ is the Fourier transform of $x(t)$.
For example, if x(t) represents the magnitude of the electric field component (in volts per meter) of an optical signal propagating through free space, then the dimensions of X(f) would become volt·seconds per meter and would represent the signal's spectral energy density (in volts2·second2 per meter2) as a function of frequency f (in hertz). Again, these units of measure are not dimensionally correct in the true sense of energy density as defined in physics. Dividing by Zo, the characteristic impedance of free space (in ohms), the dimensions become joule-seconds per meter2 or, equivalently, joules per meter2 per hertz, which is dimensionally correct in SI |
https://en.wikipedia.org/wiki/Quantum%20machine%20learning | Quantum machine learning is the integration of quantum algorithms within machine learning programs.
The most common use of the term refers to machine learning algorithms for the analysis of classical data executed on a quantum computer, i.e. quantum-enhanced machine learning. While machine learning algorithms are used to compute immense quantities of data, quantum machine learning utilizes qubits and quantum operations or specialized quantum systems to improve computational speed and data storage done by algorithms in a program. This includes hybrid methods that involve both classical and quantum processing, where computationally difficult subroutines are outsourced to a quantum device. These routines can be more complex in nature and executed faster on a quantum computer. Furthermore, quantum algorithms can be used to analyze quantum states instead of classical data.
Beyond quantum computing, the term "quantum machine learning" is also associated with classical machine learning methods applied to data generated from quantum experiments (i.e. machine learning of quantum systems), such as learning the phase transitions of a quantum system or creating new quantum experiments.
Quantum machine learning also extends to a branch of research that explores methodological and structural similarities between certain physical systems and learning systems, in particular neural networks. For example, some mathematical and numerical techniques from quantum physics are applicable to classical deep learning and vice versa.
Furthermore, researchers investigate more abstract notions of learning theory with respect to quantum information, sometimes referred to as "quantum learning theory".
Machine learning with quantum computers
Quantum-enhanced machine learning refers to quantum algorithms that solve tasks in machine learning, thereby improving and often expediting classical machine learning techniques. Such algorithms typically require one to encode the given classical dat |
https://en.wikipedia.org/wiki/The%20Tower%20of%20Hanoi%20%E2%80%93%20Myths%20and%20Maths | The Tower of Hanoi – Myths and Maths is a book in recreational mathematics, on the tower of Hanoi, baguenaudier, and related puzzles. It was written by Andreas M. Hinz, Sandi Klavžar, Uroš Milutinović, and Ciril Petr, and published in 2013 by Birkhäuser, with an expanded second edition in 2018. The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries.
Topics
Although this book is in recreational mathematics, it takes its subject seriously, and brings in material from automata theory, computational complexity, the design and analysis of algorithms, graph theory, and group theory, topology, fractal geometry, chemical graph theory, and even psychology (where related puzzles have applications in psychological testing).
The 1st edition of the book had 10 chapters, and the 2nd edition has 11. In both cases they begin with chapter zero, on the background and history of the Tower of Hanoi puzzle, covering its real-world invention by Édouard Lucas and in the mythical backstory he invented for it. Chapter one considers the Baguenaudier puzzle (or, as it is often called, the Chinese rings), related to the tower of Hanoi both in the structure of its state space and in the fact that it takes an exponential number of moves to solve, and likely the inspiration for Lucas. Chapter two introduces the main topic of the book, the tower of Hanoi, in its classical form in which one must move disks one-by-one between three towers, always keeping the disks on each tower sorted by size. It provides several different algorithms for solving the classical puzzle (in which the disks begin and end all on a single tower) in as few moves as possible, and for collecting all disks on a single tower when they begin in other configurations, again as quickly as possible. It introduces the Hanoi graphs describing the state space of the puzzle, and relates numbers of puzzle steps to distances within this graph. After |
https://en.wikipedia.org/wiki/Society%20of%20Southwestern%20Entomologists | The Society of Southwestern Entomologists was founded as the Southwestern Entomological Society in 1976 with the objective of fostering entomological accomplishment in the southwestern United States and Mexico. The society's name was changed in 2003 to avoid confusion with the Southwestern Branch of the Entomological Society of America, with whom they meet annually. A primary function of the Society is the publication of the journal Southwestern Entomologist, published quarterly in March, June, September and December. |
https://en.wikipedia.org/wiki/Ecological%20corridor%20%28Brazil%29 | An ecological corridor () in Brazil is a collection of natural or semi-natural areas that link protected areas and allow gene flow between them.
Definition
The National System of Conservation Units (SNUC) law recognises ecological corridors as portions of natural or semi-natural ecosystems linking protected areas that allow gene flow and movement of biota, recolonization of degraded areas and maintenance of viable populations larger than would be possible with individual units.
The federal Ecological Corridor Project has its roots at least as far back as 1993.
It has identified seven major corridors, with focus on implementing and learning from the Central Amazon Corridor and the Central Atlantic Forest Corridor.
Examples |
https://en.wikipedia.org/wiki/Marine%20ecoregion | A marine ecoregion is an ecoregion, or ecological region, of the oceans and seas identified and defined based on biogeographic characteristics.
Introduction
A more complete definition describes them as “Areas of relatively homogeneous species composition, clearly distinct from adjacent systems” dominated by “a small number of ecosystems and/or a distinct suite of oceanographic or topographic features”. Ecologically they “are strongly cohesive units, sufficiently large to encompass ecological or life history processes for most sedentary species.”
Marine Ecoregions of the World—MEOW
The global classification system Marine Ecoregions of the World—MEOW was devised by an international team, including major conservation organizations, academic institutions and intergovernmental organizations. The system covers coastal and continental shelf waters of the world, and does not include deep ocean waters. The MEOW system integrated the biogeographic regionalization systems in use at national or continental scale, like Australia's Integrated Marine and Coastal Regionalisation of Australia and the Nature Conservancy’s system in the Americas, although it often uses different names for the subdivisions.
This system has a strong biogeographic basis, but was designed to aid in conservation activities for marine ecosystems. Its subdivisions include both the seafloor (benthic) and shelf pelagic (neritic) biotas of each marine region.
The digital ecoregions layer is available for download as an ArcGIS Shapefile.
Subdivisions
Ecoregions
The Marine Ecoregions of the World classification defines 232 marine ecoregions (e.g. Adriatic Sea, Cortezian, Ningaloo, Ross Sea) for the coastal and shelf waters of the world.
Provinces
These marine ecoregions form part of a nested system and are grouped into 62 provinces (e.g. the South China Sea, Mediterranean Sea, Central Indian Ocean Islands).
Realms
The provinces in turn, are grouped into 12 major realms. The latter are considered analogou |
https://en.wikipedia.org/wiki/Wire%20bonding | Wire bonding is the method of making interconnections between an integrated circuit (IC) or other semiconductor device and its packaging during semiconductor device fabrication. Although less common, wire bonding can be used to connect an IC to other electronics or to connect from one printed circuit board (PCB) to another. Wire bonding is generally considered the most cost-effective and flexible interconnect technology and is used to assemble the vast majority of semiconductor packages. Wire bonding can be used at frequencies above 100 GHz.
Materials
Bondwires usually consist of one of the following materials:
Aluminium
Copper
Silver
Gold
Wire diameters start from under 10 μm and can be up to several hundred micrometres for high-powered applications.
The wire bonding industry is transitioning from gold to copper. This change has been instigated by the rising cost of gold and the comparatively stable, and much lower, cost of copper. While possessing higher thermal and electrical conductivity than gold, copper had previously been seen as less reliable due to its hardness and susceptibility to corrosion. By 2015, it was expected that more than a third of all wire bonding machines in use would be set up for copper.
Copper wire has become one of the preferred materials for wire bonding interconnects in many semiconductor and microelectronic applications. Copper is used for fine wire ball bonding in sizes from up to . Copper wire can be used at smaller diameters, providing the same performance as gold without the high material cost.
Copper wire up to can be successfully wedge bonded. Large diameter copper wire can and does replace aluminium wire where high current carrying capacity is needed or where there are problems with complex geometry. Annealing and process steps used by manufacturers enhance the ability to use large diameter copper wire to wedge bond to silicon without damage occurring to the die.
Copper wire does pose some challenges in |
https://en.wikipedia.org/wiki/Membrane%20progesterone%20receptor | Membrane progesterone receptors (mPRs) are a group of cell surface receptors and membrane steroid receptors belonging to the progestin and adipoQ receptor (PAQR) family which bind the endogenous progestogen and neurosteroid progesterone, as well as the neurosteroid allopregnanolone. Unlike the progesterone receptor (PR), a nuclear receptor which mediates its effects via genomic mechanisms, mPRs are cell surface receptors which rapidly alter cell signaling via modulation of intracellular signaling cascades. The mPRs mediate important physiological functions in male and female reproductive tracts, liver, neuroendocrine tissues, and the immune system as well as in breast and ovarian cancer.
The mPRs appear to be involved in the neuroprotective and antigonadotropic effects of progesterone and allopregnanolone. The progesterone active metabolites 5α-dihydroprogesterone, also a progestogen, and allopregnanolone, which are positive allosteric modulators of the GABAA receptor, have been found to rapidly influence sexual receptivity and behavior in mice, actions that are GABAA receptor-dependent.
These proteins are classified into five groups known as mPRα (PAQR7), mPRβ (PAQR8), mPRγ (PAQR5), mPRδ (PAQR6), and mPRϵ (PAQR9).
mPR Subtypes
mPRα
Membrane progesterone receptor alpha (mPRα) is a protein that in humans is encoded by the PAQR7 gene. It is a steroid receptor which binds progesterone in vitro. Recent studies suggest the mPRα has important physiological functions in a variety of reproductive tissues. The mPRα is an intermediary in progestin induction of oocyte maturation and stimulation of sperm hypermotility in fish. In mammals, the mPRα has been implicated in progesterone regulation of uterine functions in humans and GnRH secretion in rodents.
mPRβ
Membrane progesterone receptor beta (mPRβ) is a protein that in humans is encoded by the PAQR8 gene.
A recent study has investigated the role of mPRβ in regulating in vitro maturation (IVM) of pig cumulus-oocyte com |
https://en.wikipedia.org/wiki/The%20Schoolmaster%27s%20Assistant%2C%20Being%20a%20Compendium%20of%20Arithmetic%20Both%20Practical%20and%20Theoretical | The Schoolmaster's Assistant, Being a Compendium of Arithmetic both Practical and Theoretical was an early and popular English arithmetic textbook, written by Thomas Dilworth and first published in England in 1743. An American edition was published in 1769; by 1786 it had reached 23 editions, and through 1800 it was the most popular mathematics text in America.
Sections
Although different editions of the book varied in content according to the whims of their publishers, most editions of the book progressed from introductory topics to advanced ones in five sections:
Section I, Whole Numbers, included the basics of the four operations and proceeded to topics on interest, rebates, partnership, weights and measures, the double rule of three, alligation, mediation and permutations.
Section II dealt with common fractions.
Section III dealt with decimal fraction operations and included roots up to the fourth power, and work on annuities and pensions.
Section IV was a collection of 104 word problems to be solved. As was common in many older texts, the questions were sometimes stated in rhyme. Lessons for students were for memorization and recitation.
Section V was on duodecimals, working with fractions in which the only denominators were twelfths. These types of problems continue in textbooks and appear in the 1870 edition of White's Complete Arithmetic in the appendix. The definition states: A Duodecimal is a denominate number in which twelve units of any denomination make a unit of the next higher denomination. Duodecimals are used by artificers in measuring surfaces and solids. |
https://en.wikipedia.org/wiki/Fasting%20spittle | Fasting spittle – saliva produced first thing in the morning, before breakfast – has been used to treat a wide variety of diseases for many hundreds of years. Spittle cures are usually considered to be more effective if fasting spittle is used.
An early recorded use of spittle as a cure comes from the Gospel of St Mark, believed to have been written in about 70 AD:
Writing at about the same time as Mark, the Roman natural philosopher Pliny commented in his Natural History that fasting spittle was efficacious in the treatment of ophthalmia, and that the fasting spittle of a woman was particularly beneficial for treating bloodshot eyes. |
https://en.wikipedia.org/wiki/Elements%20of%20Dynamic | Elements of Dynamic is a book published by William Kingdon Clifford in 1878. In 1887 it was supplemented by a fourth part and an appendix. The subtitle is "An introduction to motion and rest in solid and fluid bodies". It was reviewed positively, has remained a standard reference since its appearance, and is now available online as a Historical Math Monograph from Cornell University.
On page 95 Clifford deconstructed the quaternion product of William Rowan Hamilton into two separate "products" of two vectors: vector product and scalar product, anticipating the severance seen in Vector Analysis (1901). Elements of Dynamic was the debut of the term cross-ratio for a four-argument function frequently used in geometry.
Clifford uses the term twist to discuss (pages 126 to 131) the screw theory that had recently been introduced by Robert Stawell Ball.
Reviews
A review in the Philosophical Magazine explained for prospective readers that kinematics is the "study of the theory of pure motion". Noting the nature of "progressive training" required for mathematics, the reviewer wondered "For what class of readers is the book designed?"
Richard A. Proctor noted in The Contemporary Review (33:65) that there are "few errors in the work, and even misprints are few and far between for a treatise of this kind." He did not approve of Clifford's coining of "odd new words as squirts, sinks, twists, and whirls." Proctor quoted the last sentence of the book: "Every continuous motion of an infinite body may be built up of squirts and vortices."
In a "Sketch of Professor Clifford" in June 1879 the journal Popular Science said "It will probably not take high rank as a university text-book, for which it was intended, but is much admired by mathematicians for the elegance, freshness, and originality displayed in the treatment of mathematical problems."
After Clifford had died, and Book IV and Appendix were published in 1887, the literary magazine Athenaeum said "we have here Clifford p |
https://en.wikipedia.org/wiki/Cora%20G.%20Burwell | Cora Gertrude Burwell (June 25, 1883 – June 20, 1982) was an American astronomical researcher specialized in stellar spectroscopy. She was based at Mount Wilson Observatory from 1907 to 1949.
Early life
Cora Gertrude Burwell was born in Massachusetts and raised in Stafford Springs, Connecticut. She graduated from Mount Holyoke College in 1906 and was active in Holyoke alumnae activities in the Los Angeles area.
Career
In July, 1907, Burwell was appointed to a "human computer" position at Mount Wilson Observatory. In 1910, she attended the fourth conference of the International Union for Cooperation in Solar Research, when it was held at Mount Wilson.
Burwell specialized in stellar spectroscopy. She was solo author on some scientific publications, and co-authored several others (some of which she was lead author), with notable collaborators including Dorrit Hoffleit, Henrietta Swope, Walter S. Adams, and Paul W. Merrill. With Merrill she compiled several catalogs of Be stars, in 1933, 1943, 1949, and 1950. She also helped to tend the Mount Wilson Observatory Library. She retired from the observatory in 1949, but continued speaking about astronomy to community groups. She also published a book of poetry, Neatly Packed.
Personal life
Cora Burwell lived in Pasadena, and later in Monrovia with her sister, Priscilla Burwell. She died in 1982, two days before her 99th birthday, in Los Angeles. |
https://en.wikipedia.org/wiki/Alienware | Alienware is an American computer hardware subsidiary of Dell. Their product range is dedicated to gaming computers and can be identified by their alien-themed designs. Alienware was founded in 1996 by Nelson Gonzalez and Alex Aguila. The development of the company is also associated with Frank Azor, Arthur Lewis, Joe Balerdi, and Michael S. Dell. The company's corporate headquarters is located in The Hammocks, Miami, Florida.
History
Overview
Established in 1996 as Saikai of Miami, Inc. by Nelson Gonzalez and Alex Aguila, two childhood friends, Alienware assembles desktops, notebooks, workstations, and PC gaming consoles. According to employees, the name "Alienware" was chosen because of the founders' fondness for the hit television series The X-Files, which also inspired the science-fiction themed names of product lines such as Area-51, Hangar 18, and Aurora. In 1997, it changed its name to Alienware.
Acquisition and current status
Dell had considered buying the Alienware company since 2002, but did not agree to purchase the company until March 22, 2006. As a subsidiary, it retains control of its design and marketing while benefiting from Dell's purchasing power, economies of scale, and supply chain, which lowered its operating costs.
Initially, Dell maintained its competing XPS line of gaming PCs, often selling computers with similar specifications, which may have hurt Alienware's market share within its market segment. Due to corporate restructuring in the spring of 2008, the XPS brand was scaled down, and the Desktop line was eliminated leaving only the XPS Notebooks, but XPS Desktop models had returned by the end of the year.
Product development of gaming PCs was consolidated with Dell's gaming division, with Alienware becoming Dell's premier gaming brand. On June 2, 2009, The M17x was introduced as the first Alienware/Dell branded system. This launch also expanded Alienware's global reach from 6 to 35 countries while supporting 17 different languages.
C |
https://en.wikipedia.org/wiki/Computational%20Biology%20and%20Chemistry | Computational Biology and Chemistry is a peer-reviewed scientific journal published by Elsevier covering all areas of computational life sciences. The current editors-in-chief are Wentian Li (The Feinstein Institute for Medical Research) and Donald Hamelberg (Georgia State University). The journal was established in 1976 as Computers & Chemistry, with DeLos F. DeTar (Florida State University) as its first editor. It obtained its current title in 2003 under the editorship of Andrzej K. Konopka and James Crabbe (University of Bedfordshire).
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal had a 2011 impact factor of 1.551, ranking it 42nd out of 85 journals in the category "Biology" and 36th out of 99 journals in the category "Computer Science, Interdisciplinary Applications" |
https://en.wikipedia.org/wiki/Flamm%C3%A9%20%28vexillology%29 | Flammé (German geflammt) is a term in vexillology for a flag design that places a coat of arms in the center of the flag, filling the remaining space on the flag with flame-like designs.
The design was used specifically in the Old Swiss Confederacy during the 17th and 18th centuries, where there was no difference between coat of arms and flags, and the same design was used for both.
Regiments of Swiss mercenaries during the 18th century, especially those in French service, often used flammé designs with the Swiss Cross superimposed rather than a coat of arms. |
https://en.wikipedia.org/wiki/SUBST | In computing, SUBST is a command on the DOS, IBM OS/2, Microsoft Windows and ReactOS operating systems used for substituting paths on physical and logical drives as virtual drives.
Overview
In MS-DOS, the SUBST command was added with the release of MS-DOS 3.1. The command is similar to floating drives, a more general concept in operating systems of Digital Research origin, including CP/M-86 2.x, Personal CP/M-86 2.x, Concurrent DOS, Multiuser DOS, System Manager 7, REAL/32, as well as DOS Plus and DR DOS (up to 6.0). DR DOS 6.0 includes an implementation of the command. The command is also available in FreeDOS and PTS-DOS. The Windows SUBST command is available in supported versions of the command line interpreter cmd.exe. In Windows NT, SUBST uses DefineDosDevice() to create the disk mappings.
The JOIN command is the "opposite" of SUBST, because JOIN will take a drive letter and make it appear as a directory.
Some versions of MS-DOS COMMAND.COM support the undocumented internal TRUENAME command, which can display the "true name" of a file, i.e. the fully qualified name with drive, path, and extension, even when the file was located by name only via the PATH environment variable, or through SUBST, JOIN and ASSIGN filesystem mappings.
Syntax
This is the command syntax in Windows XP to associate a path with a drive letter:
SUBST [drive1: [drive2:]path]
SUBST drive1: /D
Parameters
drive1: – Specify a virtual drive to which to assign a path.
[drive2:]path – Specify a physical drive and path to assign to a virtual drive.
/D – Delete a substituted (virtual) drive.
Examples
Mapping a drive
This means that, for example, to map C:'s root to X:, the following command would be used at the command-line interface:
C:\>SUBST X: C:\
Upon doing this, a new drive called X: would appear under the My Computer virtual folder in Windows Explorer.
Unmapping a drive
To unmap drive X: again, the following command needs to be typed at the command prompt:
C:\>SUBST X: /D
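For completeness, the same mapping and unmapping can be scripted; the sketch below (Windows only, drive letter and path are illustrative) simply wraps the documented SUBST syntax with Python's standard subprocess module rather than using any dedicated API.

```python
import subprocess


def map_drive(letter: str, path: str) -> None:
    """Equivalent to: SUBST <letter>: <path>"""
    subprocess.run(["subst", f"{letter}:", path], check=True)


def unmap_drive(letter: str) -> None:
    """Equivalent to: SUBST <letter>: /D"""
    subprocess.run(["subst", f"{letter}:", "/D"], check=True)


if __name__ == "__main__":
    map_drive("X", "C:\\")   # SUBST X: C:\
    unmap_drive("X")         # SUBST X: /D
```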
Custom labe |
https://en.wikipedia.org/wiki/Integrated%20Water%20Flow%20Model | Integrated Water Flow Model (IWFM) is a computer program for simulating water flow through the integrated land surface, surface water and groundwater flow systems. It is a rewrite of the abandoned software IGSM, which was found to have several programming errors. The IWFM programs and source code are freely available. IWFM is written in Fortran, and can be compiled and run on Microsoft Windows, Linux and Unix operating systems. The IWFM source code is released under the GNU General Public License.
Groundwater flow is simulated using the finite element method. Surface water flow can be simulated as a simple one-dimensional flow-through network or with the kinematic wave method. IWFM input data sets incorporate a time stamp, allowing users to run a model for a specified time period without editing the input files.
One of the most useful features of IWFM is the internal calculation of water demands for each land use type. IWFM simulates four land use classes: agricultural, urban, native vegetation, and riparian vegetation. Land use areas are delineated as a time series, with corresponding evapotranspiration rates and water management parameters. Each time step, the land use process applies precipitation, calculates infiltration and runoff, calculates water demands, and determines what portion of the demands are not met by soil moisture. For agricultural and urban land use classes, IWFM then applies surface water and groundwater at specified rates, and optionally adjusts surface water and groundwater to exactly meet water demands. This automatic adjustment feature is especially useful for calculating unmeasured flow components (such as groundwater withdrawals) or for simulating proposed future scenarios such as studying the impacts of potential climate change.
In IWFM, the land surface, surface water and groundwater flow domains are simulated as separate processes, compiled into individual dynamic link libraries. The processes are linked by water flow terms, maintain |
https://en.wikipedia.org/wiki/Accelerator%20physicist | An accelerator physicist is a scientist who contributes to the field of Accelerator physics, involving the fundamental physical mechanisms underlying beams of charged particles accelerated to high energies and the structures and materials needed to do so. In addition to developing and applying such basic theoretical models, an accelerator physicist contributes to the design, operation and optimization of particle accelerators.
Significant accelerator physicists
John Cockcroft
Ernest Courant
Helen T. Edwards
Donald William Kerst
Ernest Lawrence
Carlo Rubbia
Ernest Rutherford
Andrew Sessler
Robert Van de Graaff
Simon van der Meer
Ernest Walton
Rolf Widerøe
See Also
Accelerator physics
List of particle accelerators |
https://en.wikipedia.org/wiki/DigitaOS | DigitaOS was a short-lived digital camera operating system created by Flashpoint Technology and used on various Kodak, Pentax, and HP cameras in the late 1990s. DigitaOS debuted with the Kodak DC220 on 20 May 1998, and was released on a total of 11 camera models before it was abandoned in 2001. DigitaOS was notable for its ability to run third-party software, a concept that was not again realized until the release of various Android-based digital cameras in the early 2010s.
DigitaOS applications were programmed either as JIT compiled scripts using "Digita Script", or AOT compiled programs written in C using an official SDK. The operating system abstracted away most camera functionality and hardware platform differences, allowing software to be compatible with most DigitaOS cameras. Additionally, DigitaOS handled the GUI presented to the user and basic camera functionality.
Because of its ability to run third-party software, several games were ported to it, the most notable being DOOM and MAME.
Cameras using DigitaOS
Kodak DC220
Kodak DC260
Kodak DC265
Kodak DC290
Minolta Dimage 1500 EX
Minolta 1500 3D
HP C500 Photosmart
HP C618 Photosmart
HP C912 Photosmart
PENTAX EI-200
PENTAX EI-2000 |
https://en.wikipedia.org/wiki/Sofya%20Raskhodnikova | Sofya Raskhodnikova (born 1976) is a Belarusian and American theoretical computer scientist. She is known for her research in sublinear-time algorithms, information privacy, property testing, and approximation algorithms, and was one of the first to study differentially private analysis of graphs. She is a professor of computer science at Boston University.
Education and career
Raskhodnikova completed her Ph.D. at the Massachusetts Institute of Technology in 2003. Her dissertation, Property Testing: Theory and Applications, was supervised by Michael Sipser.
After postdoctoral research at the Hebrew University of Jerusalem and the Weizmann Institute of Science, Raskhodnikova became a faculty member at Pennsylvania State University in 2007. She moved to Boston University in 2017.
Other activities
While a student at MIT, Raskhodnikova also competed in ballroom dancing.
She has been one of the organizers of TCS Women, a community for women in theoretical computer science. |
https://en.wikipedia.org/wiki/Ecosia | Ecosia is a search engine based in Berlin, Germany. Ecosia considers itself a social business, claiming to be CO2-negative, supporting full financial transparency, and protecting the privacy of its users.
Ecosia is B Lab certified, meeting its standards of accountability, sustainability, and performance. The company claims to have planted more than 181 million trees since its inception, and to be planting a tree every 1.3 seconds.
Search engine
At launch, the search engine provided a combination of search results from Yahoo! and technologies from Microsoft Bing and Wikipedia. Advertisements were delivered by Yahoo! as part of a revenue sharing agreement with the company.
Ecosia's search results have been provided by Bing since 2017. Advertisements provided by Microsoft Advertising appear alongside search results, and in 2022 Ecosia stated that it earns "a few cents" on every click of an ad, as well as a portion of the price of a purchase made through an affiliate link.
In 2018, Ecosia committed to becoming a privacy-friendly search engine. Searches are encrypted (presumably with standard HTTPS) and not stored permanently, nor is data sold to third-party advertisers. The company states in its privacy policy that it does not create personal profiles based on search history or use external tracking tools like Google Analytics.
Ecosia users conducted over 10,000 searches every minute.
Business model
Ecosia uses 80% of its profits (47.1% of its income) from advertising revenue to support tree-planting projects. In October 2018, founder Christian Kroll announced he had given some of his shares to the Purpose Foundation. As a result, Kroll and Ecosia co-owner Tim Schumacher gave up their right to sell Ecosia or take any profits out of the company.
In a May 2021 Handelsblatt article, example figures from March showed revenues of €1,969,440, while the largest expenditure was "trees" at €789,113, ahead of the second-largest expenditure, operating costs, at €543, |
https://en.wikipedia.org/wiki/Acutance | In photography, acutance describes a subjective perception of sharpness that is related to the edge contrast of an image. Acutance is related to the amplitude of the derivative of brightness with respect to space. Due to the nature of the human visual system, an image with higher acutance appears sharper even though an increase in acutance does not increase real resolution.
Historically, acutance was enhanced chemically during development of a negative (high acutance developers), or by optical means in printing (unsharp masking). In digital photography, onboard camera software and image postprocessing tools such as Photoshop or GIMP offer various sharpening facilities, the most widely used of which is known as "unsharp mask" because the algorithm is derived from the eponymous analog processing method.
In the example image, two light gray lines were drawn on a gray background. As the transition is instantaneous, the line is as sharp as can be represented at this resolution. Acutance in the left line was artificially increased by adding a one-pixel-wide darker border on the outside of the line and a one-pixel-wide brighter border on the inside of the line. The actual sharpness of the image is unchanged, but the apparent sharpness is increased because of the greater acutance.
Artificially increased acutance has drawbacks. In this somewhat overdone example most viewers will also be able to see the borders separately from the line, which create two halos around the line, one dark and one shimmering bright.
Tools
Several image processing techniques, such as unsharp masking, can increase the acutance in real images.
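A minimal sketch of the unsharp-mask idea mentioned above, assuming numpy and scipy are available; the sigma and amount values are illustrative, not canonical:

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def unsharp_mask(image, sigma=2.0, amount=1.0):
    """Sharpen by adding back the difference between the image and a blurred copy.

    No clipping is applied here so the halos stay visible; a real pipeline
    would clip the result back to the valid intensity range.
    """
    blurred = gaussian_filter(image.astype(float), sigma=sigma)
    return image + amount * (image - blurred)


# A vertical step edge: after sharpening, pixels just left of the edge dip
# below 0 (dark halo) and pixels just right of it overshoot 1 (bright halo).
img = np.zeros((64, 64))
img[:, 32:] = 1.0
print(unsharp_mask(img)[16, 30:34])
```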
Resampling
Low-pass filtering and resampling often cause overshoot, which increases acutance, but can also reduce absolute gradient, which reduces acutance. Filtering and resampling can also cause clipping and ringing artifacts. An example is bicubic interpolation, widely used in image processing for resizing images.
Definition
One definition of acu |
https://en.wikipedia.org/wiki/White%20Christmas%20%28song%29 | "White Christmas" is an Irving Berlin song reminiscing about an old-fashioned Christmas setting. The song was written by Berlin for the 1942 musical film Holiday Inn. The composition won the Academy Award for Best Original Song at the 15th Academy Awards. Bing Crosby's record topped the Billboard chart for 11 weeks in 1942 and returned to the number one position again in December of 1943 and 1944. His version would return to the top 40 a dozen times in subsequent years.
Since its release, "White Christmas" has been covered by many artists, the version sung by Bing Crosby being the world's best-selling single (in terms of sales of physical media) with estimated sales in excess of 50 million physical copies worldwide. When the figures for other versions of the song are added to Crosby's, sales of the song exceed 100 million.
History
Origin
Accounts vary as to when and where Berlin wrote the song. One story is that he wrote it in 1940, in warm La Quinta, California, while staying at the La Quinta Hotel, a frequent Hollywood retreat also favored by writer-director-producer Frank Capra, although the Arizona Biltmore also claims the song was written there. He often stayed up all night writing. One day he told his secretary, "I want you to take down a song I wrote over the weekend. Not only is it the best song I ever wrote, it's the best song anybody ever wrote."
Bing Crosby versions
The first public performance of the song was by Bing Crosby, on his NBC radio show The Kraft Music Hall on Christmas Day, 1941, a few weeks after the attack on Pearl Harbor. Crosby subsequently recorded the song with the John Scott Trotter Orchestra and the Ken Darby Singers at Radio Recorders for Decca Records in 18 minutes on May 29, 1942, and it was released on July 30 as part of an album of six 78-rpm discs from the musical film Holiday Inn. At first, Crosby did not see anything special about the song. He just said "I don't think we have any problems with that one, Irving."
The song |
https://en.wikipedia.org/wiki/Jes%C3%BAs%20Ildefonso%20D%C3%ADaz | Jesús Ildefonso Díaz is a Spanish mathematician who works in partial differential equations. He is a professor at Complutense University of Madrid (UCM) and a member of the Spanish Royal Academy of Sciences.
Biography
Díaz was born in Toledo, Spain on December 11, 1950. He graduated in mathematics from UCM in 1973, and obtained his PhD from the same university in 1976. His Ph.D. thesis advisors were Alberto Dou and Haïm Brezis.
Career
Díaz joined the faculty at UCM as an Associate Professor in Mathematical Analysis in 1978. He moved briefly to the University of Santander in 1980, before returning to UCM as a full professor in 1983. In 1988, he co-founded the journal Revista Matemática de la UCM and served on its editorial board from 1988 to 1995. He founded the Department of Applied Mathematics at the Facultad de Matemáticas of UCM in the early 1980s and led it for several years. In 2006, he founded the Instituto de Matemática Interdisciplinar (IMI), serving as Director from 2006 to 2008 and again from 2012 to 2016. He is an energetic teacher, having organized six summer courses at UCM, two of them with Jacques-Louis Lions.
Díaz has worked in many areas of applied mathematics, such as theoretical and applied aspects of nonlinear partial differential equations, fluid mechanics models, geophysical models, reaction-diffusion models, elasticity and homogenization models and control theory models, among others. He has also worked in areas closer to pure mathematics, such as nonlinear analysis, focusing on accretive operators, rearrangement and gradient estimates. Other activities include contributions in science history, science communication and scientific management. His mentors and influential colleagues include Philippe Benilan and Jacques Louis Lions.
As of July 2019, his research publications included more than 250 papers in research journals, 141 contributions published in proceedings of meetings, seven books, eight book chapters and 20 edited volumes. His popul |
https://en.wikipedia.org/wiki/Cultured%20meat | Cultured meat, also known as cultivated meat among other names, is a form of cellular agriculture where meat is produced by culturing animal cells in vitro. Cultured meat is produced using tissue engineering techniques pioneered in regenerative medicine. Jason Matheny popularized the concept in the early 2000s after he co-authored a paper on cultured meat production and created New Harvest, the world's first non-profit organization dedicated to in-vitro meat research. Cultured meat has the potential to address the environmental impact of meat production, animal welfare, food security and human health, in addition to its potential mitigation of climate change.
In 2013, Mark Post created a hamburger patty made from tissue grown outside of an animal. Since then, other cultured meat prototypes have gained media attention: SuperMeat opened a farm-to-fork restaurant, called "The Chicken", in Tel Aviv to test consumer reaction to its "Chicken" burger, while the "world's first commercial sale of cell-cultured meat" occurred in December 2020 at Singapore restaurant 1880, where cultured meat manufactured by United States firm Eat Just was sold.
While most efforts focus on common meats such as pork, beef, and chicken which constitute the bulk of consumption in developed countries, companies such as Orbillion Bio focused on high-end or unusual meats including elk, lamb, bison, and Wagyu beef. Avant Meats brought cultured grouper to market in 2021, while other companies have pursued different species of fish and other seafood.
The production process is constantly evolving, driven by companies and research institutions. The applications for cultured meat led to ethical, health, environmental, cultural, and economic discussions. Data published by the non-governmental organization Good Food Institute found that in 2021 cultivated meat companies attracted $140 million in Europe. Cultured meat is mass-produced in Israel. The first restaurant to serve cultured meat opened in Singap |
https://en.wikipedia.org/wiki/Issues%20in%20retirement%20security | Issues in retirement security refers to growing economic concerns and societal issues over the ability of individual workers and other individuals in society to have an economically secure retirement.
The main issues appear to arise from the general inability to maintain the economic life-cycle model, which anticipates that people make adequate savings during their working lives and eventually draw down these resources in retirement in order to maintain their existing consumption level.
Overview
The issues of economic security in retirement pertain to the following concerns:
whether individuals are saving enough.
whether existing government programs for retired individuals are sufficient to support the average person
the decline in the existence of company-funded pensions for their employees.
the question of whether workers are contributing enough to company retirement plans.
the problem that the majority of workers do not have a company retirement plan that they can contribute to.
Situation in USA
One of the biggest issues worldwide, and also in the USA, concerning retirement security is the inability to save adequately for retirement. The personal saving rate in the U.S. is currently about half the level it was 50 years ago. The rate has increased since the financial crisis in 2009, but still reaches only about 8% of net personal income. Approximately two thirds of Millennials do not save money for retirement at all, and half of American households with someone aged 55 years or more have the same problem. In addition, a significant demographic change is coming in the next 30 years, in which the number of people older than 65 relative to the working-age population will grow substantially. Longevity of the population in the United States is also emerging as a crucial factor for future retirement security: average life expectancy at the age of 65 has increased since 1940 by 6 years for males and 7 years for females, to 84 and 87 respectively in 2017. This upward trend |
https://en.wikipedia.org/wiki/David%20Silver%20%28computer%20scientist%29 | David Silver (born 1976) is a principal research scientist at Google DeepMind and a professor at University College London. He has led research on reinforcement learning with AlphaGo and AlphaZero, and was co-lead on AlphaStar.
Education
He studied at Christ's College, Cambridge, graduating in 1997 with the Addison-Wesley award, and having befriended Demis Hassabis whilst at Cambridge. Silver returned to academia in 2004 at the University of Alberta to study for a PhD on reinforcement learning, where he co-introduced the algorithms used in the first master-level 9×9 Go programs and graduated in 2009. His version of program MoGo (co-authored with Sylvain Gelly) was one of the strongest Go programs as of 2009.
Career and research
After graduating from university, Silver co-founded the video games company Elixir Studios, where he was CTO and lead programmer, receiving several awards for technology and innovation.
Silver was awarded a Royal Society University Research Fellowship in 2011, and subsequently became a lecturer at University College London. His lectures on Reinforcement Learning are available on YouTube. Silver consulted for Google DeepMind from its inception, joining full-time in 2013.
His recent work has focused on combining reinforcement learning with deep learning, including a program that learns to play Atari games directly from pixels. Silver led the AlphaGo project, culminating in the first program to defeat a top professional player in the full-size game of Go. AlphaGo subsequently received an honorary 9 Dan Professional Certification; and won the Cannes Lion award for innovation. He then led development of AlphaZero, which used the same AI to learn to play Go from scratch (learning only by playing itself and not from human games) before learning to play chess and shogi in the same way, to higher levels than any other computer program.
Silver is among the most published members of staff at Google DeepMind, with over 130,000 citations and has an h-ind |
https://en.wikipedia.org/wiki/Chemical%20shift%20index | The chemical shift index or CSI is a widely employed technique in protein nuclear magnetic resonance spectroscopy that can be used to display and identify the location (i.e. start and end) as well as the type of protein secondary structure (beta strands, helices and random coil regions) found in proteins using only backbone chemical shift data. The technique was invented by David S. Wishart in 1992 for analyzing 1Hα chemical shifts and then later extended by him in 1994 to incorporate 13C backbone shifts. The original CSI method makes use of the fact that 1Hα chemical shifts of amino acid residues in helices tend to be shifted upfield (i.e. towards the right side of an NMR spectrum) relative to their random coil values and downfield (i.e. towards the left side of an NMR spectrum) in beta strands. Similar kinds of upfield and downfield trends are also detectable in backbone 13C chemical shifts.
Implementation
The CSI is a graph-based technique that essentially employs an amino acid-specific digital filter to convert every assigned backbone chemical shift value into a simple three-state (-1, 0, +1) index. This approach generates a more easily understood and much more visually pleasing graph of protein chemical shift values. In particular, if the upfield 1Hα chemical shift (relative to an amino acid-specific random coil value) of a certain residue is > 0.1 ppm, then that amino acid residue is assigned a value of -1. Similarly, if the downfield 1Hα chemical shift of a certain amino acid residue is > 0.1 ppm then that residue is assigned a value of +1. If an amino acid residue's chemical shift is not shifted downfield or upfield by a sufficient amount (i.e. <0.1 ppm), it is given a value of 0. When this 3-state index is plotted as a bar graph over the full length of the protein sequence, simple inspection can allow one to identify beta strands (clusters of +1 values), alpha helices (clusters of -1 values), and random coil segments (clusters of 0 values). A list |
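A minimal Python sketch of the three-state digitisation described above; the random-coil reference shifts and the observed shifts below are illustrative placeholders, not the published values:

```python
# Illustrative random-coil Hα reference shifts (ppm); a real implementation
# would use the published residue-specific tables.
RANDOM_COIL_HA = {"A": 4.35, "G": 3.97, "V": 4.18}


def csi_index(residue, observed_ha, threshold=0.1):
    """Return -1 (upfield), +1 (downfield) or 0 for one residue's Hα shift."""
    delta = observed_ha - RANDOM_COIL_HA[residue]
    if delta <= -threshold:
        return -1   # upfield by > 0.1 ppm: consistent with helix
    if delta >= threshold:
        return +1   # downfield by > 0.1 ppm: consistent with beta strand
    return 0        # near the random-coil value


sequence = ["A", "A", "V", "G", "A"]
shifts = [4.20, 4.18, 4.60, 3.98, 4.34]   # assumed observed Hα shifts (ppm)
print([csi_index(r, s) for r, s in zip(sequence, shifts)])   # [-1, -1, 1, 0, 0]
```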
https://en.wikipedia.org/wiki/Studio%20transmitter%20link | A studio transmitter link (or STL) sends a radio station's or television station's audio and video from the broadcast studio or origination facility to a radio transmitter, television transmitter or uplink facility in another location. This is accomplished through the use of terrestrial microwave links or by using fiber optic or other telecommunication connections to the transmitter site.
This is often necessary because the best locations for an antenna are on top of a mountain, where a much shorter radio tower is required, but where locating a studio may be impractical. Even in flat regions, the center of the station's allowed coverage area may not be near the studio location or may lie within a populated area where a transmitter would be frowned upon by the community, so the antenna must be placed at a distance from the studio.
Depending on the locations that must be connected, a station may choose either a point to point (PTP) link on another special radio frequency, or a newer all-digital wired link via a dedicated data transmission circuit. Radio links can also be digital, or the older analog type, or a hybrid of the two. Even on older all-analog systems, multiple audio and data channels can be sent using subcarriers.
Stations that employ an STL usually also have a transmitter/studio link (TSL) to return telemetry information. Both the STL and TSL are considered broadcast auxiliary services (BAS).
Transmitter/studio link
The transmitter/studio link (or TSL) of a radio station or television station is a return link which sends telemetry data from the remotely located radio transmitter or television transmitter back to the studio for monitoring purposes. The TSL may return the same way as the STL, or it can be embedded in the station's regular broadcast signal as a subcarrier (for analog stations) or a separate data channel (for digital stations).
Analog or digital data such as transmitter power, temperature, VSWR, voltage, modulation level, and other |
https://en.wikipedia.org/wiki/Pissalat | Pissalat or pissala is a condiment originating from the Nice region of France. The name comes from peis salat in Niçard and means 'salted fish'. It is made from anchovy puree flavoured with cloves, thyme, bay leaf and black pepper mixed with olive oil. Pissalat is used for flavouring hors d'oeuvres, fish, cold meats and, especially, the local specialty, pissaladière.
Etymology
The word pissala (in Nissard) or pissalat (in French) is composed of the old Provençal word peis for 'fish', and sala, the past participle of salar, which corresponds to the French saler ('to salt'). Together, they describe "preserves of small crushed and salted fish" or, similarly, "a piquant sauce made from the maceration of salted fish".
History
Pissalat is similar to the siqqu from the Mesopotamian Culinary Treatise of the 2nd millennium BC (c. 1700 BC), and to garum (juice or sauce, in Latin, from Roman antiquity). Since the time of ancient Rome, garum has been produced (with many variants) throughout the Mediterranean basin. It is a sauce obtained by the maceration in salt of heads and intestines of mackerel, sardines, anchovies and aromatic plants. The sauce thus obtained, passed through a fine sieve, was recovered with a ladle, and was preserved in olive oil.
The manufacture of pissalat was a centuries-old local industry in the Nice-Côte d'Azur region, where the salting of sardines and anchovies employed roughly a dozen families at the beginning of the 19th century. The Niçois writer, Louis Roubaudi, notes in his 1843 book Nice and its surroundings: "The pissalat is very suitable for reviving the appetite when it is seasoned with olive oil, vinegar and salted olives."
The sauce largely disappeared from commerce during the Second World War, and exists today only in the form of local traditional artisanal and family production (it is often replaced by salted anchovies or anchovy purée), in particular for the preparation of pissaladière.
Recipe
The pissalat sauce is |
https://en.wikipedia.org/wiki/Composite%20image%20filter | A composite image filter is an electronic filter consisting of multiple image filter sections of two or more different types.
The image method of filter design determines the properties of filter sections by calculating the properties they would have in an infinite chain of identical sections. In this, the analysis parallels transmission line theory on which it is based. Filters designed by this method are called image parameter filters, or just image filters. An important parameter of image filters is their image impedance, the impedance of an infinite chain of identical sections.
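As an illustrative aside (not part of the source text), the image impedance of one standard section, the constant-k low-pass T-section with total series inductance L and shunt capacitance C, works out to
$$Z_{iT} = \sqrt{\frac{L}{C}}\,\sqrt{1 - \left(\frac{\omega}{\omega_c}\right)^{2}}, \qquad \omega_c = \frac{2}{\sqrt{LC}},$$
so the section presents a nearly constant resistive impedance $R_0 = \sqrt{L/C}$ well below cut-off but an increasingly frequency-dependent one as cut-off is approached; composite designs address this by terminating the ladder with sections (such as m-derived half sections) whose image impedance stays closer to $R_0$ over the passband.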
The basic sections are arranged into a ladder network of several sections; the number of sections required is determined mostly by the amount of stopband rejection needed. In its simplest form, the filter can consist entirely of identical sections. However, it is more usual to use a composite filter of two or three different types of section, each type chosen to improve the parameter it best addresses. The most frequently considered parameters are stopband rejection, steepness of the filter skirt (transition band), and impedance matching to the filter terminations.
Image filters are linear filters and are invariably also passive in implementation.
History
The image method of designing filters originated at AT&T, who were interested in developing filtering that could be used with the multiplexing of many telephone channels onto a single cable. The researchers involved in this work and their contributions are briefly listed below:
John Carson provided the mathematical underpinning to the theory. He invented single-sideband modulation for the purpose of multiplexing telephone channels. It was the need to recover these signals that gave rise to the need for advanced filtering techniques. He also pioneered the use of operational calculus (what has now become the theory of Laplace transforms in its more formal mathematical guise) to analyse these signals.
George Campbell worked o |
https://en.wikipedia.org/wiki/Young%27s%20convolution%20inequality | In mathematics, Young's convolution inequality is a mathematical inequality about the convolution of two functions, named after William Henry Young.
Statement
Euclidean space
In real analysis, the following result is called Young's convolution inequality:
Suppose $f$ is in the Lebesgue space $L^p(\mathbb{R}^d)$ and $g$ is in $L^q(\mathbb{R}^d)$ and
$$\frac{1}{p} + \frac{1}{q} = \frac{1}{r} + 1$$
with $1 \leq p, q, r \leq \infty.$ Then
$$\|f * g\|_r \leq \|f\|_p \, \|g\|_q.$$
Here the star denotes convolution, $L^p$ is Lebesgue space, and
$$\|f\|_p = \Bigl(\int_{\mathbb{R}^d} |f(x)|^p \, dx\Bigr)^{1/p}$$
denotes the usual $L^p$ norm.
Equivalently, if $p, q, r \geq 1$ and $\frac{1}{p} + \frac{1}{q} + \frac{1}{r} = 2,$ then
$$\left|\int_{\mathbb{R}^d} \int_{\mathbb{R}^d} f(x)\, g(x - y)\, h(y) \, dx \, dy\right| \leq \|f\|_p \, \|g\|_q \, \|h\|_r.$$
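As an informal illustration (not from the article), the same inequality with constant 1 also holds for discrete convolution of sequences on $\mathbb{Z}$ with counting measure, and it can be spot-checked numerically; the snippet below is a hypothetical sketch assuming NumPy.

```python
import numpy as np

# Spot-check ||f*g||_r <= ||f||_p * ||g||_q for discrete convolution on Z
# (counting measure), a special case of Young's convolution inequality.
rng = np.random.default_rng(0)

def lp_norm(x, p):
    return float(np.sum(np.abs(x) ** p) ** (1.0 / p))

p = q = 1.5
r = 1.0 / (1.0 / p + 1.0 / q - 1.0)  # enforces 1/p + 1/q = 1/r + 1, here r = 3

for _ in range(1000):
    f = rng.standard_normal(rng.integers(1, 40))
    g = rng.standard_normal(rng.integers(1, 40))
    assert lp_norm(np.convolve(f, g), r) <= lp_norm(f, p) * lp_norm(g, q) + 1e-9
```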
Generalizations
Young's convolution inequality has a natural generalization in which we replace $\mathbb{R}^d$ by a unimodular group $G.$ If we let $\mu$ be a bi-invariant Haar measure on $G$ and we let $f, g : G \to \mathbb{R}$ or $\mathbb{C}$ be integrable functions, then we define $f * g$ by
$$(f * g)(x) = \int_G f(y)\, g\bigl(y^{-1} x\bigr) \, d\mu(y).$$
Then in this case, Young's inequality states that for $f \in L^p(G, \mu)$ and $g \in L^q(G, \mu)$ and $p, q, r \in [1, \infty]$ such that
$$\frac{1}{p} + \frac{1}{q} = \frac{1}{r} + 1$$
we have a bound
$$\|f * g\|_r \leq \|f\|_p \, \|g\|_q.$$
Equivalently, if $p, q, r \geq 1$ and $\frac{1}{p} + \frac{1}{q} + \frac{1}{r} = 2,$ then
$$\left|\int_G \int_G f(x)\, g\bigl(y^{-1} x\bigr)\, h(y) \, d\mu(x) \, d\mu(y)\right| \leq \|f\|_p \, \|g\|_q \, \|h\|_r.$$
Since $\mathbb{R}^d$ is in fact a locally compact abelian group (and therefore unimodular) with the Lebesgue measure the desired Haar measure, this is in fact a generalization.
This generalization may be refined. Let $G$ and $\mu$ be as before and assume $1 < p, q, r < \infty$ satisfy $\frac{1}{p} + \frac{1}{q} = \frac{1}{r} + 1.$ Then there exists a constant $c$ such that for any $f \in L^p(G, \mu)$ and any measurable function $g$ on $G$ that belongs to the weak $L^q$ space $L^{q,w}(G, \mu),$ which by definition means that the following supremum
$$\|g\|_{q,w} := \sup_{t > 0}\, t \,\mu\bigl(\{x : |g(x)| > t\}\bigr)^{1/q}$$
is finite, we have $f * g \in L^r(G, \mu)$ and
$$\|f * g\|_r \leq c \, \|f\|_p \, \|g\|_{q,w}.$$
Applications
An example application is that Young's inequality can be used to show that the heat semigroup is a contracting semigroup using the $L^2$ norm (that is, the Weierstrass transform does not enlarge the $L^2$ norm).
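In outline (a gloss on this application, not the article's wording): the heat semigroup acts by convolution with the Gaussian heat kernel $K_t(x) = (4\pi t)^{-d/2} e^{-|x|^2/(4t)},$ which satisfies $\|K_t\|_1 = 1,$ so Young's inequality with $p = 1$ and $q = r = 2$ gives
$$\bigl\|e^{t\Delta} f\bigr\|_2 = \|K_t * f\|_2 \leq \|K_t\|_1\, \|f\|_2 = \|f\|_2.$$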
Proof
Proof by Hölder's inequality
Young's inequality has an elementary proof with the non-optimal constant 1.
We assume that the functions $f, g, h : G \to [0, \infty)$ are nonnegative and integrable, where $G$ is a unimodular group endowed with a bi-invariant Haar measure $\mu.$ We use the fact that $\int_G \varphi(y)\, d\mu(y) = \int_G \varphi\bigl(y^{-1}\bigr)\, d\mu(y)$ for any measurable $\varphi \geq 0,$ since the Haar measure of a unimodular group is preserved under inversion.
Since $p\bigl(2 - \tfrac{1}{q} - \tfrac{1}{r}\bigr) = q\bigl(2 - \tfrac{1}{p} - \tfrac{1}{r}\bigr) = r\bigl(2 - \tfrac{1}{p} - \tfrac{1}{q}\bigr) = 1$ when $\tfrac{1}{p} + \tfrac{1}{q} + \tfrac{1}{r} = 2,$
$$f(x)\, g\bigl(y^{-1}x\bigr)\, h(y) = \bigl(f(x)^p\, g(y^{-1}x)^q\bigr)^{1 - \frac{1}{r}} \bigl(f(x)^p\, h(y)^r\bigr)^{1 - \frac{1}{q}} \bigl(g(y^{-1}x)^q\, h(y)^r\bigr)^{1 - \frac{1}{p}}.$$
By the Hölder inequality for three functions we deduce that
$$\int_G \int_G f(x)\, g\bigl(y^{-1}x\bigr)\, h(y)\, d\mu(x)\, d\mu(y) \leq \left(\int_G \int_G f(x)^p\, g\bigl(y^{-1}x\bigr)^q\, d\mu(x)\, d\mu(y)\right)^{1 - \frac{1}{r}} \left(\int_G \int_G f(x)^p\, h(y)^r\, d\mu(x)\, d\mu(y)\right)^{1 - \frac{1}{q}} \left(\int_G \int_G g\bigl(y^{-1}x\bigr)^q\, h(y)^r\, d\mu(x)\, d\mu(y)\right)^{1 - \frac{1}{p}}.$$
The conclusion follows then by left-invariance of the Haar measure, the fact that integrals are preserved by inversion of the domain, and by Fubini's theorem.
Proof by interpolation
Young's i |
https://en.wikipedia.org/wiki/Daniel%20Jackson%20%28computer%20scientist%29 | Daniel Jackson (born 1963) is a professor of Computer Science at the Massachusetts Institute of Technology (MIT). He is the principal designer of the Alloy modelling language, and author of the book Software Abstractions: Logic, Language, and Analysis.
Biography
Jackson was born in London, England, in 1963.
He studied physics at the University of Oxford, receiving an MA in 1984. After completing his MA, Jackson worked for two years as a software engineer at Logica UK Ltd. He then returned to academia to study computer science at MIT, where he received an SM in 1988, and a PhD in 1992. Following the completion of his doctorate Jackson took up a position as an Assistant Professor of Computer Science at Carnegie Mellon University, which he held until 1997. He has been on the faculty of the Department of Electrical Engineering and Computer Science at MIT since 1997.
In 2017 he became a Fellow of the Association for Computing Machinery.
Jackson is also a photographer, and has an interest in the straight photography style. The MIT Museum commissioned a series of photographs of MIT laboratories from him, displayed from May to December 2012, to accompany an exhibit of images by Berenice Abbott.
Jackson is the son of software engineering researcher Michael A. Jackson, developer of Jackson Structured Programming (JSP), Jackson System Development (JSD), and the Problem Frames Approach.
Research
Jackson's research is broadly concerned with improving the dependability of software. He is a proponent of lightweight formal methods. Jackson and his students developed the Alloy language and its associated Alloy Analyzer analysis tool to provide support for lightweight specification and modelling efforts.
Between 2004 and 2007, Jackson chaired a multi-year United States National Research Council study on dependable systems.
Selected publications |
https://en.wikipedia.org/wiki/Bisection%20bandwidth | In computer networking, the bisection bandwidth of a network topology is the bandwidth available between the two partitions when the network is bisected into two equal-sized partitions. The bisection is chosen so that the bandwidth between the two partitions is minimal. Bisection bandwidth gives the true bandwidth available in the entire system and accounts for the bottleneck bandwidth of the whole network, so it characterizes the network's bandwidth better than other single metrics.
Bisection bandwidth calculations
For a linear array with n nodes, only one link needs to be broken to bisect the network into two partitions, so the bisection bandwidth is the bandwidth of one link.
For a ring topology with n nodes, two links must be broken to bisect the network, so the bisection bandwidth is the bandwidth of two links.
A tree topology with n nodes can be bisected at the root by breaking one link, so the bisection bandwidth is the bandwidth of one link.
For a two-dimensional mesh topology with n nodes, √n links must be broken to bisect the network, so the bisection bandwidth is the bandwidth of √n links.
For Hyper-cube topology with n nodes, n/2 links should be broken to bisect the network, so bisection bandwidth is bandwidth of n/2 links.
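The per-topology counts above can be collected in a short helper; the sketch below is hypothetical (the function names are my own) and assumes a two-dimensional √n × √n mesh and a uniform per-link bandwidth.

```python
import math

def bisection_links(topology: str, n: int) -> int:
    """Links cut by a minimal bisection, following the counts given above."""
    if topology == "linear":
        return 1
    if topology == "ring":
        return 2
    if topology == "tree":
        return 1                  # bisected at the root
    if topology == "mesh":
        return math.isqrt(n)      # assumes a 2-D sqrt(n) x sqrt(n) mesh
    if topology == "hypercube":
        return n // 2
    raise ValueError(f"unknown topology: {topology}")

def bisection_bandwidth(topology: str, n: int, link_bw: float) -> float:
    """Bisection bandwidth = (links cut) x (bandwidth of a single link)."""
    return bisection_links(topology, n) * link_bw

# Example: a 64-node hypercube with 10 Gbit/s links -> 32 * 10 = 320 Gbit/s.
print(bisection_bandwidth("hypercube", 64, 10.0))
```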
Significance of bisection bandwidth
Theoretical support for the importance of this measure of network performance was developed in the PhD research of Clark Thomborson (formerly Clark Thompson). Thomborson proved that important algorithms for sorting, Fast Fourier transformation, and matrix-matrix multiplication become communication-limited—as opposed to CPU-limited or memory-limited—on computers with insufficient bisection bandwidth. F. Thomson Leighton's PhD research tightened Thomborson's loose bound on the bisection bandwidth of a computationally-important variant of the De Bruijn graph known as the shuffle-exchange network. Based on Bill Dally's analysis of latency, average-case throughput, and h |