https://en.wikipedia.org/wiki/Arborio%20rice
|
Arborio rice is an Italian short-grain rice. It is named after the town of Arborio, in the Po Valley, which is situated in the region of Piedmont in Italy. When cooked, the rounded grains are firm, creamy and chewy compared to other varieties of rice, due to their higher amylopectin starch content. It has a starchy taste and blends well with other flavours. Arborio rice is often used to make risotto; other suitable varieties include Carnaroli, Maratelli, Baldo, and Vialone Nano. Arborio rice is also usually used for rice pudding.
Arborio is a cultivar of the Japonica group of varieties of Oryza sativa.
See also
Italian cuisine
Bomba rice
|
https://en.wikipedia.org/wiki/Radeon%20X1000%20series
|
The R520 (codenamed Fudo) is a graphics processing unit (GPU) developed by ATI Technologies and produced by TSMC. It was the first GPU produced using a 90 nm photolithography process.
The R520 is the foundation for a line of DirectX 9.0c and OpenGL 2.0 3D accelerator X1000 video cards. It is ATI's first major architectural overhaul since the R300 and is highly optimized for Shader Model 3.0. The Radeon X1000 series using the core was introduced on October 5, 2005, and competed primarily against Nvidia's GeForce 7000 series. ATI released the successor to the R500 series with the R600 series on May 14, 2007.
ATI does not provide official support for any X1000 series cards under Windows 8 or Windows 10; the last AMD Catalyst driver for this generation, version 10.2 from 2010, supports releases up to Windows 7. AMD stopped providing Windows 7 drivers for this series in 2015.
A series of open source Radeon drivers are available when using a Linux distribution.
The same GPUs are also found in some AMD FireMV products targeting multi-monitor set-ups.
Delay during the development
The Radeon X1800 video cards that included an R520 were released several months late because ATI engineers discovered a bug in the GPU at a very late stage of development. This bug, caused by a faulty third-party 90 nm chip design library, greatly hampered clock speed ramping, so the chip had to be "respun" for another revision (a new GDSII had to be sent to TSMC). The problem was almost random in how it affected the prototype chips, making it difficult to identify.
Architecture
The R520 architecture is referred to by ATI as an "Ultra Threaded Dispatch Processor", which refers to ATI's plan to boost the efficiency of their GPU, instead of going with a brute force increase in the number of processing units. A central pixel shader "dispatch unit" breaks shaders down into threads (batches) of 16 pixels (4×4) and can track and distribute up to 128 threads per pixel "quad" (4 pipelines each). When a sh
|
https://en.wikipedia.org/wiki/Subderivative
|
In mathematics, the subderivative, subgradient, and subdifferential generalize the derivative to convex functions which are not necessarily differentiable. Subderivatives arise in convex analysis, the study of convex functions, often in connection to convex optimization.
Let f : I → ℝ be a real-valued convex function defined on an open interval I of the real line. Such a function need not be differentiable at all points: for example, the absolute value function f(x) = |x| is non-differentiable at x = 0. However, for any x₀ in the domain of the function one can draw a line which goes through the point (x₀, f(x₀)) and which everywhere either touches or lies below the graph of f. The slope of such a line is called a subderivative.
Definition
Rigorously, a subderivative of a convex function f : I → ℝ at a point x₀ in the open interval I is a real number c such that

f(x) − f(x₀) ≥ c(x − x₀)

for all x ∈ I. By the converse of the mean value theorem, the set of subderivatives at x₀ for a convex function is a nonempty closed interval [a, b], where a and b are the one-sided limits

a = lim(x→x₀⁻) (f(x) − f(x₀))/(x − x₀),   b = lim(x→x₀⁺) (f(x) − f(x₀))/(x − x₀).

The set [a, b] of all subderivatives is called the subdifferential of the function f at x₀, denoted by ∂f(x₀). If f is convex, then its subdifferential at any point is non-empty. Moreover, if the subdifferential at x₀ contains exactly one subderivative, then ∂f(x₀) = {f′(x₀)} and f is differentiable at x₀.
Example
Consider the function f(x) = |x|, which is convex. The subdifferential at the origin is the interval [−1, 1]. The subdifferential at any point x₀ < 0 is the singleton set {−1}, while the subdifferential at any point x₀ > 0 is the singleton set {1}. This is similar to the sign function, but the subdifferential is not single-valued at 0, instead including all possible subderivatives there.
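The interval [−1, 1] in this example can be checked directly against the defining inequality; a short worked verification:

```latex
% Subdifferential of f(x) = |x| at x_0 = 0.
% The defining inequality f(x) - f(0) \ge c(x - 0) reads |x| \ge c\,x for all x.
\[
  x > 0:\quad c \le \tfrac{|x|}{x} = 1, \qquad
  x < 0:\quad c \ge \tfrac{|x|}{x} = -1,
\]
% so exactly the slopes c \in [-1, 1] satisfy the inequality, i.e.
\[
  \partial f(0) = [-1, 1].
\]
```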
Properties
A convex function f : I → ℝ is differentiable at x₀ if and only if the subdifferential there is a singleton set, namely {f′(x₀)}.
A point x₀ is a global minimum of a convex function f if and only if zero is contained in the subdifferential ∂f(x₀). For instance, in the figure abo
|
https://en.wikipedia.org/wiki/Electrical%20load
|
An electrical load is an electrical component or portion of a circuit that consumes (active) electric power, such as electrical appliances and lights inside the home. The term may also refer to the power consumed by a circuit. This is opposed to a power supply source, such as a battery or generator, which provides power.
The term is used more broadly in electronics for a device connected to a signal source, whether or not it consumes power. If an electric circuit has an output port, a pair of terminals that produces an electrical signal, the circuit connected to this terminal (or its input impedance) is the load. For example, if a CD player is connected to an amplifier, the CD player is the source, and the amplifier is the load.
Load affects the performance of circuits with respect to output voltages or currents, such as in sensors, voltage sources, and amplifiers. Mains power outlets provide an easy example: they supply power at constant voltage, with electrical appliances connected to the power circuit collectively making up the load. When a high-power appliance switches on, it dramatically reduces the load impedance.
The voltages will drop if the load impedance is not much higher than the power supply impedance. Therefore, switching on a heating appliance in a domestic environment may cause incandescent lights to dim noticeably.
A more technical approach
When discussing the effect of load on a circuit, it is helpful to disregard the circuit's actual design and consider only the Thévenin equivalent. (The Norton equivalent could be used instead, with the same results.) The Thévenin equivalent is an ideal voltage source VS in series with a source impedance RS.
With no load (open-circuited terminals), all of VS falls across the output; the output voltage is VS. However, the circuit will behave differently if a load is added. Therefore, we would like to ignore the details of the load circuit, as we did for the power supply, and represent it as simply as possible. For example, if we use an input resist
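A minimal numeric sketch of this loading behavior, treating the load as a resistance across the Thévenin source (the source values below are hypothetical, chosen only to show the voltage droop):

```python
# Output voltage of a Thevenin source V_th with internal resistance R_th
# when loaded by a resistance R_load: a simple voltage divider.
def output_voltage(v_th: float, r_th: float, r_load: float) -> float:
    return v_th * r_load / (r_th + r_load)

V_TH, R_TH = 230.0, 0.5  # hypothetical mains-like source

print(output_voltage(V_TH, R_TH, 100.0))  # light load: ~228.9 V
print(output_voltage(V_TH, R_TH, 2.0))    # heavy load: 184.0 V, a visible droop
```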
|
https://en.wikipedia.org/wiki/Features%20new%20to%20Windows%20XP
|
As the next version of Windows NT after Windows 2000, as well as the successor to Windows Me, Windows XP introduced many new features, but it also removed some others.
User interface and appearance
Graphics
With Windows XP, the C++-based, software-only GDI+ subsystem was introduced to replace certain GDI functions. GDI+ adds anti-aliased 2D graphics, textures, floating-point coordinates, gradient shading, more complex path management, bicubic filtering, intrinsic support for modern graphics-file formats like JPEG and PNG, and support for composition of affine transformations in the 2D view pipeline. GDI+ uses RGBA values to represent color. Use of these features is apparent in Windows XP's user interface (transparent desktop icon labels, drop shadows for icon labels on the desktop, shadows under menus, the translucent blue selection rectangle in Windows Explorer, sliding task panes and taskbar buttons) and in several of its applications, such as Microsoft Paint, Windows Picture and Fax Viewer, the Photo Printing Wizard, and the My Pictures Slideshow screensaver. The presence of GDI+ in the basic graphics layer greatly simplifies implementations of vector-graphics systems such as Flash or SVG. The GDI+ dynamic library can be shipped with an application and used under older versions of Windows. The total number of GDI handles per session is also raised in Windows XP from 16,384 to 65,536 (configurable through the registry).
Windows XP shipped with DirectX 8.1, which brought major new features to DirectX Graphics as well as to DirectX Audio (both DirectSound and DirectMusic), DirectPlay, DirectInput and DirectShow. Direct3D introduced programmability in the form of vertex and pixel shaders, enabling developers to write code without worrying about superfluous hardware state, along with improvements to fog, bump mapping and texture mapping. DirectX 9 was released in 2003 and brought major revisions to Direct3D, DirectSound, DirectMusic and DirectShow. Direct3D 9 added a new version of the High-
|
https://en.wikipedia.org/wiki/International%20Colloquium%20on%20Group%20Theoretical%20Methods%20in%20Physics
|
The International Colloquium on Group Theoretical Methods in Physics (ICGTMP) is an academic conference devoted to applications of group theory to physics. It was founded in 1972 by Henri Bacry and Aloysio Janner. It hosts a colloquium every two years. The ICGTMP is led by a Standing Committee, which helps select winners for the three major awards presented at the conference: the Wigner Medal (1978–2018), the Hermann Weyl Prize (since 2002) and the Weyl-Wigner Award (since 2022).
Wigner Medal
The Wigner Medal was an award designed "to recognize outstanding contributions to the understanding of physics through Group Theory". It was administered by The Group Theory and Fundamental Physics Foundation, a publicly supported organization. The first award was given in 1978 to Eugene Wigner at the Integrative Conference on Group Theory and Mathematical Physics.
The collaboration between the Standing Committee of the ICGTMP and the Foundation ended in 2020. In 2023 the Foundation created a new process for awarding the Wigner Medal, which can now be granted in any field of theoretical physics. The new Wigner Medals for 2020 and 2022 were granted retrospectively in 2023; the first winners of the new prize were Yvette Kosmann-Schwarzbach and Daniel Greenberger.
The Standing Committee does not recognize the post-2018 Wigner Medals awarded by the Foundation as the continuation of the prize from 1978 through 2018.
Weyl-Wigner Award
In 2020–21, the ICGTMP Standing Committee created a new prize to replace the Wigner Medal, called the Weyl-Wigner Award. The purpose of the Weyl-Wigner Award is "to recognize outstanding contributions to the understanding of physics through group theory, continuing the tradition of The Wigner Medal that was awarded at the International Colloquium on Group Theoretical Methods in Physics from 1978 to 2018." The recipients of this prize are chosen by an international selection committee elected by the Standing Committee.
The first
|
https://en.wikipedia.org/wiki/Foton%20%28satellite%29
|
Foton (or Photon) is the project name of two series of Russian science satellite and reentry vehicle programs. Although uncrewed, the design was adapted from the crewed Vostok spacecraft capsule. The primary focus of the Foton project is materials science research, but some missions have also carried experiments for other fields of research including biology. The original Foton series included 12 launches from the Plesetsk Cosmodrome from 1985 to 1999.
The second series, under the name Foton-M, incorporates many design improvements over the original Foton, and is still in use. So far, there have been four launch attempts of the Foton-M. The first was in 2002 from the Plesetsk Cosmodrome, which ended in failure due to a problem in the launch vehicle. The last three were from the Baikonur Cosmodrome, in 2005, 2007, and 2014; all were successful. Both the Foton and Foton-M series used Soyuz-U (11A511U and 11A511U2) rockets as launch vehicles. Starting with the Foton-7 mission, the European Space Agency has been a partner in the Foton program.
Foton-M
Foton-M is a new generation of Russian robotic spacecraft for research conducted in the microgravity environment of Earth orbit. The Foton-M design is based on the design of the Foton, with several improvements including a new telemetry and telecommand unit for increased data flow rate, increased battery capacity, and a better thermal control system. It is produced by TsSKB-Progress in Samara.
The launch of Foton-M1 failed because of a malfunction of the Soyuz-U launcher. The second launch (of Foton-M2) was a success. Foton-M3 was launched on 14 September 2007, carried by a Soyuz-U rocket lifting off from the Baikonur Cosmodrome in Kazakhstan with Nadezhda, a cockroach that became the first Earth creature to produce offspring that had been conceived in space. It returned successfully to Earth on 26 September 2007, landing in Kazakhstan at 7:58 GMT.
Reentry
The Foton capsule has limited thruster capability. As such, t
|
https://en.wikipedia.org/wiki/Imiquimod
|
Imiquimod, sold under the brand name Aldara among others, is a medication that acts as an immune response modifier that is used to treat genital warts, superficial basal cell carcinoma, and actinic keratosis. Scientists at 3M's pharmaceuticals division discovered the drug and 3M obtained the first FDA approval in 1997. As of 2015, imiquimod is generic and is available worldwide under many brands.
Medical uses
Imiquimod is a patient-applied cream prescribed to treat genital warts, Bowen's disease (squamous cell carcinoma in situ), and, secondary to surgery, basal cell carcinoma, as well as actinic keratosis.
Imiquimod 5% cream is indicated for the topical treatment of:
external genital and perianal warts (condylomata acuminata) in adults;
small superficial basal-cell carcinomas (sBCCs) in adults;
clinically typical, non-hyperkeratotic, non-hypertrophic actinic keratoses (AKs) on the face or scalp in immunocompetent adults, when the size or number of lesions limits the efficacy and/or acceptability of cryotherapy and other topical treatment options are contraindicated or less appropriate.
Imiquimod 3.75% cream is indicated for the topical treatment of clinically typical, non-hyperkeratotic, non-hypertrophic, visible or palpable actinic keratosis of the full face or balding scalp in immunocompetent adults when other topical treatment options are contraindicated or less appropriate.
Side effects
Side effects include local inflammatory reactions, such as blisters, a burning sensation, skin redness, dry skin, itching, skin breakdown, skin crusting or scabbing, skin drainage, skin flaking or scaling, skin ulceration, sores, swelling, as well as systemic reactions, such as fever, "flu-like" symptoms, headache, and tiredness.
People who have had an organ transplant and are taking immune-suppressing drugs should not use imiquimod.
Mechanism of action
Imiquimod yields profound antitumoral activity by acting on several immunological levels synergistically. Imiquimod st
|
https://en.wikipedia.org/wiki/WSTM-TV
|
WSTM-TV (channel 3) is a television station in Syracuse, New York, United States, affiliated with NBC and The CW. It is owned by Sinclair Broadcast Group, which provides certain services to CBS affiliate WTVH (channel 5) through a local marketing agreement with Granite Broadcasting. Both stations share studios on James Street/NY 290 in the Near Northeast section of Syracuse, while WSTM-TV's transmitter is located in the town of Onondaga, New York.
History
The station began operations on February 15, 1950, on VHF channel 5 with the call sign WSYR-TV, moving to VHF channel 3 in 1953. It was owned by Advance Publications (the Newhouse family's company) along with the Syracuse Post-Standard, Syracuse Herald-Journal, and WSYR radio (AM 570 and FM 94.5, now WYYY). It was Syracuse's second television station, signing on a year and three months after WHEN-TV (now WTVH). It originally had facilities at the Kemper Building in Downtown Syracuse. In 1958, WSYR-AM-FM-TV moved to new studios on James Street.
Unlike most NBC affiliates in two-station markets, WSYR-TV did not take a secondary ABC or DuMont affiliation. WSYR-TV doubled as the NBC affiliate for Binghamton until WINR-TV (now WICZ-TV) signed on in 1957. The station also operated a satellite station in Elmira until 1980; that station, first known as WSYE-TV and now WETM-TV, is now owned by Nexstar Broadcasting Group and fed via the centralcasting facilities of a Syracuse cross-town rival, which ironically now holds the WSYR-TV call letters. It remains affiliated with NBC.
The Newhouse family largely exited broadcasting in 1980. The WSYR cluster had been grandfathered after the Federal Communications Commission (FCC) banned common ownership of newspaper and broadcasting outlets, but lost this protection when Advance dismantled its broadcasting division. Channel 3 was sold to the Times Mirror Company, who—so as to comply with an FCC rule in effect at the time that prohibited TV and radio stations in the same market, but wi
|
https://en.wikipedia.org/wiki/Nestedness
|
Nestedness is a measure of structure in an ecological system, usually applied to species-sites systems (describing the distribution of species across locations), or species-species interaction networks (describing the interactions between species, usually as bipartite networks such as hosts-parasites, plants-pollinators, etc.).
A system (usually represented as a matrix) is said to be nested when the elements that have a few items in them (locations with few species, species with few interactions) have a subset of the items of elements with more items. Imagine a series of islands that are ordered by their distance from the mainland. If the mainland has all species, the first island has a subset of mainland's species, the second island has a subset of the first island's species, and so forth, then this system is perfectly nested.
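A hedged sketch of that island example as a binary presence/absence matrix; is_perfectly_nested is a hypothetical helper written for illustration, not a function from any nestedness package:

```python
# Perfect nestedness check for a binary presence/absence matrix:
# each poorer site's species set must be a subset of every richer site's set.
def is_perfectly_nested(matrix):
    # Convert each row (site) to the set of species indices present there.
    sets = [{j for j, v in enumerate(row) if v} for row in matrix]
    sets.sort(key=len, reverse=True)  # richest site first
    return all(sets[k + 1] <= sets[k] for k in range(len(sets) - 1))

islands = [
    [1, 1, 1, 1],  # mainland: all four species
    [1, 1, 1, 0],  # first island: subset of the mainland
    [1, 1, 0, 0],  # second island: subset of the first
    [1, 0, 0, 0],  # third island: subset of the second
]
print(is_perfectly_nested(islands))  # True
```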
Measures of nestedness
One measure of nestedness is a system's 'temperature', introduced by Atmar and Patterson in 1993. This measures the order in which species extinctions would occur in the system (or, from the other side, the order in which species would colonize it). The 'colder' the system, the more fixed the order of extinction; in a warmer system, extinctions take a more random order. Temperatures range from 0° (coldest, absolutely fixed) to 100° (absolutely random).
For various reasons, the Nestedness Temperature Calculator is not mathematically satisfying (it has no unique solution and is not conservative enough). A software package (BINMATNEST) that corrects these deficits is available from the authors on request and from the Journal of Biogeography. In addition, ANINHADO handles large matrices and the processing of large numbers of randomized matrices, and it implements several null models to estimate the significance of nestedness.
Bastolla et al. introduced a simple measure of nestedness based on the number of common neighbours for each pair of nodes. They argue that this can help reduce the effective competition betwe
|
https://en.wikipedia.org/wiki/Namdapha%20flying%20squirrel
|
The Namdapha flying squirrel (Biswamoyopterus biswasi) is an arboreal, nocturnal flying squirrel endemic to Arunachal Pradesh in northeast India, where it is known from a single specimen collected in Namdapha National Park in 1981. No population estimate is available for B. biswasi, but the known habitat is tall Mesua ferrea jungles, often on hill slopes in the catchment area of Dihing River (particularly on the western slope of Patkai range) in northeastern India.
It was the sole member of the genus Biswamoyopterus until the description of the Laotian giant flying squirrel (Biswamoyopterus laoensis) in 2013. In 2018, Quan Li of the Kunming Institute of Zoology at the Chinese Academy of Sciences discovered a third squirrel in the genus while studying specimens in the institute's collection; it was named the Mount Gaoligong flying squirrel (Biswamoyopterus gaoligongensis) after the region in which it was discovered.
Description
Biswamoyopterus biswasi has reddish, grizzled fur with white above. Its crown is pale grey, its patagium is orangish and its underparts are white.
The cheek teeth of B. biswasi are simple, and its incisors are unpigmented. The auditory bullae contain multiple septa, which are sometimes honeycomb-shaped with 10 to 12 cells.
It measures from head-to-vent and has a long tail. The hindfoot is and the ear is .
The scientific name commemorates Biswamoy Biswas, director of the Zoological Survey of India.
Status
The Namdapha flying squirrel is listed as critically endangered by the IUCN. It is known from a single specimen collected in 1981 in Namdapha National Park. Its range may be restricted to a single valley, and it is threatened by poaching of animals for food within the park, and possibly by habitat destruction. It is among the 25 "most wanted lost" species that are the focus of Global Wildlife Conservation's "Search for Lost Species" initiative.
There are several later reports of sightings by tourists and local rese
|
https://en.wikipedia.org/wiki/Medical%20genetics
|
Medical genetics is the branch of medicine that involves the diagnosis and management of hereditary disorders. Medical genetics differs from human genetics in that human genetics is a field of scientific research that may or may not apply to medicine, while medical genetics refers to the application of genetics to medical care. For example, research on the causes and inheritance of genetic disorders would be considered within both human genetics and medical genetics, while the diagnosis, management, and counselling of people with genetic disorders would be considered part of medical genetics.
In contrast, the study of typically non-medical phenotypes such as the genetics of eye color would be considered part of human genetics, but not necessarily relevant to medical genetics (except in situations such as albinism). Genetic medicine is a newer term for medical genetics and incorporates areas such as gene therapy, personalized medicine, and the rapidly emerging new medical specialty, predictive medicine.
Scope
Medical genetics encompasses many different areas, including the clinical practice of physicians, genetic counselors, and nutritionists, clinical diagnostic laboratory activities, and research into the causes and inheritance of genetic disorders. Examples of conditions that fall within the scope of medical genetics include birth defects and dysmorphology, intellectual disabilities, autism, mitochondrial disorders, skeletal dysplasia, connective tissue disorders, cancer genetics, and prenatal diagnosis. Medical genetics is increasingly becoming relevant to many common diseases. Overlaps with other medical specialties are beginning to emerge, as recent advances in genetics are revealing etiologies for morphologic, endocrine, cardiovascular, pulmonary, ophthalmologic, renal, psychiatric, and dermatologic conditions. The medical genetics community is increasingly involved with individuals who have undertaken elective genetic and genomic testing.
Subspecialties
In som
|
https://en.wikipedia.org/wiki/Defence%20Science%20and%20Technology%20Laboratory
|
The Defence Science and Technology Laboratory (Dstl) is an executive agency of the Ministry of Defence of the United Kingdom. Its stated purpose is "to maximise the impact of science and technology for the defence and security of the UK". The agency is headed by Paul Hollinshead as its Chief Executive, with the board being chaired by Adrian Belton. Ministerial responsibility lies with the Minister for Defence Procurement.
History
Dstl was formed from the July 2001 split of the Defence Evaluation and Research Agency (DERA). Dstl was established to carry out and retain the science and technology work that is best done within government, while work that could be done by industry (forming the majority of DERA's activities) was transferred to Qinetiq, a government-owned company that was later floated on the stock exchange.
Dstl absorbed the Home Office's Centre for Applied Science and Technology (CAST) in April 2018, taking on CAST's role to apply science and technology to support the Home Office's operations and frontline delivery, provide evidence to support policy, and perform certain regulatory functions.
Dstl was a trading fund of the MOD from its formation until 2016, when it became an executive agency of the MOD.
Organisation
Most of Dstl's funding comes from the MOD, while a small portion comes from other government departments and commercial sources. In 2016/17, 91% of Dstl's £587m income came from the MOD.
In April 2015, Dstl completed a major reorganisation, merging twelve operating departments into five divisions. The motivation behind this change was to enable more coherent and productive delivery to customers and simplify access routes for suppliers.
Leadership
Martin Earwicker (2001–06): Chief Executive from its creation in 2001, until he left in 2006 for the Science Museum.
Frances Saunders (2006–11): took over as acting Chief Executive in May 2006 and was appointed as Chief Executive in August 2007. On 29 June 2011, Saunders announced to staff t
|
https://en.wikipedia.org/wiki/Cosmotron
|
The Cosmotron was a particle accelerator, specifically a proton synchrotron, at Brookhaven National Laboratory. Its construction was approved by the U.S. Atomic Energy Commission in 1948; the machine reached its full energy in 1953 and continued to run until 1966. It was dismantled in 1969.
It was the first particle accelerator to impart kinetic energy in the range of GeV to a single particle, accelerating protons to 3.3 GeV. It was also the first accelerator to allow the extraction of the particle beam for experiments located physically outside the accelerator. It was used to observe a number of mesons previously seen only in cosmic rays, and to make the first discoveries of heavy, unstable particles (called V particles at the time) leading to the experimental confirmation of the theory of associated production of strange particles. It was the first accelerator that was able to produce all positive and negative mesons known to exist in cosmic rays. Its discoveries include the first vector meson.
The name originally chosen for the synchrotron was Cosmitron (representing an ambition to produce cosmic rays), but it was changed to Cosmotron to sound like the cyclotron. The beam size of 64 × 15 cm and an energy goal of about 3 GeV determined the machine parameters. The synchrotron had a 75-foot (22.9 m) diameter. It consisted of 288 magnets, each weighing 6 tons and providing fields up to 1.5 T, forming four curved sections. The range of field change was kept within limits by first accelerating particles to an intermediate energy in another accelerator and then injecting them into the Cosmotron. The straight sections without magnets were worrisome because there was no focusing there, so the betatron oscillations would change suddenly and might swing wildly; but all these major problems were overcome.
|
https://en.wikipedia.org/wiki/Somali%20golden%20mole
|
The Somali golden mole (Calcochloris tytonis) is a golden mole endemic to Somalia. In 1964, Dr. Alberto Simonetta of the University of Florence discovered the mole's jaw and ear-bone fragments in a barn owl pellet in Jowhar, Somalia. The Somali golden mole differs from the other species in its family (Chrysochloridae) in the distinct shape of its jaw: although the length of the lower jaw fits within the size range of the skulls of Amblysomus leucorhinus and Amblysomus sclateri, the ascending parts of the jaw are much wider (by 2 mm) than in the species it most closely matches (Amblysomus leucorhinus).
|
https://en.wikipedia.org/wiki/Visagie%27s%20golden%20mole
|
Visagie's golden mole (Chrysochloris visagiei) is a small, insectivorous mammal of the family Chrysochloridae, the golden moles, endemic to South Africa.
|
https://en.wikipedia.org/wiki/V%20particle
|
In particle physics, V was a generic name for heavy, unstable subatomic particles that decay into a pair of particles, thereby producing a characteristic letter V in a bubble chamber or other particle detector. Such particles were first detected in cosmic ray interactions in the atmosphere in the late 1940s and were first produced using the Cosmotron particle accelerator at Brookhaven National Laboratory in the 1950s. Since all such particles have now been identified and given specific names, for instance Kaons or Sigma baryons, this term has fallen into disuse.
V0 is still used on occasion to refer generally to neutral particles that may confuse the B-tagging algorithms in a modern particle detector, as in Section 7 of one ATLAS conference note.
|
https://en.wikipedia.org/wiki/GoldWave
|
GoldWave is a commercial digital audio editing software product developed by GoldWave Inc, first released to the public in April 1993.
Goldwave product lines
GoldWave: audio editor for Microsoft Windows, iOS, and Android. The iOS version also runs on Mac OS 11 with an Apple M1-compatible processor.
GoldWave Infinity: web-browser-based audio editor that also supports Linux, Mac OS X, Android, and iOS.
Features
GoldWave has an array of features bundled which define the program. They include:
Real-time graphic visuals, such as bar, waveform, spectrogram, spectrum, and VU meter.
Basic and advanced effects and filters such as noise reduction, compressor/expander, volume shaping, volume matcher, pitch, reverb, resampling, and parametric EQ.
Effect previewing
Saving and restoring effect presets
DirectX Audio plug-in support
A variety of audio file formats are supported, including WAV, MP3, Windows Media Audio, Ogg, FLAC, AIFF, AU, Monkey's Audio, VOX, mat, snd, and voc
Batch processing and conversion support for converting a set of files to a different format and applying effects
Multiple undo levels
Edit multiple files at once
Support for editing large files
An option to use RAM for storage
GoldWave
Supported versions and compatibility
Windows
A shareware version predating the version 5 series is still available for download from the official website.
Versions up to 3.03 are 16-bit applications and cannot run on 64-bit versions of Windows.
All versions up to 4.26 can run on any 32-bit Windows operating system.
Starting with version 5, the minimum supported operating system changed to Windows ME; however, the requirements listed in the software package's HTML documentation were not updated.
Starting with version 5.03, the minimum hardware requirements increased to a 700 MHz Pentium III (500 MHz according to the FAQ) and DirectX 8, compared with the 300 MHz Pentium II and DirectX 5 required by previous versions.
Windows ME are supported up
|
https://en.wikipedia.org/wiki/Plate%20Boundary%20Observatory
|
The Plate Boundary Observatory (PBO) was the geodetic component of the EarthScope Facility. EarthScope was an earth science program that explored the 4-dimensional structure of the North American Continent. EarthScope (and PBO) was a 15-year project (2003-2018) funded by the National Science Foundation (NSF) in conjunction with NASA. PBO construction (an NSF MREFC) took place from October 2003 through September 2008. Phase 1 of operations and maintenance concluded in September 2013. Phase 2 of operations ended in September 2018, along with the end of the EarthScope project. In October 2018, PBO was assimilated into a broader Network of the Americas (NOTA), along with networks in Mexico (TLALOCNet) and the Caribbean (COCONet), as part of the NSF's Geodetic Facility for the Advancement of Geosciences (GAGE). GAGE is operated by UNAVCO.
PBO precisely measured Earth deformation resulting from the constant motion of the Pacific and North American tectonic plates in the western United States. These Earth movements can be very small and incremental and not felt by people, or they can be very large and sudden, such as those that occur during earthquakes and volcanic eruptions. The high-precision instrumentation of the PBO enabled detection of motions to a sub-centimeter level. PBO measured Earth deformation through a network of instrumentation including: high-precision Global Positioning System (GPS) and Global Navigation Satellite System (GNSS) receivers, strainmeters, seismometers, tiltmeters, and other geodetic instruments.
The PBO GPS network included 1100 stations extending from the Aleutian Islands south to Baja and eastward across the continental United States. During the construction phase, 891 permanent and continuously operating GPS stations were installed, and another 209 existing stations were integrated (PBO Nucleus stations) into the network. Geodetic imaging data was transmitted, often in realtime, from a wide network of GPS stations, augmented by seismomet
|
https://en.wikipedia.org/wiki/Generalized%20valence%20bond
|
The generalized valence bond (GVB) is a method in valence bond theory that uses flexible orbitals in the general way used by modern valence bond theory. The method was developed by the group of William A. Goddard, III around 1970.
Theory
The generalized Coulson–Fischer theory for the hydrogen molecule, discussed in Modern valence bond theory, is used to describe every electron pair in a molecule. The orbitals for each electron pair are expanded in terms of the full basis set and are non-orthogonal. Orbitals from different pairs are forced to be orthogonal (the strong orthogonality condition). This condition simplifies the calculation but can lead to some difficulties.
Calculations
GVB code in some programs, particularly GAMESS (US), can also be used to do a variety of restricted open-shell Hartree–Fock calculations, such as those with one or three electrons in two pi-electron molecular orbitals while retaining the degeneracy of the orbitals. This wave function is essentially a two-determinant function, rather than the one-determinant function of the restricted Hartree–Fock method.
|
https://en.wikipedia.org/wiki/Schlemm%27s%20canal
|
Schlemm's canal is a circular lymphatic-like vessel in the eye. It collects aqueous humor from the anterior chamber and delivers it into the episcleral blood vessels. Canaloplasty may be used to widen it.
Structure
Schlemm's canal is an endothelium-lined tube, resembling that of a lymphatic vessel. On the inside of the canal, nearest to the aqueous humor, it is covered and held open by the trabecular meshwork. This creates outflow resistance against the aqueous humor.
Development
While Schlemm's canal has generally been considered a vein or a scleral venous sinus, the canal is similar to the lymphatic vasculature. It is never filled with blood in physiological settings, as it does not receive arterial blood circulation. Schlemm's canal displays several features of lymphatic endothelium, including expression of PROX1, VEGFR3, CCL21 and FOXC2, but it lacks expression of LYVE1 and PDPN. It develops via a unique mechanism involving the transdifferentiation of venous endothelial cells in the eye into lymphatic-like endothelial cells.
This developmental morphogenesis of the canal is sensitive to the inhibition of lymphangiogenic growth factors. In adults, the administration of the lymphangiogenic growth factor VEGFC enlarged the Schlemm's canal, which was associated with a reduction in intraocular pressure.
In the combined absence of angiopoietin 1 and angiopoietin 2, Schlemm's canal and episcleral lymphatic vasculature completely failed to develop.
Function
Schlemm's canal collects aqueous humor from the anterior chamber. It delivers it into the episcleral blood vessels via aqueous veins.
Clinical significance
Canaloplasty
Canaloplasty is a procedure to restore the eye’s natural drainage system to provide sustained reduction of intraocular pressure. Microcatheters are used in a simple and minimally invasive procedure. A surgeon will create a tiny incision to gain access to Schlemm's canal. A microcatheter circumnavigates Schlemm's canal around the iris, enl
|
https://en.wikipedia.org/wiki/Samsung%20Display
|
Samsung Display (Hangul: 삼성디스플레이) is a company selling display devices based on OLED and QD-OLED technology. Its display markets include smartphones, TVs, laptops, computer monitors, smartwatches, VR, game consoles, and automotive applications.
Headquartered in South Korea, Samsung Display has production plants in China, Vietnam, and India, and operates sales offices in six countries. Samsung Display enabled the first mass-production of OLED and quantum dot display and aims to develop next-generation technology such as slidable, rollable and stretchable panels.
Samsung Display Corporation was established on April 1, 2012, when the LCD business was spun off from Samsung Electronics. The company launched on July 1 by merging Samsung Electronics' LCD business, S-LCD Corporation (a manufacturer of amorphous TFT LCD panels) and Samsung Mobile Display (Samsung's OLED arm). By combining the OLED and LCD businesses, Samsung Display became the world's largest display company.
History
January 1991: Samsung Electronics launched TFT-LCD business
February 1995: Operated South Korea's first domestic TFT-LCD line
November 2003: Made the world's first investment in 4.5-generation AMOLED mass production
July 2004: A joint venture S-LCD Corporation between Samsung Electronics and Sony Corporation was established.
April 2005: S-LCD begins shipment of seventh-generation TFT LCD panels for LCD TVs.
August 2007: S-LCD begins shipment of eighth-generation TFT LCD panels for LCD TVs.
October 2007: Became the first company in the world to mass-produce AMOLED displays
March 2009: Monthly AMOLED production exceeded one million units
December 2011: The company's partners announce that Samsung will acquire Sony's entire stake in the joint venture, making S-LCD Corporation a wholly owned subsidiary of Samsung Electronics.
July 1, 2012: S-LCD and Samsung Mobile Display merge to create Samsung Display
August 2014: Samsung Display mass-produced the world’s first curved edge display panel, featured in the Galax
|
https://en.wikipedia.org/wiki/Uranium%20dioxide
|
Uranium dioxide or uranium(IV) oxide (UO2), also known as urania or uranous oxide, is an oxide of uranium, and is a black, radioactive, crystalline powder that occurs naturally in the mineral uraninite. It is used in nuclear fuel rods in nuclear reactors. A mixture of uranium and plutonium dioxides is used as MOX fuel. Prior to 1960, it was used as a yellow and black colorant in ceramic glazes and glass.
Production
Uranium dioxide is produced by reducing uranium trioxide with hydrogen.
UO3 + H2 → UO2 + H2O at 700 °C (973 K)
This reaction plays an important part in the creation of nuclear fuel through nuclear reprocessing and uranium enrichment.
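A back-of-the-envelope sketch of the mass balance for this reduction, using standard atomic masses (the quantities are illustrative, not process figures):

```python
# Stoichiometry of UO3 + H2 -> UO2 + H2O, per kilogram of UO3 feed.
ATOMIC_MASS = {"U": 238.03, "O": 16.00, "H": 1.008}  # g/mol

m_UO3 = ATOMIC_MASS["U"] + 3 * ATOMIC_MASS["O"]  # 286.03 g/mol
m_UO2 = ATOMIC_MASS["U"] + 2 * ATOMIC_MASS["O"]  # 270.03 g/mol
m_H2 = 2 * ATOMIC_MASS["H"]                      # 2.016 g/mol

mol = 1000.0 / m_UO3  # moles of UO3 in 1 kg (reacts 1:1 with H2)
print(f"H2 consumed:  {mol * m_H2 / 1000:.4f} kg")   # ~0.0070 kg
print(f"UO2 produced: {mol * m_UO2 / 1000:.4f} kg")  # ~0.9441 kg
```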
Chemistry
Structure
The solid is isostructural with (has the same structure as) fluorite (calcium fluoride), where each U is surrounded by eight O nearest neighbors in a cubic arrangement. In addition, the dioxides of cerium, thorium, and the transuranic elements from neptunium through californium have the same structure. No other elemental dioxides have the fluorite structure. Upon melting, the measured average U–O coordination reduces from 8 in the crystalline solid (UO8 cubes) to 6.7±0.5 (at 3270 K) in the melt. Models consistent with these measurements show the melt to consist mainly of UO6 and UO7 polyhedral units, with the connections between polyhedra a mix of corner sharing and edge sharing.
Oxidation
Uranium dioxide is oxidized in contact with oxygen to the triuranium octaoxide.
3 UO2 + O2 → U3O8 at 700 °C (970 K)
The electrochemistry of uranium dioxide has been investigated in detail as the galvanic corrosion of uranium dioxide controls the rate at which used nuclear fuel dissolves. See spent nuclear fuel for further details. Water increases the oxidation rate of plutonium and uranium metals.
Carbonization
Uranium dioxide is carbonized in contact with carbon, forming uranium carbide and carbon monoxide.
UO2 + 4 C → UC2 + 2 CO
This process must be done under an inert gas as uranium car
|
https://en.wikipedia.org/wiki/Baker%20percentage
|
Baker's percentage is a notation method indicating the proportion of an ingredient relative to the flour used in a recipe when making breads, cakes, muffins, and other baked goods. It is also referred to as baker's math, and may be indicated by a phrase such as based on flour weight. It is sometimes called formula percentage, a phrase that refers to the sum of a set of baker's percentages. Baker's percentage expresses a ratio in percentages of each ingredient's weight to the total flour weight:

baker's percentage = (ingredient weight ÷ total flour weight) × 100%
For example, in a recipe that calls for 10 pounds of flour and 5 pounds of water, the corresponding baker's percentages are 100% for the flour and 50% for the water. Because these percentages are stated with respect to the weight of flour rather than with respect to the weight of all ingredients, the sum of these percentages always exceeds 100%.
Flour-based recipes are more precisely conceived as baker's percentages, and more accurately measured using weight instead of volume. The uncertainty in using volume measurements follows from the fact that flour settles in storage and therefore does not have a constant density.
Baker's percentages
A yeast-dough formula could call for the following list of ingredients, presented as a series of baker's percentages:
flour   100%
water    60%
yeast     1%
salt      2%
oil       1%
Conversions
There are several common conversions that are used with baker's percentages. Converting baker's percentages to ingredient weights is one. Converting known ingredient weights to baker percentages is another. Conversion to true percentages, or based on total weight, is helpful to calculate unknown ingredient weights from a desired total or formula weight.
Using baker's percentages
To derive the ingredient weights when any weight of flour Wf is chosen:
flour   100%  =  Wf × 1.00
water    60%  =  Wf × 0.60
yeast     1%  =  Wf × 0.01
salt      2%  =  Wf × 0.02
oil       1%  =  Wf × 0.01
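A minimal sketch of these conversions in code, following the formula above (the function names are illustrative, not from any baking library):

```python
# Convert baker's percentages to ingredient weights for a chosen flour weight.
def ingredient_weights(flour_weight: float, percentages: dict) -> dict:
    return {name: flour_weight * pct / 100.0 for name, pct in percentages.items()}

# Convert known ingredient weights back to baker's percentages.
def bakers_percentages(weights: dict) -> dict:
    flour = weights["flour"]
    return {name: 100.0 * w / flour for name, w in weights.items()}

formula = {"flour": 100, "water": 60, "yeast": 1, "salt": 2, "oil": 1}
print(ingredient_weights(1000, formula))
# {'flour': 1000.0, 'water': 600.0, 'yeast': 10.0, 'salt': 20.0, 'oil': 10.0}
```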
|
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Graham%20problem
|
In combinatorial number theory, the Erdős–Graham problem is the problem of proving that, if the set of integers greater than one is partitioned into finitely many subsets, then one of the subsets can be used to form an Egyptian fraction representation of unity. That is, for every r, and every r-coloring of the integers greater than one, there is a finite monochromatic subset S of these integers such that the reciprocals of its elements sum to one: ∑(n ∈ S) 1/n = 1.
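A tiny sanity check of this defining property on known Egyptian fraction representations of unity (exact rational arithmetic avoids floating-point error):

```python
from fractions import Fraction

# Does the finite set S give an Egyptian fraction representation of 1?
def represents_unity(S):
    return sum(Fraction(1, n) for n in S) == 1

print(represents_unity({2, 3, 6}))         # True: 1/2 + 1/3 + 1/6 = 1
print(represents_unity({3, 4, 5, 6, 20}))  # True: 1/3 + 1/4 + 1/5 + 1/6 + 1/20 = 1
print(represents_unity({2, 3, 7}))         # False: the sum is 41/42
```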
In more detail, Paul Erdős and Ronald Graham conjectured that, for sufficiently large r, the largest member of S could be bounded by b^r for some constant b independent of r. It was known that, for this to be true, b must be at least Euler's number e.
Ernie Croot proved the conjecture as part of his Ph.D. thesis, and later (while a post-doctoral researcher at UC Berkeley) published the proof in the Annals of Mathematics. The value Croot gives for b is very large: it is at most e^167000. Croot's result follows as a corollary of a more general theorem stating the existence of Egyptian fraction representations of unity for sets C of smooth numbers in intervals of the form [X, X^(1+δ)], where C contains sufficiently many numbers so that the sum of their reciprocals is at least six. The Erdős–Graham conjecture follows from this result by showing that one can find an interval of this form in which the sum of the reciprocals of all smooth numbers is at least 6r; therefore, if the integers are r-colored there must be a monochromatic subset satisfying the conditions of Croot's theorem.
A stronger form of the result, that any set of integers with positive upper density includes the denominators of an Egyptian fraction representation of one, was announced in 2021 by Thomas Bloom, a postdoctoral researcher at the University of Oxford.
See also
Conjectures by Erdős
|
https://en.wikipedia.org/wiki/Banach%20bundle
|
In mathematics, a Banach bundle is a vector bundle each of whose fibres is a Banach space, i.e. a complete normed vector space, possibly of infinite dimension.
Definition of a Banach bundle
Let M be a Banach manifold of class Cp with p ≥ 0, called the base space; let E be a topological space, called the total space; let π : E → M be a surjective continuous map. Suppose that for each point x ∈ M, the fibre Ex = π−1(x) has been given the structure of a Banach space. Let {Ui | i ∈ I} be an open cover of M. Suppose also that for each i ∈ I, there is a Banach space Xi and a map τi : π−1(Ui) → Ui × Xi such that
the map τi is a homeomorphism commuting with the projection onto Ui, i.e. pr1 ∘ τi = π on π−1(Ui), and for each x ∈ Ui the induced map τix : Ex → Xi on the fibre Ex is an invertible continuous linear map, i.e. an isomorphism in the category of topological vector spaces;
if Ui and Uj are two members of the open cover, then the map Ui ∩ Uj → Lin(Xi; Xj) given by x ↦ τjx ∘ (τix)−1 is a morphism (a differentiable map of class Cp), where Lin(X; Y) denotes the space of all continuous linear maps from a topological vector space X to another topological vector space Y.
The collection {(Ui, τi)|i∈I} is called a trivialising covering for π : E → M, and the maps τi are called trivialising maps. Two trivialising coverings are said to be equivalent if their union again satisfies the two conditions above. An equivalence class of such trivialising coverings is said to determine the structure of a Banach bundle on π : E → M.
If all the spaces Xi are isomorphic as topological vector spaces, then they can be assumed all to be equal to the same space X. In this case, π : E → M is said to be a Banach bundle with fibre X. If M is a connected space then this is necessarily the case, since the set of points x ∈ M for which there is a trivialising map τ : π−1(U) → U × X for a given space X is both open and closed.
In the finite-dimensional case, the second condition above is implied by the first.
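In the notation above, the second condition can be restated in terms of transition functions; a compact LaTeX summary (the symbol g_{ji} is introduced here for convenience and is not from the text):

```latex
% Transition functions of a trivialising covering \{(U_i, \tau_i)\}:
\[
  g_{ji} : U_i \cap U_j \to \mathrm{Lin}(X_i; X_j), \qquad
  g_{ji}(x) = \tau_{j,x} \circ \tau_{i,x}^{-1},
\]
% the second condition asks each g_{ji} to be a morphism of class C^p;
% directly from the definition, these maps satisfy the cocycle relation
\[
  g_{kj}(x) \circ g_{ji}(x) = g_{ki}(x), \qquad x \in U_i \cap U_j \cap U_k .
\]
```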
Examples of Banach bundles
If V is any Banach space, the tangent space TxV
|
https://en.wikipedia.org/wiki/EBONE
|
EBONE (standing for European Backbone) was a pan-European Internet backbone. It went online in 1992 and was deactivated in July 2002. Some portions of the Ebone were sold to other companies and continue to operate today.
History
Formation
In 1991 a number of research network managers, including Frode Greisen from Denmark, Kees Neggers, director of the Dutch network SURFNet, and Francois Fluckiger at CERN, sought to create a European Internet backbone open to publicly financed academic networks and private commercial networks. The Ebone consortium was established at the RIPE meeting in Geneva in September 1991, and the network went online in 1992 after the initial IP backbone with 256 kbit/s links was completed. Frode Greisen became the general manager, while Peter Löthberg served as de facto architect.
Operation
In 1996 the consortium was transformed into the Ebone Association which again established a private limited company Ebone Inc. based in Denmark. In 1998 the Ebone Association sold 75% of the company to Hermes Europe Railtel, and in 1999 the remaining 25% was bought by Global Telesystems Group Inc. (GTS) which had then acquired Hermes Europe Railtel.
The Ebone backbone increased in speed by a factor of about 40,000 over nine years, from 256 kbit/s to 10 Gbit/s, and the traffic roughly followed.
In year 2000 Ebone provided international transit for around 100 Internet Service Providers based in most of the European countries.
In 2001 GTS re-branded all its data communications products as Ebone; at the time, Ebone was one of Europe's leading broadband optical and IP network service providers.
Shutdown
In October 2001, KPNQwest acquired Ebone and the Central Europe businesses of GTS and completed their EuroRings network.
Following the Dot com crash and various investigations, KPNQwest declared bankruptcy. In June 2002, it was announced that the Ebone Network Operations Center would be shut down, and the Ebone would be deactivated.
Employees in the
|
https://en.wikipedia.org/wiki/End%20of%20interrupt
|
An end of interrupt (EOI) is a computing signal sent to a programmable interrupt controller (PIC) to indicate the completion of interrupt processing for a given interrupt. Interrupts are used to facilitate hardware signals sent to the processor that temporarily stop a running program and allow a special program, an interrupt handler, to run instead. An EOI is used to cause a PIC to clear the corresponding bit in the in-service register (ISR), and thus allow more interrupt requests (IRQs) of equal or lower priority to be generated by the PIC.
EOIs may indicate the interrupt vector implicitly or explicitly. An explicit EOI vector is indicated with the EOI, whereas an implicit EOI will typically use a vector as indicated by the PIC's priority schema, for example the highest vector in the ISR. Also, EOIs may be sent at the end of interrupt processing by an interrupt handler, or the operation of a PIC may be set to auto-EOI at the start of the interrupt handler.
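A toy model of the ISR bookkeeping described above; this is a simulation sketch, not code for a real PIC, and it assumes the common 8259-style schema in which lower IRQ numbers have higher priority:

```python
# Toy PIC: one in-service register (ISR) bit per IRQ line.
class ToyPIC:
    def __init__(self):
        self.isr = 0

    def start_service(self, irq: int):
        self.isr |= 1 << irq     # bit is set while the interrupt is serviced

    def eoi_specific(self, irq: int):
        self.isr &= ~(1 << irq)  # explicit EOI: clear the named vector's bit

    def eoi_nonspecific(self):
        # Implicit EOI: clear the highest-priority in-service bit, i.e. the
        # lowest-numbered set bit under this priority schema.
        if self.isr:
            self.isr &= self.isr - 1  # n & (n - 1) clears the lowest set bit

pic = ToyPIC()
pic.start_service(3)
pic.start_service(5)
pic.eoi_nonspecific()  # clears IRQ 3, the higher-priority one
print(bin(pic.isr))    # 0b100000: only IRQ 5 remains in service
```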
See also
Intel 8259 – notable PIC from Intel
Advanced Programmable Interrupt Controller (APIC)
OpenPIC and IBM MPIC
Inter-processor interrupt (IPI)
Interrupt latency
Non-maskable interrupt (NMI)
IRQL (Windows)
|
https://en.wikipedia.org/wiki/Straight%20and%20Crooked%20Thinking
|
Straight and Crooked Thinking, first published in 1930 and revised in 1953, is a book by Robert H. Thouless which describes, assesses and critically analyses flaws in reasoning and argument. Thouless describes it as a practical manual, rather than a theoretical one.
Synopsis
Thirty-eight fallacies are discussed in the book. Among them are:
No. 3. proof by example, biased sample, cherry picking
No. 6. ignoratio elenchi: "red herring"
No. 9. false compromise/middle ground
No. 12. argument in a circle
No. 13. begging the question
No. 17. equivocation
No. 18. false dilemma: black and white thinking
No. 19. continuum fallacy (fallacy of the beard)
No. 21. ad nauseam: "argumentum ad nauseam" or "argument from repetition" or "argumentum ad infinitum"
No. 25. style over substance fallacy
No. 28. appeal to authority
No. 31. thought-terminating cliché
No. 36. special pleading
No. 37. appeal to consequences
No. 38. appeal to motive
See also
List of cognitive biases
List of common misconceptions
List of fallacies
List of memory biases
List of topics related to public relations and propaganda
|
https://en.wikipedia.org/wiki/SCSI%20Enclosure%20Services
|
SCSI Enclosure Services (SES) is a protocol for more modern SCSI enclosure products. An initiator can communicate with the enclosure using a specialized set of SCSI commands to access power, cooling, and other non-data characteristics.
SES devices
There are two major classes of SES devices:
Attached enclosure services devices allow SES communication through a logical unit within one SCSI disk drive located in the enclosure. The disk-drive then communicates with the enclosure by some other method, the only commonly used one being Enclosure Services Interface (ESI). In fault-tolerant enclosures, more than one disk-drive slot has ESI enabled to allow SES communications to continue even after the failure of any of the disk-drives. The definition of the ESI protocols is owned by an ANSI committee and defined in their specifications ANSI SFF-8067 and SFF-8045.
Standalone enclosure services enclosures have a separate SES processor which occupies its own address on the SCSI bus. The protocol for this uses direct SCSI commands. An enclosure can be fault-tolerant by containing two SES processors.
SES commands
The SCSI initiator communicates with an SES device using two SCSI commands: Send Diagnostic and Receive Diagnostic Results. Some universal SCSI commands such as Inquiry are also used with standalone enclosure services to perform basic functions such as initial discovery of the devices.
SES elements
The SCSI Send Diagnostic and Receive Diagnostic Results commands can be addressed to a specific SES element in the enclosure. There are many different element codes defined to cover a wide range of devices. The most common SES elements are power supply, cooling fan, temperature sensor, and UPS. The SCSI command protocols assume that there may be more than one of each device type so they must be each given an 8-bit address.
When an SES controller is interrogated for the status of an SES element, the response includes a 4-bit element status code. The most commo
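A hedged sketch of decoding that 4-bit element status code, assuming the common SES layout in which the code occupies the low nibble of the first status byte; the table below lists a few frequently cited values and is not exhaustive:

```python
# Decode the 4-bit element status code from an SES element status byte.
STATUS_CODES = {
    0x0: "Unsupported",
    0x1: "OK",
    0x2: "Critical",
    0x3: "Noncritical",
    0x4: "Unrecoverable",
    0x5: "Not Installed",
}

def element_status(first_status_byte: int) -> str:
    return STATUS_CODES.get(first_status_byte & 0x0F, "Other/Reserved")

print(element_status(0x01))  # "OK"
print(element_status(0x02))  # "Critical"
```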
|
https://en.wikipedia.org/wiki/3D-Calc
|
3D-Calc is a 3-dimensional spreadsheet program for the Atari ST computer. The first version of the program was released in April 1989 and was distributed by ISTARI bvba, Ghent, Belgium.
History
Starting May 1991, the English version was distributed by MichTron/Microdeal, Cornwall, UK.
In January 1992, version 2.3 of the program was licensed to Atari Corp., who released Dutch and French translations.
In 1994, version 3 of 3D-Calc (renamed 3D-Calc+) was licensed to the UK magazine ST Applications.
Today, 3D-Calc software is Freeware ("Public domain without source code") and can be downloaded freely.
In 1992–1993, it was ported to MS-DOS to serve as the basis of a new statistics software package MedCalc.
Features and reception
The spreadsheet contains 13 pages of 2048 rows and 256 columns, and cells on different pages can be cross-referenced. 3D-Calc offers a GEM-based user interface with icons, menus and function keys, and users can work on three spreadsheets at the same time, with up to three GEM windows for each. The application supports on-screen help via the "?" menu and can import data from Lotus 1-2-3 (with some limitations).
The program includes an integrated scripting language, and an integrated text module with a data import feature from the spreadsheet, allowing formatted data output, mailmerge, label printing etc.
Peter Crush, writing for the ST Format and ST Applications magazines, commended 3D-Calc for its rich feature set, including easy-to-use graph generation, but criticized its lack of colour support.
|
https://en.wikipedia.org/wiki/World%20Geographic%20Reference%20System
|
The World Geographic Reference System (GEOREF) is a geocode, a grid-based method of specifying locations on the surface of the Earth. GEOREF is essentially based on the geographic system of latitude and longitude, but using a simpler and more flexible notation. GEOREF was used primarily in aeronautical charts for air navigation, particularly in military or inter-service applications, but it is rarely seen today. However, GEOREF can be used with any map or chart that has latitude and longitude printed on it.
Quadrangles
GEOREF is based on the standard system of latitude and longitude, but uses a simpler and more concise notation. GEOREF divides the Earth's surface into successively smaller quadrangles, with a notation system used to identify each quadrangle within its parent. Unlike latitude/longitude, GEOREF runs in one direction horizontally, east from the 180° meridian; and one direction vertically, north from the South Pole. GEOREF can easily be adapted to give co-ordinates with varying degrees of precision, using a 2–12 character geocode.
GEOREF co-ordinates are defined by successive divisions of the Earth's surface, as follows:
The first level of GEOREF divides the world into quadrangles each measuring 15 degrees of longitude by 15 degrees of latitude; this results in 24 zones of longitude and 12 bands of latitude. A longitude zone is identified by a letter from A to Z (omitting I and O) starting at 180 degrees and progressing eastward through the full 360 degrees of longitude; a latitude band is identified by a letter from A through M (omitting I) northward from the south pole. Hence, any 15 degree quadrangle can be identified by two letters; the easting (longitude) is given first, followed by the northing (latitude). These two letters are the first two characters of a full GEOREF coordinate.
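A minimal sketch of that first-level encoding; georef_quadrangle is a hypothetical helper written for illustration, not part of any standard library:

```python
# First two characters of a GEOREF coordinate: the 15-degree quadrangle letters.
LON_LETTERS = "ABCDEFGHJKLMNPQRSTUVWXYZ"  # 24 zones, I and O omitted
LAT_LETTERS = "ABCDEFGHJKLM"              # 12 bands, I omitted

def georef_quadrangle(lat: float, lon: float) -> str:
    zone = min(int((lon + 180.0) // 15), 23)  # eastward from the 180° meridian
    band = min(int((lat + 90.0) // 15), 11)   # northward from the South Pole
    return LON_LETTERS[zone] + LAT_LETTERS[band]  # easting first, then northing

print(georef_quadrangle(51.5, -0.12))  # London -> "MK"
```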
Each 15-degree quadrangle is further divided into smaller quadrangles, measuring 1 degree of longitude by 1 degree of latitude. These quadrangles are lettered A to
|
https://en.wikipedia.org/wiki/Molecular%20anatomy
|
Molecular anatomy is the subspecialty of microscopic anatomy concerned with the identification and description of molecular structures of cells, tissues, and organs in an organism.
|
https://en.wikipedia.org/wiki/DBC%201012
|
The DBC/1012 Data Base Computer was a database machine introduced by Teradata Corporation in 1984, as a back-end data base management system for mainframe computers.
The DBC/1012 harnessed multiple Intel microprocessors, each with its own dedicated disk drive, by interconnecting them with the Ynet switching network in a massively parallel processing system.
The DBC/1012 was designed to manage databases up to one terabyte (1,000,000,000,000 characters) in size; "1012" in the name refers to "10 to the power of 12".
Major components included:
Mainframe-resident software to manage users and transfer data
Interface processor (IFP) - the hardware connection between the mainframe and the DBC/1012
Ynet - a custom-built system interconnect that supported broadcast and sorting
Access module processor (AMP) - the unit of parallelism: includes microprocessor, disk drive, file system, and database software
System console and printer
TEQUEL (TEradata QUEry Language) - an extension of SQL
The DBC/1012 was designed to scale up to 1024 Ynet interconnected processor-disk units. Rows of a relation (table) were distributed by hashing on the primary database index.
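As an illustration of this hash distribution, a minimal Python sketch follows. The modulo placement and the neighbour pairing for the "fallback" copy (described below) are simplifications invented for illustration, not the actual DBC/1012 algorithms.

```python
import hashlib

N_AMPS = 8  # number of processor-disk units in this toy configuration

def stable_hash(key):
    # A deterministic hash of the primary index value.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def primary_amp(index_value):
    # Each row lands on the AMP selected by hashing its primary index.
    return stable_hash(index_value) % N_AMPS

def fallback_amp(index_value):
    # Simplistic pairing: keep the logical copy on a different AMP.
    return (primary_amp(index_value) + 1) % N_AMPS

row_key = "customer-42"   # hypothetical primary index value for a row
print(primary_amp(row_key), fallback_amp(row_key))
```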
The DBC/1012 used a 474 megabyte Winchester disk drive with an average seek time of 18 milliseconds. The disk drive was capable of transferring data at 1.9 MB/s although in practice the sustainable data rate was lower because the IO pattern tended towards random access and transfer lengths of 8 to 12 kilobytes.
The processor cabinet was 60 inches high and 27 inches wide, weighed 450 pounds, and held up to 8 microprocessor units.
The storage cabinet was 60 inches high and 27 inches wide, weighed 625 pounds, and held up to 4 disk storage units.
The DBC/1012 preceded the advent of redundant array of independent disks (RAID) technology, so data protection was provided by the "fallback" feature, which kept a logical copy of rows of a relation on different AMPs. The collection of AMPs that provided this protection
|
https://en.wikipedia.org/wiki/Schlieren
|
Schlieren ( ; , ) are optical inhomogeneities in transparent media that are not necessarily visible to the human eye. Schlieren physics developed out of the need to produce high-quality lenses devoid of such inhomogeneities. These inhomogeneities are localized differences in optical path length that cause deviations of light rays, especially by refraction. This light deviation can produce localized brightening, darkening, or even color changes in an image, depending on the directions the rays deviate.
History
Schlieren were first observed by Robert Hooke in 1665 using a large concave lens and two candles. One candle served as a light source. The warm air rising from the second candle provided the schliere.
The conventional schlieren system is credited mostly to German physicist August Toepler, though Jean Bernard Léon Foucault invented the method in 1859 that Toepler improved upon. Toepler's original system was designed to detect schlieren in glass used to make lenses. In the conventional schlieren system, a point source is used to illuminate the test section containing the schliere. An image of this light is formed using a converging lens (also called a schlieren lens). This image is located at the conjugate distance to the lens according to the thin lens equation:
1/f = 1/u + 1/v, where f is the focal length of the lens, u is the distance from the object to the lens, and v is the distance from the image of the object to the lens. A knife edge at the point source-image location is positioned so as to partially block some light from reaching the viewing screen, reducing the illumination of the image uniformly. A second lens is used to image the test section to the viewing screen. The viewing screen is located a conjugate distance from the plane of the schliere.
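For concreteness, a small Python helper solving the thin-lens equation for the image distance; the numbers in the example are illustrative, not from the article.

```python
def image_distance(f, u):
    # Rearranged thin-lens equation: 1/f = 1/u + 1/v  =>  v = 1/(1/f - 1/u).
    return 1.0 / (1.0 / f - 1.0 / u)

# Example: a 0.5 m focal-length schlieren lens with the point source 3 m away
# places the source image 0.6 m behind the lens.
print(image_distance(0.5, 3.0))  # 0.6
```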
The word schlieren originates from the German schliere, meaning "streak".
Schlieren flow visualization
Schlieren flow visualization is based on the deflection of light by a refractive index gradient. The index gra
|
https://en.wikipedia.org/wiki/Enhanced%20biological%20phosphorus%20removal
|
Enhanced biological phosphorus removal (EBPR) is a sewage treatment configuration applied to activated sludge systems for the removal of phosphate.
The common element in EBPR implementations is the presence of an anaerobic tank (in which nitrate and oxygen are absent) prior to the aeration tank. Under these conditions, a group of heterotrophic bacteria called polyphosphate-accumulating organisms (PAOs) is selectively enriched in the bacterial community within the activated sludge. In the subsequent aerobic phase, these bacteria can accumulate large quantities of polyphosphate within their cells, and the removal of phosphorus is said to be enhanced.
Generally speaking, all bacteria contain a fraction (1–2%) of phosphorus in their biomass due to its presence in cellular components such as membrane phospholipids and DNA. Therefore, as bacteria in a wastewater treatment plant consume nutrients in the wastewater, they grow and phosphorus is incorporated into the bacterial biomass. When PAOs grow, they not only consume phosphorus for cellular components but also accumulate large quantities of polyphosphate within their cells, so the phosphorus fraction of phosphorus-accumulating biomass is 5–7%. In mixed bacterial cultures, the phosphorus content reaches at most 3–4% of total organic mass. If additional chemical precipitation takes place, for example to reach discharge limits, the phosphorus content can be higher, but that is not an effect of EBPR. This biomass is separated from the treated (purified) water at the end of the process, and the phosphorus is thus removed. If PAOs are selectively enriched by the EBPR configuration, considerably more phosphorus is removed than the relatively poor phosphorus removal achieved in conventional activated sludge systems.
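To make the difference concrete, a back-of-the-envelope Python calculation using the phosphorus fractions above; the daily sludge figure is a hypothetical assumption, not from the article.

```python
# Hypothetical plant figure, assumed for illustration:
sludge_kg_per_day = 1000.0   # dry biomass wasted per day

# Phosphorus fractions quoted above: ~2% for conventional activated sludge
# (upper end of the 1-2% range) vs ~4% for an EBPR mixed culture (3-4% range).
p_removed_conventional = 0.02 * sludge_kg_per_day   # ~20 kg P per day
p_removed_ebpr = 0.04 * sludge_kg_per_day           # ~40 kg P per day
print(p_removed_conventional, p_removed_ebpr)
```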
See also
List of waste-water treatment technologies
|
https://en.wikipedia.org/wiki/Digital%20Universe
|
Digital Universe was a free online information service launched in 2006. The project aimed to create a "network of portals designed to provide high-quality information and services to the public". Subject matter experts were to have been responsible for reviewing and approving content; contributors were to have been both experts (researchers, scholars, educators) and the public.
The project was founded in 2005 by Joe Firmage, CEO of ManyOne, with Bernard Haisch as the president. It launched in early 2006. Larry Sanger was a director, and helped with the launch of the project's Encyclopedia of Earth. Sanger left in late 2006 to launch Citizendium. As of 2019, the website was nonfunctional.
Characteristics
Goals
In December 2005, when the project was announced, the founders' goal was to create a worldwide network of researchers, scholars, and educators, to become "the PBS of the Web."
While the public was to be invited to contribute to some articles in the Digital Universe encyclopedia, contributors would be supervised by "stewards" whose role was to guarantee the quality and accuracy of the articles. In addition, parts of the Digital Universe were to be editable only by credentialed experts.
Multi-tiered system
The expert wiki was to be written and managed by experts.
The public wiki was to be editable by members of the educated public; however, according to Sanger, only registered users who had provided their real names would be permitted to edit it, and an article rating system would be used for its articles.
Some of the 3-D graphical interface features would require the use of a Mozilla-based browser developed by ManyOne Networks, which the company said would be made available free of charge.
Some content was to be available only to ManyOne subscribers.
Content
Around 2000 content pages existed as of August 2007. The Digital Universe claimed the following featured portals: Earth, Energy, The Arctic, Texas Environment, U.S. Government, and Sal
|
https://en.wikipedia.org/wiki/Zener%20pinning
|
Zener pinning is the influence of a dispersion of fine particles on the movement of low- and high-angle grain boundaries through a polycrystalline material. Small particles act to prevent the motion of such boundaries by exerting a pinning pressure which counteracts the driving force pushing the boundaries. Zener pinning is very important in materials processing as it has a strong influence on recovery, recrystallization and grain growth.
Origin of the pinning force
A boundary is an imperfection in the crystal structure and as such is associated with a certain quantity of energy. When a boundary passes through an incoherent particle then the portion of boundary that would be inside the particle essentially ceases to exist. In order to move past the particle some new boundary must be created, and this is energetically unfavourable. While the region of boundary near the particle is pinned, the rest of the boundary continues trying to move forward under its own driving force. This results in the boundary becoming bowed between those points where it is anchored to the particles.
Mathematical description
The figure illustrates a boundary intersecting with an incoherent particle of radius r, at an angle θ between the boundary and the particle surface. The pinning force acts along the line of contact between the boundary and the particle, i.e., a circle of diameter 2r cos θ. The force per unit length of boundary in contact is γ sin θ, where γ is the interfacial energy. Hence, the total force acting on the particle-boundary interface is
F = 2πr cos θ · γ sin θ = πrγ sin 2θ.
The maximum restraining force occurs when θ = 45°, so F_max = πrγ.
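A quick numerical check of this maximum force in Python; the particle radius and interfacial energy are illustrative values, not from the article.

```python
import math

def zener_pinning_force(radius_m, gamma_j_m2, theta_deg):
    # F = pi * r * gamma * sin(2 theta); maximal (pi * r * gamma) at 45 degrees.
    return math.pi * radius_m * gamma_j_m2 * math.sin(2.0 * math.radians(theta_deg))

# Example values: a 100 nm particle, interfacial energy 0.5 J/m^2.
print(zener_pinning_force(1e-7, 0.5, 45.0))  # ~1.6e-7 N maximum pinning force
```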
In order to determine the pinning force resulting from a given dispersion of particles, Clarence Zener made several important assumptions:
The particles are spherical.
The passage of the boundary does not alter the particle-boundary interaction.
Each particle exerts the maximum pinning force on the boundary, regardless of contact position.
The contacts between particles and boundaries are completely random.
The number density of particles on the boundary is
|
https://en.wikipedia.org/wiki/Recrystallization%20%28metallurgy%29
|
In materials science, recrystallization is a process by which deformed grains are replaced by a new set of defect-free grains that nucleate and grow until the original grains have been entirely consumed. Recrystallization is usually accompanied by a reduction in the strength and hardness of a material and a simultaneous increase in the ductility. Thus, the process may be introduced as a deliberate step in metals processing or may be an undesirable byproduct of another processing step. The most important industrial uses are softening of metals previously hardened or rendered brittle by cold work, and control of the grain structure in the final product. Recrystallization temperature is typically 0.3–0.4 times the melting point (on an absolute temperature scale) for pure metals and 0.5 times for alloys.
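As a worked example of this rule of thumb, a small Python sketch; copper's melting point of 1358 K is a standard reference value, and the rule is applied on the kelvin scale.

```python
def recrystallization_temp_k(melting_point_k, fraction):
    # Rule of thumb: T_rx ~ 0.3-0.4 * T_m for pure metals, ~0.5 * T_m for
    # alloys, with temperatures on an absolute (kelvin) scale.
    return fraction * melting_point_k

# Pure copper, T_m = 1358 K: roughly 407-543 K (about 134-270 degrees C).
print(recrystallization_temp_k(1358, 0.3), recrystallization_temp_k(1358, 0.4))
```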
Definition
Recrystallization is defined as the process in which the grains of a crystal structure reform into a new structure or a new crystal shape.
A precise definition of recrystallization is difficult to state as the process is strongly related to several other processes, most notably recovery and grain growth. In some cases it is difficult to precisely define the point at which one process begins and another ends. Doherty et al. (1997) defined recrystallization as:
"... the formation of a new grain structure in a deformed material by the formation and migration of high angle grain boundaries driven by the stored energy of deformation. High angle boundaries are those with greater than a 10-15° misorientation"
Thus the process can be differentiated from recovery (where high angle grain boundaries do not migrate) and grain growth (where the driving force is only due to the reduction in boundary area).
Recrystallization may occur during or after deformation (during cooling or subsequent heat treatment, for example). The former is termed dynamic while the latter is termed static. In addition, recrystallization may occur in a discontinuous manner, where distinct new grains form and grow, or a continuous manner,
|
https://en.wikipedia.org/wiki/Affect%20%28psychology%29
|
Affect, in psychology, refers to the underlying experience of feeling, emotion, attachment, or mood. It encompasses a wide range of emotional states and can be positive (e.g., happiness, joy, excitement) or negative (e.g., sadness, anger, fear, disgust). Affect is a fundamental aspect of human experience and plays a central role in many psychological theories and studies. It can be understood as a combination of three components: emotion, mood (enduring, less intense emotional states that are not necessarily tied to a specific event), and affectivity (an individual's overall disposition or temperament, which can be characterized as having a generally positive or negative affect). The term "affect" is often used interchangeably with several related terms and concepts, each with slightly different nuances: emotion, feeling, mood, emotional state, sentiment, affective state, emotional response, affective reactivity, and disposition. Researchers and psychologists may employ specific terms based on their focus and the context of their work.
History
The modern conception of affect developed in the 19th century with Wilhelm Wundt. The word comes from the German Gefühl, meaning "feeling".
A number of experiments have been conducted in the study of social and psychological affective preferences (i.e., what people like or dislike). Specific research has been done on preferences, attitudes, impression formation, and decision-making. This research contrasts findings with recognition memory (old-new judgments), allowing researchers to demonstrate reliable distinctions between the two. Affect-based judgments and cognitive processes have been examined with noted differences indicated, and some argue affect and cognition are under the control of separate and partially independent systems that can influence each other in a variety of ways (Zajonc, 1980). Both a
|
https://en.wikipedia.org/wiki/Domain%20analysis
|
In software engineering, domain analysis, or product line analysis, is the process of analyzing related software systems in a domain to find their common and variable parts. It is a model of wider business context for the system. The term was coined in the early 1980s by James Neighbors. Domain analysis is the first phase of domain engineering. It is a key method for realizing systematic software reuse.
Domain analysis produces domain models using methodologies such as domain specific languages, feature tables, facet tables, facet templates, and generic architectures, which describe all of the systems in a domain. Several methodologies for domain analysis have been proposed.
The products, or "artifacts", of a domain analysis are sometimes object-oriented models (e.g. represented with the Unified Modeling Language (UML)) or data models represented with entity-relationship diagrams (ERD). Software developers can use these models as a basis for the implementation of software architectures and applications. This approach to domain analysis is sometimes called model-driven engineering.
In information science, the term "domain analysis" was suggested in 1995 by Birger Hjørland and H. Albrechtsen.
Domain analysis techniques
Several domain analysis techniques have been identified, proposed and developed due to the diversity of goals, domains, and involved processes.
DARE: Domain Analysis and Reuse Environment
Feature-Oriented Domain Analysis (FODA)
IDEF0 for Domain Analysis
Model Oriented Domain Analysis and Engineering
|
https://en.wikipedia.org/wiki/Out-of-band%20agreement
|
In the exchange of information over a communication channel, an out-of-band agreement is an agreement or understanding between the communicating parties that is not included in any message sent over the channel but which is relevant for the interpretation of such messages.
By extension, in a client–server or provider-requester setting, an out-of-band agreement is an agreement or understanding that governs the semantics of the request/response interface but which is not part of the formal or contractual description of the interface specification itself.
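A minimal Python illustration of the idea: the payload below carries a bare number, and the fact that it denotes a temperature in degrees Celsius is an out-of-band agreement; the example and field name are invented.

```python
import json

# The message itself says nothing about units; that the value is a
# temperature in degrees Celsius is agreed out-of-band, e.g. in the
# service documentation rather than in any message sent over the channel.
message = json.dumps({"temperature": 21.5})

def read_temperature_celsius(raw):
    # Correct interpretation relies entirely on the out-of-band agreement:
    # nothing in the payload states the unit.
    return json.loads(raw)["temperature"]

print(read_temperature_celsius(message))  # 21.5, understood as Celsius
```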
See also
API
Contract
Out-of-band
Off-balance-sheet
External links
SakaiProject definition
Computer networking
|
https://en.wikipedia.org/wiki/Yat%20Ming
|
Yat Ming (formerly called Yatming) was a die-cast car scale model maker, based in Hong Kong. Yat Ming Industrial Factory Ltd was founded in 1970 by Mr. Wai Ming Lam. Yat in Chinese means "best or number one". The Ming portion came from the founder's "middle name". They continued producing diecast models until 2013. In 2015, their diecast line and tooling was purchased by Lucky Industrial Group Limited, which now produces these models under their Lucky Die Cast brand.
The company started out making muscle cars, European sedans, and transport trucks from the USA in the early 1970s. In the mid-1970s, JRI Inc. (Road Champs) purchased the tractor-trailer designs from Yatming and began marketing them under the Road Champs name in the 1980s. In the late 1990s, the company moved towards making more realistic models, moving away from its toy-maker roots.
Yat Ming was perhaps most famous for its 1:18, 1:24, and 1:43 scale "Road Signature" series. This series continues under Lucky Diecast.
|
https://en.wikipedia.org/wiki/Playart
|
Playart was a toy company owned by Hong Kong industrialist Duncan Tong (唐鼎康) that specialized in die-cast cars, similar in size and style to Hot Wheels, Matchbox or Tomica. The cars were well made, but were often die-cast seconds from other companies such as Yatming or Tomica. Cars were made from 1965 to 1983 at the factory in San Po Kong, Kowloon, Hong Kong. Plastic cars and trucks in 1:43 and 1:24 scale were also made, while trains and other themed toys also appeared.
Diverse marketing approaches
Playart (the name in all lower case with a larger "a" in "art" and dots in the bowls of the letters) die-cast cars were made in Hong Kong and mostly were distributed with the name Peelers, the in-house brand of toy cars for Woolworth. During the late 1970s and early 1980s, Sears sold blister packaged Playarts as Road Mates. McCrory stores had a line of Playart vehicles called Freewheelers. They were blister-packaged on a blue, white and yellow card. Another Playart blister package stated "Die cast metal - En metal moule", perhaps for the French market. In another twist the American distribution company for Playart was Model Power which focused on train accessories. The small sized cars were packaged under this name as Road Kings.
On another Playart series, the name was printed with each letter a separate color, on bright packages marketed as FASTWHEEL (see photo). The phrase "really fast - die cast" was also printed on these blister-paks and boxes with the checkered black and white background. Playart diecast cars were also packaged as Charmerz for New York distributor Charles Merzbach. These were marketed as Charmerz Super Singles and packaged in a blue blister card with many different vehicles listed on the back. The Playart name, however, did not accompany all toy packaging.
Vehicle offerings were more clever than those of other Hong Kong and Chinese makers of the 1970s and 1980s. Marques such as a 1967 Eldorado, a Fiat X 1/9, a Rolls-Royce Corniche coupe, and a Lotus El
|
https://en.wikipedia.org/wiki/Lactate%20threshold
|
Lactate inflection point (LIP) is the exercise intensity at which the blood concentration of lactate and/or lactic acid begins to increase rapidly. It is often expressed as 85% of maximum heart rate or 75% of maximum oxygen intake. When exercising at or below the lactate threshold, any lactate produced by the muscles is removed by the body without it building up.
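As a rough illustration of these percentages, a Python sketch estimating a lactate-threshold heart rate; the 220-minus-age maximum heart rate rule is a common approximation assumed here, not taken from the article.

```python
def lactate_threshold_hr(age, fraction=0.85):
    # LT is often quoted near 85% of maximum heart rate; max HR is
    # approximated with the rough 220-minus-age rule (an assumption).
    max_hr = 220 - age
    return fraction * max_hr

print(lactate_threshold_hr(30))  # ~161 bpm for a 30-year-old
```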
The onset of blood lactate accumulation (OBLA) is often confused with the lactate threshold. At exercise intensities above the threshold, lactate production exceeds the rate at which it can be broken down, and the blood lactate concentration rises to around 4.0 mM; lactate then accumulates in the muscle and moves into the bloodstream.
Regular endurance exercise leads to adaptations in skeletal muscle which raises the threshold at which lactate levels will rise. This is mediated via activation of the protein receptor PGC-1α, which alters the isoenzyme composition of the lactate dehydrogenase (LDH) complex and decreases the activity of lactate dehydrogenase A (LDHA), while increasing the activity of lactate dehydrogenase B (LDHB).
Training types
The lactate threshold is a useful measure for deciding exercise intensity for training and racing in endurance sports (e.g., long distance running, cycling, rowing, long distance swimming and cross country skiing), but varies between individuals and can be increased with training.
Interval training
Interval training alternates work and rest periods allowing the body to temporarily exceed the lactate threshold at a high intensity, and then recover (reduce blood-lactate). This type of training uses the ATP-PC and the lactic acid system while exercising, which provides the most energy when there are short bursts of high intensity exercise followed by a recovery period. Interval training can take the form of many different types of exercise and should closely replicate the movements found in the sport being trained for. Interval training can be
|
https://en.wikipedia.org/wiki/Isopentenyl%20pyrophosphate
|
Isopentenyl pyrophosphate (IPP, isopentenyl diphosphate, or IDP) is an isoprenoid precursor. IPP is an intermediate in the classical, HMG-CoA reductase pathway (commonly called the mevalonate pathway) and in the non-mevalonate MEP pathway of isoprenoid precursor biosynthesis. Isoprenoid precursors such as IPP, and its isomer DMAPP, are used by organisms in the biosynthesis of terpenes and terpenoids.
Biosynthesis
IPP is formed from acetyl-CoA via the mevalonate pathway (the "upstream" part), and then is isomerized to dimethylallyl pyrophosphate by the enzyme isopentenyl pyrophosphate isomerase.
IPP can be synthesised via an alternative non-mevalonate pathway of isoprenoid precursor biosynthesis, the MEP pathway, where it is formed from (E)-4-hydroxy-3-methyl-but-2-enyl pyrophosphate (HMB-PP) by the enzyme HMB-PP reductase (LytB, IspH). The MEP pathway is present in many bacteria, apicomplexan protozoa such as malaria parasites, and in the plastids of higher plants.
See also
Dimethylallyltranstransferase
|
https://en.wikipedia.org/wiki/Finders%20Keepers%20%281985%20video%20game%29
|
Finders Keepers is a video game written by David Jones and the first game in the Magic Knight series. It was published on the Mastertronic label for the ZX Spectrum, Amstrad CPC, MSX, Commodore 64, and Commodore 16 in 1985, at the budget price of £1.99 in the United Kingdom. Finders Keepers is a platform game with some maze sections.
On the ZX Spectrum it sold more than 117,000 copies and across all 8-bit formats more than 330,000 copies, making it Mastertronic's second best-selling original game after BMX Racers.
Plot
Magic Knight has been sent to the Castle of Spriteland by the King of Ibsisima in order to find a special present for Princess Germintrude. If Magic Knight is successful in his quest, he may have proved himself worthy of joining the famous "Polygon Table", a reference to the mythical Round Table from the legends of King Arthur.
Gameplay
The hero starts in the King's throne room and is transported, via a teleporter, to the castle. The castle is made up of two types of playing area: flick-screen rooms in the manner of a platform game and two large scrolling mazes. On the ZX Spectrum, Amstrad CPC, and MSX these are "Cold Upper Maze" and the "Slimey Lower Maze"; on the Commodore 64 they consist of "The Castle Gardens" and "The Castle Dungeons".
An additional aspect of the gameplay is the ability to collect objects (found in both the rooms and the mazes) scattered around the castle and sell them for money. Some of these objects can combine or react to create an object of higher value (for example, the bar of lead and the philosopher's stone react to create a bar of gold). Both the amount of money Magic Knight is carrying and the market value of his inventory are displayed on-screen. The buying and selling of objects is done with the various traders who live in the castle.
The Castle of Spriteland is full of dangerous creatures who inhabit its many rooms as well as both of its mazes and collision with these saps Magic Knight's strength. If h
|
https://en.wikipedia.org/wiki/The%20Stand%20%281994%20miniseries%29
|
The Stand (also known as Stephen King's The Stand) is a 1994 American post-apocalyptic television miniseries based on the 1978 novel of the same name by Stephen King. King also wrote the teleplay and has a minor role in the series. It was directed by Mick Garris, who previously directed the original King screenplay/film Sleepwalkers (1992). In order to satisfy expectations from King fans and King himself, The Stand is a mostly faithful adaptation of the original book, with only minor changes to material that would otherwise not have met broadcast standards and practices, and in order to keep ABC content.
The Stand includes a cast of more than 125 speaking roles and features Gary Sinise, Miguel Ferrer, Rob Lowe, Ossie Davis, Ruby Dee, Jamey Sheridan, Laura San Giacomo, Molly Ringwald, Corin Nemec, Adam Storke, Ray Walston, Ed Harris, and Matt Frewer. The miniseries was shot in several locations and on 225 sets. Each episode was given a $6 million budget, so to reduce costs the miniseries was shot on 16 mm film. The Stand originally aired on ABC from May 8 to May 12, 1994. Reviews were positive and the miniseries was nominated for six Primetime Emmy Awards, winning two for its makeup and sound mixing.
Plot
On June 13, at a top-secret government laboratory in Northern California, a weaponized version of influenza, called Project Blue, is accidentally released. A U.S. Army soldier, Charlie Campion, escapes the lab and flees across the country with his wife and daughter, unintentionally spreading the virus. On June 17, Campion crashes his car into a gas station in Arnette, Texas, where Stu Redman and some friends are gathered. With his wife and child already dead from the superflu, Campion warns Redman that he had been pursued by a "Dark Man" before he succumbs to the virus as well. The next day, the U.S. military arrives to quarantine the town on orders from General Starkey, commander of Project Blue.
The townspeople are taken to a CDC facility in Stovington, Vermont.
|
https://en.wikipedia.org/wiki/Email%20spoofing
|
Email spoofing is the creation of email messages with a forged sender address. The term applies to email purporting to be from an address which is not actually the sender's; mail sent in reply to that address may bounce or be delivered to an unrelated party whose identity has been faked. Disposable email address or "masked" email is a different topic, providing a masked email address that is not the user's normal address, which is not disclosed (for example, so that it cannot be harvested), but forwards mail sent to it to the user's real address.
The original transmission protocols used for email do not have built-in authentication methods: this deficiency allows spam and phishing emails to use spoofing in order to mislead the recipient. More recent countermeasures have made such spoofing from internet sources more difficult but they have not eliminated it completely; few internal networks have defences against a spoof email from a colleague's compromised computer on that network. Individuals and businesses deceived by spoof emails may suffer significant financial losses; in particular, spoofed emails are often used to infect computers with ransomware.
Technical details
When a Simple Mail Transfer Protocol (SMTP) email is sent, the initial connection provides two pieces of address information:
MAIL FROM: - generally presented to the recipient as the Return-path: header but not normally visible to the end user, and by default no checks are done that the sending system is authorized to send on behalf of that address.
RCPT TO: - specifies which email address the email is delivered to, is not normally visible to the end user but may be present in the headers as part of the "Received:" header.
Together, these are sometimes referred to as the "envelope" addressing – an analogy to a traditional paper envelope. Unless the receiving mail server signals that it has problems with either of these items, the sending system sends the "DATA" command, and typically sends severa
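The envelope/header distinction can be demonstrated with Python's standard smtplib; the addresses are placeholders, and localhost:1025 is assumed to be a local debugging SMTP server, so nothing leaves the machine.

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "someone@example.com"     # header address shown to the recipient
msg["To"] = "recipient@example.net"
msg["Subject"] = "Envelope vs. header demo"
msg.set_content("The envelope MAIL FROM below differs from the From: header.")

with smtplib.SMTP("localhost", 1025) as server:
    # from_addr sets the envelope MAIL FROM; the protocol itself performs no
    # check that the client may send on behalf of either address.
    server.send_message(msg, from_addr="bounces@another-domain.example",
                        to_addrs=["recipient@example.net"])
```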
|
https://en.wikipedia.org/wiki/Louis%20Herman
|
Louis Herman (April 16, 1930 – August 3, 2016) was an American marine biologist. He was a researcher of dolphin sensory abilities, dolphin cognition, and humpback whales. He was professor in the Department of Psychology and a cooperating faculty member of the Department of Oceanography at the University of Hawaiʻi at Mānoa. He founded the Kewalo Basin Marine Mammal Laboratory (KBMML) in Honolulu, Hawaii in 1970 to study bottlenose dolphin perception, cognition, and communication. In 1975, he pioneered the scientific study of the annual winter migration of humpback whales into Hawaiian waters. Together with Adam Pack, he founded The Dolphin Institute in 1993, a non-profit corporation dedicated to dolphins and whales through education, research, and conservation.
Herman served as a member of the Sanctuary Advisory Council for the Hawaiian Islands Humpback Whale National Marine Sanctuary. In total, he published over 120 scientific papers.
Dolphin research
Herman was best known for his research into sensory perception, animal language and echolocation, and later the topic of imitation. The Atlantic bottlenosed dolphins involved in the research programs were Puka, Kea, Akeakamai, Phoenix, Elele, and Hiapo. Akeakamai is perhaps the best-known of the "language" dolphins, and was inserted as a character in David Brin's science fiction novel Startide Rising. In the Hawaiian language, Akeakamai roughly corresponds to "lover (ake) of wisdom (akamai)".
Animal language
His 1984 paper on animal language (Herman, Richards, and Wolz, 1984) was published in the human psychology journal Cognition, during the anti-animal language backlash generated by the skeptical critique of primate animal language programs by Herbert Terrace in 1979. The key difference with previous primate work was that the dolphin work focused on language comprehension only. The problem with researching language production was the issue of scientific parsimony: it is essentially impossible to v
|
https://en.wikipedia.org/wiki/Unrestricted%20Hartree%E2%80%93Fock
|
Unrestricted Hartree–Fock (UHF) theory is the most common molecular orbital method for open shell molecules where the number of electrons of each spin are not equal. While restricted Hartree–Fock theory uses a single molecular orbital twice, one multiplied by the α spin function and the other multiplied by the β spin function in the Slater determinant, unrestricted Hartree–Fock theory uses different molecular orbitals for the α and β electrons. This has been called a different orbitals for different spins (DODS) method. The result is a pair of coupled Roothaan equations, known as the Pople–Nesbet–Berthier equations.
F^α C^α = S C^α ε^α and F^β C^β = S C^β ε^β, where F^α and F^β are the Fock matrices for the α and β orbitals, C^α and C^β are the matrices of coefficients for the α and β orbitals, S is the overlap matrix of the basis functions, and ε^α and ε^β are the (diagonal, by convention) matrices of orbital energies for the α and β orbitals. The pair of equations are coupled because the Fock matrix elements of one spin contain coefficients of both spins, as each orbital has to be optimized in the average field of all other electrons. The final result is a set of molecular orbitals and orbital energies for the α spin electrons and a set of molecular orbitals and orbital energies for the β electrons.
This method has one drawback. A single Slater determinant of different orbitals for different spins is not a satisfactory eigenfunction of the total spin operator S². The ground state is contaminated by excited states. If there is one more electron of α spin than of β spin, the ground state is a doublet. The average value of S², written ⟨S²⟩, should be s(s + 1) = 0.75 (with s = ½), but will actually be rather more than this value as the doublet state is contaminated by a quadruplet state. A triplet state with two excess α electrons should have ⟨S²⟩ = 1(1 + 1) = 2, but it will be larger as the triplet is contaminated by a quintuplet state. When carrying out unrestricted Hartree–Fock calculations, it is always necessary to check this contamination. For example, with a doublet s
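The ideal values are easy to tabulate; a minimal Python sketch of the s(s + 1) rule used when checking a UHF calculation for contamination.

```python
def ideal_s_squared(n_unpaired):
    # Exact <S^2> for a pure spin state: s(s + 1), with s = n_unpaired / 2.
    s = n_unpaired / 2.0
    return s * (s + 1.0)

print(ideal_s_squared(1))  # doublet: 0.75
print(ideal_s_squared(2))  # triplet: 2.0
# A computed UHF <S^2> noticeably above these values signals spin contamination.
```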
|
https://en.wikipedia.org/wiki/Placement%20%28electronic%20design%20automation%29
|
Placement is an essential step in electronic design automation — the portion of the physical design flow that assigns exact locations for various circuit components within the chip's core area. An inferior placement assignment will not only affect the chip's performance but might also make it non-manufacturable by producing excessive wire-length, which is beyond available routing resources. Consequently, a placer must perform the assignment while optimizing a number of objectives to ensure that a circuit meets its performance demands. Together, the placement and routing steps of IC design are known as place and route.
A placer takes a given synthesized circuit netlist together with a technology library and produces a valid placement layout. The layout is optimized according to the aforementioned objectives and ready for cell resizing and buffering — a step essential for timing and signal integrity satisfaction. Clock-tree synthesis and Routing follow, completing the physical design process. In many cases, parts of, or the entire, physical design flow are iterated a number of times until design closure is achieved.
In the case of application-specific integrated circuits, or ASICs, the chip's core layout area comprises a number of fixed height rows, with either some or no space between them. Each row consists of a number of sites which can be occupied by the circuit components. A free site is a site that is not occupied by any component. Circuit components are either standard cells, macro blocks, or I/O pads. Standard cells have a fixed height equal to a row's height, but have variable widths. The width of a cell is an integral number of sites. On the other hand, blocks are typically larger than cells and have variable heights that can stretch a multiple number of rows. Some blocks can have preassigned locations — say from a previous floorplanning process — which limit the placer's task to assigning locations for just the cells. In this case, the blocks are typicall
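The wire-length objective mentioned above is commonly estimated with the half-perimeter wirelength (HPWL) of each net; a minimal Python sketch with invented cell coordinates follows (HPWL is a standard placement metric, though the excerpt does not name it).

```python
# Cell locations (x, y) produced by some placement; values are illustrative.
placement = {"u1": (0, 0), "u2": (3, 4), "u3": (1, 2)}

def hpwl(net, placement):
    # Half-perimeter of the bounding box of all pins on the net.
    xs = [placement[cell][0] for cell in net]
    ys = [placement[cell][1] for cell in net]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

print(hpwl(["u1", "u2", "u3"], placement))  # 3 + 4 = 7
```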
|
https://en.wikipedia.org/wiki/Gene%20product
|
A gene product is the biochemical material, either RNA or protein, resulting from expression of a gene. A measurement of the amount of gene product is sometimes used to infer how active a gene is. Abnormal amounts of gene product can be correlated with disease-causing alleles, such as the overactivity of oncogenes which can cause cancer.
A gene is defined as "a hereditary unit of DNA that is required to produce a functional product". Regulatory elements include:
Promoter region
TATA box
Polyadenylation sequences
Enhancers
These elements work in combination with the open reading frame to create a functional product. This product may be transcribed and function as RNA, or be translated from mRNA into a protein that functions in the cell.
RNA products
RNA molecules that do not code for any proteins still maintain a function in the cell. The function of the RNA depends on its classification. These roles include:
aiding protein synthesis
catalyzing reactions
regulating various processes.
Protein synthesis is aided by functional RNA molecules such as tRNA, which helps add the correct amino acid to a polypeptide chain during translation, rRNA, a major component of ribosomes (which guide protein synthesis), as well as mRNA which carry the instructions for creating the protein product.
One type of functional RNA involved in regulation is microRNA (miRNA), which works by repressing translation. These miRNAs work by binding to a complementary target mRNA sequence to prevent translation from occurring. Short-interfering RNA (siRNA) also act as negative regulators of gene expression. These siRNA molecules work in the RNA-induced silencing complex (RISC) during RNA interference, binding to a target mRNA sequence to prevent expression of the specific mRNA.
Protein products
Proteins are the product of a gene that are formed from translation of a mature mRNA molecule. Protein structure is described at four levels: primary, secondary, tertiary and quaternary.
|
https://en.wikipedia.org/wiki/Repetition%20code
|
In coding theory, the repetition code is one of the most basic linear error-correcting codes. In order to transmit a message over a noisy channel that may corrupt the transmission in a few places, the idea of the repetition code is to just repeat the message several times. The hope is that the channel corrupts only a minority of these repetitions. This way the receiver will notice that a transmission error occurred since the received data stream is not the repetition of a single message, and moreover, the receiver can recover the original message by looking at the received message in the data stream that occurs most often.
Because of the bad error correcting performance coupled with the low code rate (ratio between useful information symbols and actual transmitted symbols), other error correction codes are preferred in most cases. The chief attraction of the repetition code is the ease of implementation.
Code parameters
In the case of a binary repetition code, there exist two code words - all ones and all zeros - which have a length of n. Therefore, the minimum Hamming distance of the code equals its length n. This gives the repetition code an error correcting capacity of ⌊(n − 1)/2⌋ (i.e. it will correct up to ⌊(n − 1)/2⌋ errors in any code word).
If the length of a binary repetition code is odd, then it is a perfect code. The binary repetition code of length n is equivalent to the (n,1)-Hamming code.
Example
Consider a binary repetition code of length 3. The user wants to transmit the information bits 101. The encoding maps each bit either to the all-ones or the all-zeros code word, so we get 111 000 111, which will be transmitted.
Let's say three errors corrupt the transmitted bits and the received sequence is 111 010 100. Decoding is usually done by a simple majority decision for each code word. This leads us to 100 as the decoded information bits, because in the first and second code words fewer than two errors occurred, so the majority of the bits are correct. But in the third
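The article's example can be reproduced with a few lines of Python, a minimal sketch of repetition encoding and majority-vote decoding.

```python
def encode(bits, n=3):
    # Repeat every information bit n times.
    return [b for bit in bits for b in [bit] * n]

def decode(received, n=3):
    # Majority vote inside each block of n received bits.
    blocks = [received[i:i + n] for i in range(0, len(received), n)]
    return [1 if sum(block) > n // 2 else 0 for block in blocks]

sent = encode([1, 0, 1])                  # [1,1,1, 0,0,0, 1,1,1]
received = [1, 1, 1, 0, 1, 0, 1, 0, 0]    # the corrupted sequence 111 010 100
print(decode(received))                   # [1, 0, 0]: third bit decoded wrongly
```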
|
https://en.wikipedia.org/wiki/Restricted%20open-shell%20Hartree%E2%80%93Fock
|
Restricted open-shell Hartree–Fock (ROHF) is a variant of Hartree–Fock method for open shell molecules. It uses doubly occupied molecular orbitals as far as possible and then singly occupied orbitals for the unpaired electrons. This is the simple picture for open shell molecules but it is difficult to implement.
The foundations of the ROHF method were first formulated by Clemens C. J. Roothaan in a celebrated paper and then extended by various authors; see the literature for in-depth discussions.
As with restricted Hartree–Fock theory for closed shell molecules, it leads to Roothaan equations written in the form of a generalized eigenvalue problem
F C = S C ε,
where F is the so-called Fock matrix (which is a function of C), C is a matrix of coefficients, S is the overlap matrix of the basis functions, and ε is the (diagonal, by convention) matrix of orbital energies. Unlike restricted Hartree–Fock theory for closed shell molecules, the form of the Fock matrix is not unique. Different so-called canonicalisations can be used, leading to different orbitals and different orbital energies, but the same total wavefunction, total energy, and other observables.
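Numerically, equations of this form are solved as generalized symmetric eigenproblems; a minimal Python/SciPy sketch with invented 2×2 matrices follows.

```python
import numpy as np
from scipy.linalg import eigh

# Invented "Fock" and overlap matrices, for illustration only.
F = np.array([[-1.0, -0.2],
              [-0.2, -0.5]])
S = np.array([[1.0, 0.3],
              [0.3, 1.0]])

# eigh solves the generalized symmetric eigenproblem F C = S C eps.
eps, C = eigh(F, S)
print(eps)  # orbital energies (diagonal of epsilon)
print(C)    # orbital coefficient columns, orthonormal with respect to S
```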
In contrast to unrestricted Hartree–Fock (UHF), the ROHF wave function is a satisfactory eigenfunction of the total spin operator S² (i.e., there is no spin contamination).
Developing post-Hartree–Fock methods based on a ROHF wave function is inherently more difficult than using a UHF wave function, due to the lack of a unique set of molecular orbitals. However, different choices of reference orbitals have been shown to provide similar results, and thus many different post-Hartree–Fock methods have been implemented in a variety of electronic structure packages. Many (but not all) of these post-Hartree–Fock methods are completely invariant with respect to orbital choice (assuming that no orbitals are "frozen" and thus not correlated).
The ZAPT2 version of Møller–Plesset perturbation theory specifies the choice of orbitals.
|
https://en.wikipedia.org/wiki/Noisy-channel%20coding%20theorem
|
In information theory, the noisy-channel coding theorem (sometimes Shannon's theorem or Shannon's limit), establishes that for any given degree of noise contamination of a communication channel, it is possible to communicate discrete data (digital information) nearly error-free up to a computable maximum rate through the channel. This result was presented by Claude Shannon in 1948 and was based in part on earlier work and ideas of Harry Nyquist and Ralph Hartley.
The Shannon limit or Shannon capacity of a communication channel refers to the maximum rate of error-free data that can theoretically be transferred over the channel if the link is subject to random data transmission errors, for a particular noise level. It was first described by Shannon (1948), and shortly after published in a book by Shannon and Warren Weaver entitled The Mathematical Theory of Communication (1949). This founded the modern discipline of information theory.
Overview
Stated by Claude Shannon in 1948, the theorem describes the maximum possible efficiency of error-correcting methods versus levels of noise interference and data corruption. Shannon's theorem has wide-ranging applications in both communications and data storage. This theorem is of foundational importance to the modern field of information theory. Shannon only gave an outline of the proof; the first rigorous proof for the discrete case was given by Feinstein in 1954.
The Shannon theorem states that, given a noisy channel with channel capacity C and information transmitted at a rate R, if R < C there exist codes that allow the probability of error at the receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit information nearly without error at any rate below a limiting rate, C.
The converse is also important. If R > C, an arbitrarily small probability of error is not achievable. All codes will have a probability of error greater than a certain positive minimal level, and this level increases as the rat
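For one concrete channel, a minimal Python sketch computing the capacity of a binary symmetric channel with crossover probability p, C = 1 − H₂(p); this standard special case is an addition here, not stated in the excerpt.

```python
import math

def binary_entropy(p):
    # H2(p), the binary entropy function in bits.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def bsc_capacity(p):
    # Capacity of a binary symmetric channel with crossover probability p.
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.11))  # ~0.5 bit per channel use
```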
|
https://en.wikipedia.org/wiki/Neuronal%20noise
|
Neuronal noise or neural noise refers to the random intrinsic electrical fluctuations within neuronal networks. These fluctuations are not associated with encoding a response to internal or external stimuli, and their amplitude can vary over one to two orders of magnitude. Most noise commonly occurs below a voltage threshold needed for an action potential to occur, but sometimes it can be present in the form of an action potential; for example, stochastic oscillations in pacemaker neurons in the suprachiasmatic nucleus are partially responsible for the organization of circadian rhythms.
Background
Neuronal activity at the microscopic level has a stochastic character, with atomic collisions and agitation, that may be termed "noise." While it isn't clear on what theoretical basis neuronal responses involved in perceptual processes can be segregated into a "neuronal noise" versus a "signal" component, and how such a proposed dichotomy could be corroborated empirically, a number of computational models incorporating a "noise" term have been constructed.
Single neurons demonstrate different responses to specific neuronal input signals. This is commonly referred to as neural response variability. If a specific input signal is initiated in the dendrites of a neuron, then a hypervariability exists in the number of vesicles released from the axon terminal fiber into the synapse. This characteristic is true for fibers without neural input signals, such as pacemaker neurons, as mentioned previously, and cortical pyramidal neurons that have highly irregular firing patterns. Noise generally hinders neural performance, but recent studies show that, in dynamical non-linear neural networks, this statement does not always hold true. Non-linear neural networks are networks of complex neurons that have many connections with one another, such as the neuronal systems found within our brains. Comparatively, linear networks are an experimental view of analyzing a neural system by placing neurons in series w
|
https://en.wikipedia.org/wiki/F-15%20Strike%20Eagle%20%28video%20game%29
|
F-15 Strike Eagle is a combat flight simulator originally released for the Atari 8-bit family in 1984 by MicroProse and then ported to other systems. It is the first in the F-15 Strike Eagle series, followed by F-15 Strike Eagle II and F-15 Strike Eagle III. An arcade version of the game was released simply as F-15 Strike Eagle in 1991, using higher-end hardware than was available in home systems, including the TMS34010 graphics-oriented CPU.
Gameplay
The game begins with the player selecting Libya (much like Operation El Dorado Canyon), the Persian Gulf, or Vietnam as a mission theater. Play then begins from the cockpit of an F-15 already in flight and equipped with a variety of missiles, bombs, drop tanks, flares and chaff. The player flies the plane in combat to bomb various targets including a primary and secondary target while also engaging in air-to-air combat with enemy fighters.
The game ends when either the player's plane crashes, is destroyed, or when the player returns to base.
Ports
The game was first released for the Atari 8-bit family, with ports appearing from 1985 to 1987 for the Apple II, Commodore 64, ZX Spectrum, MSX, and Amstrad CPC. It was also ported to the IBM PC as a self-booting disk, one of the first games MicroProse released for IBM compatibles. The initial IBM release came on a self-booting 5.25" floppy disk and supported only CGA graphics, but a revised version in 1986 was offered on 3.5" disks and added limited EGA support (the ability to change color palettes if an EGA card was present).
Versions for the Game Boy, Game Gear, and NES were published in the early 1990s.
Reception
F-15 Strike Eagle was a commercial blockbuster. It sold 250,000 copies by March 1987, and surpassed 1 million units in 1989. It ultimately reached over 1.5 million sales overall, and was MicroProse's best-selling Commodore game as of late 1987. Computer Gaming World in 1984 called F-15 "an excellent simulation"
|
https://en.wikipedia.org/wiki/Lindley%20equation
|
In probability theory, the Lindley equation, Lindley recursion or Lindley process is a discrete-time stochastic process A_n, where n takes integer values, with:
A_{n+1} = max(0, A_n + B_n).
Processes of this form can be used to describe the waiting time of customers in a queue or evolution of a queue length over time. The idea was first proposed in the discussion following Kendall's 1951 paper.
Waiting times
In Dennis Lindley's first paper on the subject the equation is used to describe waiting times experienced by customers in a queue with the First-In First-Out (FIFO) discipline.
W_{n+1} = max(0, W_n + U_n)
where
W_n is the waiting time of the nth customer,
T_n is the time between the nth and (n+1)th arrivals,
S_n is the service time of the nth customer, and
U_n = S_n − T_n.
The first customer does not need to wait, so W_1 = 0. Subsequent customers will have to wait if they arrive before the previous customer has been served.
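A minimal Python simulation of this recursion for an illustrative M/M/1 queue; the arrival and service rates below are assumptions for the example.

```python
import random

def lindley_waits(n_customers, arrival_rate=1.0, service_rate=1.25, seed=0):
    # Simulate W_{n+1} = max(0, W_n + U_n) with U_n = S_n - T_n, using
    # exponential service and inter-arrival times (an M/M/1 queue).
    rng = random.Random(seed)
    waits = [0.0]                            # W_1 = 0: the first customer never waits
    for _ in range(n_customers - 1):
        s = rng.expovariate(service_rate)    # S_n, service time of customer n
        t = rng.expovariate(arrival_rate)    # T_n, gap to the next arrival
        waits.append(max(0.0, waits[-1] + s - t))
    return waits

print(lindley_waits(5))
```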
Queue lengths
The evolution of the queue length process can also be written in the form of a Lindley equation.
Integral equation
Lindley's integral equation is a relationship satisfied by the stationary waiting time distribution F(x) in a G/G/1 queue:
F(x) = ∫ K(x − y) dF(y) for x ≥ 0, with the integral taken over y in [0, ∞),
where K(x) is the distribution function of the random variable denoting the difference between the (k − 1)th customer's service time and the inter-arrival time between the (k − 1)th and kth customers. The Wiener–Hopf method can be used to solve this expression.
|
https://en.wikipedia.org/wiki/Gravitational%20wave%20background
|
The gravitational wave background (also GWB and stochastic background) is a random background of gravitational waves permeating the Universe, which is detectable by gravitational-wave experiments, like pulsar timing arrays. The signal may be intrinsically random, like from stochastic processes in the early Universe, or may be produced by an incoherent superposition of a large number of weak independent unresolved gravitational-wave sources, like supermassive black-hole binaries. Detecting the gravitational wave background can provide information that is inaccessible by any other means, about astrophysical source population, like hypothetical ancient supermassive black-hole binaries, and early Universe processes, like hypothetical primordial inflation and cosmic strings.
Sources of a stochastic background
Several potential sources for the background are hypothesized across various frequency bands of interest, with each source producing a background with different statistical properties. The sources of the stochastic background can be broadly divided into two categories: cosmological sources, and astrophysical sources.
Cosmological sources
Cosmological backgrounds may arise from several early universe sources. Some examples of these primordial sources include time-varying inflationary scalar fields in the early universe, "preheating" mechanisms after inflation involving energy transfer from inflaton particles to regular matter, cosmological phase transitions in the early universe (such as the electroweak phase transition), cosmic strings, etc. While these sources are more hypothetical, a detection of a primordial gravitational wave background from them would be a major discovery of new physics and would have a profound impact on early-universe cosmology and on high-energy physics.
Astrophysical sources
An astrophysical background is produced by the combined noise of many weak, independent, and unresolved astrophysical sources. For instance the astrophysical bac
|
https://en.wikipedia.org/wiki/AllAdvantage
|
AllAdvantage was an Internet advertising company that positioned itself as the world’s first "infomediary" by paying its users/members a portion of the advertising revenue generated by their online viewing habits. It became most well known for its slogan "Get Paid to Surf the Web," a phrase that has since become synonymous with a wide array of online ad revenue sharing systems (see, e.g., paid to surf).
History
AllAdvantage was launched on March 31, 1999, by Jim Jorgensen, Johannes Pohle, Carl Anderson, and Oliver Brock. During its nearly two years of operation, it raised nearly $200 million in venture capital and grew to more than 10 million members in its first 18 months. The company's practice of compensating existing members for referring new members led it to become one of the most heavily promoted websites of its time. In 1999, the company had over 4 million members worldwide, in over 240 countries, having delivered more than 4 billion ads in the month of November of that year. That popularity was reflected in the ranking of AllAdvantage.com among the top 20 of many website traffic indices during most of the company's existence, including Nielsen/NetRatings. That method of promotion also led the company to be heavily criticized for its early inability to prevent its members from spamming for referrals in order to collect additional income. It eventually overcame many of those problems, and company executives were deeply involved in anti-spam legislative proposals, including the first anti-spam bill to pass the US House of Representatives.
AllAdvantage ultimately fell victim to the sharp decline in advertising spending as the dot-com bubble burst and the U.S. economy entered a recessionary period in mid-2000. AllAdvantage planned an initial public offering of stock in early 2000, underwritten by investment banker Frank Quattrone of the firm Credit Suisse First Boston. As the IPO market continued to sour through mid-2000, the offering plans were canc
|
https://en.wikipedia.org/wiki/Delayed-choice%20quantum%20eraser
|
A delayed-choice quantum eraser experiment, first performed by Yoon-Ho Kim, R. Yu, S. P. Kulik, Y. H. Shih and Marlan O. Scully, and reported in early 1998, is an elaboration on the quantum eraser experiment that incorporates concepts considered in John Archibald Wheeler's delayed-choice experiment. The experiment was designed to investigate peculiar consequences of the well-known double-slit experiment in quantum mechanics, as well as the consequences of quantum entanglement.
The delayed-choice quantum eraser experiment investigates a paradox. If a photon manifests itself as though it had come by a single path to the detector, then "common sense" (which Wheeler and others challenge) says that it must have entered the double-slit device as a particle. If a photon manifests itself as though it had come by two indistinguishable paths, then it must have entered the double-slit device as a wave. Accordingly, if the experimental apparatus is changed while the photon is in mid‑flight, the photon may have to revise its prior "commitment" as to whether to be a wave or a particle. Wheeler pointed out that when these assumptions are applied to a device of interstellar dimensions, a last-minute decision made on Earth on how to observe a photon could alter a situation established millions or even billions of years earlier.
While delayed-choice experiments might seem to allow measurements made in the present to alter events that occurred in the past, this conclusion requires assuming a non-standard view of quantum mechanics. If a photon in flight is instead interpreted as being in a so-called "superposition of states"—that is, if it is allowed the potentiality of manifesting as a particle or wave, but during its time in flight is neither—then there is no causation paradox. This notion of superposition reflects the standard interpretation of quantum mechanics.
Introduction
In the basic double-slit experiment, a beam of light (usually from a laser) is directed perpendicularly
|
https://en.wikipedia.org/wiki/Magnesium%20transporter
|
Magnesium transporters are proteins that transport magnesium across the cell membrane. All forms of life require magnesium, yet the molecular mechanisms of Mg2+ uptake from the environment and the distribution of this vital element within the organism are only slowly being elucidated.
The ATPase function of MgtA is highly cardiolipin dependent, and MgtA has been shown to detect free magnesium in the μM range.
In bacteria, Mg2+ is probably mainly supplied by the CorA protein and, where the CorA protein is absent, by the MgtE protein. In yeast the initial uptake is via the Alr1p and Alr2p proteins, but at this stage the only internal Mg2+ distributing protein identified is Mrs2p. Within the protozoa only one Mg2+ transporter (XntAp) has been identified. In metazoa, Mrs2p and MgtE homologues have been identified, along with two novel Mg2+ transport systems TRPM6/TRPM7 and PCLN-1. Finally, in plants, a family of Mrs2p homologues has been identified along with another novel protein, AtMHX.
Evolution
The evolution of Mg2+ transport appears to have been rather complicated. Proteins apparently based on MgtE are present in bacteria and metazoa, but are missing in fungi and plants, whilst proteins apparently related to CorA are present in all of these groups. The two active transport transporters present in bacteria, MgtA and MgtB, do not appear to have any homologies in higher organisms. There are also Mg2+ transport systems that are found only in the higher organisms.
Types
There are a large number of proteins yet to be identified that transport Mg2+. Even in the best studied eukaryote, yeast, Borrelly has reported a Mg2+/H+ exchanger without an associated protein, which is probably localised to the Golgi. At least one other major Mg2+ transporter in yeast is still unaccounted for, the one affecting Mg2+ transport in and out of the yeast vacuole. In higher, multicellular organisms, it seems that many Mg2+ transporting proteins await discovery.
The CorA-domain-containing
|
https://en.wikipedia.org/wiki/Wheeler%27s%20delayed-choice%20experiment
|
Wheeler's delayed-choice experiment describes a family of thought experiments in quantum physics proposed by John Archibald Wheeler, with the most prominent among them appearing in 1978 and 1984. These experiments are attempts to decide whether light somehow "senses" the experimental apparatus in the double-slit experiment it travels through, adjusting its behavior to fit by assuming an appropriate determinate state, or whether light remains in an indeterminate state, exhibiting both wave-like and particle-like behavior until measured.
The common intention of these several types of experiments is to first do something that, according to some hidden-variable models, would make each photon "decide" whether it was going to behave as a particle or behave as a wave, and then, before the photon had time to reach the detection device, create another change in the system that would make it seem that the photon had "chosen" to behave in the opposite way. Some interpreters of these experiments contend that a photon either is a wave or is a particle, and that it cannot be both at the same time. Wheeler's intent was to investigate the time-related conditions under which a photon makes this transition between alleged states of being. His work has led to many revealing experiments.
This line of experimentation proved very difficult to carry out when it was first conceived. Nevertheless, it has proven very valuable over the years since it has led researchers to provide "increasingly sophisticated demonstrations of the wave–particle duality of single quanta". As one experimenter explains, "Wave and particle behavior can coexist simultaneously."
Introduction
"Wheeler's delayed-choice experiment" refers to a series of thought experiments in quantum physics, the first being proposed by him in 1978. Another prominent version was proposed in 1983. All of these experiments try to get at the same fundamental issues in quantum physics. Many of them are discussed in Wheeler'
|
https://en.wikipedia.org/wiki/Motronic
|
Motronic is the trade name given to a range of digital engine control units developed by Robert Bosch GmbH (commonly known as Bosch) which combined control of fuel injection and ignition in a single unit. By controlling both major systems in a single unit, many aspects of the engine's characteristics (such as power, fuel economy, drivability, and emissions) can be improved.
Motronic 1.x
Motronic M1.x is powered by various i8051 derivatives made by Siemens, usually the SAB80C515 or SAB80C535. Code and data are stored in a DIL or PLCC EPROM ranging from 32 KB to 128 KB.
1.0
Often known as "Motronic basic", Motronic ML1.x was one of the first digital engine-management systems developed by Bosch. These early Motronic systems integrated the spark timing element with then-existing Jetronic fuel injection technology. It was originally developed and first used in the BMW 7 Series, before being implemented on several Volvo and Porsche engines throughout the 1980s.
The components of the Motronic ML1.x systems for the most part remained unchanged during production, although there are some differences in certain situations. The engine control module (ECM) receives information regarding engine speed, crankshaft angle, coolant temperature and throttle position. An air flow meter also measures the volume of air entering the induction system.
If the engine is naturally aspirated, an air temperature sensor is located in the air flow meter to work out the air mass. However, if the engine is turbocharged, an additional charge air temperature sensor is used to monitor the temperature of the inducted air after it has passed through the turbocharger and intercooler, in order to accurately and dynamically calculate the overall air mass.
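As a rough illustration of that last calculation (not Bosch's actual algorithm; the function name and example figures are assumptions), air mass flow can be estimated from the measured volume flow, pressure, and charge air temperature via the ideal gas law:

#include <stdio.h>

/* Ideal-gas estimate of intake air mass flow (illustrative only).
   volume_flow in m^3/s, pressure in Pa, temperature in K. */
static double air_mass_flow(double volume_flow, double pressure, double temp_k) {
    const double R_AIR = 287.05;                  /* specific gas constant of air, J/(kg*K) */
    double density = pressure / (R_AIR * temp_k); /* kg/m^3 */
    return volume_flow * density;                 /* kg/s */
}

int main(void) {
    /* Example: 0.05 m^3/s at 1.5 bar boost and 40 C charge air temperature. */
    printf("%.4f kg/s\n", air_mass_flow(0.05, 150000.0, 313.15));
    return 0;
}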
Main system characteristics
Fuel delivery, ignition timing, and dwell angle incorporated into the same control unit.
Crank position and engine speed are determined by a pair of sensors reading from the flywheel.
Separate constant idle speed system monitors and re
|
https://en.wikipedia.org/wiki/Dawes%27%20limit
|
Dawes' limit is a formula to express the maximum resolving power of a microscope or telescope. It is named after its discoverer, William Rutter Dawes, although it is also credited to Lord Rayleigh.
The formula takes different forms depending on the units: the resolving power R is 4.56/D when R is in arcseconds and the aperture D is in inches, or equivalently 116/D when D is in millimetres.
This formula agrees with the usual Rayleigh criterion, θ ≈ 1.22 λ/D, at a wavelength of about 460 nm, somewhat bluer than the peak sensitivity of rod cells at c. 498 nm.
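A quick numeric check of the two unit forms above (a minimal sketch; the function names are mine):

#include <stdio.h>

/* Dawes' limit: resolving power R in arcseconds for an aperture D. */
static double dawes_arcsec_from_inches(double d_inches) { return 4.56 / d_inches; }
static double dawes_arcsec_from_mm(double d_mm)         { return 116.0 / d_mm; }

int main(void) {
    /* A 4-inch (101.6 mm) telescope: both forms agree at about 1.14 arcsec. */
    printf("%.3f arcsec\n", dawes_arcsec_from_inches(4.0));
    printf("%.3f arcsec\n", dawes_arcsec_from_mm(101.6));
    return 0;
}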
See also
Rayleigh criterion
|
https://en.wikipedia.org/wiki/Demon%20Deacon
|
The Demon Deacon is the mascot of Wake Forest University, a school located in Winston-Salem, North Carolina, United States. Probably best known for its slightly unorthodox name and appearance, the Demon Deacon has become a mainstay in the world of U.S. college mascots.
History
The early years and "The Old Gold & Black"
The origins of Wake Forest's mascot are distinctive, yet somewhat debated. As early as 1895, Wake Forest College (as it was called at the time) was using its colors in athletic competition. The school's literary magazine, The Wake Forest Student, described them in this manner:
During the early part of the 20th century, these colors became more and more associated with the college. Since Wake Forest was founded as a Baptist college, some historians have proposed an association with the Bible, but most people believe the colors' adoption comes from the connection with the original tiger mascot.
The tiger mascot stayed with the school for a little more than two decades, but reports indicate that by the early 1920s, the college's nicknames were most commonly noted as the "Baptists", or "The Old Gold & Black".
Origin of the Demon Deacons name
The first few decades of the 20th century were particularly rough for the Wake Forest athletic squads, but in 1923, Hank Garrity took the head football and basketball coaching jobs. His leadership gave the school a brief respite from its early mediocrity: he led the football team to three consecutive winning seasons, and the basketball team compiled a 33-14 combined record in two seasons.
In 1923, the Wake Forest football team defeated rival Trinity (later renamed Duke University). In the following issue of the school newspaper, the editor of the paper, Mayon Parker (1924 Wake Forest graduate), first referred to the team as "Demon Deacons", in recognition of what he called their "devilish" play and fighting spirit. Henry Belk, Wake Forest's news director, and Garrity liked the title and used it often, so the popul
|
https://en.wikipedia.org/wiki/Timeline%20of%20information%20theory
|
A timeline of events related to information theory, quantum information theory and statistical physics, data compression, error correcting codes and related subjects.
1872 – Ludwig Boltzmann presents his H-theorem, and with it the formula Σpi log pi for the entropy of a single gas particle
1878 – J. Willard Gibbs defines the Gibbs entropy: the probabilities in the entropy formula are now taken as probabilities of the state of the whole system
1924 – Harry Nyquist discusses quantifying "intelligence" and the speed at which it can be transmitted by a communication system
1927 – John von Neumann defines the von Neumann entropy, extending the Gibbs entropy to quantum mechanics
1928 – Ralph Hartley introduces Hartley information as the logarithm of the number of possible messages, with information being communicated when the receiver can distinguish one sequence of symbols from any other (regardless of any associated meaning)
1929 – Leó Szilárd analyses Maxwell's Demon, showing how a Szilard engine can sometimes transform information into the extraction of useful work
1940 – Alan Turing introduces the deciban as a measure of information inferred about the German Enigma machine cypher settings by the Banburismus process
1944 – Claude Shannon's theory of information is substantially complete
1947 – Richard W. Hamming invents Hamming codes for error detection and correction (to protect patent rights, the result is not published until 1950)
1948 – Claude E. Shannon publishes A Mathematical Theory of Communication
1949 – Claude E. Shannon publishes Communication in the Presence of Noise – Nyquist–Shannon sampling theorem and Shannon–Hartley law
1949 – Claude E. Shannon's Communication Theory of Secrecy Systems is declassified
1949 – Robert M. Fano publishes Transmission of Information. M.I.T. Press, Cambridge, Massachusetts – Shannon–Fano coding
1949 – Leon G. Kraft discovers Kraft's inequality, which shows the limits of prefix codes
1949 –
|
https://en.wikipedia.org/wiki/Born%E2%80%93von%20Karman%20boundary%20condition
|
Born–von Karman boundary conditions are periodic boundary conditions which impose the restriction that a wave function must be periodic on a certain Bravais lattice. Named after Max Born and Theodore von Kármán, this condition is often applied in solid state physics to model an ideal crystal. Born and von Karman published a series of articles in 1912 and 1913 that presented one of the first theories of specific heat of solids based on the crystalline hypothesis and included these boundary conditions.
The condition can be stated as
ψ(r + Niai) = ψ(r)
where i runs over the dimensions of the Bravais lattice, the ai are the primitive vectors of the lattice, and the Ni are integers (assuming the lattice has N cells where N = N1N2N3). This definition can be used to show that
ψ(r + T) = ψ(r)
for any lattice translation vector T such that:
T = Σi Niai
Note, however, that the Born–von Karman boundary conditions are useful when the Ni are large (effectively infinite).
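In one dimension the condition reduces to ψ(x + Na) = ψ(x), so plane-wave solutions e^(ikx) are restricted to k = 2πm/(Na). A minimal sketch enumerating these allowed wavevectors, with illustrative values of N and a:

#include <stdio.h>

int main(void) {
    const int N = 8;        /* number of cells in the 1D ring (illustrative) */
    const double a = 1.0;   /* lattice constant (illustrative units) */
    const double PI = 3.14159265358979323846;
    /* Periodicity over N cells forces exp(ikNa) = 1, i.e. k = 2*pi*m/(N*a). */
    for (int m = 0; m < N; m++)
        printf("k_%d = %.6f\n", m, 2.0 * PI * (double)m / (N * a));
    return 0;
}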
The Born–von Karman boundary condition is important in solid state physics for analyzing many features of crystals, such as diffraction and the band gap. Modeling the potential of a crystal as a periodic function with the Born–von Karman boundary condition and plugging in Schrödinger's equation results in a proof of Bloch's theorem, which is particularly important in understanding the band structure of crystals.
However, since any real crystal always has a finite size, the electronic states in the crystal do not satisfy the Born–von Karman boundary condition. Consequently, the conventional theory of electronic states in crystals based on the Bloch's theorem has some fundamental difficulties.
|
https://en.wikipedia.org/wiki/Bragg%20peak
|
The Bragg peak is a pronounced peak on the Bragg curve which plots the energy loss of ionizing radiation during its travel through matter. For protons, α-rays, and other ion rays, the peak occurs immediately before the particles come to rest. It is named after William Henry Bragg, who discovered it in 1903.
When a fast charged particle moves through matter, it ionizes atoms of the material and deposits a dose along its path. A peak occurs because the interaction cross section increases as the charged particle's energy decreases. Energy lost by charged particles is inversely proportional to the square of their velocity, which explains the peak occurring just before the particle comes to a complete stop. In the upper figure, it is the peak for alpha particles of 5.49 MeV moving through air. In the lower figure, it is the narrow peak of the "native" proton beam curve which is produced by a particle accelerator of 250 MeV. The figure also shows the absorption of a beam of energetic photons (X-rays) which is entirely different in nature; the curve is mainly exponential.
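A toy model captures this behavior: if the stopping power is taken as dE/dx = k/E (the non-relativistic 1/v² dependence mentioned above, with all material detail folded into an arbitrary constant k), the energy deposited per step grows as the particle slows and peaks just before it stops. A minimal sketch, in arbitrary units:

#include <stdio.h>

int main(void) {
    double energy = 10.0;     /* initial kinetic energy, arbitrary units */
    const double k = 5.0;     /* toy stopping-power constant */
    const double dx = 0.01;   /* step length */
    double depth = 0.0, peak_deposit = 0.0, peak_depth = 0.0;
    while (energy > 0.0) {
        double deposit = k / energy * dx;   /* dE = (k/E) dx */
        if (deposit > energy) deposit = energy;
        if (deposit > peak_deposit) { peak_deposit = deposit; peak_depth = depth; }
        energy -= deposit;
        depth  += dx;
    }
    /* The deposit per step is maximal on the last step before stopping:
       the toy analogue of the Bragg peak. Analytic range here: E0^2/(2k) = 10. */
    printf("range ~ %.2f, peak deposit %.4f at depth %.2f\n",
           depth, peak_deposit, peak_depth);
    return 0;
}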
This characteristic of proton beams was first recommended for use in cancer therapy by Robert R. Wilson in his 1946 article, Radiological Use of Fast Protons. Wilson studied how the depth of proton beam penetration could be controlled by the energy of the protons. This phenomenon is exploited in particle therapy of cancer, specifically in proton therapy, to concentrate the effect of light ion beams on the tumor being treated while minimizing the effect on the surrounding healthy tissue.
The blue curve in the figure ("modified proton beam") shows how the originally monoenergetic proton beam with the sharp peak is widened by increasing the range of energies, so that a larger tumor volume can be treated. The plateau created by modifying the proton beam is referred to as the spread out Bragg Peak, or SOBP, which allows the treatment to conform to not only larger tumors, but to more specific 3D shapes. Thi
|
https://en.wikipedia.org/wiki/Rencontres%20numbers
|
In combinatorial mathematics, the rencontres numbers are a triangular array of integers that enumerate permutations of the set { 1, ..., n } with specified numbers of fixed points: in other words, partial derangements. (Rencontre is French for encounter. By some accounts, the problem is named after a solitaire game.) For n ≥ 0 and 0 ≤ k ≤ n, the rencontres number Dn, k is the number of permutations of { 1, ..., n } that have exactly k fixed points.
For example, if seven presents are given to seven different people, but only two are destined to get the right present, there are D7, 2 = 924 ways this could happen. Another often cited example is that of a dance school with 7 couples, where, after the tea break, the participants are told to find a partner at random to continue; then once more there are D7, 2 = 924 possibilities that 2 previous couples meet again by chance.
Numerical values
Here is the beginning of this array:

n \ k    0      1      2     3     4    5    6   7
0        1
1        0      1
2        1      0      1
3        2      3      0     1
4        9      8      6     0     1
5        44     45     20    10    0    1
6        265    264    135   40    15   0    1
7        1854   1855   924   315   70   21   0   1
Formulas
The numbers in the k = 0 column enumerate derangements. Thus
D0, 0 = 1, D1, 0 = 0, and
Dn+2, 0 = (n + 1)(Dn+1, 0 + Dn, 0)
for non-negative n. It turns out that
Dn, 0 = [ n!/e ],
where the ratio n!/e is rounded up for even n and rounded down for odd n. For n ≥ 1, this gives the nearest integer.
More generally, for any 0 ≤ k ≤ n, we have
Dn, k = C(n, k) · Dn−k, 0
The proof is easy after one knows how to enumerate derangements: choose the k fixed points out of n; then choose the derangement of the other n − k points.
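That counting argument translates directly into code. A minimal sketch (the helper names are mine) that computes Dn, k as C(n, k) times a derangement number, reproducing D7, 2 = 924:

#include <stdio.h>

/* Derangement numbers via D(n) = (n-1)(D(n-1) + D(n-2)), D(0)=1, D(1)=0. */
static unsigned long long derangement(int n) {
    unsigned long long prev2 = 1, prev1 = 0, d = 0; /* D(0), D(1) */
    if (n == 0) return 1;
    if (n == 1) return 0;
    for (int i = 2; i <= n; i++) {
        d = (unsigned long long)(i - 1) * (prev1 + prev2);
        prev2 = prev1;
        prev1 = d;
    }
    return d;
}

/* Binomial coefficient C(n, k); exact at each step of the loop. */
static unsigned long long binomial(int n, int k) {
    unsigned long long c = 1;
    for (int i = 1; i <= k; i++)
        c = c * (unsigned long long)(n - k + i) / i;
    return c;
}

int main(void) {
    /* D(7, 2): choose the 2 fixed points, derange the other 5. */
    printf("%llu\n", binomial(7, 2) * derangement(5)); /* prints 924 */
    return 0;
}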
The numbers Dn, 0 / n! are generated by the power series e^(−x) / (1 − x); accordingly, an explicit formula for Dn, m can be derived as follows:
Dn, m = (n! / m!) · Σk=0..n−m (−1)^k / k!
This immediately implies that
Dn, m / n! → e^(−1) / m!
for n large, m fixed.
Probability distribution
The sum of the entries in each row for the table in "Numerical Values" is the total number of permutations of { 1, ..., n }, and is therefore n!. If one divides all the entries in the nth row by n!, one gets the probability distribution of the number of fixed points of a uniformly distributed random permutation of { 1, ..., n }. The probability that the number of fixed points is k is
Dn, k / n!
For n ≥ 1, the expected number of fixed points is 1
|
https://en.wikipedia.org/wiki/B%C3%A1nh%20tr%C3%A1ng
|
Bánh tráng or bánh đa nem, a Vietnamese term (literally, coated bánh), sometimes called rice paper wrappers, rice crepes, rice wafers or nem wrappers, are edible Vietnamese wrappers used in Vietnamese cuisine, primarily in finger foods and appetizers such as Vietnamese nem dishes. The term rice paper wrappers can sometimes be a misnomer, as some banh trang wrappers are made from rice flour supplemented with tapioca flour or sometimes replaced completely with tapioca starch. The roasted version is bánh tráng nướng.
Description
Vietnamese banh trang are rice paper wrappers that are edible. They are made from steamed rice batter, then sun-dried. A more modern method is to use machines that can steam and dry the wrapper for a thinner and more hygienic product, suitable for the export market.
Types
Vietnamese banh trang wrappers come in various textures, shapes and types. Textures may vary from thin, soft to thick (much like a rice cracker). Banh trang wrappers come in various shapes, though circular and squared shapes are most commonly used. A plethora of local Vietnamese ingredients and spices are added to Vietnamese banh trang wrappers for the purpose of creating different flavors and textures, such as sesame seeds, chili, coconut milk, bananas, and durian, to name a few.
Bánh tráng
Southern Vietnamese term for rice wrappers, which are also commonly used overseas. These banh trang wrappers are made from a mixture of rice flour with tapioca starch, water and salt. These wrappers are thin and light in texture. They are often used for chả giò and gỏi cuốn. There are also certain rice wrappers products that are specifically for frying.
Bánh đa nướng / Bánh tráng nướng (grilled rice cracker)
Bánh đa nướng or bánh tráng nướng are roasted or grilled rice crackers. Some can be thicker than the standard rice wrapper and can also include sesame seeds. It is often used in dishes like Mì Quảng as a topping. Not to be confused with the street food dish from Đà Lạt (also known
|
https://en.wikipedia.org/wiki/VIX
|
VIX is the ticker symbol and the popular name for the Chicago Board Options Exchange's CBOE Volatility Index, a popular measure of the stock market's expectation of volatility based on S&P 500 index options. It is calculated and disseminated on a real-time basis by the CBOE, and is often referred to as the fear index or fear gauge.
The VIX traces its origin to the financial economics research of Menachem Brenner and Dan Galai. In a series of papers beginning in 1989, Brenner and Galai proposed the creation of a series of volatility indices, beginning with an index on stock market volatility, and moving to interest rate and foreign exchange rate volatility.
In their papers, Brenner and Galai proposed, "[the] volatility index, to be named 'Sigma Index', would be updated frequently and used as the underlying asset for futures and options. ... A volatility index would play the same role as the market index plays for options and futures on the index." In 1992, the CBOE hired consultant Bob Whaley to calculate values for stock market volatility based on this theoretical work. Whaley utilized data series in the index options market, and computed daily VIX levels from January 1986 to May 1992.
The resulting VIX index formulation provides a measure of market volatility on which expectations of further stock market volatility in the near future might be based. The current VIX index value quotes the expected annualized change in the S&P 500 index over the following 30 days, as computed from options-based theory and current options-market data.
To summarize, VIX is a volatility index derived from S&P 500 options for the 30 days following the measurement date, with the price of each option representing the market's expectation of 30-day forward-looking volatility.
Like conventional indexes, the VI
|
https://en.wikipedia.org/wiki/Jackshaft%20%28locomotive%29
|
A jackshaft is an intermediate shaft used to transfer power from a powered shaft such as the output shaft of an engine or motor to driven shafts such as the drive axles of a locomotive. As applied to railroad locomotives in the 19th and 20th centuries, jackshafts were typically in line with the drive axles of locomotives and connected to them by side rods. In general, each drive axle on a locomotive is free to move about one inch (2.5 cm) vertically relative to the frame, with the locomotive weight carried on springs. This means that if the engine, motor or transmission is rigidly attached to the locomotive frame, it cannot be rigidly connected to the axle. This problem can be solved by mounting the jackshaft on unsprung bearings and using side-rods or (in some early examples) chain drives.
Jackshafts were first used in early steam locomotives, although the designers did not yet call them by that name. In the early 20th century, large numbers of jackshaft-driven electric locomotives were built for heavy mainline service. Jackshaft drives were also used in many early gasoline and diesel locomotives that used mechanical transmissions.
Steam locomotives
The Baltimore and Ohio Railroad was a pioneer in the use of jackshaft-driven locomotives. While the drive axle of the first Grasshopper locomotive was directly driven by spur gears from the crankshaft, the Traveler, delivered in 1833, used a jackshaft, as did all the later Grasshopper and Crab locomotives. These locomotives used step-up gearing to achieve a reasonable running speed using small-diameter driving wheels. It is notable that the term jackshaft was not used by the designers of these machines. Instead, they referred to what would later be called a jackshaft as "a separate axle, about three feet forward of the front axle, and carrying cranks coupled by connecting rods to cranks on the two road axles." In his 1837 patent for what became known as the crab class of locomotives, Ross Winans referred to
|
https://en.wikipedia.org/wiki/Darda%20%28toy%29
|
Darda is the name of a German toy car racing set (and related items) which was most popular in Europe and the USA throughout the 1980s and '90s.
The unique selling point of the sets is the special Darda Motor, invented by Helmut Darda in 1970, which propelled the cars (similarly sized to Matchbox or Hot Wheels) at speeds of up to 30 mph (50 km/h). The pullback motor is wound up by pressing down the rear of the car and rolling it forwards and backwards on its wheels. Whilst winding the car up the motor clicks and once fully wound the tone of clicks deepens to signify that it can be wound no more.
The cars can be run on any surface but are designed to be run on special Darda tracks which can be bought as sets or individual track pieces. The tracks incorporate loops, jumps, curves and crossovers, and can be combined into quite elaborate creations including multi-level loops and Y-shaped return curves.
As with Hot Wheels and Matchbox, a range of cars is available. As Darda was originally a German company, many of the cars were based on German brands such as VW and Porsche. Various special cars were also created including a dragster and a mouse (which could not run on the Darda tracks as it was too wide). Another variant available in the late '80s was a replica KITT (Knight Industries Two Thousand) from the US television series Knight Rider. Recent models have included custom NASCAR racers, patrol cars from a variety of police forces including the New York Police Department, and a series of mounted collectible cars.
In the late '80s a new version of the Darda motor was introduced, called the Darda-Stop motor (later renamed the Darda Stop-n-Go motor), which allowed the motor to be locked once wound so that the car could be placed on the track without immediately setting off. This allowed for longer tracks and 'tag-team' relay style racing where once the first car had gone round the track and just run out of power, it would tap the back of the wound up one and set it g
|
https://en.wikipedia.org/wiki/Immune%20reconstitution%20inflammatory%20syndrome
|
Immune reconstitution inflammatory syndrome (IRIS) is a condition seen in some cases of HIV/AIDS or immunosuppression, in which the immune system begins to recover, but then responds to a previously acquired opportunistic infection with an overwhelming inflammatory response that paradoxically makes the symptoms of infection worse.
IRIS may also be referred to as immune reconstitution syndrome, immune reconstitution disease, immune recovery disease, and immune restoration disease.
Systemic or local inflammatory responses may occur with improvement in immune function. While this inflammatory reaction is usually self-limited, there is risk of long-term symptoms and death, particularly when the central nervous system is involved.
Management generally involves symptom control and treatment of the underlying infection. In severe cases of IRIS, corticosteroids are commonly used. Important exceptions to using corticosteroids include Cryptococcal meningitis and Kaposi’s sarcoma, as they have been associated with poorer outcomes.
Mechanism
There are two common IRIS scenarios. The first is the “unmasking” of an occult opportunistic infection. The second is the “paradoxical” symptomatic relapse of a prior infection despite microbiologic treatment success. Often in paradoxical IRIS, microbiologic cultures are sterile. In either scenario, there is hypothesized reconstitution of antigen-specific T cell-mediated immunity with activation of the immune system against persisting antigen, whether present as intact organisms, dead organisms, or debris.
In HIV infection and immunosuppression
The suppression of CD4 T cells by HIV (or by immunosuppressive drugs) causes a decrease in the body's normal response to certain infections. Not only does this make it more difficult to fight the infection, it may mean that a level of infection that would normally produce symptoms is instead undetected (subclinical infection). If the CD4 count rapidly increases (due to effective treatment of
|
https://en.wikipedia.org/wiki/Cohen%E2%80%93Sutherland%20algorithm
|
In computer graphics, the Cohen–Sutherland algorithm is an algorithm used for line clipping. The algorithm divides a two-dimensional space into 9 regions and then efficiently determines the lines and portions of lines that are visible in the central region of interest (the viewport).
The algorithm was developed in 1967 during flight simulator work by Danny Cohen and Ivan Sutherland.
The algorithm
The algorithm includes, excludes or partially includes the line based on whether:
Both endpoints are in the viewport region (bitwise OR of endpoints = 0000): trivial accept.
Both endpoints share at least one non-visible region, which implies that the line does not cross the visible region. (bitwise AND of endpoints ≠ 0000): trivial reject.
Both endpoints are in different regions: in this nontrivial situation the algorithm finds one of the two points that is outside the viewport region (there will be at least one point outside). The intersection of the outpoint and extended viewport border is then calculated (i.e. with the parametric equation for the line), and this new point replaces the outpoint. The algorithm repeats until a trivial accept or reject occurs.
The numbers in the table below are called outcodes. An outcode is computed for each of the two points in the line. The outcode will have 4 bits for two-dimensional clipping, or 6 bits in the three-dimensional case. The first bit is set to 1 if the point is above the viewport. The bits in the 2D outcode represent: top, bottom, right, left. For example, the outcode 1010 represents a point that is top-right of the viewport.
          left     central  right
top       1001     1000     1010
central   0001     0000     0010
bottom    0101     0100     0110
Note that the outcodes for endpoints must be recalculated on each iteration after the clipping occurs.
The Cohen–Sutherland algorithm can be used only on a rectangular clip window.
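A minimal, self-contained sketch of the outcode computation and the two trivial tests described above; the constant names and window bounds here are illustrative, not taken from any particular implementation:

#include <stdio.h>

typedef int OutCode;

enum { INSIDE = 0, LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8 };

/* Clip-window bounds (assumed for this example). */
static const double XMIN = 0.0, XMAX = 10.0, YMIN = 0.0, YMAX = 10.0;

static OutCode compute_outcode(double x, double y) {
    OutCode code = INSIDE;
    if (x < XMIN)      code |= LEFT;
    else if (x > XMAX) code |= RIGHT;
    if (y < YMIN)      code |= BOTTOM;
    else if (y > YMAX) code |= TOP;
    return code;
}

int main(void) {
    /* Trivial accept: both endpoints inside, so the OR of the outcodes is 0. */
    printf("OR  = %d\n", compute_outcode(1, 1) | compute_outcode(9, 9));
    /* Trivial reject: both endpoints left of the window, so the AND is nonzero. */
    printf("AND = %d\n", compute_outcode(-1, 1) & compute_outcode(-2, 9));
    return 0;
}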
Example C/C++ implementation
typedef int OutCode;
co
|
https://en.wikipedia.org/wiki/Nicholl%E2%80%93Lee%E2%80%93Nicholl%20algorithm
|
In computer graphics, the Nicholl–Lee–Nicholl algorithm is a fast algorithm for line clipping that reduces the chances of clipping a single line segment multiple times, as may happen in the Cohen–Sutherland algorithm.
Description
Using the Nicholl–Lee–Nicholl algorithm, the area around the clipping window is divided into a number of different areas, depending on the position of the initial point of the line to be clipped. This initial point should be in one of three predetermined areas; thus the line may have to be translated and/or rotated to bring it into the desired region. The line segment may then be re-translated and/or re-rotated to bring it to the original position. After that, straight line segments are drawn from the line end point, passing through the corners of the clipping window. These areas are then designated as L, LT, LB, or TR, depending on the location of the initial point. Then the other end point of the line is checked against these areas. If a line starts in the L area and finishes in the LT area, then the algorithm concludes that the line should be clipped at xw(max). Thus the number of clipping points is reduced to one, compared to other algorithms that may require two or more clipping points.
See also
Algorithms used for the same purpose:
Liang–Barsky algorithm
Cyrus–Beck algorithm
Fast clipping
|
https://en.wikipedia.org/wiki/Ethnocomputing
|
Ethnocomputing is the study of the interactions between computing and culture. It is carried out through theoretical analysis, empirical investigation, and design implementation. It includes research on the impact of computing on society, as well as the reverse: how cultural, historical, personal, and societal origins and surroundings cause and affect the innovation, development, diffusion, maintenance, and appropriation of computational artifacts or ideas. From the ethnocomputing perspective, no computational technology is culturally "neutral," and no cultural practice is a computational void. Instead of considering culture to be a hindrance for software engineering, culture should be seen as a resource for innovation and design.
Subject matter
Social categories for ethnocomputing include:
Indigenous computing: In some cases, ethnocomputing "translates" from indigenous culture to high tech frameworks: for example, analyzing the African board game Owari as a one-dimensional cellular automaton.
Social/historical studies of computing: In other cases ethnocomputing seeks to identify the social, cultural, historical, or personal dimensions of high tech computational ideas and artifacts: for example, the relationship between the Turing Test and Alan Turing's closeted gay identity.
Appropriation in computing: lay persons who did not participate in the original design of a computing system can still affect it by modifying its interpretation, use, or structure. Such "modding" may be as subtle as the keyboard-character "emoticons" created through lay use of email, or as blatant as the stylized customization of computer cases.
Equity tools: a software "Applications Quest" has been developed for generating a "diversity index" that allows consideration of multiple identity characteristics in college admissions.
Technical categories in ethnocomputing include:
Organized structures and models used to represent information (data structures)
Ways of manipulating the organiz
|
https://en.wikipedia.org/wiki/Immunoscreening
|
Immunoscreening is a method of biotechnology used to detect a polypeptide produced from a cloned gene. The term encompasses several different techniques designed for protein identification, such as Western blotting, using recombinant DNA, and analyzing antibody-peptide interactions.
Clones are screened for the presence of the gene product: the resulting protein.
This strategy requires first that a gene library is implemented in an expression vector, and that antiserum to the protein is available.
Radioactivity or an enzyme is generally coupled with the secondary antibody. The radioactivity- or enzyme-linked secondary antibody can be purchased commercially and can detect different antigens. In commercial diagnostics labs, labelled primary antibodies are also used. The antigen–antibody interaction is used in the immunoscreening of several diseases.
See also
ELISA
Blots
|
https://en.wikipedia.org/wiki/Polyphosphate-accumulating%20organisms
|
Polyphosphate-accumulating organisms (PAOs) are a group of microorganisms that, under certain conditions, facilitate the removal of large amounts of phosphorus from their environments. The most studied example of this phenomenon is in polyphosphate-accumulating bacteria (PAB) found in a type of wastewater processing known as enhanced biological phosphorus removal (EBPR); however, phosphate hyperaccumulation has been found to occur in other settings such as soil and marine environments, as well as in non-bacterial organisms such as fungi and algae. PAOs accomplish this removal of phosphate by accumulating it within their cells as polyphosphate. PAOs are by no means the only microbes that can accumulate phosphate within their cells; in fact, the production of polyphosphate is a widespread ability among microbes. However, PAOs have many characteristics that other polyphosphate-accumulating organisms do not, which make them amenable to use in wastewater treatment. Chief among these, in the case of classical PAOs, is the ability to consume simple carbon compounds (an energy source) without the presence of an external electron acceptor (such as nitrate or oxygen), by generating energy from internally stored polyphosphate and glycogen. Most other bacteria cannot take up these compounds under such conditions, and therefore PAOs gain a selective advantage within the mixed microbial community present in the activated sludge. Therefore, wastewater treatment plants that operate for enhanced biological phosphorus removal have an anaerobic tank (where there is no nitrate or oxygen present as external electron acceptor) ahead of the other tanks, to give PAOs preferential access to the simple carbon compounds in the wastewater entering the plant.
Metabolisms
Classical (Canonical) PAO Metabolism
The classical or "canonical" behavior of PAOs is considered to be the release of phosphate (as orthophosphate) to the environment and transformation of intracellular polyphosphate reserves int
|
https://en.wikipedia.org/wiki/Izod%20impact%20strength%20test
|
The Izod impact strength test is an ASTM standard method of determining the impact resistance of materials. A pivoting arm is raised to a specific height (constant potential energy) and then released. The arm swings down hitting a notched sample, breaking the specimen. The energy absorbed by the sample is calculated from the height the arm swings to after hitting the sample. A notched sample is generally used to determine impact energy and notch sensitivity.
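The absorbed energy follows from a simple pendulum energy balance. A minimal sketch, under the simplifying assumption that friction and windage losses are negligible (real machines correct for such losses):

#include <stdio.h>

/* Energy absorbed by the specimen, from the arm's release height and
   follow-through rise height (metres), for an arm of effective mass m_kg. */
static double absorbed_energy_joules(double m_kg, double h_release, double h_rise) {
    const double g = 9.81; /* m/s^2 */
    return m_kg * g * (h_release - h_rise);
}

int main(void) {
    /* Illustrative numbers: a 20 kg hammer released from 0.6 m rising to 0.4 m. */
    printf("%.2f J\n", absorbed_energy_joules(20.0, 0.6, 0.4)); /* 39.24 J */
    return 0;
}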
The test is similar to the Charpy impact test but uses a different arrangement of the specimen under test. The Izod impact test differs from the Charpy impact test in that the sample is held in a cantilevered beam configuration as opposed to a three-point bending configuration.
The test is named after the English engineer Edwin Gilbert Izod (1876–1946), who described it in his 1903 address to the British Association, subsequently published in Engineering.
The need for Impact tests
Impact, by definition, is a large force applied for a very short time, resulting in a sudden transfer of momentum and energy, and its effect differs from that of the same amount of energy transferred more gradually. Everyday engineering structures are subjected to such loads and may develop cracks that, over time, propagate to a point where catastrophic failure results.
Impact tests are used in comparing the shear fracture toughness of various materials under the same test conditions, or of one material versus temperature to determine its ductile-to-brittle transition temperature where a steep descent in impact strength with decreasing temperature is observed.
A material's toughness is a factor of its ability to absorb energy during relatively slow plastic deformation, though the rate at which strain occurs matters. Brittle materials have low toughness as a result of the small amount of plastic deformation they can endure at any rate. However, ductile materials may behave like brittle materials under high-energy impact, hence the
|
https://en.wikipedia.org/wiki/Design%20flow%20%28EDA%29
|
Design flows are the explicit combination of electronic design automation tools to accomplish the design of an integrated circuit. Moore's law has driven the entire IC implementation RTL-to-GDSII design flow from one that used primarily stand-alone synthesis, placement, and routing algorithms to integrated construction and analysis flows for design closure. The challenges of rising interconnect delay led to a new way of thinking about and integrating design closure tools.
The RTL to GDSII flow underwent significant changes from 1980 through 2005. The continued scaling of CMOS technologies significantly changed the objectives of the various design steps. The lack of good predictors for delay has led to significant changes in recent design flows. New scaling challenges such as leakage power,
variability, and reliability will continue to require significant changes to the design closure process in the future. Many factors describe what drove the design flow from a set of separate design steps to a fully integrated approach, and what further changes are coming to address the latest challenges. In his keynote at the 40th Design Automation Conference entitled The Tides of EDA, Alberto Sangiovanni-Vincentelli distinguished three periods of EDA:
The Age of Invention: During the invention era, routing, placement, static timing analysis and logic synthesis were invented.
The Age of Implementation: In the age of implementation, these steps were drastically improved by designing sophisticated data structures and advanced algorithms. This allowed the tools in each of these design steps to keep pace with the rapidly increasing design sizes. However, due to the lack of good predictive cost functions, it became impossible to execute a design flow by a set of discrete steps, no matter how efficiently each of the steps was implemented.
The Age of Integration: This led to the age of integration where most of the design steps are performed in an integrated environment, d
|
https://en.wikipedia.org/wiki/Zarankiewicz%20problem
|
The Zarankiewicz problem, an unsolved problem in mathematics, asks for the largest possible number of edges in a bipartite graph that has a given number of vertices and has no complete bipartite subgraphs of a given size. It belongs to the field of extremal graph theory, a branch of combinatorics, and is named after the Polish mathematician Kazimierz Zarankiewicz, who proposed several special cases of the problem in 1951.
Problem statement
A bipartite graph G = (U, V, E) consists of two disjoint sets of vertices U and V, and a set of edges E, each of which connects a vertex in U to a vertex in V. No two edges can both connect the same pair of vertices. A complete bipartite graph is a bipartite graph in which every pair of a vertex from U and a vertex from V is connected to each other. A complete bipartite graph in which U has s vertices and V has t vertices is denoted Ks,t. If G = (U, V, E) is a bipartite graph, and there exists a set of s vertices of U and t vertices of V that are all connected to each other, then these vertices induce a subgraph of the form Ks,t. (In this formulation, the ordering of s and t is significant: the set of s vertices must be from U and the set of t vertices must be from V, not vice versa.)
The Zarankiewicz function z(m, n; s, t) denotes the maximum possible number of edges in a bipartite graph G = (U, V, E) for which |U| = m and |V| = n, but which does not contain a subgraph of the form Ks,t. As a shorthand for an important special case, z(n; t) is the same as z(n, n; t, t). The Zarankiewicz problem asks for a formula for the Zarankiewicz function, or (failing that) for tight asymptotic bounds on the growth rate of z(n; t) assuming that t is a fixed constant, in the limit as n goes to infinity.
For s = t = 2 this problem is the same as determining cages with girth six. The Zarankiewicz problem, cages and finite geometry are strongly interrelated.
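For small graphs the forbidden-subgraph condition is easy to test directly. A brute-force sketch (the names are mine) that checks whether a bipartite graph, given as a 0/1 adjacency matrix, contains a K2,2, i.e. two vertices of U sharing two common neighbours in V:

#include <stdbool.h>
#include <stdio.h>

#define M 4  /* |U| */
#define N 4  /* |V| */

/* True if some two vertices of U share at least two common neighbours in V. */
static bool contains_k22(const int adj[M][N]) {
    for (int u1 = 0; u1 < M; u1++)
        for (int u2 = u1 + 1; u2 < M; u2++) {
            int common = 0;
            for (int v = 0; v < N; v++)
                if (adj[u1][v] && adj[u2][v]) common++;
            if (common >= 2) return true;
        }
    return false;
}

int main(void) {
    /* A small K2,2-free example: no two rows share two columns. */
    const int adj[M][N] = {
        {1, 1, 0, 0},
        {1, 0, 1, 0},
        {0, 1, 1, 0},
        {0, 0, 0, 1},
    };
    printf("%s\n", contains_k22(adj) ? "contains K2,2" : "K2,2-free");
    return 0;
}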
The same problem can also be formulated in terms of digital geometry. The possible edges of a bipartite graph can be visualized as the points of a rectangle in the integer lattice, and a complete subgraph is a set of rows a
|
https://en.wikipedia.org/wiki/Query%20optimization
|
Query optimization is a feature of many relational database management systems and other databases such as NoSQL and graph databases. The query optimizer attempts to determine the most efficient way to execute a given query by considering the possible query plans.
Generally, the query optimizer cannot be accessed directly by users: once queries are submitted to the database server, and parsed by the parser, they are then passed to the query optimizer where optimization occurs. However, some database engines allow guiding the query optimizer with hints.
A query is a request for information from a database. It can be as simple as "find the address of a person with Social Security number 123-45-6789," or more complex like "find the average salary of all the employed married men in California between the ages 30 to 39 who earn less than their spouses." The result of a query is generated by processing the rows in a database in a way that yields the requested information. Since database structures are complex, in most cases, and especially for not-very-simple queries, the needed data for a query can be collected from a database by accessing it in different ways, through different data-structures, and in different orders. Each different way typically requires different processing time. Processing times of the same query may have large variance, from a fraction of a second to hours, depending on the chosen method. The purpose of query optimization, which is an automated process, is to find the way to process a given query in minimum time. The large possible variance in time justifies performing query optimization, though finding the exact optimal query plan, among all possibilities, is typically very complex, time-consuming by itself, may be too costly, and often practically impossible. Thus query optimization typically tries to approximate the optimum by comparing several common-sense alternatives to provide in a reasonable time a "good enough" plan which typically does
|
https://en.wikipedia.org/wiki/Polarization-division%20multiple%20access
|
Polarization-division multiple access (PDMA) is a channel access method used in some cellular networks and broadcast satellite services. Separate antennas are used in this type, each with different polarization and followed by separate receivers, allowing simultaneous regional access of satellites.
Each corresponding ground station antenna needs to be polarized in the same way as its counterpart in the satellite. This is generally accomplished by providing each participating ground station with an antenna that has dual polarization. The frequency band allocated to each antenna beam can be identical because the uplink signals are orthogonal in polarization. This technique allows frequency reuse.
See also
Frequency-division multiple access
Code-division multiple access
Time-division multiple access
Channel access methods
Polarization (waves)
|
https://en.wikipedia.org/wiki/Hardware%20security%20module
|
A hardware security module (HSM) is a physical computing device that safeguards and manages secrets (most importantly digital keys), performs encryption and decryption functions for digital signatures, strong authentication and other cryptographic functions. These modules traditionally come in the form of a plug-in card or an external device that attaches directly to a computer or network server. A hardware security module contains one or more secure cryptoprocessor chips.
Design
HSMs may have features that provide tamper evidence such as visible signs of tampering or logging and alerting, or tamper resistance which makes tampering difficult without making the HSM inoperable, or tamper responsiveness such as deleting keys upon tamper detection. Each module contains one or more secure cryptoprocessor chips to prevent tampering and bus probing, or a combination of chips in a module that is protected by the tamper evident, tamper resistant, or tamper responsive packaging.
A vast majority of existing HSMs are designed mainly to manage secret keys. Many HSM systems have means to securely back up the keys they handle outside of the HSM. Keys may be backed up in wrapped form and stored on a computer disk or other media, or externally using a secure portable device like a smartcard or some other security token.
HSMs are used for real-time authorization and authentication in critical infrastructure and are thus typically engineered to support standard high-availability models, including clustering, automated failover, and redundant field-replaceable components.
A few of the HSMs available in the market have the capability to execute specially developed modules within the HSM's secure enclosure. Such an ability is useful, for example, in cases where special algorithms or business logic has to be executed in a secured and controlled environment. The modules can be developed in native C language, .NET, Java, or other programming languages. Further, upcoming next-generation HSMs
|
https://en.wikipedia.org/wiki/Denitrifying%20bacteria
|
Denitrifying bacteria are a diverse group of bacteria that encompass many different phyla. This group of bacteria, together with denitrifying fungi and archaea, is capable of performing denitrification as part of the nitrogen cycle. Denitrification is performed by a variety of denitrifying bacteria that are widely distributed in soils and sediments and that use oxidized nitrogen compounds in absence of oxygen as a terminal electron acceptor. They metabolise nitrogenous compounds using various enzymes, turning nitrogen oxides back to nitrogen gas (N2) or nitrous oxide (N2O).
Diversity of denitrifying bacteria
There is great diversity in the biological traits of denitrifying bacteria. They have been identified in over 50 genera with over 125 different species and are estimated to represent 10-15% of the bacterial population in water, soil and sediment.
Denitrifying bacteria include, for example, several species of Pseudomonas, Alcaligenes, Bacillus and others.
The majority of denitrifying bacteria are facultative aerobic heterotrophs that switch from aerobic respiration to denitrification when oxygen, as an available terminal electron acceptor (TEA), runs out. This forces the organism to use nitrate as a TEA. Because the diversity of denitrifying bacteria is so large, this group can thrive in a wide range of habitats, including some extreme environments such as those that are highly saline and high in temperature. Aerobic denitrifiers can conduct an aerobic respiratory process in which nitrate is converted gradually to N2 (NO3− → NO2− → NO → N2O → N2), using nitrate reductase (Nar or Nap), nitrite reductase (Nir), nitric oxide reductase (Nor), and nitrous oxide reductase (Nos). Phylogenetic analysis revealed that aerobic denitrifiers mainly belong to α-, β- and γ-Proteobacteria.
Denitrification mechanism
Denitrifying bacteria use denitrification to generate ATP.
The most common denitrification process is outlined below, with the nitrogen oxides being converted back to g
|
https://en.wikipedia.org/wiki/Rank%20mobility%20index
|
In demographics, the rank mobility index (RMI) is a measure of a city's change in population rank among a group of cities.
Formally,
RMI = (R1 − R2) / (R1 + R2)
where
R1 = city's rank at time 1
R2 = city's rank at time 2
An RMI value must be between −1 and 1. An RMI of 0 indicates no change.
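A minimal sketch of the computation, assuming the (R1 − R2)/(R1 + R2) form reconstructed above:

#include <stdio.h>

/* Rank mobility index for a city ranked r1 at time 1 and r2 at time 2. */
static double rmi(int r1, int r2) {
    return (double)(r1 - r2) / (double)(r1 + r2);
}

int main(void) {
    printf("%.3f\n", rmi(5, 5));  /*  0.000: no change in rank        */
    printf("%.3f\n", rmi(10, 2)); /*  0.667: rank improved (10 -> 2)  */
    printf("%.3f\n", rmi(2, 10)); /* -0.667: rank worsened (2 -> 10)  */
    return 0;
}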
Index numbers
|
https://en.wikipedia.org/wiki/London%20School%20of%20Medicine%20for%20Women
|
The London School of Medicine for Women (LSMW) established in 1874 was the first medical school in Britain to train women as doctors. The patrons, vice-presidents, and members of the committee that supported and helped found the London School of Medicine for Women wanted to provide educated women with the necessary facilities for learning and practicing midwifery and other branches of medicine while also promoting their future employment in the fields of midwifery and other fields of treatment for women and children.
History
The school was formed in 1874 by an association of pioneering women physicians Sophia Jex-Blake, Elizabeth Garrett Anderson, Emily Blackwell and Elizabeth Blackwell, with Thomas Henry Huxley. The founding was motivated at least in part by Jex-Blake's frustrated attempts at getting a medical degree at a time when women were not admitted to British medical schools; she had been expelled from Edinburgh University. Other women who had studied with Jex-Blake in Edinburgh joined her at the London school, including Isabel Thorne, who succeeded her as honorary secretary in 1877. Jex-Blake departed to start a medical practice in Edinburgh, where she would found the Edinburgh School of Medicine for Women in 1886.
The UK Medical Act of 1876 (39 and 40 Vict, Ch. 41) was an act which repealed the previous Medical Act in the United Kingdom and allowed the medical authorities to license all qualified applicants irrespective of gender.
In 1877 an agreement was reached with the Royal Free Hospital that allowed students at the London School of Medicine for Women to complete their clinical studies there. The Royal Free Hospital was the first teaching hospital in London to admit women for training.
Elizabeth Garrett Anderson was Dean (1883–1903) while the school was rebuilt, became part of the University of London and consolidated association with the Royal Free Hospital. In 1896, the School was officially renamed the London (Royal Free Hospital) School of Medicine fo
|
https://en.wikipedia.org/wiki/Lap
|
A lap is a surface (usually horizontal) created between the knee and hips of a biped when it is in a seated or lying down position. The lap of a parent or loved one is seen as a physically and psychologically comfortable place for a child to sit.
In some countries where Christmas is celebrated, it has been a tradition for children to sit on the lap of a person dressed as Santa Claus to tell Santa what they want for Christmas, and have their picture taken, but this practice has since been questioned in some of these countries, where this sort of contact between children and unfamiliar adults raises concerns.
Among adults, a person sitting on the lap of another usually indicates an intimate or romantic relationship between the two; this is a factor in the erotic activity in strip clubs known as a lap dance, where one person straddles the lap of the other and gyrates their lower extremities in a provocative manner.
A lap steel guitar is a type of steel guitar played in a sitting position with the instrument placed horizontally across the player's knees. The lap can be a useful surface for carrying out tasks when a table is not available. The laptop computer was so named because it was seen as being able to be used on the user's lap.
See also
Lap dog
Laptop
|
https://en.wikipedia.org/wiki/Symrise
|
Symrise AG is a German chemicals company that is a major producer of flavours and fragrances with sales of €4.618 billion in 2022. Major competitors include Givaudan, Takasago International Corporation, International Flavors and Fragrances and Döhler. Symrise is a member of the European Flavour Association. In 2021, Symrise was ranked 4th by FoodTalks' Global Top 50 Food Flavours and Fragrances Companies list.
History
Symrise was founded in 2003 by the merger of Bayer subsidiary Haarmann & Reimer (H&R) and Dragoco, both based in Holzminden, Germany.
Haarmann & Reimer
Haarmann & Reimer (H&R) was founded in 1874 by chemists Ferdinand Tiemann and Wilhelm Haarmann after they succeeded in first synthesizing vanillin from coniferin. Holzminden was the site where vanillin was first produced industrially.
In 1917, H&R supported Leopold Ružička's unsuccessful three-year project to synthesize irone, a fragrance of violets.
In 1953, H&R was acquired by Bayer.
Dragoco
Dragoco was founded in 1919 by Carl-Wilhelm Gerberding and his cousin August Bellmer.
Horst-Otto Gerberding, majority holder and Chairman of the Executive Board at Dragoco, placed all of his shares into the new Symrise corporation, and the merger was completed on May 23, 2003.
History since 2003
In April 2005, Symrise acquired Flavours Direct, a UK-based manufacturer of compounded flavours and seasonings.
In January 2006, Symrise acquired Hamburg based Kaden Biochemicals GmbH, a producer of specialty botanical extracts.
In November 2006, Symrise announced plans to sell shares worth €650 million in an IPO. The firm also announced that its main shareholders, including EQT, would also sell shares worth an unspecified amount. The IPO would leave well above 50% of Symrise shares in free-float. Deutsche Bank and UBS conducted the listing on December 11, 2006. Symrise was listed on the Frankfurt Stock Exchange with the trading symbol SY1. With 81,030,358 shares issued at an issue price of €17.25 for a total volume of
|
https://en.wikipedia.org/wiki/Bob%20Widlar
|
Robert John Widlar (pronounced wide-lar; November 30, 1937 – February 27, 1991) was an American electronics engineer and a designer of linear integrated circuits (ICs).
Early years
Widlar was born November 30, 1937 in Cleveland to parents of Czech, Irish and German ethnicity. His mother, Mary Vithous, was born in Cleveland to Czech immigrants Frank Vithous (František Vitouš) and Marie Zakova (Marie Žáková). His father, Walter J. Widlar, came from prominent German and Irish American families whose ancestors settled in Cleveland in the middle of the 19th century. A self-taught radio engineer, Walter Widlar worked for a radio station and designed pioneering ultra-high-frequency transmitters. The world of electronics surrounded Bob from birth: one of his brothers became the first baby monitored by wireless radio. Guided by his father, Bob developed a strong interest in electronics in early childhood.
Widlar never talked about his early years and personal life. He graduated from Saint Ignatius High School in Cleveland and enrolled at the University of Colorado at Boulder. In February 1958 Widlar joined the United States Air Force. He instructed servicemen in electronic equipment and devices and authored his first book, Introduction to Semiconductor Devices (1960), a textbook that demonstrated his ability to simplify complex problems. His liberal mind was a poor match for the military environment, and in 1961 Widlar left the service. He joined the Ball Brothers Research Corporation in Boulder to develop analog and digital equipment for NASA. He simultaneously continued studies at the University of Colorado and graduated with high grades in the summer of 1963.
Achievements
Widlar invented the basic building blocks of linear ICs including the Widlar current source, the Widlar bandgap voltage reference and the Widlar output stage. From 1964 to 1970, Widlar, together with David Talbert, created the first mass-produced operational amplifier ICs (μA702, μA709), some of
|
https://en.wikipedia.org/wiki/Parallels%20%28company%29
|
Parallels is a software company based in Bellevue, Washington; it is primarily involved in the development of virtualization software for macOS. The company has offices in 14 countries, including the United States, Germany, United Kingdom, France, Japan, China, Spain, Malta, Australia and Mauritius and has over 800 employees.
Company history
SWSoft, a privately held server automation and virtualization software company, developed software for running data centers, particularly for web-hosting services companies and application service providers. Their Virtuozzo product was an early system-level server virtualization solution, and in 2003 they bought Plesk, a commercial web hosting platform.
In 2004, SWsoft acquired Parallels, Inc. and Parallels Workstation for Windows and Linux 2.0 was released, with Parallels Desktop for Mac following in mid-2006. SWsoft's acquisition of Parallels was kept confidential until January 2004, two years before Parallels became mainstream. Later the same year the corporate headquarters moved from Herndon, Virginia to Renton, Washington. Historically, their primary development labs were in Moscow and Novosibirsk, Russia. Parallels was founded by Serguei Beloussov, who was born in the former Soviet Union and later immigrated to Singapore.
At Apple's Worldwide Developers Conference 2007 in San Francisco, California, Parallels announced and demonstrated its upcoming Parallels Server for Mac. Parallels Server for Mac would reportedly allow IT managers to run multiple server operating systems on a single Mac Xserve.
In 2007, the German company Netsys GmbH sued Parallels' German distributor Avanquest for copyright violation (see Parallels Desktop for Mac for details), then Parallels Server for Mac was announced at WWDC, and later Parallels Technology Network.
In 2008, SWsoft merged into Parallels to become one company under the Parallels branding which then acquired ModernGigabyte, LLC. Parallels Server for Mac was launched in June then i
|
https://en.wikipedia.org/wiki/Alternating%20factorial
|
In mathematics, an alternating factorial is the absolute value of the alternating sum of the first n factorials of positive integers.
This is the same as their sum, with the odd-indexed factorials multiplied by −1 if n is even, and the even-indexed factorials multiplied by −1 if n is odd, resulting in an alternation of signs of the summands (or alternation of addition and subtraction operators, if preferred). To put it algebraically,
af(n) = Σi=1..n (−1)^(n−i) i!
or with the recurrence relation
af(n) = n! − af(n − 1)
in which af(1) = 1.
The first few alternating factorials are
1, 1, 5, 19, 101, 619, 4421, 35899, 326981, 3301819, 36614981, 442386619, 5784634181, 81393657019
For example, the third alternating factorial is 1! − 2! + 3! = 5. The fourth alternating factorial is −1! + 2! − 3! + 4! = 19. Regardless of the parity of n, the last (nth) summand, n!, is given a positive sign, the (n − 1)th summand is given a negative sign, and the signs of the lower-indexed summands are alternated accordingly.
This pattern of alternation ensures the resulting sums are all positive integers. Changing the rule so that either the odd- or even-indexed summands are given negative signs (regardless of the parity of n) changes the signs of the resulting sums but not their absolute values.
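A minimal sketch using the recurrence af(n) = n! − af(n − 1), which reproduces the sequence listed above:

#include <stdio.h>

int main(void) {
    /* af(1) = 1; af(n) = n! - af(n-1). 64-bit arithmetic is exact up to n = 20. */
    unsigned long long factorial = 1, af = 1;
    printf("%llu", af);
    for (int n = 2; n <= 14; n++) {
        factorial *= (unsigned long long)n;
        af = factorial - af;
        printf(", %llu", af);
    }
    printf("\n"); /* 1, 1, 5, 19, 101, 619, ..., 81393657019 */
    return 0;
}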
Miodrag Živković proved in 1999 that there are only a finite number of alternating factorials that are also prime numbers, since 3612703 divides af(3612702) and therefore divides af(n) for all n ≥ 3612702. The known primes and probable primes are af(n) for
n = 3, 4, 5, 6, 7, 8, 10, 15, 19, 41, 59, 61, 105, 160, 661, 2653, 3069, 3943, 4053, 4998, 8275, 9158, 11164
As of 2006, only the values up to n = 661 had been proved prime; af(661) is approximately 7.818097272875 × 10^1578.
Notes
|
https://en.wikipedia.org/wiki/Phylogenomics
|
Phylogenomics is the intersection of the fields of evolution and genomics. The term has been used in multiple ways to refer to analysis that involves genome data and evolutionary reconstructions. It is a group of techniques within the larger fields of phylogenetics and genomics. Phylogenomics draws its information from comparisons of entire genomes, or at least large portions of them, whereas phylogenetics compares and analyzes the sequences of single genes, or a small number of genes, as well as many other types of data. Four major areas fall under phylogenomics:
Prediction of gene function
Establishment and clarification of evolutionary relationships
Gene family evolution
Prediction and retracing of lateral gene transfer
The ultimate goal of phylogenomics is to reconstruct the evolutionary history of species through their genomes. This history is usually inferred from a series of genomes by using a genome evolution model and standard statistical inference methods (e.g. Bayesian inference or maximum likelihood estimation).
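As a toy illustration of model-based inference (a sketch, not from the article), the maximum-likelihood distance between two aligned sequences under the Jukes-Cantor model, the simplest substitution model; the example sequences are hypothetical:

```python
from math import log

def jukes_cantor_distance(seq_a, seq_b):
    """ML estimate of substitutions per site under the Jukes-Cantor model."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    # observed fraction of differing sites
    p = sum(a != b for a, b in zip(seq_a, seq_b)) / len(seq_a)
    # corrects the raw count for unseen multiple substitutions at the same site
    return -0.75 * log(1 - (4 / 3) * p)

print(jukes_cantor_distance("ACGTACGTAC", "ACGTACGAAC"))  # ~0.107 substitutions per site
```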
Prediction of gene function
When Jonathan Eisen originally coined the term phylogenomics, it applied to the prediction of gene function. Before the use of phylogenomic techniques, predicting gene function was done primarily by comparing the gene sequence with the sequences of genes of known function. When several genes with similar sequences but differing functions are involved, this method alone is ineffective at determining function. A specific example is presented in the paper "Gastronomic Delights: A movable feast". Gene predictions based on sequence similarity alone had been used to predict that Helicobacter pylori can repair mismatched DNA. This prediction was based on the fact that this organism has a gene whose sequence is highly similar to genes from other species in the "MutS" gene family, which includes many genes known to be involved in mismatch repair. However, Eisen noted that H. pylori lacks other genes thought to be essential for this function (specif
|
https://en.wikipedia.org/wiki/Implementation%20of%20mathematics%20in%20set%20theory
|
This article examines the implementation of mathematical concepts in set theory. The implementation of a number of basic mathematical concepts is carried out in parallel in ZFC (the dominant set theory) and in NFU, the version of Quine's New Foundations shown to be consistent by R. B. Jensen in 1969 (here understood to include at least axioms of Infinity and Choice).
What is said here applies also to two families of set theories: on the one hand, a range of theories including Zermelo set theory near the lower end of the scale and going up to ZFC extended with large cardinal hypotheses such as "there is a measurable cardinal"; and on the other hand a hierarchy of extensions of NFU which is surveyed in the New Foundations article. These correspond to different general views of what the set-theoretical universe is like, and it is the approaches to implementation of mathematical concepts under these two general views that are being compared and contrasted.
It is not the primary aim of this article to say anything about the relative merits of these theories as foundations for mathematics. The reason for the use of two different set theories is to illustrate that multiple approaches to the implementation of mathematics are feasible. Precisely because of this approach, this article is not a source of "official" definitions for any mathematical concept.
Preliminaries
The following sections carry out certain constructions in the two theories ZFC and NFU and compare the resulting implementations of certain mathematical structures (such as the natural numbers).
Mathematical theories prove theorems (and nothing else). So saying that a theory allows the construction of a certain object means that it is a theorem of that theory that that object exists. This is a statement about a definition of the form "the x such that φ exists", where φ is a formula of our language: the theory proves the existence of "the x such that φ" just in case it is a theorem that "there is one and only
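The "one and only one" clause is the standard unique-existence quantifier; spelled out (a conventional expansion, common to both theories):

```latex
\exists!\, x\,\varphi(x) \;\Longleftrightarrow\; \exists x\,\bigl(\varphi(x) \wedge \forall y\,(\varphi(y) \rightarrow y = x)\bigr)
```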
|
https://en.wikipedia.org/wiki/Memory%20bandwidth
|
Memory bandwidth is the rate at which data can be read from or stored into a semiconductor memory by a processor. Memory bandwidth is usually expressed in units of bytes/second, though this can vary for systems with natural data sizes that are not a multiple of the commonly used 8-bit bytes.
Memory bandwidth that is advertised for a given memory or system is usually the maximum theoretical bandwidth. In practice the observed memory bandwidth will be less than (and is guaranteed not to exceed) the advertised bandwidth. A variety of computer benchmarks exist to measure sustained memory bandwidth using a variety of access patterns. These are intended to provide insight into the memory bandwidth that a system should sustain on various classes of real applications.
Measurement conventions
There are three different conventions for defining the quantity of data transferred in the numerator of "bytes/second":
The bcopy convention: counts the amount of data copied from one location in memory to another location per unit time. For example, copying 1 million bytes from one location in memory to another location in memory in one second would be counted as 1 million bytes per second. The bcopy convention is self-consistent, but is not easily extended to cover cases with more complex access patterns, for example three reads and one write.
The STREAM convention: sums the amount of data that the application code explicitly reads plus the amount of data that the application code explicitly writes. Using the previous 1 million byte copy example, the STREAM bandwidth would be counted as 1 million bytes read plus 1 million bytes written in one second, for a total of 2 million bytes per second. The STREAM convention is most directly tied to the user code, but may not count all the data traffic that the hardware is actually required to perform.
The hardware convention: counts the actual amount of data read or written by the hardware, whether the data motion was explicitly reques
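To make the conventions concrete, a minimal sketch (hypothetical buffer size; Python-level timing only) that times one copy and reports it under the bcopy and STREAM conventions:

```python
import time

N = 100_000_000            # bytes to copy (hypothetical workload)
src = bytearray(N)

start = time.perf_counter()
dst = bytes(src)           # copies N bytes from src to dst
elapsed = time.perf_counter() - start

print(f"bcopy:  {N / elapsed / 1e9:.2f} GB/s")       # counts the copied bytes once
print(f"STREAM: {2 * N / elapsed / 1e9:.2f} GB/s")   # counts N bytes read plus N bytes written
```

Under the hardware convention the same copy can cost still more traffic (for example, a write-allocate cache reads the destination lines before overwriting them), which is the gap between user-visible and hardware-level counting.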
|
https://en.wikipedia.org/wiki/Mariotte%27s%20bottle
|
Mariotte's bottle is a device that delivers a constant rate of flow from a closed bottle or tank. It is named after the French physicist Edme Mariotte (1620–1684). A picture of a bottle with a gas inlet appears in Mariotte's works, but that construction was meant to show the effect of outside pressure on the mercury level inside the bottle; it also lacks a siphon or an outlet for the liquid.
Invention
The design was first reported by McCarthy (1934).
As shown in the diagram, a stoppered reservoir is supplied with an air inlet and a siphon. The pressure at the bottom of the air inlet is always the same as the pressure outside the reservoir, i.e. the atmospheric pressure. If it were greater, air would not enter. If the entrance to the siphon is at the same depth, then it will always supply the water at atmospheric pressure and will deliver a flow under constant head height, regardless of the changing water level within the reservoir.
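A back-of-envelope sketch (illustrative values, not from the article): because the head h, the vertical drop from the bottom of the air inlet to the siphon outlet, stays fixed, Torricelli's law gives a delivery rate that is independent of the fill level:

```python
from math import pi, sqrt

g = 9.81      # m/s^2
h = 0.30      # m, fixed head from air-inlet bottom to siphon outlet (assumed)
d = 0.005     # m, siphon bore diameter (assumed)
C_d = 0.6     # discharge coefficient for a sharp-edged outlet (assumed)

v = sqrt(2 * g * h)            # Torricelli outflow speed, constant while the inlet is submerged
Q = C_d * (pi * d**2 / 4) * v  # volumetric flow rate

print(f"v = {v:.2f} m/s, Q = {Q * 1e6:.1f} mL/s")  # ~2.43 m/s, ~28.6 mL/s
```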
This apparatus has many variations in design and has been used extensively when a constant water pressure is needed, e.g. supplying water at constant head for measuring water infiltration into soil or supplying the mobile phase in chromatography.
The drawbacks of the design are that it is sensitive to gas-inlet leakage and that liquid cannot be added during operation, since doing so would disturb the pressure control. Accurate control is nowadays provided by electronic devices.
Applications
A constant head is an important simplifying constraint when measuring the movement of water in soil, and several measurement techniques employ Mariotte's bottle to provide it. The Guelph Permeameter measures unsaturated hydraulic conductivity in the field and uses this principle to create a constant head. Single- and double-ring infiltrometers can also use Mariotte's bottle.
Another application is a similar arrangement in some fuel tanks used in control line model airplanes, where it is called a "uniflow" tank, where the tank venting tu
|
https://en.wikipedia.org/wiki/Resolution%20enhancement%20technologies
|
Resolution enhancement technologies are methods used to modify the photomasks in the lithographic processes used to make integrated circuits (ICs or "chips") to compensate for limitations in the optical resolution of the projection systems. These processes allow the creation of features well beyond the limit that would normally apply due to the Rayleigh criterion. Modern technologies allow the creation of features on the order of 5 nanometers (nm), far below the normal resolution possible using deep ultraviolet (DUV) light.
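For a sense of scale (an illustrative calculation, not from the article), the Rayleigh scaling CD = k1 · λ / NA bounds the printable half-pitch; the values below — 193 nm ArF light, an immersion NA of 1.35, and k1 pushed toward its theoretical floor of 0.25 — are typical assumptions:

```python
def min_half_pitch_nm(wavelength_nm, numerical_aperture, k1):
    """Smallest printable half-pitch under the Rayleigh scaling CD = k1 * lambda / NA."""
    return k1 * wavelength_nm / numerical_aperture

# 193 nm DUV immersion lithography with aggressive resolution enhancement
print(min_half_pitch_nm(193, 1.35, 0.25))  # ~35.7 nm
```

The single-exposure limit of roughly 36 nm suggests why reaching the ~5 nm features mentioned above also requires techniques such as multiple patterning.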
Background
Integrated circuits are created in a multi-step process known as photolithography. This process starts with the design of the IC circuitry as a series of layers that will be patterned onto the surface of a sheet of silicon or other semiconductor material known as a wafer.
Each layer of the ultimate design is patterned onto a photomask, which in modern systems is made of fine lines of chromium deposited on highly purified quartz glass. Chromium is used because it is highly opaque to UV light, and quartz because it has limited thermal expansion under the intense heat of the light sources as well as being highly transparent to ultraviolet light. The mask is positioned over the wafer and then exposed to an intense UV light source. With a proper optical imaging system between the mask and the wafer (or no imaging system, if the mask is positioned sufficiently close to the wafer, as in early lithography machines), the mask pattern is imaged onto a thin layer of photoresist on the surface of the wafer; the parts of the photoresist exposed to the (UV or EUV) light undergo chemical reactions, causing the pattern to be physically created on the wafer.
When light shines on a pattern like that on a mask, diffraction effects occur. These cause the sharply focused light from the UV lamp to spread out on the far side of the mask, becoming increasingly unfocused with distance. In early systems in the 1970s, avoiding these effects re
|