https://en.wikipedia.org/wiki/X-ray%20transform
In mathematics, the X-ray transform (also called the ray transform or John transform) is an integral transform introduced by Fritz John in 1938 that is one of the cornerstones of modern integral geometry. It is very closely related to the Radon transform, and coincides with it in two dimensions. In higher dimensions, the X-ray transform of a function is defined by integrating over lines rather than over hyperplanes as in the Radon transform. The X-ray transform derives its name from X-ray tomography (used in CT scans) because the X-ray transform of a function ƒ represents the attenuation data of a tomographic scan through an inhomogeneous medium whose density is represented by the function ƒ. Inversion of the X-ray transform is therefore of practical importance because it allows one to reconstruct an unknown density ƒ from its known attenuation data. In detail, if ƒ is a compactly supported continuous function on the Euclidean space R^n, then the X-ray transform of ƒ is the function Xƒ defined on the set of all lines L in R^n by Xƒ(L) = ∫_L ƒ = ∫_R ƒ(x₀ + tθ) dt, where x₀ is an initial point on the line and θ is a unit vector in R^n giving the direction of the line L. The latter integral is not regarded in the oriented sense: it is the integral with respect to the 1-dimensional Lebesgue measure on the Euclidean line L. The X-ray transform satisfies an ultrahyperbolic wave equation called John's equation. The Gauss hypergeometric function can be written as an X-ray transform.
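The defining line integral can be checked numerically. The sketch below (the discretization parameters are my own choices, not from the article) integrates a 2-D Gaussian along the line x₀ + tθ with the midpoint rule; for a line through the origin the exact value is √π.

```python
import math

def xray_transform(f, x0, theta, t_max=10.0, n=20000):
    # Numerically integrate f along the line x0 + t*theta (theta a unit vector),
    # using the midpoint rule over t in [-t_max, t_max].
    dt = 2 * t_max / n
    total = 0.0
    for i in range(n):
        t = -t_max + (i + 0.5) * dt
        p = [x0[k] + t * theta[k] for k in range(len(x0))]
        total += f(p) * dt
    return total

# Isotropic Gaussian in R^2; its line integral through the origin is sqrt(pi).
f = lambda p: math.exp(-(p[0] ** 2 + p[1] ** 2))
val = xray_transform(f, x0=[0.0, 0.0], theta=[1.0, 0.0])
```

Because the integrand decays rapidly and smoothly, the midpoint rule converges very quickly here; `val` agrees with √π to high accuracy.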
https://en.wikipedia.org/wiki/Robert%20Methuen%2C%207th%20Baron%20Methuen
Robert Alexander Holt Methuen, 7th Baron Methuen (22 July 1931 – 9 July 2014), was a British Liberal Democrat peer. He was one of the ninety hereditary peers elected to remain in the House of Lords after the House of Lords Act 1999. Biography Methuen was the third and youngest son of Anthony Methuen, 5th Baron Methuen, by his wife Grace Durning Holt, daughter of Sir Richard Durning Holt, Bt. He was educated at Shrewsbury School before going up to Trinity College, Cambridge, where he graduated in 1957 with a Bachelor of Arts (BA) degree in Engineering. Methuen worked as a design engineer for the Westinghouse Brake and Signal Company from 1957 to 1967, then as a computer systems engineer for IBM UK Ltd from 1968 to 1975 and for Rolls-Royce Holdings plc from 1975 to 1994. In 1994, he succeeded his elder brother to the title. In the House of Lords, he served on the Science and Technology Select Committee and other committees. He also voted to block the Marriage (Same Sex Couples) Act 2013. Lord Methuen married firstly Mary Catherine Jane Hooper in 1958; they divorced in 1993. He married secondly Margrit Andrea Hadwiger one year later. He had two daughters by his first wife: Charlotte Mary Methuen (born 1964) and Henrietta Christian Methuen-Jones (born 1965), who later changed her name to Kittie. Kittie was married to Robert Jones (who took the surname Methuen-Jones) and has three children: Teresa Methuen-Jones (born 1990), Keziah Methuen-Jones (born 1992) and Miriam Methuen-Jones (born 1997). He died after a short illness on 9 July 2014 and was succeeded in the title by his first cousin once removed, James Methuen-Campbell (born 1952). See also Baron Methuen
https://en.wikipedia.org/wiki/Sampling%20%28signal%20processing%29
In signal processing, sampling is the reduction of a continuous-time signal to a discrete-time signal. A common example is the conversion of a sound wave to a sequence of "samples". A sample is a value of the signal at a point in time and/or space; this definition differs from the term's usage in statistics, which refers to a set of such values. A sampler is a subsystem or operation that extracts samples from a continuous signal. A theoretical ideal sampler produces samples equivalent to the instantaneous value of the continuous signal at the desired points. The original signal can be reconstructed from a sequence of samples, up to the Nyquist limit, by passing the sequence of samples through a type of low-pass filter called a reconstruction filter. Theory Functions of time, space, or any other dimension can be sampled, and similar results hold in two or more dimensions. For functions that vary with time, let S(t) be a continuous function (or "signal") to be sampled, and let sampling be performed by measuring the value of the continuous function every T seconds, which is called the sampling interval or sampling period. Then the sampled function is given by the sequence S(nT), for integer values of n. The sampling frequency or sampling rate, fs, is the average number of samples obtained in one second, thus fs = 1/T, with the unit samples per second, sometimes referred to as hertz; for example, 48 kHz is 48,000 samples per second. Reconstructing a continuous function from samples is done by interpolation algorithms. The Whittaker–Shannon interpolation formula is mathematically equivalent to an ideal low-pass filter whose input is a sequence of Dirac delta functions that are modulated (multiplied) by the sample values. When the time interval between adjacent samples is a constant (T), the sequence of delta functions is called a Dirac comb. Mathematically, the modulated Dirac comb is equivalent to the product of the comb function with s(t). That math
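Ideal sampling and Whittaker–Shannon (sinc) reconstruction can be sketched directly from the definitions above. In the minimal example below, the signal, rate, and evaluation point are illustrative assumptions of mine; the sum is a truncated version of the interpolation formula, so it only approximates the exact value.

```python
import math

def sample(s, T, n0, n1):
    # Ideal sampling: the sequence s(nT) for integers n in [n0, n1).
    return [s(n * T) for n in range(n0, n1)]

def reconstruct(samples, T, t, n0=0):
    # Truncated Whittaker-Shannon interpolation: sum_n x[n] * sinc((t - nT)/T).
    def sinc(x):
        return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
    return sum(x * sinc((t - (n0 + k) * T) / T) for k, x in enumerate(samples))

fs = 100.0                                        # sampling rate: 100 samples/s
T = 1.0 / fs                                      # sampling interval
s = lambda t: math.sin(2 * math.pi * 3.0 * t)     # 3 Hz tone, below Nyquist (50 Hz)
xs = sample(s, T, -500, 500)                      # 10 s of samples around t = 0
approx = reconstruct(xs, T, 0.1234, n0=-500)      # value between sample instants
```

Since the 3 Hz tone is well below the 50 Hz Nyquist limit, `approx` closely matches the true value s(0.1234) even though it is evaluated off the sampling grid.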
https://en.wikipedia.org/wiki/Borirane
Borirane is a heterocyclic organic compound with the formula C2H4BH. This colourless, flammable gas is the simplest borirane, a three-membered ring consisting of two carbon atoms and one boron atom. It can be viewed as a structural analog of aziridine, with boron replacing the nitrogen atom. Borirane is isomeric with ethylideneborane. This compound has five isomers.
https://en.wikipedia.org/wiki/Ureaplasma%20cati
Ureaplasma cati is a species of Ureaplasma, a genus of bacteria belonging to the family Mycoplasmataceae. It has been isolated from cats. The 16S rRNA gene sequence accession number for the type strain is D78649.
https://en.wikipedia.org/wiki/Ingress%20filtering
In computer networking, ingress filtering is a technique used to ensure that incoming packets are actually from the networks from which they claim to originate. It can be used as a countermeasure against various spoofing attacks in which the attacker's packets contain fake IP addresses. Spoofing is often used in denial-of-service attacks, and mitigating these is a primary application of ingress filtering. Problem Networks receive packets from other networks. Normally a packet will contain the IP address of the computer that originally sent it. This allows devices in the receiving network to know where it came from, allowing a reply to be routed back, amongst other things. An exception arises when IP addresses are used through a proxy or are spoofed, in which case the address does not pinpoint a specific user within a pool of users. A sender IP address can be faked (spoofed), characterizing a spoofing attack. This disguises the origin of the packets sent, for example in a denial-of-service attack. The same holds true for proxies, although in a different manner than IP spoofing. Potential solutions One potential solution involves having intermediate Internet gateways (i.e., the servers connecting disparate networks along the path followed by any given packet) filter or deny any packet deemed to be illegitimate. The gateway processing the packet might simply ignore the packet completely, or, where possible, it might send a packet back to the sender relaying a message that the illegitimate packet has been denied. Host intrusion prevention systems (HIPS) are one example of technical engineering applications that help to identify, prevent, or deter unwanted or suspicious events and intrusions. Any router that implements ingress filtering checks the source IP field of the IP packets it receives, and drops a packet if its source address is not in the IP address block to which the receiving interface is connected. This may not be possible if the end host i
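The router's check described above amounts to a membership test of the packet's source address against the prefixes known to lie behind the receiving interface. A minimal sketch using Python's standard ipaddress module (the function name and the prefix values are illustrative, not from any real router implementation):

```python
import ipaddress

def ingress_check(src_ip, allowed_prefixes):
    # Accept the packet only if its source address falls inside one of the
    # prefixes assigned to the interface it arrived on.
    addr = ipaddress.ip_address(src_ip)
    return any(addr in ipaddress.ip_network(p) for p in allowed_prefixes)

# Interface assumed to face the 203.0.113.0/24 customer network
# (a documentation prefix, used here purely as an example):
allowed = ["203.0.113.0/24"]
```

A packet from 203.0.113.7 would pass this check, while a packet claiming to come from 198.51.100.9 on the same interface would be dropped as likely spoofed.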
https://en.wikipedia.org/wiki/Poussin%20proof
In number theory, the Poussin proof is the proof of an identity related to the fractional part of a ratio. In 1838, Peter Gustav Lejeune Dirichlet proved an approximate formula for the average number of divisors of all the numbers from 1 to n: (1/n) Σ_{k=1}^{n} d(k) ≈ ln n + 2γ − 1, where d represents the divisor function and γ represents the Euler–Mascheroni constant. In 1898, Charles Jean de la Vallée-Poussin proved that if a large number n is divided by all the primes up to n, then the average fraction by which the quotient falls short of the next whole number is γ: lim_{n→∞} (1/π(n)) Σ_{p ≤ n} (1 − {n/p}) = γ, where {x} represents the fractional part of x and π represents the prime-counting function. For example, if we divide 29 by 2, we get 14.5, which falls short of 15 by 0.5.
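Vallée-Poussin's average can be checked numerically. The sketch below (the cutoff n = 100000 is an arbitrary choice of mine) computes the mean shortfall 1 − {n/p} over all primes p ≤ n, which should drift toward γ ≈ 0.5772 as n grows; convergence is slow, so only rough agreement is expected at this cutoff.

```python
import math

def primes_up_to(n):
    # Simple sieve of Eratosthenes.
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, math.isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i in range(2, n + 1) if sieve[i]]

n = 100000
# Shortfall of n/p from the next whole number: 1 - fractional part of n/p.
shortfalls = [1.0 - math.modf(n / p)[0] for p in primes_up_to(n)]
avg = sum(shortfalls) / len(shortfalls)
```

For this n there are π(100000) = 9592 primes, and `avg` lands in the vicinity of γ rather than near the naive guess of 1/2.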
https://en.wikipedia.org/wiki/Geohash
Geohash is a public domain geocode system invented in 2008 by Gustavo Niemeyer which encodes a geographic location into a short string of letters and digits. Similar ideas were introduced by G.M. Morton in 1966. It is a hierarchical spatial data structure which subdivides space into buckets of grid shape, one of the many applications of what is known as a Z-order curve, and of space-filling curves generally. Geohashes offer properties like arbitrary precision and the possibility of gradually removing characters from the end of the code to reduce its size (and gradually lose precision). Geohashing guarantees that the longer a shared prefix between two geohashes is, the spatially closer they are together. The reverse is not guaranteed: two points can be very close yet have a short shared prefix or none at all. History The core part of the Geohash algorithm, and the first initiative toward a similar solution, was documented in a 1966 report by G.M. Morton, "A Computer Oriented Geodetic Data Base and a New Technique in File Sequencing". Morton's work was used for efficient implementations of the Z-order curve, as in the modern (2014) Geohash-integer version (based on directly interleaving 64-bit integers), but his geocode proposal was not human-readable and was not popular. Apparently, in the late 2000s, G. Niemeyer still didn't know about Morton's work, and reinvented it, adding the use of a base32 representation. In February 2008, together with the announcement of the system, he launched the website http://geohash.org, which allows users to convert geographic coordinates to short URLs which uniquely identify positions on the Earth, so that referencing them in emails, forums, and websites is more convenient. Many variations have been developed, including OpenStreetMap's short link (using base64 instead of base32) in 2009, the 64-bit Geohash in 2014, the exotic Hilbert-Geohash in 2016, and others. Typical and main usages To obtain the Geohash, the user prov
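The encoding alternates longitude and latitude bisection bits and packs them five at a time into base32 characters; a compact sketch of this standard algorithm (variable names are mine):

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # Geohash alphabet (omits a, i, l, o)

def geohash_encode(lat, lon, precision=9):
    # Interleave longitude/latitude bisection bits, starting with longitude,
    # then map each group of 5 bits to a base32 character.
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    bits = []
    use_lon = True
    while len(bits) < precision * 5:
        if use_lon:
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                bits.append(1)
                lon_lo = mid
            else:
                bits.append(0)
                lon_hi = mid
        else:
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                bits.append(1)
                lat_lo = mid
            else:
                bits.append(0)
                lat_hi = mid
        use_lon = not use_lon
    chars = []
    for i in range(0, len(bits), 5):
        value = 0
        for b in bits[i : i + 5]:
            value = (value << 1) | b
        chars.append(BASE32[value])
    return "".join(chars)
```

The widely cited example coordinate (57.64911, 10.40744) encodes to "u4pruydqqvj", and because truncation only coarsens the bounding box, a shorter geohash of the same point is always a prefix of the longer one.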
https://en.wikipedia.org/wiki/Nepal%20Physical%20Society
Nepal Physical Society (NPS) is a professional society of Nepalese physicists. The small society of about 500 members was established in 1982. Membership To be eligible for membership, one must hold a minimum of a master's degree in physical science from either Tribhuvan University or another university recognized by Tribhuvan University. The NPS offers two kinds of membership: Ordinary Membership and Life Membership. Ordinary Membership must be renewed every year by paying a nominal membership fee; Life Membership can be obtained by paying the designated life membership fee. An executive body is formed by a general convention every two years. The Nepal Physical Society signed an agreement to become a reciprocal society with the American Physical Society (APS) on September 21, 1995. The society is also a member of the Association of Asia Pacific Physical Societies (AAPPS). Activities The society organizes scientific and educational activities related to the physical sciences in Nepal, including lectures and classes on topics in physics such as group theory, intended for people who want to broaden their knowledge of the field. The society publishes the Nepal Physical Society newsletter on a regular basis. Previously, the office of the society was situated in Tri-Chandra College, Ghantaghar, Kathmandu, Nepal; currently, it is in the Central Department of Physics, Tribhuvan University, Kirtipur, Nepal. The Nepal Physical Society participated in the 38th International Physics Olympiad in 2007. See also Nepal Mathematical Society
https://en.wikipedia.org/wiki/Urauchimycin
Urauchimycins are antimycin-class antibiotics isolated from a marine actinomycete.
https://en.wikipedia.org/wiki/Critical%20embankment%20velocity
Critical embankment velocity or critical speed, in transportation engineering, is the velocity of a vehicle moving on an embankment that causes severe vibration of the embankment and the nearby ground. The concept and prediction methods were put forward by scholars in the civil engineering community before 1980, were studied exhaustively by Krylov in 1994 using the Green's function method, and have since been predicted more accurately by other methods. When vehicles such as high-speed trains or airplanes approach or exceed this critical velocity (first identified with the Rayleigh wave speed and later obtained by more sophisticated calculations or tests), the vibration magnitudes of the vehicle and the nearby ground increase rapidly, possibly harming passengers and neighboring residents. This unexpected phenomenon has been called the ground vibration boom since 1997, when it was first observed in Sweden. The critical velocity is analogous to the speed of sound, which gives rise to the sonic boom, but the transmitting media differ. The speed of sound varies only over a small range, although air properties and the interaction between a jet aircraft and the atmosphere affect it. The embankment, including the filling layers and the ground soil beneath the surface, is by contrast a typically random medium, and such a complex soil–structure coupled vibration system may have several critical velocity values. The critical embankment velocity is therefore a general concept: its value is not constant and must be obtained by calculation or experiment for each particular project. Mechanism Wave superposition Under ideal assumptions, when moving loads are imposed on the surface of the embankment, they induce sub-waves which propagate inside and along the surface of the embankment. If the velocity of the moving loads is less th
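Since the Rayleigh wave speed of the soil serves as a first estimate of the critical velocity, it can be sketched from basic soil parameters using Viktorov's well-known approximation. The soil values below are illustrative assumptions of mine, not figures from the article:

```python
import math

def rayleigh_speed(shear_modulus, density, poisson):
    # Shear wave speed c_s = sqrt(G / rho); Viktorov's approximation for the
    # Rayleigh wave speed: c_R ~= c_s * (0.87 + 1.12*nu) / (1 + nu).
    c_s = math.sqrt(shear_modulus / density)
    return c_s * (0.87 + 1.12 * poisson) / (1 + poisson)

# Illustrative soft-soil values: G = 20 MPa, rho = 1800 kg/m^3, nu = 0.3.
c_r = rayleigh_speed(shear_modulus=20e6, density=1800.0, poisson=0.3)
```

For such soft soil the estimate comes out near 100 m/s (roughly 350 km/h), which is why high-speed trains on soft ground can approach this regime.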
https://en.wikipedia.org/wiki/Zephyr%20%28operating%20system%29
Zephyr () is a small real-time operating system (RTOS) for connected, resource-constrained and embedded devices (with an emphasis on microcontrollers) supporting multiple architectures and released under the Apache License 2.0. Zephyr includes a kernel and all the components and libraries, device drivers, protocol stacks, file systems, and firmware-update mechanisms needed to develop full application software. History Zephyr originated from Virtuoso RTOS for digital signal processors (DSPs). In 2001, Wind River Systems acquired Belgian software company Eonic Systems, the developer of Virtuoso. In November 2015, Wind River Systems renamed the operating system Rocket and made it open-source and royalty-free. Compared to Wind River's other RTOS, VxWorks, Rocket had much smaller memory needs, making it especially suitable for sensors and single-function embedded devices. Rocket could fit into as little as 4 KB of memory, while VxWorks needed 200 KB or more. In February 2016, Rocket became a hosted collaborative project of the Linux Foundation under the name Zephyr. Wind River Systems contributed the Rocket kernel to Zephyr but still provided Rocket to its clients, charging them for the cloud services. As a result, Rocket became "essentially the commercial version of Zephyr". Early members and supporters of Zephyr include Intel, NXP Semiconductors, Synopsys, Linaro, Texas Instruments, DeviceTone, Nordic Semiconductor, Oticon, and Bose. Zephyr had the largest number of contributors and commits compared to other RTOSes (including Mbed, RT-Thread, NuttX, and RIOT). Features Zephyr intends to provide all components needed to develop resource-constrained and embedded or microcontroller-based applications. This includes, but is not limited to: A small kernel A flexible configuration and build system for compile-time definition of required resources and modules A set of protocol stacks (IPv4 and IPv6, Constrained Application Protocol (CoAP), LwM2M, MQTT, 802.15.4, Thread, Bl
https://en.wikipedia.org/wiki/Levon%20Kemalyan
Levon John Kemalyan (24 February 1907, Fresno, California – 2 November 1976, Fresno, California) was a model railroading entrepreneur. He founded Kemtron Corporation, a manufacturer of model railway cars, locomotives, parts (especially for scratchbuilders), and accessories. In 1960 it was the world's largest maker of scale railroad kits, producing one million parts a year and selling them worldwide to enthusiasts as far away as India and Australia. Companies owned Kemalyan had ownership stakes in various companies: Fresno Photo-Engraving Company, U.S. Hobbies, Inc., and Kemtron Corporation. Fresno Photo-Engraving Company was founded in 1903 by A. F. Kemalyan. Levon purchased the company with his brother-in-law in 1929, and in 1935 he took over the photo-engraving company himself. Levon sold the firm on December 26, 1962, to his brothers-in-law, Thomas N. Vartanian and Jerry Mootafian. Kemtron Corporation was founded and owned by Kemalyan. Kemalyan started Kemtron in Fresno in the early '50s (possibly even 1948 or 1949), and he provided layout space for the Fresno Model Railroad Club in the early '50s. In 1960, the Kemtron plant had 15 employees. One of Kemtron's product lines, photo-engraved car kits, particularly the flats, often used zinc (or a very high zinc content brass) sheet, as opposed to brass. The 'blue' coating was the 'photo resist' that was not cleaned off. Kemtron was initially a sideline of Fresno Photo-Engraving, which explains why common photo-engraving materials were often used. In the mid-1960s Kemtron also produced a line of slot cars and accessories. Lawrence S. Kazoyan (b. November 7, 1931 – d. April 15, 2000, Palm Beach), a retired aerospace engineer, acquired Kemtron in 1970 and moved it from Fresno to Los Angeles. T. Fredrick Hill and Wayne Lyndon, owners of The Original Whistle Stop Inc., acquired Kemtron in 1978 and moved it to Sacramento. The Precision Scale Company, Inc. acquired Kemtron in a 1986 merger. Former Kemtron employe
https://en.wikipedia.org/wiki/Piezoelectric%20microelectromechanical%20systems
A piezoelectric microelectromechanical system (piezoMEMS) is a miniature or microscopic device that uses piezoelectricity to generate motion and carry out its tasks. It is a microelectromechanical system that takes advantage of the electrical potential that appears under mechanical stress. PiezoMEMS can be found in a variety of applications, such as switches, inkjet printer heads, sensors, micropumps, and energy harvesters. Development Interest in piezoMEMS technology began around the early 1990s as scientists explored alternatives to electrostatic actuation in radio frequency (RF) microelectromechanical systems (MEMS). For RF MEMS, electrostatic actuation required specialized high-voltage charge pump circuits due to small electrode gap spacing and large driving voltages. In contrast, piezoelectric actuation allowed for high sensitivity as well as low voltage and power consumption, as low as a few millivolts. It also had the ability to close large vertical gaps while still allowing for operating speeds in the low microseconds. Lead zirconate titanate (PZT), in particular, offered the most promise as a piezoelectric material because of its high piezoelectric coefficient, tunable dielectric constant, and electromechanical coupling coefficient. PiezoMEMS have been applied to various technologies from switches to sensors, and further research has led to the creation of piezoelectric thin films, which aided in the realization of highly integrated piezoMEMS devices. The first reported piezoelectrically actuated RF MEMS switch was developed by scientists at the LG Electronics Institute of Technology in Seoul, South Korea in 2005. The researchers designed and realized an RF MEMS switch with a piezoelectric cantilever actuator that had an operation voltage of 2.5 volts. In 2017, researchers from the U.S. Army Research Laboratory (ARL) evaluated the radiation effects on the piezoelectric response of PZT thin films for the first time. They determined that PZT exhibited a degree o
https://en.wikipedia.org/wiki/Handala
Handala (), also Handhala, Hanzala or Hanthala, is a prominent national symbol and personification of the Palestinian people. The character was created in 1969 by political cartoonist Naji al-Ali, and first took its current form in 1973. Handala became the signature of Naji al-Ali's cartoons and remains an iconic symbol of Palestinian identity and defiance. The character has been described as "portraying war, resistance, and the Palestinian identity with astounding clarity". The name comes from Citrullus colocynthis (), a perennial plant local to the region of Palestine which bears a bitter fruit, grows back when cut, and has deep roots. Handala's impact has continued in the decades after al-Ali's 1987 assassination; today the character remains widely popular as a representative of the Palestinian people, and is found on numerous walls and buildings throughout the West Bank (notably as West Bank Wall graffiti art), Gaza, and other Palestinian refugee camps, and as a popular tattoo and jewellery motif. It has also been used by movements such as Boycott, Divestment and Sanctions and the Iranian Green Movement. Early publication Handala appeared for the first time in Al-Seyassah in Kuwait on 13 July 1969, and first turned his back to the viewer and clasped his hands behind his back from 1973 onwards. Symbolism Handala's age – ten years old – represents Naji al-Ali's age in 1948, when he was forced to leave Palestine; the character would not grow up until he could return to his homeland. Al-Ali wrote: "Handala was born 10 years old and he will always be 10 years old. It was at that age that I left my homeland. When Handala returns, he will still be 10 years old, and then he will start growing up." His posture, with his turned back and clasped hands, symbolises the character's "rejection at a time when solutions are presented to us the American way" and serves as "a symbol of rejection of all the present negative tides in our region." Handala's ragged clothes and standing barefoot sy
https://en.wikipedia.org/wiki/Bed%20rotting
Bed rotting or lying in bed is a phrase for a social media trend wherein a person stays in bed for an entire day without engaging in daily activities and chores. Participants commonly spend their time on their computer or phone. The behaviour may have a negative impact on individuals experiencing depression. Some observers have interpreted it as a reaction to stress or anxiety; participants may also be evading responsibility. Lifehacker has described it as an aspect of the Joy of Missing Out. See also Bed-ins for Peace
https://en.wikipedia.org/wiki/Hoyle%E2%80%93Narlikar%20theory%20of%20gravity
The Hoyle–Narlikar theory of gravity is a Machian and conformal theory of gravity proposed by Fred Hoyle and Jayant Narlikar that originally fit into the quasi-steady-state model of the universe. Description In the theory, the gravitational constant G is not an arbitrary constant but is determined by the mean density of matter in the universe. The theory was inspired by the Wheeler–Feynman absorber theory for electrodynamics. When Feynman, as a graduate student, lectured on the Wheeler–Feynman absorber theory in the weekly physics seminar at Princeton, Albert Einstein was in the audience and stated at question time that he was trying to achieve the same thing for gravity. Incompatibility Stephen Hawking showed in 1965 that the theory is incompatible with an expanding universe, because the Wheeler–Feynman advanced solution would diverge. However, at that time the accelerating expansion of the universe was not known; the acceleration resolves the divergence issue because of the cosmic event horizon. Comparison with Einstein's General Relativity The Hoyle–Narlikar theory reduces to Einstein's general relativity in the limit of a smooth fluid model of particle distribution constant in time and space, and it is consistent with some cosmological tests. Hypothesis Unlike the standard cosmological model, the quasi-steady-state hypothesis implies that the universe is eternal. According to Narlikar, multiple mini-bangs would occur at the centers of quasars, with various creation fields (or C-fields) continuously generating matter out of empty space through local concentrations of negative energy, which would also prevent violation of the conservation laws, in order to keep the mass density constant as the universe expands. The low-temperature cosmic background radiation would then originate not from the Big Bang but from metallic dust formed by supernovae, re-radiating the energy of stars. Challenge However, the quasi-steady-state hypothesis is challenged by observation, as it does not fit the WMAP data. See also
https://en.wikipedia.org/wiki/Mindspeed%20Technologies
Mindspeed Technologies, Inc. is a fabless semiconductor company that designs, develops, and sells semiconductors for communications applications in wireless and wired networks. Products Wireless Mindspeed's products are used in wireless infrastructure and small-cell base stations. The company's ARM-based processors include low-power, multi-core digital signal processor system-on-chip (SoC) products for fixed and mobile (3G/4G) carrier infrastructure (the Transcede family) and Picochip's SoCs for 3G (HSPA) femtocells and small cells. Mindspeed announced the Transcede family of wireless baseband processors in 2010, including the single-core Transcede 3000, which serves the eNodeB processing needs of a picocell while consuming less than 10 watts (W) of power, and the dual-core Transcede 4000, which delivers three sectors of LTE processing for macro cells serving thousands of subscribers. The Transcede 4000 integrates 26 programmable processors into a single device, including two ARM Cortex-A9 multi-core symmetric multiprocessing (SMP) reduced instruction set computer (RISC) processors, 10 CEVA digital signal processors (DSPs) and 10 DSP accelerators, enabling equipment manufacturers to fully support the complete processing needs of single- and multi-sector base stations using the WCDMA/HSPA, LTE, LTE time-division duplex (TD-LTE), time-division synchronous code-division multiple access (TD-SCDMA, used in China) and other air-interface standards. The Transcede 4000 processor was honored as the Best Mobile Technology Breakthrough at the 2010 Mobile Excellence Awards. Following the acquisition of Picochip, that company's small-cell SoC products became part of Mindspeed's Wireless Business Unit. Communications convergence processing Mindspeed's ARM-based products include low-power, multi-core digital signal processor SoCs for residential and enterprise platforms (the Comcerto product line) and carrier-grade VoIP systems. These are embedded packet processors for a wide
https://en.wikipedia.org/wiki/Dorian%20M.%20Goldfeld
Dorian Morris Goldfeld (born January 21, 1947) is an American mathematician working in analytic number theory and automorphic forms at Columbia University. Professional career Goldfeld received his B.S. degree in 1967 from Columbia University. His doctoral dissertation, entitled "Some Methods of Averaging in the Analytical Theory of Numbers", was completed under the supervision of Patrick X. Gallagher in 1969, also at Columbia. He has held positions at the University of California at Berkeley (Miller Fellow, 1969–1971), Hebrew University (1971–1972), Tel Aviv University (1972–1973), the Institute for Advanced Study (1973–1974), in Italy (1974–1976), at MIT (1976–1982), the University of Texas at Austin (1983–1985) and Harvard (1982–1985). Since 1985, he has been a professor at Columbia University. He is a member of the editorial boards of Acta Arithmetica and The Ramanujan Journal. On January 1, 2018, he became Editor-in-Chief of the Journal of Number Theory. He is a co-founder and board member of Veridify Security, formerly SecureRF, a corporation that has developed the world's first linear-based security solutions. Goldfeld has advised several doctoral students, including M. Ram Murty. In 1986, he brought Shou-Wu Zhang to the United States to study at Columbia. Research interests Goldfeld's research interests include various topics in number theory. In his thesis, he proved a version of Artin's conjecture on primitive roots on average, without use of the Riemann Hypothesis. In 1976, Goldfeld provided an ingredient for the effective solution of Gauss's class number problem for imaginary quadratic fields. Specifically, he proved an effective lower bound for the class number of an imaginary quadratic field assuming the existence of an elliptic curve whose L-function has a zero of order at least 3 at s = 1. (Such a curve was found soon after by Gross and Zagier.) This effective lower bound then allows the determination of all imaginary quadratic fields with a given class number
https://en.wikipedia.org/wiki/Biosolarization
Biosolarization is an alternative technology to soil fumigation used in agriculture. It is closely related to biofumigation and soil solarization, the use of solar power to control nematodes, bacteria, fungi and other pests that damage crops. In solarization, the soil is mulched and covered with a tarp to trap solar radiation and heat the soil to a temperature that kills pests. Biosolarization adds organic amendments or compost to the soil before it is covered with plastic, which speeds up the process by decreasing the soil treatment time through increased microbial activity. Research conducted in Spain on the use of biosolarization in strawberry production has shown it to be a sustainable and cost-effective option. The practice of biosolarization is being used among small agricultural operations in California. Biosolarization is a growing practice in response to the need for organic soil solarization methods. The option for more widespread use of biosolarization is being studied by researchers at the Western Center for Agricultural Health and Safety at the University of California at Davis, in order to validate the effectiveness of biosolarization in commercial agriculture in California, where it has the potential to greatly reduce the use of conventional fumigants. Biosolarization can also be used as an organic waste management practice; recent studies have shown the potential of food-industry residues as soil amendments that can improve the efficiency of biosolarization.
https://en.wikipedia.org/wiki/Zond%205
Zond 5 () was a spacecraft of the Soviet Zond program. In September 1968 it became the first spacecraft to travel to and circle the Moon on a circumlunar trajectory, the first Moon mission to include animals, and the first to return safely to Earth. Zond 5 carried the first terrestrial organisms to the vicinity of the Moon, including two tortoises, fruit fly eggs, and plants. The Russian tortoises underwent biological changes during the flight, but it was concluded that the changes were primarily due to starvation and that they were little affected by space travel. The Zond spacecraft was a version of the Soyuz 7K-L1 crewed lunar-flyby spacecraft. It was launched by a Proton-K carrier rocket with a Block D upper stage to conduct scientific studies during its lunar flyby. Background Of the first four circumlunar missions launched by the Soviet Union there was one partial success, Zond 4, and three failures. After Zond 4's mission in March 1968, a follow-up, Zond 1968A, was launched on 23 April. The launch failed when an erroneous abort command shut down the Proton rocket's second stage. The escape rocket fired and pulled the descent module to safety. In July, Zond 1968B was being prepared for launch when the second-stage rocket exploded on the launchpad, killing three people but leaving the Proton first-stage booster and the spacecraft itself with only minor damage. The mission was originally planned to fly cosmonauts around the Moon, but these failures led the Soviets to send an uncrewed mission instead, for fear of the negative propaganda of an unsuccessful crewed flight. Payload Two Russian tortoises (Agrionemys horsfieldii) were included in the biological payload, weighing each pre-flight. Soviet scientists chose tortoises because they were easy to secure tightly. There were also two tortoises used as control specimens and four more in a vivarium. Twelve days before launch, the two space-bound tortoises were secured in the vehicle and deprive
https://en.wikipedia.org/wiki/Lithodesmium
Lithodesmium is a genus of diatoms belonging to the family Lithodesmiaceae. The genus has a cosmopolitan distribution. Species: Lithodesmium africanum Lithodesmium asketogonium Lithodesmium biceps Lithodesmium californicum Grunow, 1883 Lithodesmium contractum L.W.Bailey, 1861 Lithodesmium cornigerum J.Brun, 1896 Lithodesmium duckerae Von Stosch, 1987 Lithodesmium ehrenbergii Forti Lithodesmium hirtum (Ehrenberg) Lithodesmium margaritaceum Long, Fuge & Smith, 1946 Lithodesmium minusculum Lithodesmium pliocenicum Schrader, 1973 Lithodesmium reynoldsii Barron, 1976 Lithodesmium rotunda Schrader, 1976 Lithodesmium undulatum Ehrenberg, 1839 Lithodesmium variabile Takano, 1979 Lithodesmium victoriae Karsten, 1906
https://en.wikipedia.org/wiki/Algebraic%20number%20field
In mathematics, an algebraic number field (or simply number field) is an extension field of the field of rational numbers such that the field extension has finite degree (and hence is an algebraic field extension). Thus is a field that contains and has finite dimension when considered as a vector space over The study of algebraic number fields, and, more generally, of algebraic extensions of the field of rational numbers, is the central topic of algebraic number theory. This study reveals hidden structures behind usual rational numbers, by using algebraic methods. Definition Prerequisites The notion of algebraic number field relies on the concept of a field. A field consists of a set of elements together with two operations, namely addition, and multiplication, and some distributivity assumptions. A prominent example of a field is the field of rational numbers, commonly denoted together with its usual operations of addition and multiplication. Another notion needed to define algebraic number fields is vector spaces. To the extent needed here, vector spaces can be thought of as consisting of sequences (or tuples) (x1, x2, …) whose entries are elements of a fixed field, such as the field Any two such sequences can be added by adding the corresponding entries. Furthermore, any sequence can be multiplied by a single element c of the fixed field. These two operations known as vector addition and scalar multiplication satisfy a number of properties that serve to define vector spaces abstractly. Vector spaces are allowed to be "infinite-dimensional", that is to say that the sequences constituting the vector spaces are of infinite length. If, however, the vector space consists of finite sequences (x1, x2, …, xn), the vector space is said to be of finite dimension, n. Definition An algebraic number field (or simply number field) is a finite-degree field extension of the field of rational numbers. Here degree means the dimension of the field as a vector space
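A standard worked example, included here for concreteness (it is not part of the excerpt above), is the field obtained by adjoining √2 to the rationals:

```latex
% Q(sqrt(2)) = { a + b*sqrt(2) : a, b rational } is a number field:
\[
  \mathbb{Q}(\sqrt{2}) = \{\, a + b\sqrt{2} : a, b \in \mathbb{Q} \,\}.
\]
% As a vector space over Q it has basis {1, sqrt(2)}, hence dimension 2,
% so the extension has degree [Q(sqrt(2)) : Q] = 2.
% Closure under multiplication:
\[
  (a + b\sqrt{2})(c + d\sqrt{2}) = (ac + 2bd) + (ad + bc)\sqrt{2},
\]
% and inverses come from rationalizing the denominator
% (a^2 - 2b^2 is never 0 for rationals a, b not both zero, since sqrt(2)
% is irrational):
\[
  \frac{1}{a + b\sqrt{2}} = \frac{a - b\sqrt{2}}{a^2 - 2b^2}.
\]
```

The same pattern — finite basis over the rationals, closure under the field operations — is what the general definition captures.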
https://en.wikipedia.org/wiki/Shortest%20common%20supersequence
In computer science, the shortest common supersequence of two sequences X and Y is the shortest sequence which has X and Y as subsequences. This is a problem closely related to the longest common subsequence problem. Given two sequences X = ⟨x1, ..., xm⟩ and Y = ⟨y1, ..., yn⟩, a sequence U = ⟨u1, ..., uk⟩ is a common supersequence of X and Y if items can be removed from U to produce X and Y. A shortest common supersequence (SCS) is a common supersequence of minimal length. In the shortest common supersequence problem, two sequences X and Y are given, and the task is to find a shortest possible common supersequence of these sequences. In general, an SCS is not unique. For two input sequences, an SCS can be formed from a longest common subsequence (LCS) easily. For example, let Z be a longest common subsequence of X and Y. By inserting the non-LCS symbols of X and Y into Z while preserving their original order, we obtain a shortest common supersequence U. In particular, the equation |U| = |X| + |Y| − |Z| holds for any two input sequences. There is no similar relationship between shortest common supersequences and longest common subsequences of three or more input sequences. (In particular, LCS and SCS are not dual problems.) However, both problems can be solved in O(n^k) time using dynamic programming, where k is the number of sequences and n is their maximum length. For the general case of an arbitrary number of input sequences, the problem is NP-hard. Shortest common superstring The closely related problem of finding a minimum-length string which is a superstring of a finite set of strings S = {s1, s2, ..., sn} is also NP-hard. Several constant factor approximations have been proposed throughout the years, and the current best known algorithm has an approximation factor of 2.475. However, perhaps the simplest solution is to reformulate the problem as an instance of weighted set cover in such a way that the weight of the optimal solution to the set cover instance is less than twice the length of t
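The LCS-based construction for two sequences can be sketched in Python (the function name `scs` is illustrative, not a library API): build the LCS dynamic-programming table, then walk it backwards, emitting shared symbols once and non-shared symbols from whichever sequence they belong to.

```python
def scs(x, y):
    """Shortest common supersequence of two strings via the LCS DP table."""
    m, n = len(x), len(y)
    # dp[i][j] = length of the LCS of x[:i] and y[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Walk the table backwards, collecting the SCS in reverse:
    # LCS symbols are emitted once, all other symbols are kept.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            out.append(x[i - 1]); i -= 1
        else:
            out.append(y[j - 1]); j -= 1
    out.extend(reversed(x[:i]))  # leftover prefix of x, if any
    out.extend(reversed(y[:j]))  # leftover prefix of y, if any
    return ''.join(reversed(out))
```

The result has length |X| + |Y| − |LCS(X, Y)|, in line with the relation stated above.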
https://en.wikipedia.org/wiki/Stokes%27%20theorem
Stokes' theorem, also known as the Kelvin–Stokes theorem after Lord Kelvin and George Stokes, the fundamental theorem for curls or simply the curl theorem, is a theorem in vector calculus on ℝ³. Given a vector field, the theorem relates the integral of the curl of the vector field over some surface to the line integral of the vector field around the boundary of the surface. The classical theorem of Stokes can be stated in one sentence: The line integral of a vector field over a loop is equal to the flux of its curl through the enclosed surface. It is illustrated in the figure, where the direction of positive circulation of the bounding contour ∂Σ, and the direction of positive flux through the surface Σ, are related by a right-hand rule. For the right hand the fingers circulate along ∂Σ and the thumb is directed along the surface normal n. Stokes' theorem is a special case of the generalized Stokes theorem. In particular, a vector field on ℝ³ can be considered as a 1-form, in which case its curl is its exterior derivative, a 2-form. Theorem Let Σ be a smooth oriented surface in ℝ³ with boundary ∂Σ. If a vector field F = (F_x, F_y, F_z) is defined and has continuous first order partial derivatives in a region containing Σ, then
\[
\oint_{\partial\Sigma} \mathbf{F} \cdot \mathrm{d}\mathbf{\Gamma} = \iint_{\Sigma} (\nabla \times \mathbf{F}) \cdot \mathrm{d}\mathbf{S}.
\]
More explicitly, the equality says that
\[
\oint_{\partial\Sigma} \left( F_x\,\mathrm{d}x + F_y\,\mathrm{d}y + F_z\,\mathrm{d}z \right)
= \iint_{\Sigma} \left[ \left( \frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z} \right) \mathrm{d}y\,\mathrm{d}z
+ \left( \frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x} \right) \mathrm{d}z\,\mathrm{d}x
+ \left( \frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y} \right) \mathrm{d}x\,\mathrm{d}y \right].
\]
The main challenge in a precise statement of Stokes' theorem is in defining the notion of a boundary. Surfaces such as the Koch snowflake, for example, are well-known not to exhibit a Riemann-integrable boundary, and the notion of surface measure in Lebesgue theory cannot be defined for a non-Lipschitz surface. One (advanced) technique is to pass to a weak formulation and then apply the machinery of geometric measure theory; for that approach see the coarea formula. In this article, we instead use a more elementary definition, based on the fact that a boundary can be discerned for full-dimensional subsets of ℝ². A more detailed statement will be given for subsequent discussions. Let γ be a piecewise smooth Jordan plane curve. The Jordan curve the
https://en.wikipedia.org/wiki/Singapore%20Synchrotron%20Light%20Source
Singapore Synchrotron Light Source (SSLS) is a synchrotron radiation facility located on the Kent Ridge campus of the National University of Singapore.
https://en.wikipedia.org/wiki/Pierre%20Dolbeault
Pierre Dolbeault (October 10, 1924 – June 12, 2015) was a French mathematician. Dolbeault studied with Henri Cartan and graduated in 1944 from the École Normale Supérieure. He completed his Ph.D. at the University of Paris in 1955 under the supervision of Cartan, with a dissertation titled Formes différentielles et cohomologie sur une variété analytique complexe. He taught in the 1950s at the University of Montpellier and the University of Bordeaux, and later at the Pierre and Marie Curie University (Jussieu). Together with Pierre Lelong and Henri Skoda he held an Analysis seminar in Paris. Dolbeault cohomology is named after him, and so is the Dolbeault theorem. External links Pierre Dolbeault's professional webpage "On the Mathematical Works of Pierre Dolbeault", EMS Newsletter, March 2016
https://en.wikipedia.org/wiki/Green%20border
A green border is a weakly protected section of a national border. The term comes from the terrain such borders typically run through: green borders are usually forests, thickets and meadows, often with varied terrain. Illegally crossing a green border is associated with the smuggling of goods and persons, but such crossings have sometimes also been politically motivated: green borders are and have been crossed by political activists operating illegally in their countries to make contact with foreign collaborators, allies, émigrés and the like, or to emigrate and seek refuge. Green border in Schengen zone Green borders exist within the European Union as the state borders internal to the European Union, crossed by tourists outside the sites of former border crossings. After the Schengen Agreement became effective, crossing borders between countries where the Agreement applies is allowed at every section of the border; Article 22 of the Schengen Borders Code provides for this. Only persons without EU citizenship who do not have a visa to enter the whole territory are excluded from this regulation.
https://en.wikipedia.org/wiki/Don%27t%20Eat%20the%20Yellow%20Snow
"Don't Eat the Yellow Snow" is a suite by the American musician Frank Zappa, made up of the first four tracks of his 1974 album Apostrophe ('): "Don't Eat the Yellow Snow", "Nanook Rubs It", "St. Alfonzo's Pancake Breakfast", and "Father O'Blivion". Each song in the suite is loosely connected, although the songs are not all connected by one overall story/theme. The suite was only played in full from 1973 to 1974 and 1978 to 1980. "Saint Alfonzo's Pancake Breakfast" contains Zappa's percussionist Ruth Underwood on marimba, who added a very distinct sound to many of his songs in the early 1970s. In keeping with the arctic theme of the song, after the first lyric "Dreamed I was an Eskimo" there is a musical quotation from the 1947 jazz tune "Midnight Sun". Story "Don't Eat the Yellow Snow" is a song about a man who dreams that he was an Eskimo named Nanook. His mother warns him "Watch out where the huskies go, and don't you eat that yellow snow." The song directly transitions into "Nanook Rubs It". The song is about Nanook encountering a fur trapper "strictly from commercial" who is whipping Nanook's "favorite baby seal" with a "lead-filled snow shoe". Eventually Nanook gets so mad he rubs husky "wee wee" into the fur trapper's eyes, blinding him. According to the lyrics, this scene is destined to take the place of "The Mud Shark" (a song from the live album Fillmore East – June 1971) in Zappa mythology. Zappa then sings in the fur trapper's perspective, who laments over the fact that he has been blinded. The fur trapper then makes his way to the parish of St. Alfonzo, introducing the next song "St. Alfonzo's Pancake Breakfast". From this point forward, the suite almost completely abandons the previous storyline (the fur trapper's blindness is never explicitly healed). In this song a man attending St. Alfonzo's Pancake Breakfast engages in such appalling deportment as stealing margarine pats from the tables, urinating on the bingo cards, and instigating an affair wi
https://en.wikipedia.org/wiki/Zineb
Zineb is the chemical compound with the formula {Zn[S2CN(H)CH2CH2N(H)CS2]}n. Structurally, it is classified as a coordination polymer and a dithiocarbamate complex. This pale yellow solid is used as a fungicide. Production and applications It is produced by treating the sodium salt of ethylene bis(dithiocarbamate), "nabam", with zinc sulfate. This procedure can be carried out by mixing nabam and zinc sulfate in a spray tank. Its uses include control of downy mildews, rusts, and redfire disease. In the US it was once registered as a "General Use Pesticide"; however, all registrations were voluntarily cancelled following an EPA special review. It continues to be used in many other countries. Structure Zineb is a polymeric complex of zinc with a dithiocarbamate. The polymer is composed of Zn(dithiocarbamate)2 subunits linked by an ethylene (-CH2CH2-) backbone. A reference compound is [Zn(S2CNEt2)2]2, which features a pair of tetrahedral Zn centers bridged by one sulfur center. See also Metam sodium - A related dithiocarbamate salt which is also used as a fungicide. Maneb - ethylene bis(dithiocarbamate) with manganese instead of zinc. Mancozeb - A common fungicide containing Zineb and Maneb.
https://en.wikipedia.org/wiki/Teledeltos
Teledeltos paper is an electrically conductive paper. It is formed by a coating of carbon on one side of a sheet of paper, giving one black and one white side. Western Union developed Teledeltos paper in the late 1940s (several decades after it was already in use for mathematical modelling) for use in spark printer based fax machines and chart recorders. Teledeltos paper has several uses within engineering that are far removed from its original use in spark printers. Many of these use the paper to model the distribution of electric and other scalar fields. Use Teledeltos provides a sheet of a uniform resistor, with isotropic resistivity in each direction. As it is cheap and easily cut to shape, it may be used to make one-off resistors of any shape needed. The paper backing also forms a convenient insulator from the bench. These are usually made to represent or model some real-world example of a two-dimensional scalar field, where it is necessary to study the field's distribution. This field may be an electric field, or some other field following the same linear distribution rules. The resistivity of Teledeltos is around 6 kΩ per square. This is low enough that it may be used with safe low voltages, yet high enough that the currents remain low, avoiding problems with contact resistance. Connections are made to the paper by painting on areas of silver-loaded conductive paint and attaching wires to these, usually with spring clips. Each painted area has a low resistivity (relative to the carbon) and so may be assumed to be at a constant voltage. With the voltages applied, the current flow through the sheet will emulate the field distribution. Voltages may be measured within the sheet by applying a voltmeter probe (relative to one of the known electrodes) or current flows may be measured. As the sheet's resistivity is constant, the simplest way to measure a current flow is to use a small two-probe voltmeter to measure the voltage difference between the pro
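Because the paper is a uniform conductor, the voltage on it satisfies Laplace's equation, so a Teledeltos experiment can also be sketched numerically. The geometry below (a 40×40 grid with electrodes along the left and right edges, insulated top and bottom) is an illustrative choice, not something taken from the article:

```python
import numpy as np

# Numerical stand-in for a Teledeltos experiment: a uniform resistive sheet
# with two painted electrodes held at fixed potentials.  The potential in
# the interior of a uniform conductor obeys Laplace's equation, which we
# solve here by Jacobi relaxation.
n = 40
v = np.zeros((n, n))
v[:, 0] = 0.0      # left electrode, "painted" at 0 V
v[:, -1] = 10.0    # right electrode, "painted" at 10 V

for _ in range(5000):
    v[0, 1:-1] = v[1, 1:-1]      # insulated top edge: no normal current
    v[-1, 1:-1] = v[-2, 1:-1]    # insulated bottom edge
    # each interior node relaxes toward the average of its four neighbours
    v[1:-1, 1:-1] = 0.25 * (v[1:-1, :-2] + v[1:-1, 2:]
                            + v[:-2, 1:-1] + v[2:, 1:-1])

# v now approximates the voltage a probe would read anywhere on the sheet.
```

For this simple geometry the relaxed solution is just a linear potential ramp between the electrodes; a more complicated electrode shape would give the curved equipotentials one maps point-by-point with a probe on the paper.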
https://en.wikipedia.org/wiki/Sqlmap
sqlmap is a software utility for the automated discovery of SQL injection vulnerabilities in web applications. Usage The tool was used in the 2015 data breach of TalkTalk. In 2016, the Illinois Board of Elections was breached using the tool, combined with Acunetix and DirBuster.
https://en.wikipedia.org/wiki/John%20Radford%20Young
John Radford Young (born 8 April 1799, in Southwark – 5 March 1885, in Peckham) was an English mathematician, professor and author, who was almost entirely self-educated. He was born of humble parents in London. At an early age he became acquainted with Olinthus Gilbert Gregory, who perceived his mathematical ability and assisted him in his studies. In 1823, while working in a private establishment for the deaf, he published An Elementary Treatise on Algebra with a dedication to Gregory. This treatise was followed by a series of elementary works, in which, following in the steps of Robert Woodhouse, Young familiarized English students with continental methods of mathematical analysis. In 1833, he was appointed Professor of Mathematics at Belfast College. When Queen's College, Belfast, opened in 1849, the presbyterian party in control there prevented Young's reappointment as Professor in the new establishment. From that time he devoted himself more completely to the study of mathematical analysis, and made several original discoveries. In 1847, he published in the Transactions of the Cambridge Philosophical Society a paper "On the Principle of Continuity in reference to certain Results of Analysis", and, in 1848, in the Transactions of the Royal Irish Academy a paper "On an Extension of a Theorem of Euler". As early as 1844, he had discovered and published a proof of Newton's rule for determining the number of imaginary roots in an equation. In 1866, he completed his proof, publishing in The Philosophical Magazine a demonstration of a principle which in his earlier paper he had assumed as axiomatic. In 1868, he contributed to the Proceedings of the Royal Irish Academy a memoir "On the Imaginary Roots of Numerical Equations". Young died at Peckham on 5 March 1885. He was married and had at least two sons and four daughters. Works An Elementary Treatise on Algebra 1823, 1832, 1834 Elements of Geometry 1827 Elements of Analytical Geometry 1830 An Elementary Ess
https://en.wikipedia.org/wiki/Pyrrole%E2%80%93imidazole%20polyamides
Pyrrole–imidazole polyamides (PIPs) are a class of polyamides that can bind to the minor groove of the DNA helix. Scientists are experimenting with them as a drug-delivery mode that can switch genes on and off, as well as for epigenetic modification in gene therapy.
https://en.wikipedia.org/wiki/Optical%20properties%20of%20water%20and%20ice
The refractive index of water at 20 °C for visible light is 1.33. The refractive index of normal ice is 1.31 (from List of refractive indices). In general, an index of refraction is a complex number with real and imaginary parts, where the latter indicates the strength of absorption loss at a particular wavelength. In the visible part of the electromagnetic spectrum, the imaginary part of the refractive index is very small. However, water and ice absorb in the infrared and close the infrared atmospheric window, thereby contributing to the greenhouse effect. The absorption spectrum of pure water is used in numerous applications, including light scattering and absorption by ice crystals and cloud water droplets, theories of the rainbow, determination of the single-scattering albedo, ocean color, and many others. Quantitative description of the refraction index Over the wavelengths from 0.2 μm to 1.2 μm, and over temperatures from −12 °C to 500 °C, the real part of the index of refraction of water can be calculated by the following empirical expression:
\[
\frac{n^2 - 1}{n^2 + 2}\,\frac{1}{\bar\rho}
= a_0 + a_1\bar\rho + a_2\bar T + a_3\bar\lambda^2\bar T + \frac{a_4}{\bar\lambda^2}
+ \frac{a_5}{\bar\lambda^2 - \bar\lambda_{\mathrm{UV}}^2}
+ \frac{a_6}{\bar\lambda^2 - \bar\lambda_{\mathrm{IR}}^2}
+ a_7\bar\rho^2
\]
where \bar T = T/T^*, \bar\rho = \rho/\rho^*, \bar\lambda = \lambda/\lambda^*, and the appropriate constants are a_0 = 0.244257733, a_1 = 0.00974634476, a_2 = −0.00373234996, a_3 = 0.000268678472, a_4 = 0.0015892057, a_5 = 0.00245934259, a_6 = 0.90070492, a_7 = −0.0166626219, T^* = 273.15 K, ρ^* = 1000 kg/m3, λ^* = 589 nm, \bar\lambda_{\mathrm{IR}} = 5.432937, and \bar\lambda_{\mathrm{UV}} = 0.229202. In the above expression, T is the absolute temperature of water (in K), λ is the wavelength of light in nm, ρ is the density of the water in kg/m3, and n is the real part of the index of refraction of water. Volumic mass of water In the above formula, the density of water also varies with temperature and is defined by:
\[
\rho(t) = b_5 \left[ 1 - \frac{(t + b_1)^2 (t + b_2)}{b_3 (t + b_4)} \right]
\]
with t the temperature in °C and: b_1 = −3.983035 °C, b_2 = 301.797 °C, b_3 = 522528.9 °C², b_4 = 69.34881 °C, b_5 = 999.974950 kg/m3. Refractive index (real and imaginary parts) for liquid water The total refractive index of water is given as m = n + ik. The absorption coefficient α' is used in the Beer–Lambert law with the prime here signifying base e convention. Values are for water
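The quoted constants can be checked numerically. The sketch below assumes they enter the standard Lorentz–Lorenz-type relation of the IAPWS formulation (the usual form of this empirical fit), with the density given by the quoted temperature polynomial; the function names are illustrative.

```python
import math

# Constants of the empirical fit quoted above.
A = [0.244257733, 0.00974634476, -0.00373234996, 0.000268678472,
     0.0015892057, 0.00245934259, 0.90070492, -0.0166626219]
T_STAR, RHO_STAR, LAM_STAR = 273.15, 1000.0, 589.0  # K, kg/m^3, nm
LAM_UV, LAM_IR = 0.229202, 5.432937                 # dimensionless

def water_density(t_celsius):
    """Density of water (kg/m^3) from the quoted empirical polynomial."""
    b1, b2, b3, b4, b5 = -3.983035, 301.797, 522528.9, 69.34881, 999.974950
    return b5 * (1 - (t_celsius + b1) ** 2 * (t_celsius + b2)
                 / (b3 * (t_celsius + b4)))

def refractive_index(t_kelvin, lam_nm, rho=None):
    """Real refractive index n of water from the Lorentz-Lorenz-type fit."""
    if rho is None:
        rho = water_density(t_kelvin - 273.15)
    tb, rb, lb = t_kelvin / T_STAR, rho / RHO_STAR, lam_nm / LAM_STAR
    rhs = (A[0] + A[1] * rb + A[2] * tb + A[3] * lb ** 2 * tb
           + A[4] / lb ** 2
           + A[5] / (lb ** 2 - LAM_UV ** 2)
           + A[6] / (lb ** 2 - LAM_IR ** 2)
           + A[7] * rb ** 2)
    f = rb * rhs  # f = (n^2 - 1) / (n^2 + 2)
    return math.sqrt((1 + 2 * f) / (1 - f))
```

Evaluating at 20 °C (293.15 K) and 589 nm recovers the familiar value n ≈ 1.333 quoted at the top of the article.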
https://en.wikipedia.org/wiki/The%20Verifier
The Verifier refers to a proprietary fuel gauge technology, used for advanced measurement of heating oil in residential and commercial tanks ranging from 1,000 – 20,000 gallons. First developed in 2005, by U.S. Energy Group’s Jerry Pindus, Verifier technology was granted Patent Approval 11/095,914 as “The Verifier Digital Fuel Gauge” from The United States Patent and Trademark Office on September 21, 2007. The Verifier is also ETL SEMKO certified. Technology The Verifier makes use of ultrasound technology, a method of using high-intensity acoustic energy which is above the limits of human hearing. Just as it has done for other fields, ultrasound technology offers unprecedented access and accuracy with The Verifier. The old system of measuring oil deliveries and inventory involves either climbing onto the tank and ‘sticking it’ with a ruler or pumping air into a petrometer and converting a pressure reading of the weight of the oil. These systems were developed when oil was inexpensive and when accuracy and convenience were not essential. Verifier technology confirms an oil delivery within 1/10 of an inch, recording the exact date, time and amount of heating oil delivered to the tank and checking the amount of heating oil delivered. An ultrasonic ping is sent from the device to the heating oil, an echo is received back, and the Verifier’s complex, patented algorithm then figures the time between when the ping was sent and the echo received, compensating for oil temperature variations. The system is precise, safe, and clean. The history of ultrasound technology began in 1794, when Italian Lazzaro Spallanzani (1729–1799) first noticed the phenomenon in nature. The human application of ultrasound technology developed remarkably in the last 100 years, and was influenced heavily by World War I and by World War II, in which Sonar, or sound navigation and ranging, was utilized. Following World War II, applications of ultrasound developed, transforming many industrie
https://en.wikipedia.org/wiki/MUSASINO-1
The MUSASINO-1 was one of the earliest electronic digital computers built in Japan. Construction started at the Electrical Communication Laboratories of NTT at Musashino, Tokyo in 1952 and was completed in July 1957. The computer was used until July 1962. Saburo Muroga, a University of Illinois visiting scholar and member of the ILLIAC I team, returned to Japan and oversaw the construction of MUSASINO-1. Using 519 vacuum tubes and 5,400 parametrons, the MUSASINO-1 possessed a magnetic core memory, initially of 32 (later expanded to 256) words. A word was composed of 40 bits, and two instructions could be stored in a single word. Addition time was clocked at 1,350 microseconds, multiplication at 6,800 microseconds, and division time at 26.1 milliseconds. The MUSASINO-1's instruction set was a superset of the ILLIAC I's instructions, so it could generally use the latter's software. However, many of the programs for the ILLIAC used some of the unused bits in the instructions to store data, and these would be interpreted as different instructions by the MUSASINO-1 control circuitry. See also FUJIC ILLIAC I List of vacuum-tube computers
https://en.wikipedia.org/wiki/Minimal%20surface
In mathematics, a minimal surface is a surface that locally minimizes its area. This is equivalent to having zero mean curvature (see definitions below). The term "minimal surface" is used because these surfaces originally arose as surfaces that minimized total surface area subject to some constraint. Physical models of area-minimizing minimal surfaces can be made by dipping a wire frame into a soap solution, forming a soap film, which is a minimal surface whose boundary is the wire frame. However, the term is used for more general surfaces that may self-intersect or do not have constraints. For a given constraint there may also exist several minimal surfaces with different areas (for example, see minimal surface of revolution): the standard definitions only relate to a local optimum, not a global optimum. Definitions Minimal surfaces can be defined in several equivalent ways in ℝ³. The fact that they are equivalent serves to demonstrate how minimal surface theory lies at the crossroads of several mathematical disciplines, especially differential geometry, calculus of variations, potential theory, complex analysis and mathematical physics. Local least area definition: A surface M ⊂ ℝ³ is minimal if and only if every point p ∈ M has a neighbourhood, bounded by a simple closed curve, which has the least area among all surfaces having the same boundary. This property is local: there might exist regions in a minimal surface, together with other surfaces of smaller area which have the same boundary. This property establishes a connection with soap films; a soap film deformed to have a wire frame as boundary will minimize area. Variational definition: A surface M ⊂ ℝ³ is minimal if and only if it is a critical point of the area functional for all compactly supported variations. This definition makes minimal surfaces a 2-dimensional analogue to geodesics, which are analogously defined as critical points of the length functional. Mean curvature definition: A surface is minimal i
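The zero-mean-curvature characterization can be written explicitly for a surface given as a graph z = u(x, y). The equation below is the standard minimal surface equation, a well-known formulation supplied here for concreteness:

```latex
% For a graph z = u(x, y), "mean curvature zero" becomes the
% minimal surface equation:
\[
  (1 + u_x^2)\, u_{yy} - 2\, u_x u_y\, u_{xy} + (1 + u_y^2)\, u_{xx} = 0,
\]
% or, in the divergence form that matches the variational definition,
\[
  \operatorname{div}\!\left( \frac{\nabla u}{\sqrt{1 + |\nabla u|^2}} \right) = 0 .
\]
% Example: Scherk's surface z = ln(cos y / cos x) satisfies this equation
% (u_x = tan x, u_y = -tan y, u_{xy} = 0, and the sec^2 terms cancel),
% as does any plane z = ax + by + c, trivially, since all second
% derivatives vanish.
```

For small gradients the equation reduces to Laplace's equation u_xx + u_yy = 0, which is why minimal surface theory borders on potential theory, as noted above.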
https://en.wikipedia.org/wiki/Elevation
The elevation of a geographic location is its height above or below a fixed reference point, most commonly a reference geoid, a mathematical model of the Earth's sea level as an equipotential gravitational surface (see Geodetic datum § Vertical datum). The term elevation is mainly used when referring to points on the Earth's surface, while altitude or geopotential height is used for points above the surface, such as an aircraft in flight or a spacecraft in orbit, and depth is used for points below the surface. Elevation is not to be confused with the distance from the center of the Earth. Due to the equatorial bulge, the summits of Mount Everest and Chimborazo have, respectively, the largest elevation and the largest geocentric distance. Aviation In aviation, the term elevation or aerodrome elevation is defined by the ICAO as the highest point of the landing area. It is often measured in feet and can be found in approach charts of the aerodrome. It is not to be confused with terms such as the altitude or height. Maps and GIS GIS or geographic information system is a computer system that allows for visualizing, manipulating, capturing, and storage of data with associated attributes. GIS offers better understanding of patterns and relationships of the landscape at different scales. Tools inside the GIS allow for manipulation of data for spatial analysis or cartography. A topographical map is the main type of map used to depict elevation, often through use of contour lines. In a Geographic Information System (GIS), digital elevation models (DEM) are commonly used to represent the surface (topography) of a place, through a raster (grid) dataset of elevations. Digital terrain models are another way to represent terrain in GIS. USGS (United States Geological Survey) is developing a 3D Elevation Program (3DEP) to keep up with growing needs for high quality topographic data. 3DEP is a collection of enhanced elevation data in the form of high quality LiDAR data over the c
https://en.wikipedia.org/wiki/Lipofectamine
Lipofectamine or Lipofectamine 2000 is a common transfection reagent, produced and sold by Invitrogen, used in molecular and cellular biology. It is used to increase the transfection efficiency of RNA (including mRNA and siRNA) or plasmid DNA into in vitro cell cultures by lipofection. Lipofectamine contains lipid subunits that can form liposomes in an aqueous environment, which entrap the transfection payload, e.g. DNA plasmids. Lipofectamine consists of a 3:1 mixture of DOSPA (2,3‐dioleoyloxy‐N‐ [2(sperminecarboxamido)ethyl]‐N,N‐dimethyl‐1‐propaniminium trifluoroacetate) and DOPE, which complexes with negatively charged nucleic acid molecules to allow them to overcome the electrostatic repulsion of the cell membrane. Lipofectamine's cationic lipid molecules are formulated with a neutral co-lipid (helper lipid). The DNA-containing liposomes (positively charged on their surface) can fuse with the negatively charged plasma membrane of living cells, due to the neutral co-lipid mediating fusion of the liposome with the cell membrane, allowing nucleic acid cargo molecules to cross into the cytoplasm for replication or expression. In order for a cell to express a transgene, the nucleic acid must reach the nucleus of the cell to begin transcription. However, the transfected genetic material may never reach the nucleus in the first place, instead being disrupted somewhere along the delivery process. In dividing cells, the material may reach the nucleus by being trapped in the reassembling nuclear envelope following mitosis. But also in non-dividing cells, research has shown that Lipofectamine improves the efficiency of transfection, which suggests that it additionally helps the transfected genetic material penetrate the intact nuclear envelope. This method of transfection was invented by Dr. Yongliang Chu. See also Lipofection Transfection Vectors in gene therapy Cationic liposome
https://en.wikipedia.org/wiki/Fat%2C%20Sick%20and%20Nearly%20Dead
Fat, Sick and Nearly Dead is a 2010 American documentary film which follows the 60-day journey of Australian Joe Cross across the United States as he follows a juice fast to regain his health under the care of Joel Fuhrman, Nutrition Research Foundation's Director of Research. Summary The feature-length film follows Cross, who was depressed, weighed 310 lbs, suffered from a serious autoimmune disease, and was on steroids at the start of the film, as he embarks on a juice fast. Cross and Robert Mac, co-creators of the film, both serve on the Nutrition Research Foundation's Advisory Board. Following his fast and the adoption of a plant-based diet, Cross states in a press release that he lost 100 pounds and discontinued all medications. During his road trip Cross meets Phil Staples, a morbidly obese truck driver from Sheldon, Iowa, in a truck stop in Arizona and inspires him to try juice fasting. A sequel to the first film, Fat, Sick and Nearly Dead 2, was released in 2014. Awards Fat, Sick, and Nearly Dead won the Turning Point Award and shared the Audience Choice Award – Documentary Film at the 2010 Sonoma International Film Festival. Critical reception The film has received mixed reviews, with review aggregation website Rotten Tomatoes giving it a rating of 69% "fresh" and Metacritic having an average score of 45 out of 100, based on 5 reviews. The Hollywood Reporter called it an "infomercial passing itself off as a documentary". The New York Times stated that the film is "no great shakes as a movie, but as an ad for Mr. Cross's wellness program its now-healthy heart is in the right place". Journalist Avery Yale Kamila reviewed the film in 2011, reporting Cross planned to continue avoiding junk food and "eating a diet centered around whole food." She reported Cross had created an online community called Reboot Your Life. See also List of vegan media
https://en.wikipedia.org/wiki/Anti-roll%20bar
An anti-roll bar (roll bar, anti-sway bar, sway bar, stabilizer bar) is an automobile suspension part that helps reduce the body roll of a vehicle during fast cornering or over road irregularities. It links opposite front or rear wheels to a torsion spring using short lever arms for anchors. This increases the suspension's roll stiffness—its resistance to roll in turns. The first stabilizer bar patent was awarded to Canadian inventor Stephen Coleman of Fredericton, New Brunswick on April 22, 1919. Anti-roll bars were unusual on pre-WW2 cars due to the generally much stiffer suspension and acceptance of body roll. From the 1950s on, however, production cars were more commonly fitted with anti-roll bars, especially those vehicles with softer coil spring suspension. Purpose and operation An anti-sway or anti-roll bar is intended to reduce the lateral tilt (roll) of the vehicle on curves, sharp corners, or large bumps. Although there are many variations in design, the object is to induce a vehicle's body to remain as level as possible by forcing the opposite wheel's shock absorber, spring, or suspension rod to move in the same direction as the one being impacted. In a turn a vehicle compresses its outer wheel's suspension. The anti-roll bar forces the opposite wheel's suspension to also compress, thereby keeping the body in a more level lateral attitude. This has the additional benefit of lowering its center of gravity during a turn, increasing its stability. When both front and rear anti-roll bars are fitted, their combined effect can help maintain a vehicle's tendency to roll towards the general slope of the terrain. Principles An anti-roll bar is usually a torsion spring anchored to resist body roll motions. It is usually constructed out of a cylindrical steel bar, formed into a "U" shape, that connects to the body at two points along its longer center section, and on each end. When the left and right wheels move together the bar simply rotates on its central mounting points.
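The torsion-spring description lends itself to a back-of-envelope stiffness estimate. The model below is a common first-order simplification — the central section treated as a pure torsion spring, rigid lever arms, no arm bending or bushing compliance — and is not taken from the article; the function name and the example dimensions are illustrative.

```python
import math

def anti_roll_bar_rate(d, L, a, G=80e9):
    """First-order wheel-to-wheel rate (N/m) of a U-shaped anti-roll bar.

    Only the central section (diameter d, length L, both in metres) is
    modelled, as a torsion spring with rigid lever arms of length a.
    G is the shear modulus (default: typical steel, ~80 GPa).  Arm bending
    and bushing compliance are ignored, so this is an upper-bound estimate.
    """
    J = math.pi * d ** 4 / 32   # polar second moment of area of the bar
    k_theta = G * J / L         # torsional rate, N*m per radian of twist
    # Opposite wheel displacements +/- z twist the bar by theta ~ 2z / a;
    # the vertical force at each arm end is F = k_theta * theta / a,
    # so the rate dF/dz is:
    return 2 * k_theta / a ** 2

# e.g. a 25 mm bar, 1.0 m between anchors, with 0.3 m lever arms
rate = anti_roll_bar_rate(d=0.025, L=1.0, a=0.3)
```

With these example dimensions the estimate comes out around 68 kN/m, and the d⁴ dependence shows why small changes in bar diameter change roll stiffness so strongly.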
https://en.wikipedia.org/wiki/Microtrauma
Microtrauma is any of many possible small injuries to the body. Microtrauma can include the microtearing of muscle fibres, the sheath around the muscle, and the connective tissue. It can also include stress to the tendons and to the bones (see Wolff's law). It is unknown whether or not the ligaments adapt in the same way. Microtrauma to the skin (compression, impact, abrasion) can also cause thickening of the skin, as seen in the calluses formed from running barefoot or the hand calluses that result from rock climbing. This might be due to increased skin cell replication at sites under stress, where cells rapidly slough off or undergo compression or abrasion. Most microtraumas cause a low level of inflammation that cannot be seen or felt. These injuries can arise in muscle, ligament, vertebrae, and discs, either singly or in combination. Repetitive microtraumas that are not given time to heal can result in the development of more serious conditions. Negative effects Back pain can develop gradually as a result of microtrauma brought about by repetitive activity over time. Because of the slow and progressive onset of this internal injury, the condition is often ignored until the symptoms become acute, often resulting in disabling injury. Acute back injuries can arise from stressful lifting techniques performed without adequate recovery, especially when experimenting with more ballistic work, or work in which the extensor spinae are stressed during spinal flexion, when much of the load is taken up by the slower-healing ligaments, which may not adapt progressively to the stress. While the acute injury may seem to be caused by a single well-defined incident, it may have been preventable or lessened if not for the years of injury to the musculoskeletal support mechanism by repetitive microtrauma. Positive effects After microtrauma from stress (such as lifting weights) to muscles, they can be rebuilt and overcompensate to reduce the likelihood of re-injury.
https://en.wikipedia.org/wiki/Resiniferatoxin
Resiniferatoxin (RTX) is a naturally occurring chemical found in resin spurge (Euphorbia resinifera), a cactus-like plant commonly found in Morocco, and in Euphorbia poissonii found in northern Nigeria. It is a potent functional analog of capsaicin, the active ingredient in chili peppers. Biological activity Resiniferatoxin has a score of 16 billion Scoville heat units, making pure resiniferatoxin about 500 to 1000 times hotter than pure capsaicin. Resiniferatoxin activates transient vanilloid receptor 1 (TRPV1) in a subpopulation of primary afferent sensory neurons involved in nociception, the transmission of physiological pain. TRPV1 is an ion channel in the plasma membrane of sensory neurons and stimulation by resiniferatoxin causes this ion channel to become permeable to cations, especially calcium. The influx of cations causes the neuron to depolarize, transmitting signals similar to those that would be transmitted if the innervated tissue were being burned or damaged. This stimulation is followed by desensitization and analgesia, in part because the nerve endings die from calcium overload. Total synthesis A total synthesis of (+)-resiniferatoxin was completed by the Paul Wender group at Stanford University in 1997. The process begins with a starting material of 1,4-pentadien-3-ol and consists of more than 25 significant steps. As of 2007, this represented the only complete total synthesis of any member of the daphnane family of molecules. One of the main challenges in synthesizing a molecule such as resiniferatoxin is forming the three-ring backbone of the structure. The Wender group was able to form the first ring of the structure by first synthesizing Structure 1 in Figure 1. By reducing the ketone of Structure 1 followed by oxidizing the furan nucleus with m-CPBA and converting the resulting hydroxy group to an oxyacetate, Structure 2 can be obtained. Structure 2 contains the first ring of the three-ring structure of RTX. It reacts through an oxidopy
https://en.wikipedia.org/wiki/MOSFET%20Gate%20Driver
A MOSFET gate driver is a specialized circuit used to drive the gate of power MOSFETs effectively and efficiently in high-speed switching applications. A high-speed gate driver is the final element needed if the turn-on is intended to fully enhance the MOSFET's conducting channel. MOSFET technology A power MOSFET is controlled through its gate, a control electrode; the gate driver supplies the output current that delivers charge to this electrode. The MOSFET itself is simple to drive and, when fully on, its channel behaves resistively, which suits power applications.
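Since the driver's job is to deliver the gate charge quickly, a common first-order sizing rule is I = Q_g / t_sw: the average drive current equals the total gate charge divided by the desired switching time. A minimal sketch, with hypothetical device values:

```python
def required_gate_current(gate_charge_c: float, switch_time_s: float) -> float:
    """Average current needed to deliver the total gate charge Q_g within
    the desired switching time t_sw: I = Q_g / t_sw (a first-order rule)."""
    return gate_charge_c / switch_time_s

# Hypothetical power MOSFET: 60 nC total gate charge, 100 ns target turn-on
i_drive = required_gate_current(60e-9, 100e-9)
print(i_drive)  # ~0.6 A average drive current
```

This is why a logic gate that can only source a few milliamps turns a large MOSFET on slowly, while a dedicated driver rated for amps of peak current switches it in tens of nanoseconds.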
https://en.wikipedia.org/wiki/Height%20Modernization
Height Modernization is the name of a series of state-by-state programs recently begun by the United States' National Geodetic Survey, a division of the National Oceanic and Atmospheric Administration. The goal of each state program is to place GPS base stations at various locations within each participating state to measure topographic changes in the directions of latitude and longitude caused by subsidence or earthquakes, as well as to measure changes in height (elevation).
https://en.wikipedia.org/wiki/EMI%20%28protocol%29
External Machine Interface (EMI), an extension to Universal Computer Protocol (UCP), is a protocol primarily used to connect to short message service centres (SMSCs) for mobile telephones. The protocol was developed by CMG Wireless Data Solutions, now part of Mavenir. Syntax A typical EMI/UCP exchange looks like this: ^B01/00045/O/30/66677789///1//////68656C6C6F/CE^C ^B01/00041/R/30/A//66677789:180594141236/F3^C The start of the packet is signaled by ^B (STX, hex 02) and the end with ^C (ETX, hex 03). Fields within the packet are separated by / characters. The first four fields form the mandatory header: the first is the transaction reference number, the second the packet length, the third the operation type (O for operation, R for result), and the fourth the operation code (here 30, "short message transfer"). The subsequent fields depend on the operation. In the first line above, '66677789' is the recipient's address (telephone number) and '68656C6C6F' is the content of the message, in this case the ASCII string "hello". The second line is the response with a matching transaction reference number, where 'A' indicates that the message was successfully acknowledged by the SMSC, and a timestamp is suffixed to the phone number to show the time of delivery. The final field is the checksum, calculated simply by summing all bytes in the packet (including slashes) and taking the 8 least significant bits of the result. The full specification is available on the LogicaCMG website developers' forum, but registration is required. Technical limitations The two-digit transaction reference number means that an entity sending text messages can only have 100 outstanding messages per session; this can limit performance, but only over a slow network and with incorrectly configured applications on one's SMSC (for example, a single session with a window size greater than 100). In practice it does not have any impact on delivery throughput. The EMI UCP documentation does not specify a default alphabet for alphanumeric messages after
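The checksum rule described above (sum every byte between STX and ETX, excluding the two checksum characters themselves, and keep the 8 least significant bits) can be sketched as follows; it reproduces the CE and F3 trailers of the example exchange:

```python
def ucp_checksum(packet_body: str) -> str:
    """Checksum of an EMI/UCP packet: sum of all bytes between STX and ETX
    (excluding the two checksum characters), keeping only the 8 least
    significant bits, rendered as two uppercase hex digits."""
    return format(sum(packet_body.encode("ascii")) & 0xFF, "02X")

# The two packets from the example exchange, without STX/ETX and checksum:
print(ucp_checksum("01/00045/O/30/66677789///1//////68656C6C6F/"))   # CE
print(ucp_checksum("01/00041/R/30/A//66677789:180594141236/"))       # F3
```

Note that the trailing slash before the checksum field is itself included in the sum.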
https://en.wikipedia.org/wiki/Ureaplasma%20gallorale
Ureaplasma gallorale is a species of Ureaplasma, a genus of bacteria belonging to the family Mycoplasmataceae. It has been isolated from chickens and barnyard fowl. The 16S rRNA gene sequence of its type strain has the accession number U62937. It is normally a commensal of its host organism, but it has the ability to over-colonize and cause infection in the presence of virulence factors (H2O2, antigen proteins, etc.). These bacteria have relatively small genomes and exploit their host organism's natural processes to further their own growth and survival. Nutrients required by Ureaplasma species to sustain metabolism are taken directly from the host. They proliferate in environments with a pH of 6.0-6.5 and a temperature of 35-37 °C. These conditions are common to most biological environments, which is why Ureaplasma species regularly cause infection. These infections can be found in the genital and respiratory tracts of avian species (chickens and turkeys). Ureaplasma gallorale infections cannot always be controlled by the host because of the mechanisms the bacteria have evolved. The host releases IgA molecules against the bacterial cells to counter infection, but Ureaplasmas can secrete an enzyme known as IgAse (an IgA protease) that destroys IgA, rendering this defense inactive and leaving the host susceptible to further health concerns. These infections, known as ureaplasmosis, have further ramifications for barnyard fowl, such as low egg production, weight loss, reduced feed conversion efficiency, and even death. These health issues are a serious concern in maintaining adequate production in the agricultural industry. Classification All species of Ureaplasma appear as gram-negative bacteria that utilize the hydrolysis of urea to produce energy for growth. The products of urea hydrolysis are carbon dioxide and ammonia. This reaction is catalyzed by the enzyme urease, which Ureaplasma species secrete in large amounts. Unlike most m
https://en.wikipedia.org/wiki/Deep%20dorsal%20vein%20of%20clitoris
The deep dorsal vein of clitoris is a vein which drains to the vesical plexus.
https://en.wikipedia.org/wiki/Alojz%20Kodre
Alojzij Franc 'Alojz' Kodre (born 22 February 1944 in Villach, Austria) is a Slovenian physicist and translator. Kodre was a professor at the Faculty of Mathematics and Physics in Ljubljana, where he lectured on Mathematical Physics and Model Analysis. In Mathematical Physics he succeeded Ivan Kuščer, who was the first lecturer of this subject at the University of Ljubljana. Kodre researches atomic physics (inner shells), low-energy spectroscopy and excitation phenomena caused by synchrotron light. After his retirement, the University conferred on him the title of professor emeritus. He has translated a number of English and American science fiction works into Slovene: Douglas Adams: The Hitchhiker's Guide to the Galaxy (1988), The Restaurant at the End of the Universe (1988), Life, the Universe and Everything (1989), So Long, and Thanks for All the Fish (2000), Mostly Harmless (1993), The Salmon of Doubt: Hitchhiking the Galaxy One Last Time (2002); Ray Bradbury: The Martian Chronicles (1980); Robert Anson Heinlein: The Door into Summer (1976); Fred Hoyle: Fifth Planet (1972); and others. He is also known from a song by the singer-songwriter Marko Brecelj, Alojz valček (Alojz Waltz), in which Brecelj sang about a "master of mafia" (i.e., of mathematical physics). In October 2022, Alojz Kodre was awarded the Blinc Award (named after Robert Blinc) by the Jožef Stefan Institute for lifetime achievement. Selected works In collaboration with Ivan Kuščer, Matematika v fiziki in tehniki [Mathematics in Physics and Engineering], Matematika – fizika : zbirka univerzitetnih učbenikov in monografij [Mathematics – Physics: Collection of University Textbooks and Monographs], 36, Ljubljana, DMFA – publishing, 1994, 2006, ISBN 961-212-033-1
https://en.wikipedia.org/wiki/Sterile%20male%20plant
Sterile male plants are plants that are incapable of producing pollen. This is sometimes attributed to mutations in the mitochondrial DNA that affect the tapetum cells in the anthers, which are responsible for nursing the developing pollen. The mutations cause the breakdown of the mitochondria in these specific cells, resulting in cell death, so that pollen production is interrupted. These observations have led to transgenic sterile male plants being made in order to create hybrid seeds, by inserting transgenes that are specifically toxic to tapetum cells.
https://en.wikipedia.org/wiki/Technology%20aware%20design
Technology Aware Design (TAD) is a research program that started in 2001 at IMEC (an international research & development organization), located at Leuven, in Belgium. It anticipates the end of the traditional "happy scaling" paradigm, in which CMOS technology and CMOS design evolved on formally separate tracks, the interface between the two being standard cells or SPICE compact models (see transistor models for circuit design). Today, both sides (design and technology) are confronted with the need to understand the other in order to overcome new scaling-induced issues. The TAD program pursues analysis of, and solutions for, these scaling-induced problems.
https://en.wikipedia.org/wiki/Turbomachinery
Turbomachinery, in mechanical engineering, describes machines that transfer energy between a rotor and a fluid, including both turbines and compressors. While a turbine transfers energy from a fluid to a rotor, a compressor transfers energy from a rotor to a fluid. These two types of machines are governed by the same basic relationships, including Newton's second law of motion and Euler's pump and turbine equation for compressible fluids. Centrifugal pumps are also turbomachines that transfer energy from a rotor to a fluid, usually a liquid, while turbines and compressors usually work with a gas. History The first turbomachines could be identified as water wheels, which appeared between the 3rd and 1st centuries BCE in the Mediterranean region. These were used throughout the medieval period and began the first Industrial Revolution. When steam power, the first power source driven by the combustion of a fuel rather than by renewable natural power sources, came into use, it did so in the form of reciprocating engines. Primitive turbines and conceptual designs for them, such as the smoke jack, appeared intermittently, but the temperatures and pressures required for a practically efficient turbine exceeded the manufacturing technology of the time. The first patent for a gas turbine was filed by John Barber in 1791. Practical hydroelectric water turbines and steam turbines did not appear until the 1880s. Gas turbines appeared in the 1930s. The first impulse-type turbine was created by Carl Gustaf de Laval in 1883. This was closely followed by the first practical reaction-type turbine in 1884, built by Charles Parsons. Parsons' first design was a multi-stage axial-flow unit, which George Westinghouse acquired and began manufacturing in 1895, while General Electric acquired de Laval's designs in 1897. Since then, development has advanced rapidly from Parsons' early design, producing 0.746 kW, to modern nuclear steam turbines producing upwards of 1500 MW. Furthermore, steam turbines ac
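Euler's pump and turbine equation mentioned above can be illustrated with a short sketch: the specific work exchanged between rotor and fluid is w = u2*c_u2 - u1*c_u1, positive when the rotor does work on the fluid (pump or compressor) and negative when the fluid does work on the rotor (turbine). The velocities below are hypothetical:

```python
def euler_specific_work(u1: float, cu1: float, u2: float, cu2: float) -> float:
    """Euler's turbomachine equation: specific work w = u2*c_u2 - u1*c_u1 (J/kg).

    u  = blade (rotor) speed at inlet (1) and outlet (2), m/s
    cu = tangential (swirl) component of the absolute fluid velocity, m/s
    Positive w: rotor does work on the fluid; negative w: fluid drives the rotor.
    """
    return u2 * cu2 - u1 * cu1

# Hypothetical centrifugal pump stage: swirl-free inlet, outlet u = 30 m/s, c_u = 20 m/s
w = euler_specific_work(u1=15.0, cu1=0.0, u2=30.0, cu2=20.0)
print(w)  # 600.0 J/kg transferred from rotor to fluid
```

Reversing inlet and outlet states flips the sign, which is exactly the pump/turbine duality stated in the opening paragraph.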
https://en.wikipedia.org/wiki/Coded%20wire%20tag
A coded wire tag (CWT) is an animal tagging device, most often used for identifying batches of fish. It consists of a length of magnetized stainless steel wire 0.25 mm in diameter and typically 1.1 mm long. The tag is marked with rows of numbers denoting specific batch or individual codes. The tag is usually injected into the snout or cheek of a fish so that it may be tracked for research or fisheries management. Fish, crustaceans, insects, gastropods, and many other animals have been successfully tagged with coded wire tags. The coded wire tag program in the Pacific Northwest has been described as the largest animal tagging program in history, with over 1 billion salmon tagged. Data retrieval The CWT is not visible once inside the fish; its presence is detected at close range by using a handheld wand or tunnel-type detector that can sense the magnetized metal. A number code unique to either a group of fish or an individual fish is etched into the surface of the CWT, and to read the code the fish is killed first so that the tag can be removed and inspected under a microscope. Upon insertion, information about the fish such as hatching date, release date, location, species, sex, and length is recorded along with the corresponding tag code in a database, so that codes on recovered tags can be matched to information within that database. Coded wire tags have been used to research fish species from over 40 different families. History Coded wire tags were first introduced in the 1960s as an alternative to fin clipping. The first CWTs used colored stripes and allowed about 5000 different color combinations to uniquely identify groups of tags. The next development was binary codes marked on tags by electrical discharge machining in 1971. Binary codes allowed 250,000 unique code combinations. The sequential coded wire tag was introduced in 1985, which allowed tags with sequentially increasing numbers, or codes, to be cut from the same spool of wire, so that individu
https://en.wikipedia.org/wiki/Genomic%20DNA
Genomic deoxyribonucleic acid (abbreviated as gDNA) is chromosomal DNA, in contrast to extra-chromosomal DNAs like plasmids. Most organisms have the same genomic DNA in every cell; however, only certain genes are active in each cell, allowing for cell function and differentiation within the body. The genome of an organism (encoded by the genomic DNA) is the (biological) information of heredity which is passed from one generation of organism to the next. That genome is transcribed to produce various RNAs, which are necessary for the function of the organism. Precursor mRNA (pre-mRNA) is transcribed by RNA polymerase II in the nucleus. The pre-mRNA is then processed by splicing to remove introns, leaving the exons in the mature messenger RNA (mRNA). Additional processing includes the addition of a 5' cap and a poly(A) tail to the pre-mRNA. The mature mRNA may then be transported to the cytosol and translated by the ribosome into a protein. Other types of RNA include ribosomal RNA (rRNA) and transfer RNA (tRNA). These types are transcribed by RNA polymerase I and RNA polymerase III, respectively, and are essential for protein synthesis. However, 5S rRNA is the only rRNA that is transcribed by RNA polymerase III.
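The processing steps described above (transcription, splicing, 5' capping, polyadenylation) can be illustrated with a deliberately simplified toy sketch; the sequence, intron coordinates, and tail length are invented, and real transcription and splicing are far more involved:

```python
def transcribe(coding_strand_dna: str) -> str:
    """Toy transcription: the pre-mRNA has the coding-strand sequence with T -> U."""
    return coding_strand_dna.replace("T", "U")

def splice(pre_mrna: str, introns: list[tuple[int, int]]) -> str:
    """Remove intron spans (half-open start/end indices) and join the exons."""
    exons, pos = [], 0
    for start, end in sorted(introns):
        exons.append(pre_mrna[pos:start])
        pos = end
    exons.append(pre_mrna[pos:])
    return "".join(exons)

def mature_mrna(pre_mrna: str, introns: list[tuple[int, int]]) -> str:
    """Toy processing: splice, then add a 5' cap marker and a short poly(A) tail."""
    return "m7G-" + splice(pre_mrna, introns) + "A" * 10

pre = transcribe("ATGGCGTTTAAACCCGGG")        # invented 18-nt sequence
print(mature_mrna(pre, introns=[(6, 12)]))    # exons 0-6 and 12-18 joined
```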
https://en.wikipedia.org/wiki/Bogoliubov%20inner%20product
The Bogoliubov inner product (also known as the Duhamel two-point function, Bogolyubov inner product, Bogoliubov scalar product, or Kubo–Mori–Bogoliubov inner product) is a special inner product in the space of operators. The Bogoliubov inner product appears in quantum statistical mechanics and is named after theoretical physicist Nikolay Bogoliubov. Definition Let $A$ be a self-adjoint operator. The Bogoliubov inner product of any two operators $X$ and $Y$ is defined as $\langle X,Y\rangle_A = \int_0^1 \operatorname{Tr}\left[e^{sA} X^\dagger e^{(1-s)A} Y\right] ds$. The Bogoliubov inner product satisfies all the axioms of the inner product: it is sesquilinear, positive semidefinite (i.e., $\langle X,X\rangle_A \ge 0$), and satisfies the symmetry property $\langle X,Y\rangle_A = \overline{\langle Y,X\rangle_A}$, where $\overline{\langle Y,X\rangle_A}$ is the complex conjugate of $\langle Y,X\rangle_A$. In applications to quantum statistical mechanics, the operator $A$ has the form $A = -\beta H$, where $H$ is the Hamiltonian of the quantum system and $\beta$ is the inverse temperature. With these notations, the Bogoliubov inner product takes the form $\langle X,Y\rangle_{-\beta H} = \operatorname{Tr}\left[e^{-\beta H}\right] \int_0^1 \left\langle e^{s\beta H} X^\dagger e^{-s\beta H} Y \right\rangle ds$, where $\langle\,\cdot\,\rangle = \operatorname{Tr}\left[e^{-\beta H}\,\cdot\,\right] / \operatorname{Tr}\left[e^{-\beta H}\right]$ denotes the thermal average with respect to the Hamiltonian $H$ and inverse temperature $\beta$. In quantum statistical mechanics, the Bogoliubov inner product appears as the second order term in the expansion of the statistical sum: $\operatorname{Tr}\left[e^{A+tB}\right] = \operatorname{Tr}\left[e^{A}\right] + t\operatorname{Tr}\left[e^{A}B\right] + \frac{t^2}{2}\langle B^\dagger, B\rangle_A + O(t^3)$.
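For finite-dimensional (matrix) operators, the integral can be evaluated numerically, which makes it easy to check the stated properties. The sketch below assumes the standard definition <X,Y>_A = ∫₀¹ Tr(e^{sA} X† e^{(1-s)A} Y) ds with a Hermitian matrix A; the 2x2 matrices and the quadrature grid size are arbitrary illustrative choices:

```python
import numpy as np

def bogoliubov_inner(a, x, y, num=2001):
    """Numerically evaluate <X,Y>_A = int_0^1 Tr(e^{sA} X^dag e^{(1-s)A} Y) ds
    for a Hermitian matrix A, using an eigendecomposition of A for the matrix
    exponentials and the trapezoidal rule on [0, 1]."""
    evals, vecs = np.linalg.eigh(a)
    def exp_sa(s):  # e^{sA} = V diag(e^{s*lambda}) V^dag
        return (vecs * np.exp(s * evals)) @ vecs.conj().T
    s_grid = np.linspace(0.0, 1.0, num)
    vals = np.array([np.trace(exp_sa(s) @ x.conj().T @ exp_sa(1.0 - s) @ y)
                     for s in s_grid])
    ds = s_grid[1] - s_grid[0]
    return (0.5 * vals[0] + vals[1:-1].sum() + 0.5 * vals[-1]) * ds

# Toy 2x2 Hermitian A (think A = -beta*H) and arbitrary operators X, Y
A = np.array([[0.0, 0.3], [0.3, -0.5]])
X = np.array([[1.0, 2.0], [0.0, 1.0j]])
Y = np.array([[0.5, -1.0j], [2.0, 0.0]])

xx = bogoliubov_inner(A, X, X)
print(xx.real >= 0 and abs(xx.imag) < 1e-10)  # positive semidefiniteness
print(abs(bogoliubov_inner(A, X, Y) - np.conj(bogoliubov_inner(A, Y, X))) < 1e-8)  # conjugate symmetry
```

Positivity follows because the integrand at each s equals Tr(B†B) with B = e^{(1-s)A/2} X e^{sA/2}, so every quadrature sample is a nonnegative real number.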
https://en.wikipedia.org/wiki/Fhourstones
In computer science, Fhourstones is an integer benchmark that efficiently solves positions in the game of Connect-4. It was written by John Tromp between 1996 and 2008, and is incorporated into the Phoronix Test Suite. The measurements are reported as the number of game positions searched per second. Available in both ANSI C and Java, it is quite portable and compact (under 500 lines of source), and uses 50 MB of memory. The benchmark involves (after warming up on three easier positions) solving the entire game, which takes about ten minutes on contemporary PCs, scoring between 1000 and 12,000 kpos/sec. It has been described as more realistic than some other benchmarks. Fhourstones was named as a pun on Dhrystone (itself a pun on Whetstone), as "dhry" sounds the same as "drei", German for "three": fhourstones is an increment on dhrystones.
https://en.wikipedia.org/wiki/Digital%20fashion
Digital fashion is the visual representation of clothing built using computer technologies and 3D software. The industry is on the rise owing to ethical awareness and the use of digital fashion technologies such as artificial intelligence to create products with complex social and technical software. Digital fashion is also the interplay between digital technology and couture. Human AI is an intersection of technology and human representation, in which human value is emphasized and enhanced by technology and new possibilities for design are discovered. Information and communication technologies (ICTs) have been deeply integrated both into the fashion industry and into the experience of clients and prospects. This interplay happens at three main levels: ICTs are used to design and produce fashion products, while the organization of the industry itself leverages digital technologies; ICTs impact marketing, distribution, and sales; and ICTs are extensively used in communication activities with all relevant stakeholders, contributing to the co-creation of the fashion world. The fashion industry in general has paved the way for digital fashion through technologies such as virtual dressing rooms and the gamification of the fashion industry. Digital fashion also appears on many online fashion retail websites, including common shopping sites. This evolution in the fashion industry has called for more education on, and research into, digital fashion, which are also discussed in this article. Design, production, and organization Among the many applications available to fashion designers for fusing creativity with digital avenues, digital textile printing can be mentioned here. Digital textile printing Digital textile printing has brought together the worlds of fashion, technology, art, chemistry, and printing to produce a new process for printing textiles on clothing. Digital printing is a process in which prin
https://en.wikipedia.org/wiki/Oliver%20Wolcott%20Jr.
Oliver Wolcott Jr. (January 11, 1760 – June 1, 1833) was an American politician and judge. He was the second United States Secretary of the Treasury, a judge of the United States Circuit Court for the Second Circuit, and the 24th Governor of Connecticut. He began his adult life working in Connecticut, then served in the federal government in the Department of the Treasury, before returning to Connecticut, where he spent the rest of his life. Throughout his time in politics, Wolcott's political views shifted from Federalist to Toleration and finally to Jacksonian. He was the son of Oliver Wolcott Sr. and a member of the Griswold–Wolcott family. Early life Born on January 11, 1760, in Litchfield, Connecticut Colony, British America, Wolcott served in the Continental Army from 1777 to 1779, during the American Revolutionary War, then graduated from Yale University in 1778, where he was a member of Brothers in Unity, and read law in 1781. Oliver Wolcott Jr. and the Wolcott family were part of the Standing Order in Connecticut. This was a small group of families that formed the elite of Connecticut society, wielding political, religious, and social power. John Adams compared them to the aristocracy of Connecticut. Wolcott Jr. developed his own beliefs, which differed from those of the Standing Order and were more inclusive of the general population of Connecticut. Career Early career He was clerk of the Connecticut Committee on Pay-Table from 1781 to 1782. He was a member of the Connecticut Committee on Pay-Table from 1782 to 1784. This group was responsible for accounting for Connecticut's military expenses during the American Revolution. Wolcott was also commissioner to settle the claims of Connecticut against the United States from 1784 to 1788. He was Comptroller of Public Accounts for Connecticut from 1788 to 1789. He was Auditor for the United States Department of the Treasury from 1789 to 1791. He was Comptroller for the United State
https://en.wikipedia.org/wiki/International%20Swift%20Conference
International Swift Conferences are biennial meetings of ornithologists that focus on birds of the swift family. The first two conventions focused exclusively on the common swift (Apus apus), whereas the following ones broadened the scope to additional species, such as the pallid swift (Apus pallidus) and the Alpine swift (Tachymarptis melba), presented at the Cambridge 2014 conference. Conferences 2010 Berlin Common Swift Seminars The first conference was held between 8 and 11 April 2010 in Berlin, Germany. 34 participants attended the conference, hosted by the Evangelical School of Neukölln. The presentations covered scientific studies of the common swift, amateur interaction with the birds, and legal and conservation aspects of protecting swifts. It was sponsored by the hosting school and by APUSlife, the Commonswift Worldwide organization's online virtual magazine. 2012 Berlin Common Swift Seminars The second conference was also held in Berlin, between 10 and 12 April 2012, again at the Evangelical School of Neukölln. It was attended by 78 participants from 20 countries, including Belgium, China, Czech Republic, England, Germany, Guernsey, Indonesia, Israel, Italy, Netherlands, Northern Ireland, Poland, Romania, Russia, Scotland, Slovakia, Spain, Sweden, Switzerland and Turkey. 2014 Cambridge International Swift Conference Held between 8 and 10 April 2014 at the Parkside Community College, Cambridge. As mentioned above, this conference was the first in the series to cover species other than the common swift. 2016 Szczecin International Swift Conference Held between 7 and 10 April 2016 at the Hotel Nord in Szczecin, Poland, with 84 attendees from 23 countries: Australia, Belgium, Cyprus, Czech Republic, England, France, Germany, Greece, Guernsey, Ireland, Israel, Italy, Netherlands, Northern Ireland, Norway, Poland, Russia, Serbia, Spain, Sweden, Switzerland, Turkey and Uzbekistan. Among the presentations was Michael Tarburton's, about the Aust
https://en.wikipedia.org/wiki/Hyperinteger
In nonstandard analysis, a hyperinteger n is a hyperreal number that is equal to its own integer part. A hyperinteger may be either finite or infinite. A finite hyperinteger is an ordinary integer. An example of an infinite hyperinteger is given by the class of the sequence (1, 2, 3, ...) in the ultrapower construction of the hyperreals. Discussion The standard integer part function ⌊x⌋ is defined for all real x and equals the greatest integer not exceeding x. By the transfer principle of nonstandard analysis, there exists a natural extension *⌊x⌋ defined for all hyperreal x, and we say that x is a hyperinteger if x = *⌊x⌋. Thus the hyperintegers are the image of the integer part function on the hyperreals. Internal sets The set *ℤ of all hyperintegers is an internal subset of the hyperreal line *ℝ. The set of all finite hyperintegers (i.e. ℤ itself) is not an internal subset. Elements of the complement *ℤ ∖ ℤ are called, depending on the author, nonstandard, unlimited, or infinite hyperintegers. The reciprocal of an infinite hyperinteger is always an infinitesimal. Nonnegative hyperintegers are sometimes called hypernatural numbers. Similar remarks apply to the sets ℕ and *ℕ. Note that the latter gives a non-standard model of arithmetic in the sense of Skolem.
https://en.wikipedia.org/wiki/Novolak
Novolaks (sometimes: novolacs) are low molecular weight polymers derived from phenols and formaldehyde. They are related to Bakelite, which is more highly crosslinked. The term comes from Swedish "lack" for lacquer and Latin "novo" for new, since these materials were envisioned to replace natural lacquers such as copal resin. Typically novolaks are prepared by the condensation of phenol or a mixture of p- and m-cresol with formaldehyde (as formalin). The reaction is acid catalyzed. Oxalic acid is often used because it can be subsequently removed by thermal decomposition. Novolaks have a degree of polymerization of approximately 20-40. The branching density, determined by the processing conditions, m- vs p-cresol ratio, as well as CH2O/cresol ratio is typically around 15%. Novolaks are especially important in microelectronics where they are used as photoresist materials. They are also used as tackifiers in rubber. See also Epoxy
https://en.wikipedia.org/wiki/Sulfophobococcus
In taxonomy, Sulfophobococcus is a genus of the Desulfurococcaceae. See also List of Archaea genera
https://en.wikipedia.org/wiki/Menger%20sponge
In mathematics, the Menger sponge (also known as the Menger cube, Menger universal curve, Sierpinski cube, or Sierpinski sponge) is a fractal curve. It is a three-dimensional generalization of the one-dimensional Cantor set and two-dimensional Sierpinski carpet. It was first described by Karl Menger in 1926, in his studies of the concept of topological dimension. Construction The construction of a Menger sponge can be described as follows: Begin with a cube. Divide every face of the cube into nine squares, like a Rubik's Cube. This sub-divides the cube into 27 smaller cubes. Remove the smaller cube in the middle of each face, and remove the smaller cube in the center of the larger cube, leaving 20 smaller cubes. This is a level-1 Menger sponge (resembling a void cube). Repeat steps two and three for each of the remaining smaller cubes, and continue to iterate ad infinitum. The second iteration gives a level-2 sponge, the third iteration gives a level-3 sponge, and so on. The Menger sponge itself is the limit of this process after an infinite number of iterations. Properties The nth stage of the Menger sponge, M_n, is made up of 20^n smaller cubes, each with a side length of (1/3)^n. The total volume of M_n is thus (20/27)^n. The total surface area of M_n is given by the expression 2(20/9)^n + 4(8/9)^n. Therefore, the construction's volume approaches zero while its surface area increases without bound. Yet any chosen surface in the construction will be thoroughly punctured as the construction continues, so that the limit is neither a solid nor a surface; it has a topological dimension of 1 and is accordingly identified as a curve. Each face of the construction becomes a Sierpinski carpet, and the intersection of the sponge with any diagonal of the cube or any midline of the faces is a Cantor set. The cross-section of the sponge through its centroid and perpendicular to a space diagonal is a regular hexagon punctured with hexagrams arranged in six-fold symmetry. The number of these hexagrams, in d
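The counts and measures of the level-n approximations can be tabulated exactly with rational arithmetic. The sketch below assumes the standard construction (20^n cubes of side (1/3)^n, volume (20/27)^n) and a commonly quoted closed form for the surface area, 2(20/9)^n + 4(8/9)^n, which checks out for the unit cube (n = 0, area 6) and the level-1 sponge (area 8):

```python
from fractions import Fraction

def menger_level(n: int):
    """Exact counts and measures for the level-n Menger sponge approximation M_n.

    Returns (number of sub-cubes, side length of each, total volume, surface area):
      cubes:   20**n
      side:    (1/3)**n
      volume:  (20/27)**n            -> tends to 0 as n grows
      surface: 2*(20/9)**n + 4*(8/9)**n  -> grows without bound
    """
    side = Fraction(1, 3) ** n
    volume = Fraction(20, 27) ** n
    surface = 2 * Fraction(20, 9) ** n + 4 * Fraction(8, 9) ** n
    return 20 ** n, side, volume, surface

cubes, side, vol, area = menger_level(1)
print(cubes, side, vol, area)  # 20 1/3 20/27 8
```

Evaluating a few levels makes the shrinking-volume/growing-area behavior concrete: the volume is multiplied by 20/27 at each step while the dominant surface term is multiplied by 20/9.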
https://en.wikipedia.org/wiki/Wetzel%27s%20problem
In mathematics, Wetzel's problem concerns bounds on the cardinality of a set of analytic functions that, for each of their arguments, take on few distinct values. It is named after John Wetzel, a mathematician at the University of Illinois at Urbana–Champaign. Let F be a family of distinct analytic functions on a given domain with the property that, for each x in the domain, the functions in F map x to a countable set of values. In his doctoral dissertation, Wetzel asked whether this assumption implies that F is necessarily itself countable. Paul Erdős in turn learned about the problem at the University of Michigan, likely via Lee Albert Rubel. In his paper on the problem, Erdős credited an anonymous mathematician with the observation that, when each x is mapped to a finite set of values, F is necessarily finite. However, as Erdős showed, the situation for countable sets is more complicated: the answer to Wetzel's question is yes if and only if the continuum hypothesis is false. That is, the existence of an uncountable set of functions that maps each argument x to a countable set of values is equivalent to the nonexistence of an uncountable set of real numbers whose cardinality is less than the cardinality of the set of all real numbers. One direction of this equivalence was also proven independently, but not published, by another UIUC mathematician, Robert Dan Dixon. It follows from the independence of the continuum hypothesis, proved in 1963 by Paul Cohen, that the answer to Wetzel's problem is independent of ZFC set theory. Erdős' proof is so short and elegant that it is considered to be one of the Proofs from THE BOOK. In the case that the continuum hypothesis is false, Erdős asked whether there is a family of analytic functions, with the cardinality of the continuum, such that each complex number has a smaller-than-continuum set of images. As Ashutosh Kumar and Saharon Shelah later proved, both positive and negative answers to this question are consistent.
https://en.wikipedia.org/wiki/Bending
In applied mechanics, bending (also known as flexure) characterizes the behavior of a slender structural element subjected to an external load applied perpendicularly to a longitudinal axis of the element. The structural element is assumed to be such that at least one of its dimensions is a small fraction, typically 1/10 or less, of the other two. When the length is considerably longer than the width and the thickness, the element is called a beam. A closet rod sagging under the weight of clothes on clothes hangers is an example of a beam experiencing bending. On the other hand, a shell is a structure of any geometric form where the length and the width are of the same order of magnitude but the thickness of the structure (known as the 'wall') is considerably smaller. A large-diameter but thin-walled short tube supported at its ends and loaded laterally is an example of a shell experiencing bending. In the absence of a qualifier, the term bending is ambiguous because bending can occur locally in all objects. Therefore, to make the usage of the term more precise, engineers refer to a specific object, such as the bending of rods, the bending of beams, the bending of plates, the bending of shells, and so on. Quasi-static bending of beams A beam deforms and stresses develop inside it when a transverse load is applied on it. In the quasi-static case, the amount of bending deflection and the stresses that develop are assumed not to change over time. In a horizontal beam supported at the ends and loaded downwards in the middle, the material at the upper side of the beam is compressed while the material at the underside is stretched. There are two forms of internal stresses caused by lateral loads: shear stress parallel to the lateral loading, plus complementary shear stress on planes perpendicular to the load direction; direct compressive stress in the upper region of the beam, applicable mostly to concrete elements, and direct tensile stress, ap
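For the beam case described above, the classical Euler-Bernoulli result for the maximum bending stress in a simply supported rectangular beam under a central point load can be sketched as follows (a textbook formula; the dimensions in the example are hypothetical):

```python
# Maximum bending stress in a simply supported rectangular beam
# with a central point load P (Euler-Bernoulli beam theory):
# sigma_max = M_max * c / I, where M_max = P*L/4 for a central load.

def max_bending_stress(P, L, b, h):
    """P: load [N], L: span [m], b: width [m], h: depth [m]."""
    M_max = P * L / 4.0      # peak bending moment at midspan
    I = b * h**3 / 12.0      # second moment of area of a rectangle
    c = h / 2.0              # distance from neutral axis to outer fibre
    return M_max * c / I     # stress in Pa

# Example: 1 kN at midspan of a 2 m span, 50 mm x 100 mm cross-section
print(max_bending_stress(1000, 2.0, 0.05, 0.10))  # ~6.0e6 Pa (6 MPa)
```

The outer fibre at midspan carries the peak stress; the top face is in compression and the bottom in tension, matching the description above.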
https://en.wikipedia.org/wiki/List%20of%20U.S.%20state%20and%20territory%20trees
This is a list of U.S. state, federal district, and territory trees, including the official trees of the states, the federal district, and the territories. See also List of U.S. state, district, and territorial insignia National Grove of State Trees National Register of Champion Trees Notes
https://en.wikipedia.org/wiki/Sacrococcygeal%20symphysis
The sacrococcygeal symphysis (sacrococcygeal articulation, articulation of the sacrum and coccyx) is an amphiarthrodial joint formed between the oval surface at the apex of the sacrum and the base of the coccyx. It is a slightly movable joint which is frequently, partially or completely, obliterated in old age, and is homologous with the joints between the bodies of the vertebrae. Structure Articular disc The sacrococcygeal disc or interosseous ligament is similar to the intervertebral discs but thinner, thicker in front and behind than at the sides, and of a firmer texture. The articular surfaces are elliptical, with their longer axes transverse. The surface on the sacrum is convex and that on the coccyx concave. Occasionally the coccyx is freely movable on the sacrum, most notably during pregnancy; in such cases a synovial membrane is present. Ligaments The joint is strengthened by a series of ligaments: The ventral or anterior sacrococcygeal ligament is an extension of the anterior longitudinal ligament (ALL), which runs down the spine along the anterior sides of the vertebral bodies. It consists of a few irregular fibers that attach to the anterior sides of the sacrum and coccyx and blend with the periosteum. The dorsal or posterior sacrococcygeal ligament has a deep and a superficial part: The deep dorsal ligament is a flat band which corresponds to the posterior longitudinal ligament (PLL), which runs down inside the vertebral canal on the posterior surfaces of the vertebral bodies. From the posterior side of the fifth sacral body inside the sacral canal, the deep dorsal ligament stretches to the posterior side of the coccyx, where it attaches deep to the superficial dorsal ligament. The superficial dorsal ligament corresponds to the ligamenta flava and closes the posterior aspect of the distal end of the vertebral canal. It stretches from the median sacral crest and the free margin of the sacral hiatus to the dorsal surface of the coccyx. The lateral sacro
https://en.wikipedia.org/wiki/ReactOS
ReactOS is a free and open-source operating system for amd64/i686 personal computers intended to be binary-compatible with computer programs and device drivers developed for Windows Server 2003 and later versions of Microsoft Windows. ReactOS has been noted as a potential open-source drop-in replacement for Windows and for its information on undocumented Windows APIs. ReactOS has been in development since 1996. It is still considered feature-incomplete alpha software, and is therefore recommended by the developers only for evaluation and testing purposes. However, many Windows applications work, such as Adobe Reader 9.3, GIMP 2.6, and LibreOffice 5.4. ReactOS is primarily written in C, with some elements, such as ReactOS File Explorer, written in C++. The project partially implements Windows API functionality and has been ported to the AMD64 processor architecture. ReactOS, as part of the FOSS ecosystem, re-uses and collaborates with many other FOSS projects, most notably the Wine project, which provides a Windows compatibility layer for Unix-like operating systems. History Early development Around 1996, a group of free and open-source software developers started a project called FreeWin95 to implement a clone of Windows 95. The project stalled in discussions of the design of the system. While FreeWin95 had started out with high expectations, there still had not been any builds released to the public by the end of 1997. As a result, the project members, led by then coordinator Jason Filby, joined together to revive the project. The revived project sought to duplicate the functionality of Windows NT. In creating the new project, a new name, ReactOS, was chosen. The project began development in February 1998 by creating the basis for a new NT kernel and basic drivers. The name ReactOS was coined during an IRC chat. While the term "OS" stood for operating system, the term "react" referred to the group's dissatisfaction with – and reaction to – Microsoft's
https://en.wikipedia.org/wiki/Chief%20Moccanooga
Chief Moccanooga was the athletic mascot for the University of Tennessee at Chattanooga until 1996, when the university abandoned the mascot as potentially offensive at the request of the Chattanooga InterTribal Association. Chief Moccanooga was replaced with a mockingbird, the state bird of Tennessee, and the nickname for Chattanooga athletics was changed from 'Moccasins' to simply 'Mocs'. Chattanooga's decision to remove Chief Moccanooga as mascot was similar to the actions of several other athletic franchises. In 1986, Chief Noc-A-Homa, the drum-thumping mascot of Major League Baseball's Atlanta Braves, was removed. The Cleveland Indians continued to employ a potentially offensive Native American mascot, Chief Wahoo, until 2018.
https://en.wikipedia.org/wiki/4MLinux
4MLinux is a lightweight Linux distribution made for both the 32-bit and 64-bit architectures. It is named "4MLinux" after the four main components of the OS: Maintenance (it can be used as a rescue live CD), Multimedia (built-in support for almost every multimedia format), Miniserver (a 64-bit server running the LAMP suite), and Mystery (a collection of classic Linux games). The distribution is developed in Poland and was first released in 2010. The distribution does not include a package manager, and uses JWM (Joe's Window Manager) as its default window manager. It also comes with Conky preinstalled. Because the distribution has no package manager but comes preinstalled with Wine (a compatibility layer for Windows applications), installing a program typically retrieves the Windows version rather than the Linux version. The distribution is geared towards people who prefer a lightweight distribution. There is a version of the distribution called the "4MLinux Game Edition" which natively provides 1990s games such as Doom and Hexen. The distribution comes in two different versions, 4MServer and 4MLinux. 4MLinux requires 128 MB of RAM when installed to an HDD, and 1024 MB of RAM when being used as a live CD/USB, whereas 4MServer requires 256 MB of RAM when installed to an HDD, and 2048 MB of RAM when being used as a live CD/USB. Resources External links 4MLinux official website Linux distributions Linux distributions without systemd Operating system distributions bootable from read-only media
https://en.wikipedia.org/wiki/U.S.%20Army%20Acquisition%20Support%20Center
The U.S. Army Acquisition Support Center (USAASC) is part of the Office of the Assistant Secretary of the Army for Acquisition, Logistics, and Technology (ASA(ALT)). USAASC is headquartered at Fort Belvoir, Va. Overview USAASC was established in its current form on October 1, 2002. USAASC was designated as a direct reporting unit (DRU) of the Assistant Secretary of the Army for Acquisition, Logistics, and Technology (ASA(ALT)) on October 16, 2006. Its core functions include: Providing career development support for the Army Acquisition Workforce and the United States Army Acquisition Corps, military and civilian acquisition leaders. Providing customer service and support to the Army program executive offices in the areas of human resources, resource management (manpower and budget), program structure, and acquisition information management. Advising the ASA(ALT) and others on acquisition issues. USAASC leadership Mr. Craig Spisak served as the director of the USAASC from June 2005, also serving as the Deputy Director for Acquisition Career Management, until his retirement from federal service in July 2021. He was succeeded by Mr. Ronald (Rob) Richardson as the USAASC Director and Director for Acquisition Career Management (DACM) in August 2021. Director, Acquisition Career Management (DACM) The DACM Office is the organization within the USAASC responsible for providing professional development opportunities for the Army Acquisition Workforce and establishing the procedures that train, educate, and develop members of the workforce. The Army DACM Office works directly with the Defense Acquisition University (DAU), the Assistant Secretary of Defense (Acquisition), and the Under Secretary of Defense (Acquisition, Technology and Logistics) to enable workforce initiatives and to serve as advocates for the Army Acquisition Workforce. The Defense Acquisition Workforce Improvement Act aimed to professionalize the defense acquisition workforce. T
https://en.wikipedia.org/wiki/Tower%20Mounted%20Amplifier
A Tower Mounted Amplifier (TMA), or Mast Head Amplifier (MHA), is a low-noise amplifier (LNA) mounted as close as practical to the antenna in mobile masts or base transceiver stations. A TMA reduces the base transceiver station noise figure (NF) and therefore improves its overall sensitivity; in other words, the mobile mast is able to receive weaker signals. The power to feed the amplifier at the top of the mast is usually a DC component carried on the same coaxial cable that feeds the antenna; otherwise an extra power cable has to be run to the TMA/MHA to supply it with power. Benefits in mobile communications In two-way communications systems, there are occasions when one link is weaker than the other; these are normally referred to as unbalanced links. This can be fixed by making the transmitter on that link stronger or the receiver more sensitive to weaker signals. Since the transmitter in a mobile phone cannot easily be modified to transmit stronger signals, TMAs are used in mobile networks to improve the sensitivity of the uplink at the mast. Improving the uplink translates into a combination of better coverage and the mobile transmitting at less power, which in turn implies a lower drain on its battery, and thus a longer battery charge. There are occasions when the cable between the antenna and the receiver is so lossy (too thin or too long) that the signal weakens too much on its way from the antenna to the receiver; in such cases it may be decided to install TMAs from the start to make the system viable. Even so, the TMA can only partially correct, or palliate, the link imbalance. Drawbacks/pitfalls If the received signal is not weak, installing a TMA will not deliver its intended benefit. If the received signal is strong enough, it may cause the TMA to generate its own interference, which is passed on to the receiver. In some mobile networks (e.g. IS-95 or WCDMA, also known as European 3G), it is not simple to detect and correct unbalanced links since the li
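The sensitivity benefit of a TMA can be illustrated with the standard Friis cascade formula for noise figure (a textbook calculation; the stage values in the example are hypothetical):

```python
import math

# Friis formula for the noise factor of a receiver cascade:
#   F_total = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ...
# where F and G are linear noise factors and gains (not dB).

def db_to_lin(x_db):
    return 10 ** (x_db / 10.0)

def cascade_noise_figure_db(stages):
    """stages: list of (noise_figure_dB, gain_dB), antenna side first."""
    f_total = db_to_lin(stages[0][0])
    g = db_to_lin(stages[0][1])
    for nf_db, gain_db in stages[1:]:
        f_total += (db_to_lin(nf_db) - 1.0) / g
        g *= db_to_lin(gain_db)
    return 10.0 * math.log10(f_total)

# Without a TMA: 3 dB feeder cable loss, then a 2 dB NF receiver.
no_tma = cascade_noise_figure_db([(3.0, -3.0), (2.0, 20.0)])
# With a TMA (1.5 dB NF, 12 dB gain) ahead of the same cable.
with_tma = cascade_noise_figure_db([(1.5, 12.0), (3.0, -3.0), (2.0, 20.0)])
print(no_tma, with_tma)  # cable loss adds directly without the TMA
```

The gain of the TMA masks the noise contribution of the lossy cable and the receiver, which is why mounting the LNA at the antenna rather than at the base of the mast lowers the system noise figure.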
https://en.wikipedia.org/wiki/Key%20derivation%20function
In cryptography, a key derivation function (KDF) is a cryptographic algorithm that derives one or more secret keys from a secret value such as a master key, a password, or a passphrase using a pseudorandom function (which typically uses a cryptographic hash function or block cipher). KDFs can be used to stretch keys into longer keys or to obtain keys of a required format, such as converting a group element that is the result of a Diffie–Hellman key exchange into a symmetric key for use with AES. Keyed cryptographic hash functions are popular examples of pseudorandom functions used for key derivation. History The first deliberately slow (key stretching) password-based key derivation function was called "crypt" (or "crypt(3)" after its man page), and was invented by Robert Morris in 1978. It would encrypt a constant (zero), using the first 8 characters of the user's password as the key, by performing 25 iterations of a modified DES encryption algorithm (in which a 12-bit number read from the real-time computer clock is used to perturb the calculations). The resulting 64-bit number is encoded as 11 printable characters and then stored in the Unix password file. While it was a great advance at the time, increases in processor speeds since the PDP-11 era have made brute-force attacks against crypt feasible, and advances in storage have rendered the 12-bit salt inadequate. The crypt function's design also limits the user password to 8 characters, which limits the keyspace and makes strong passphrases impossible. Although high throughput is a desirable property in general-purpose hash functions, the opposite is true in password security applications, in which defending against brute-force cracking is a primary concern. The growing use of massively parallel hardware such as GPUs, FPGAs, and even ASICs for brute-force cracking has made the selection of a suitable algorithm even more critical, because a good algorithm should not only enforce a certain amount of computation
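As a concrete modern counterpart to crypt, a password can be stretched with PBKDF2 from Python's standard library (the iteration count and key length here are illustrative choices, not a recommendation made by the text above):

```python
import hashlib
import os

# Derive a 32-byte symmetric key from a password with PBKDF2-HMAC-SHA256.
# A large iteration count makes each brute-force guess pay the same cost;
# a unique random salt defeats precomputed (rainbow-table) attacks.

password = b"correct horse battery staple"
salt = os.urandom(16)  # a fresh random salt per password
key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)

print(key.hex())  # 64 hex characters = a 256-bit key
```

The same password and salt always yield the same key, so only the salt (not the key) needs to be stored alongside the password verifier.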
https://en.wikipedia.org/wiki/List%20of%20Arab%20flags
Flags of Arab countries, territories, and organisations usually include the color green, which is a symbol of Islam as well as an emblem of purity, fertility and peace. Common colors in Arab flags are Pan-Arab colors (red, black, white and green); common symbols include stars, crescents and the Shahada. Arab national flags Old Arab national flags Arab organizations Other Arab territories See also Pan-Arab colors
https://en.wikipedia.org/wiki/Claire%20Karekezi
Claire Karekezi (born 1982) is a neurosurgeon at the Rwanda Military Hospital in Kigali, Rwanda. As the first woman neurosurgeon in Rwanda, and one of eight neurosurgeons serving a population of 13 million, Karekezi is an advocate for women in neurosurgery. She has become an inspiration for young people pursuing neurosurgery, particularly young women. Karekezi specializes in neuro-oncology and skull-base surgery, is the acting chairperson of the African Women in Neurosurgery (AWIN) Committee of the Continental Association of African Neurosurgical Societies (CAANS), and was elected a member of the national council of the Rwanda Medical and Dental Council (RMDC) and secretary of its bureau for 2022–2026. Early life Karekezi was born in Butare, Rwanda. Her father, Mr. Karekezi Sr., was a telecommunications engineer, and her mother, Mrs. Musine, was a high school teacher. Karekezi grew up in Kigali, the capital city of Rwanda, where she and her two older siblings received their early education. She enjoyed science as a child, and after sixth grade in elementary school she majored in mathematics and physics in high school. She met the admission criteria for medical school and studied general medicine in Butare at the University of Rwanda College of Medicine and Health Sciences, graduating as an MD in March 2009. During her time in Butare, Karekezi had her first exposure to neurosurgery, an experience that changed the course of her life. Education Karekezi's training began at the College of Medicine and Health Sciences, University of Rwanda (UR), where she graduated with honors in Medicine in 2009. In 2007, Karekezi participated in an International Federation of Medical Students' Associations (IFMSA) professional exchange at the Linköping Teaching Hospital in Sweden. There, she met professor Jan Hillman, who had a significant impact on her decision to pursue a career in neurosurgery. In February 2009, following a brief stay in Sweden, Karekezi co
https://en.wikipedia.org/wiki/Deep%20image%20compositing
Deep image compositing is a way of compositing and rendering digital images that emerged in the mid-2010s. In addition to the usual color and opacity channels a notion of spatial depth is created. This allows multiple samples in the depth of the image to make up the final resulting color. This technique produces high quality results and removes artifacts around edges that could not be dealt with otherwise. Deep data Deep data is encoded by advanced 3D renderers into an image that samples information about the path each rendered pixel takes along the z axis extending outward from the virtual camera through space, including the color and opacity of every non-opaque surface or volume it passes through along the way, as well as neighboring samples. It might be considered somewhat analogous to the way ray tracing generates simulated photon paths through such mediums; however, ray tracing and other traditional rendering techniques generally produce images that contain only three or four channels of color and opacity values per pixel, flattened into a two dimensional frame. Depth maps, on the other hand, contain z axis information encoded in a grayscale image. Each level of gray represents a different slice of the z space. The "thickness" of each slice is determined at time of render, allowing for more or less depth fidelity depending on how deep the scene is. Depth maps have been a boon to compositors for blending 3D renders with live action and practical elements. To be useful, the map must have high enough bit depth to encode separation between close-to-camera objects and objects near infinity. Most 3D software packages are now capable of generating 16-bit and 32-bit depth maps, providing up to 2 billion depth levels. Depth maps do not however include transparency information about non-opaque surfaces or volumes and as such, objects beyond and viewed through these semi- or fully-transparent objects will have no depth information of their own and may not get composited
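The flattening of a deep pixel described above can be sketched as follows (a minimal illustration of depth-sorted front-to-back "over" compositing; the sample layout is hypothetical and does not follow any production deep-image format):

```python
# One deep pixel = a list of samples (depth, (r, g, b), alpha),
# with colors premultiplied by alpha. Flattening sorts the samples
# by depth and composites them front-to-back with the "over" operator.

def flatten_deep_pixel(samples):
    out_rgb, out_a = [0.0, 0.0, 0.0], 0.0
    for depth, rgb, a in sorted(samples, key=lambda s: s[0]):
        w = 1.0 - out_a  # fraction of light the nearer samples let through
        out_rgb = [c + w * s for c, s in zip(out_rgb, rgb)]
        out_a += w * a
    return out_rgb, out_a

# A semi-transparent red pane (alpha 0.5) in front of an opaque green
# surface: the result blends both, regardless of sample storage order.
pixel = [(5.0, (0.5, 0.0, 0.0), 0.5), (10.0, (0.0, 1.0, 0.0), 1.0)]
print(flatten_deep_pixel(pixel))  # ([0.5, 0.5, 0.0], 1.0)
```

Because every sample keeps its own depth, new elements can be inserted at any depth and the pixel re-flattened, which is what lets deep compositing avoid the edge artifacts of flattened renders.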
https://en.wikipedia.org/wiki/Plasmonic%20nanolithography
Plasmonic nanolithography (also known as plasmonic lithography or plasmonic photolithography) is a nanolithographic process that utilizes surface plasmon excitations such as surface plasmon polaritons (SPPs) to fabricate nanoscale structures. SPPs, which are surface waves that propagate between planar dielectric-metal layers in the optical regime, can bypass the diffraction limit on the optical resolution that acts as a bottleneck for conventional photolithography. Theory Surface plasmon polaritons are surface electromagnetic waves that propagate along an interface between two media whose permittivities have opposite signs. They originate from the coupling of photons to plasma oscillations, quantized as plasmons. SPPs produce evanescent fields that decay perpendicularly to the interface along which propagation occurs. The dispersion relation for SPPs permits the excitation of wavelengths shorter than the free-space wavelength of the inbound light, additionally ensuring subwavelength field confinement. Nevertheless, the excitation of SPPs requires overcoming a momentum mismatch; prism and grating coupling methods are common. In plasmonic nanolithography processes, this is achieved through surface roughness and perforations. Methods Plasmonic contact lithography, a modification of evanescent near-field lithography, uses a metal photomask on which the SPPs are excited. As in common photolithographic processes, photoresist is exposed to SPPs that propagate from the mask. Photomasks with holes enable grating coupling of SPPs; the fields only propagate for nanometers. Srituravanich et al. have demonstrated the lithographic process experimentally with a 2D silver hole array mask; 90 nm hole arrays were produced at a 365 nm wavelength, beyond the diffraction limit. Zayats and Smolyaninov utilized a multi-layered metal film mask to enhance the subwavelength aperture; such structures can be realized by thin-film deposition methods. Bowtie apertures and nanogaps were also suggested as alte
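The dispersion relation referred to above has, in its standard textbook form for a single interface between a dielectric of permittivity ε_d and a metal of permittivity ε_m, the SPP wavenumber

```latex
k_{\mathrm{SPP}} \;=\; \frac{\omega}{c}\,
\sqrt{\frac{\varepsilon_d\,\varepsilon_m}{\varepsilon_d+\varepsilon_m}} .
```

When Re(ε_m) < −ε_d, the square root exceeds √ε_d, so k_SPP is larger than the wavenumber of light in the dielectric. This is both why the SPP wavelength is shorter than the free-space wavelength (enabling sub-diffraction patterning) and why extra in-plane momentum, from a prism, grating, surface roughness, or perforation, is needed to excite SPPs with incident light.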
https://en.wikipedia.org/wiki/Berth%20allocation%20problem
The berth allocation problem (also known as the berth scheduling problem) is an NP-complete problem in operations research concerning the allocation of berth space to vessels in container terminals. Vessels arrive over time and the terminal operator needs to assign them to berths in order to be served (loading and unloading containers) as soon as possible. Different factors affect the berth and time assignment of each vessel. Among the models found in the literature, four cases are most frequently observed: discrete vs. continuous berthing space, static vs. dynamic vessel arrivals, static vs. dynamic vessel handling times, and variable vessel arrival times. In the discrete problem, the quay is viewed as a finite set of berths. In the continuous problem, vessels can berth anywhere along the quay; the majority of research deals with the former case. In the static arrival problem all vessels are already at the port, whereas in the dynamic case only a portion of the vessels to be scheduled are present. The majority of the published research in berth scheduling considers the latter case. In the static handling time problem, vessel handling times are considered as input, whereas in the dynamic case they are decision variables. Finally, in the last case, the vessel arrival times are considered as variables and are optimized. Technical restrictions such as berthing draft and inter-vessel and end-berth clearance distances are further assumptions that have been adopted in some of the studies dealing with the berth allocation problem, bringing the problem formulation closer to real-world conditions. Introducing technical restrictions to existing berth allocation models is rather straightforward; it may increase the complexity of the problem but simplify the use of metaheuristics (by decreasing the feasible space). Some of the most notable objectives addressed in the literature are: Minimization of vessel total service times (waiting and handling times), Minimization of earl
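The discrete variant with total service time as the objective can be illustrated with a toy greedy heuristic (an illustrative sketch only; the greedy rule and names are hypothetical, and published work relies on exact formulations or metaheuristics rather than this rule):

```python
# Toy greedy heuristic for the discrete berth allocation problem:
# vessels are taken in order of arrival, and each is assigned to the
# berth that can start serving it earliest. Not optimal in general.

def greedy_berth_schedule(vessels, num_berths):
    """vessels: list of (arrival_time, handling_time).
    Returns (total_service_time, [(vessel_idx, berth, start_time), ...])."""
    berth_free = [0.0] * num_berths
    schedule, total_service = [], 0.0
    for i in sorted(range(len(vessels)), key=lambda j: vessels[j][0]):
        arrival, handling = vessels[i]
        b = min(range(num_berths), key=lambda j: max(berth_free[j], arrival))
        start = max(berth_free[b], arrival)
        berth_free[b] = start + handling
        total_service += (start - arrival) + handling  # waiting + handling
        schedule.append((i, b, start))
    return total_service, schedule

# Three vessels, two berths: the third vessel must wait for a free berth.
print(greedy_berth_schedule([(0, 4), (1, 3), (2, 2)], 2))
```

Even this tiny instance shows the core trade-off: vessel 3 waits until a berth frees up, and its waiting time counts against the total service time the operator wants to minimize.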
https://en.wikipedia.org/wiki/Pelvic%20cavity
The pelvic cavity is a body cavity that is bounded by the bones of the pelvis. Its oblique roof is the pelvic inlet (the superior opening of the pelvis). Its lower boundary is the pelvic floor. The pelvic cavity primarily contains the reproductive organs, urinary bladder, distal ureters, proximal urethra, terminal sigmoid colon, rectum, and anal canal. In females, the uterus, Fallopian tubes, ovaries and upper vagina occupy the area between the other viscera. The rectum is located at the back of the pelvis, in the curve of the sacrum and coccyx; the bladder is in front, behind the pubic symphysis. The pelvic cavity also contains major arteries, veins, muscles, and nerves. These structures coexist in a crowded space, and disorders of one pelvic component may impact upon another; for example, constipation may overload the rectum and compress the urinary bladder, or childbirth might damage the pudendal nerves and later lead to anal weakness. Structure The pelvis has an anteroinferior, a posterior, and two lateral pelvic walls; and an inferior pelvic wall, also called the pelvic floor. The parietal peritoneum is attached here and to the abdominal wall. Lesser pelvis The lesser pelvis (or "true pelvis") is the space enclosed by the pelvic girdle and below the pelvic brim: between the pelvic inlet and the pelvic floor. This cavity is a short, curved canal, deeper on its posterior than on its anterior wall. Some sources consider this region to be the entirety of the pelvic cavity. Other sources define the pelvic cavity as the larger space including the greater pelvis, just above the pelvic inlet. The lesser pelvis is bounded in front and below by the superior rami of the symphysis pubis; above and behind, by the sacrum and coccyx; and laterally, by a broad, smooth, quadrangular area of bone, corresponding to the inner surfaces of the body and superior ramus of the ischium, and the part of the ilium below the arcuate line. The lesser pelvis contains the pelvic colo
https://en.wikipedia.org/wiki/Idiopathic%20multicentric%20Castleman%20disease
Idiopathic multicentric Castleman disease (iMCD) is a subtype of Castleman disease (also known as giant lymph node hyperplasia, lymphoid hamartoma, or angiofollicular lymph node hyperplasia), a group of lymphoproliferative disorders characterized by lymph node enlargement, characteristic features on microscopic analysis of enlarged lymph node tissue, and a range of symptoms and clinical findings. People with iMCD have enlarged lymph nodes in multiple regions and often have flu-like symptoms, abnormal findings on blood tests, and dysfunction of vital organs, such as the liver, kidneys, and bone marrow. iMCD has features often found in autoimmune diseases and cancers, but the underlying disease mechanism is unknown. Treatment for iMCD may involve the use of a variety of medications, including immunosuppressants and chemotherapy. Castleman disease was named after Dr. Benjamin Castleman, who first described the disease in 1956. The Castleman Disease Collaborative Network is the largest organization focused on the disease and is involved in research, awareness, and patient support. Signs and symptoms Patients with iMCD may experience enlarged lymph nodes in multiple lymph node regions; systemic symptoms (fever, night sweats, unintended weight loss, fatigue); enlargement of the liver and/or spleen; extravascular fluid accumulation in the extremities (edema), abdomen (ascites), or lining of the lungs (pleural effusion); lung symptoms such as cough and shortness of breath; and skin findings such as cherry hemangiomas. Causes The cause of iMCD is not known and no risk factors have been identified. Genetic variants have been observed in cases of Castleman disease; however, no genetic variant has been validated as disease causing. Unlike HHV-8-associated MCD, iMCD is not caused by uncontrolled HHV-8 infection. Mechanism The disease mechanism of iMCD has not been fully described. It is known that interleukin-6 (IL-6), a molecule that stimulates immune cells, plays a r
https://en.wikipedia.org/wiki/The%20Neutral%20Theory%20of%20Molecular%20Evolution
The Neutral Theory of Molecular Evolution is an influential monograph written in 1983 by Japanese evolutionary biologist Motoo Kimura. While the neutral theory of molecular evolution had existed since his 1968 article, Kimura felt the need to write a monograph with up-to-date information and evidence showing the importance of his theory in evolution. Evolution is a change in the frequency of alleles in a population over time. Mutations occur at random, and in the Darwinian model of evolution natural selection acts on the genetic variation in a population that has arisen through this mutation. These mutations can be beneficial or deleterious and are selected for or against on that basis. In this theory, every evolutionary event, mutation, and gene polymorphism (neutral differences in phenotype or genotype) would have to be positively or negatively selected for and show some kind of change over many generations. If these genetic differences grow between different populations, speciation events can occur. When this theory was first introduced to the scientific community, there was no understanding of genetic principles such as drift or synonymous mutation. When molecular biologists, like Motoo Kimura (1979), began to examine the DNA evidence, they found that far more mutations occur in non-protein-coding regions or are synonymous mutations in coding regions (which do not change the protein structure or function) and are, therefore, not involved in selection as they do not impact an organism's fitness. These findings began to show that the purely positive or negative selection of Darwinian evolution was too simplistic to describe every evolutionary process. Through various experiments Kimura was able to determine that proteins in mammalian lineages were polymorphisms of each other, having only one or two point mutations that did not affect the actions of the protein in any way, whereas in Darwinian evolution a slow pattern of selection in genetic lineages with increasing f
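The role of random drift that the monograph emphasizes can be illustrated with a toy Wright-Fisher simulation (a standard population-genetics model used here as an illustration, not code from the book): with no selection at all, a neutral allele's frequency still wanders and eventually fixes at 0 or 1 purely by chance.

```python
import random

# Wright-Fisher simulation of one neutral allele in a diploid population
# of size N: each of the 2N gene copies in the next generation is drawn
# at random from the current allele pool, so frequency changes only by drift.

def drift_to_fixation(pop_size, freq, rng):
    """Return (fixed frequency: 0.0 or 1.0, generations until fixation)."""
    generations = 0
    while 0.0 < freq < 1.0:
        copies = sum(rng.random() < freq for _ in range(2 * pop_size))
        freq = copies / (2 * pop_size)
        generations += 1
    return freq, generations

rng = random.Random(42)
print(drift_to_fixation(pop_size=50, freq=0.5, rng=rng))
```

Which allele wins is a coin flip weighted by its starting frequency, so a selectively neutral mutation can spread through a population with no fitness advantage at all, the core point of Kimura's argument.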
https://en.wikipedia.org/wiki/Bluetooth%20Low%20Energy
Bluetooth Low Energy (Bluetooth LE, colloquially BLE, formerly marketed as Bluetooth Smart) is a wireless personal area network technology designed and marketed by the Bluetooth Special Interest Group (Bluetooth SIG) aimed at novel applications in the healthcare, fitness, beacons, security, and home entertainment industries. It is independent of classic Bluetooth and not compatible with it, although Bluetooth Basic Rate/Enhanced Data Rate (BR/EDR) and LE can coexist on the same device. The original specification was developed by Nokia in 2006 under the name Wibree, which was integrated into Bluetooth 4.0 in December 2009 as Bluetooth Low Energy. Compared to Classic Bluetooth, Bluetooth Low Energy is intended to provide considerably reduced power consumption and cost while maintaining a similar communication range. Mobile operating systems including iOS, Android, Windows Phone and BlackBerry, as well as macOS, Linux, Windows 8, Windows 10 and Windows 11, natively support Bluetooth Low Energy. Compatibility Bluetooth Low Energy is distinct from the previous (often called "classic") Bluetooth Basic Rate/Enhanced Data Rate (BR/EDR) protocol, but the two protocols can both be supported by one device: the Bluetooth 4.0 specification permits devices to implement either or both of the LE and BR/EDR systems. Bluetooth Low Energy uses the same 2.4 GHz radio frequencies as classic Bluetooth, which allows dual-mode devices to share a single radio antenna, but uses a simpler modulation system. Branding In 2011, the Bluetooth SIG announced the Bluetooth Smart logo so as to clarify compatibility between the new low energy devices and other Bluetooth devices. Bluetooth Smart Ready indicates a dual-mode device compatible with both classic and low energy peripherals. Bluetooth Smart indicates a low energy-only device which requires either a Smart Ready or another Smart device in order to function. With the May 2016 Bluetooth SIG branding information, the Bluetooth SIG began phasing out the Bluetoot
https://en.wikipedia.org/wiki/Nano%20differential%20scanning%20fluorimetry
NanoDSF is a modified differential scanning fluorimetry method to determine protein stability employing intrinsic tryptophan or tyrosine fluorescence. Protein stability is typically addressed by thermal or chemical unfolding experiments. In thermal unfolding experiments, a linear temperature ramp is applied to unfold proteins, whereas chemical unfolding experiments use chemical denaturants in increasing concentrations. The thermal stability of a protein is typically described by the 'melting temperature' or 'Tm', at which 50% of the protein population is unfolded, corresponding to the midpoint of the transition from folded to unfolded. In contrast to conventional DSF methods, nanoDSF uses tryptophan or tyrosine fluorescence to monitor protein unfolding. Both the fluorescence intensity and the fluorescence maximum strongly depend on the close surroundings of the tryptophan. Therefore, the ratio of the fluorescence intensities at 350 nm and 330 nm is suitable to detect any changes in protein structure, for example due to protein unfolding. Its applications include antibody engineering, membrane protein research, quality control and formulation development. NanoDSF has also been utilized to rapidly evaluate the melting points of enzyme libraries for biotechnological applications. Examples of nanoDSF applications The nanoDSF technology was used to confirm on-target binding of BI-3231 to HSD17B13 and to elucidate its uncompetitive mode of inhibition with regard to NAD+.
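The 350 nm / 330 nm ratio readout described above can be illustrated with a small sketch: a synthetic two-state unfolding curve whose melting temperature is recovered as the point of steepest ratio change. The sigmoid model and all parameter values below are assumptions for demonstration, not part of any vendor's analysis software.

```python
# Hedged sketch: estimating a melting temperature (Tm) from a synthetic
# nanoDSF 350 nm / 330 nm fluorescence ratio curve. The two-state sigmoid
# and every numeric parameter here are illustrative assumptions.
import math

def ratio_curve(temp, tm=55.0, slope=2.0, low=0.80, high=1.10):
    """Synthetic 350/330 nm ratio vs. temperature for two-state unfolding."""
    return low + (high - low) / (1.0 + math.exp((tm - temp) / slope))

def estimate_tm(temps, ratios):
    """Tm ~ temperature of the steepest ratio change (max first derivative)."""
    derivs = [(ratios[i + 1] - ratios[i]) / (temps[i + 1] - temps[i])
              for i in range(len(temps) - 1)]
    i_max = max(range(len(derivs)), key=lambda i: abs(derivs[i]))
    return 0.5 * (temps[i_max] + temps[i_max + 1])

temps = [20.0 + 0.1 * i for i in range(751)]   # 20-95 degC thermal ramp
ratios = [ratio_curve(t) for t in temps]
tm_est = estimate_tm(temps, ratios)            # close to the true Tm of 55.0
```

Real instruments fit a thermodynamic model rather than taking a raw numerical derivative, but the inflection-point idea is the same.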
https://en.wikipedia.org/wiki/Isaac%20Horowitz
Isaac Horowitz (December 15, 1920 – 2005) was a notable scientist with significant contributions to automatic control theory. He developed and championed the Quantitative Feedback Theory, which for the first time introduced a formal combination of the genuine frequency methodology founded by Hendrik Bode with plant uncertainty considerations. Biography Isaac Horowitz was born in the city of Safed, in the British Mandate of Palestine (modern Israel), one of 11 siblings. His family moved to New York City when he was five years old and shortly thereafter settled in Winnipeg, Manitoba, Canada. He received his B.Sc. in Physics and Mathematics from the University of Manitoba in 1944. In 1948, he received a B.Sc. degree in electrical engineering from MIT. Between 1951 and 1956, he was a full-time instructor and a part-time graduate student at the Polytechnic University of New York, from which he obtained his M.E.E. and D.E.E. degrees.
https://en.wikipedia.org/wiki/Lock%20number
In helicopter aerodynamics, the Lock number is the ratio of aerodynamic forces, which act to lift the rotor blades, to inertial forces, which act to maintain the blades in the plane of rotation. It is named after C. N. H. Lock, a British aerodynamicist who studied autogyros in the 1920s. Typical rotorcraft blades have a Lock number between 3 and 12, usually approximately 8. The Lock number is typically 8 to 10 for articulated rotors and 5 to 7 for hingeless rotors. High-stiffness blades may have a Lock number up to 14. Larger blades have a higher mass and more inertia, so tend to have a lower Lock number. Helicopter rotors with more than two blades can have lighter blades, so tend to have a higher Lock number. A low Lock number gives good autorotation characteristics due to higher inertia, however this comes with a mass penalty. Ray Prouty writes, "The previously discussed numbers: Mach, Reynolds and Froude are used in many fields of fluid dynamic studies. The Lock number is ours alone." See also Coning Mach number Froude number Reynolds number
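The ratio described above has a standard closed form, conventionally written gamma = rho * a * c * R^4 / I_b (air density, blade lift-curve slope, chord, rotor radius, and blade flapping inertia). The sketch below computes it; the numerical values are illustrative assumptions, not data for any specific rotor.

```python
# Hedged sketch: the Lock number from its standard definition
#   gamma = rho * a * c * R**4 / I_b
# All numeric inputs below are illustrative assumptions.

def lock_number(rho, a, c, R, I_b):
    """rho: air density [kg/m^3], a: blade lift-curve slope [1/rad],
    c: blade chord [m], R: rotor radius [m],
    I_b: blade flapping moment of inertia [kg*m^2]."""
    return rho * a * c * R**4 / I_b

# Illustrative values loosely sized for a medium utility helicopter.
gamma = lock_number(rho=1.225, a=5.7, c=0.53, R=8.18, I_b=2100.0)
```

Note how the R**4 term dominates: modest changes in rotor radius move the Lock number far more than comparable changes in chord or inertia, which is consistent with the article's remark that heavier (higher-inertia) blades have lower Lock numbers.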
https://en.wikipedia.org/wiki/Throat%20singing
Throat singing refers to several vocal practices found in different cultures worldwide. The most distinctive feature of such vocal practices is their association with some type of guttural voice that contrasts with the most common types of voices employed in singing, which are usually represented by chest (modal) and head (light, or falsetto) registers. Throat singing is often described as producing the sensation of more than one pitch at a time, i.e., the listener perceives two or more distinct musical notes while the singer is producing a single vocalisation. Throat singing, therefore, consists of a wide range of singing techniques that originally belonged to particular cultures and seem to share some sounding characteristics that make them especially noticeable by other cultures and users of mainstream singing styles. The term originates from the translation of the Tuvan/Mongolian word Xhöömei/Xhöömi, which literally means throat, guttural. Ethnic groups from Russia, Mongolia, Japan, South Africa, Canada, Italy, China and India, among others, accept and normally employ the term throat singing to describe their special way of producing voice, song and music. The term throat singing is not precise, because any singing technique involves sound generation in the "throat", i.e., the voice produced at the level of the larynx, which includes the vocal folds and other structures. Therefore it would be, in principle, admissible to refer to classical operatic singing or pop singing as "throat singing", for instance. However, the term throat is not adopted by the official terminology of anatomy (Terminologia Anatomica) and is not technically associated with most of the singing techniques. Many authors, performers, coaches, and listeners associate throat singing with overtone singing. Throat singing and overtone singing are certainly not synonyms, contrary to what is inaccurately indicated by many dictionaries (e.g., in the definition by Britannica) but, in some cases, both
https://en.wikipedia.org/wiki/Senescence
Senescence () or biological aging is the gradual deterioration of functional characteristics in living organisms. The word senescence can refer to either cellular senescence or to senescence of the whole organism. Organismal senescence involves an increase in death rates and/or a decrease in fecundity with increasing age, at least in the later part of an organism's life cycle, but it can be delayed. The 1934 discovery that calorie restriction can extend lifespans by 50% in rats, the existence of species having negligible senescence, and the existence of potentially immortal organisms such as members of the genus Hydra have motivated research into delaying senescence and thus age-related diseases. Rare human mutations can cause accelerated aging diseases. Environmental factors may affect aging – for example, overexposure to ultraviolet radiation accelerates skin aging. Different parts of the body may age at different rates and distinctly, including the brain, the cardiovascular system, and muscle. Similarly, functions may distinctly decline with aging, including movement control and memory. Two organisms of the same species can also age at different rates, making biological aging and chronological aging distinct concepts. Definition and characteristics Organismal senescence is the aging of whole organisms. Actuarial senescence can be defined as an increase in mortality and/or a decrease in fecundity with age. The Gompertz–Makeham law of mortality says that the age-dependent component of the mortality rate increases exponentially with age. Aging is characterized by the declining ability to respond to stress, increased homeostatic imbalance, and increased risk of aging-associated diseases including cancer and heart disease. Aging has been defined as "a progressive deterioration of physiological function, an intrinsic age-related process of loss of viability and increase in vulnerability." In 2013, a group of scientists defined nine hallmarks of aging that are commo
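The Gompertz–Makeham law mentioned above has the form mu(x) = alpha * exp(beta * x) + lambda: an exponentially growing age-dependent term plus an age-independent (Makeham) term. A minimal sketch, with parameter values that are illustrative assumptions only:

```python
# Hedged illustration of the Gompertz-Makeham mortality law:
#   mu(x) = alpha * exp(beta * x) + lam
# alpha, beta, lam below are illustrative assumptions, not fitted values.
import math

def mortality_rate(age, alpha=5e-5, beta=0.085, lam=5e-4):
    """Instantaneous mortality hazard at a given age (per year)."""
    return alpha * math.exp(beta * age) + lam

# The age-dependent component doubles roughly every ln(2)/beta ~ 8 years.
```

The exponential term is what makes actuarial senescence visible in mortality tables: the hazard at 80 is orders of magnitude above the hazard at 20, while the Makeham constant dominates only at young ages.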
https://en.wikipedia.org/wiki/Voter%20model
In the mathematical theory of probability, the voter model is an interacting particle system introduced by Richard A. Holley and Thomas M. Liggett in 1975. One can imagine that there is a "voter" at each point on a connected graph, where the connections indicate that there is some form of interaction between a pair of voters (nodes). The opinions of any given voter on some issue change at random times under the influence of the opinions of his neighbours. A voter's opinion at any given time can take one of two values, labelled 0 and 1. At random times, a random individual is selected and that voter's opinion is changed according to a stochastic rule. Specifically, one of the chosen voter's neighbors is chosen according to a given set of probabilities and that neighbor's opinion is transferred to the chosen voter. An alternative interpretation is in terms of spatial conflict. Suppose two nations control the areas (sets of nodes) labelled 0 or 1. A flip from 0 to 1 at a given location indicates an invasion of that site by the other nation. Note that only one flip happens each time. Problems involving the voter model will often be recast in terms of the dual system of coalescing Markov chains. Frequently, these problems will then be reduced to others involving independent Markov chains. Definition A voter model is a (continuous-time) Markov process (η_t) with state space {0,1}^S and transition rate function c(x, η), where S = Z^d is a d-dimensional integer lattice, and c(·,·) is assumed to be nonnegative, uniformly bounded and continuous as a function of η in the product topology on {0,1}^S. Each component η ∈ {0,1}^S is called a configuration. Here η(x) stands for the value of a site x in configuration η, while η_t(x) means the value of a site x in configuration η at time t. The dynamics of the process are specified by the collection of transition rates. For voter models, the rate at which there is a flip at x from 0 to 1 or vice versa is given by a function c(x, η) of site x and configuration η. It has the following properties: for
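The "copy a random neighbour" rule described above is easy to simulate. The sketch below uses a discrete-time version on a ring of N sites (the graph, ring size, and step budget are illustrative assumptions; the article's definition is in continuous time). On any finite graph these dynamics reach consensus with probability one.

```python
# Hedged sketch of discrete-time voter dynamics on a ring of n sites:
# at each step a randomly chosen voter adopts the opinion of a randomly
# chosen ring neighbour. Parameters below are illustrative assumptions.
import random

def voter_model_ring(n=10, max_steps=1_000_000, seed=0):
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n)]   # opinions in {0, 1}
    for _ in range(max_steps):
        if len(set(state)) == 1:                    # consensus reached
            break
        x = rng.randrange(n)                        # pick a voter...
        nb = (x + rng.choice([-1, 1])) % n          # ...and a neighbour
        state[x] = state[nb]                        # opinion is copied
    return state

final = voter_model_ring()
```

On a ring of 10 sites consensus typically arrives after a few thousand single-site updates, far inside the step budget, which is consistent with the duality with coalescing random walks mentioned above.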
https://en.wikipedia.org/wiki/Empirical%20risk%20minimization
Empirical risk minimization (ERM) is a principle in statistical learning theory which defines a family of learning algorithms and is used to give theoretical bounds on their performance. The core idea is that we cannot know exactly how well an algorithm will work in practice (the true "risk") because we don't know the true distribution of data that the algorithm will work on, but we can instead measure its performance on a known set of training data (the "empirical" risk). Background Consider the following situation, which is a general setting of many supervised learning problems. We have two spaces of objects X and Y and would like to learn a function h : X → Y (often called a hypothesis) which outputs an object y ∈ Y, given x ∈ X. To do so, we have at our disposal a training set of examples (x_1, y_1), …, (x_n, y_n), where x_i ∈ X is an input and y_i ∈ Y is the corresponding response that we wish to get from h(x_i). To put it more formally, we assume that there is a joint probability distribution P(x, y) over X and Y, and that the training set consists of n instances (x_1, y_1), …, (x_n, y_n) drawn i.i.d. from P(x, y). Note that the assumption of a joint probability distribution allows us to model uncertainty in predictions (e.g. from noise in data) because y is not a deterministic function of x but rather a random variable with conditional distribution P(y | x) for a fixed x. We also assume that we are given a non-negative real-valued loss function L(ŷ, y) which measures how different the prediction ŷ of a hypothesis is from the true outcome y. For classification tasks these loss functions can be scoring rules. The risk associated with hypothesis h is then defined as the expectation of the loss function: R(h) = E[L(h(x), y)]. A loss function commonly used in theory is the 0-1 loss function: L(ŷ, y) = 1 if ŷ ≠ y, and 0 if ŷ = y. The ultimate goal of a learning algorithm is to find a hypothesis h* among a fixed class of functions H for which the risk R(h) is minimal: h* = argmin_{h ∈ H} R(h). For classification problems, the Bayes classifier is defined to be the classifier minimizing the risk defined with the 0–1 loss function. Empirical risk minimization In general, th
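The definitions above can be made concrete with a tiny example: the empirical risk is the average 0-1 loss over the training set, and ERM picks the hypothesis minimizing it over a fixed class. The data set and the class of threshold classifiers below are illustrative assumptions.

```python
# Hedged sketch of ERM with 0-1 loss over a tiny hypothesis class of
# threshold classifiers h_t(x) = 1 if x >= t else 0. The data points and
# candidate thresholds are illustrative assumptions.

def zero_one_loss(y_pred, y_true):
    return 0 if y_pred == y_true else 1

def empirical_risk(h, data):
    """Average loss of hypothesis h over the training sample."""
    return sum(zero_one_loss(h(x), y) for x, y in data) / len(data)

data = [(0.10, 0), (0.35, 0), (0.40, 0), (0.55, 1), (0.60, 1), (0.90, 1)]
thresholds = [0.2, 0.5, 0.8]
hypotheses = {t: (lambda x, t=t: 1 if x >= t else 0) for t in thresholds}

# ERM: choose the hypothesis with the smallest empirical risk.
best_t = min(thresholds, key=lambda t: empirical_risk(hypotheses[t], data))
```

Here the threshold 0.5 classifies every training point correctly (empirical risk 0), while 0.2 and 0.8 each misclassify two points (empirical risk 1/3), so ERM selects t = 0.5.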
https://en.wikipedia.org/wiki/List%20of%20misnamed%20theorems
This is a list of misnamed theorems in mathematics. It includes theorems (and lemmas, corollaries, conjectures, laws, and perhaps even the odd object) that are well known in mathematics, but which are not named for the originator. That is, the items on this list illustrate Stigler's law of eponymy (which is not, of course, due to Stephen Stigler, who credits Robert K. Merton). Applied mathematics Benford's law. This was first stated in 1881 by Simon Newcomb, and rediscovered in 1938 by Frank Benford. The first rigorous formulation and proof seems to be due to Ted Hill in 1988; see also the contribution by Persi Diaconis. Bertrand's ballot theorem. This result concerning the probability that the winner of an election was ahead at each step of ballot counting was first published by W. A. Whitworth in 1878, but named after Joseph Louis François Bertrand who rediscovered it in 1887. A common proof uses André's reflection method, though the proof by Désiré André did not use any reflections. Algebra Burnside's lemma. This was stated and proved without attribution in Burnside's 1897 textbook, but it had previously been discussed by Augustin Cauchy, in 1845, and by Georg Frobenius in 1887. Cayley–Hamilton theorem. The theorem was first proved in the easy special case of 2×2 matrices by Cayley, and later for the case of 4×4 matrices by Hamilton. But it was only proved in general by Frobenius in 1878. Hölder's inequality. This inequality was first established by Leonard James Rogers, and published in 1888. Otto Hölder discovered it independently, and published it in 1889. Marden's theorem. This theorem relating the location of the zeros of a complex cubic polynomial to the zeros of its derivative was named by Dan Kalman after Kalman read it in a 1966 book by Morris Marden, who had first written about it in 1945. But, as Marden had himself written, its original proof was by Jörg Siebeck in 1864. Pólya enumeration theorem. This was proven in 1927 in a difficult pape
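Burnside's lemma, mentioned above, states that the number of orbits of a group action equals the average number of elements fixed by each group element. A small worked instance (the necklace example is an illustrative assumption, not from the list itself): counting 2-colourings of a 4-bead necklace up to rotation, where a rotation by k positions fixes 2**gcd(4, k) colourings.

```python
# Hedged illustration of Burnside's lemma: orbits = average number of
# fixed points over the group. Example: c-colourings of an n-bead
# necklace under the cyclic rotation group; rotation by k fixes
# c**gcd(n, k) colourings.
from math import gcd

def necklaces(beads, colours):
    total = sum(colours ** gcd(beads, k) for k in range(beads))
    return total // beads          # Burnside: average over the group

print(necklaces(4, 2))             # (16 + 2 + 4 + 2) / 4 = 6
```

For 4 beads and 2 colours the six orbits can be listed by hand (all-0, all-1, one bead flipped either way, two adjacent, two opposite), confirming the count.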
https://en.wikipedia.org/wiki/Modified%20internal%20rate%20of%20return
The modified internal rate of return (MIRR) is a financial measure of an investment's attractiveness. It is used in capital budgeting to rank alternative investments of equal size. As the name implies, MIRR is a modification of the internal rate of return (IRR) and as such aims to resolve some problems with the IRR. Problems associated with the IRR While there are several problems with the IRR, MIRR resolves two of them. Firstly, IRR is sometimes misapplied, under an assumption that interim positive cash flows are reinvested elsewhere in a different project at the same rate of return offered by the project that generated them. This is usually an unrealistic scenario and a more likely situation is that the funds will be reinvested at a rate closer to the firm's cost of capital. The IRR therefore often gives an unduly optimistic picture of the projects under study. Generally, for comparing projects more fairly, the weighted average cost of capital should be used for reinvesting the interim cash flows. Secondly, more than one IRR can be found for projects with alternating positive and negative cash flows, which leads to confusion and ambiguity. MIRR finds only one value. Calculation MIRR is calculated as follows: MIRR = (FV(positive cash flows, reinvestment rate) / −PV(negative cash flows, finance rate))^(1/n) − 1, where n is the number of equal periods at the end of which the cash flows occur (not the number of cash flows), PV is present value (at the beginning of the first period), FV is future value (at the end of the last period). The formula adds up the negative cash flows after discounting them to time zero using the external cost of capital, adds up the positive cash flows including the proceeds of reinvestment at the external reinvestment rate to the final period, and then works out what rate of return would cause the magnitude of the discounted negative cash flows at time zero to be equivalent to the future value of the positive cash flows at the final time period. Spreadsheet applications, such as Microsoft Excel, have inbuilt functions to calculate t
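The calculation described above translates directly into a few lines of code: discount the negative flows to time zero at the finance rate, compound the positive flows to the final period at the reinvestment rate, and solve for the single rate linking the two. The cash-flow numbers below are illustrative assumptions.

```python
# Hedged sketch of the MIRR calculation. flows[t] is the cash flow at the
# end of period t, with flows[0] at time zero; rates are per-period.
# The example figures are illustrative assumptions.

def mirr(flows, finance_rate, reinvest_rate):
    n = len(flows) - 1                       # number of periods
    pv_neg = sum(cf / (1 + finance_rate) ** t
                 for t, cf in enumerate(flows) if cf < 0)
    fv_pos = sum(cf * (1 + reinvest_rate) ** (n - t)
                 for t, cf in enumerate(flows) if cf > 0)
    return (fv_pos / -pv_neg) ** (1.0 / n) - 1

# One outlay of 100 followed by three inflows of 50, both rates 10%:
rate = mirr([-100.0, 50.0, 50.0, 50.0], finance_rate=0.10, reinvest_rate=0.10)
```

Unlike the IRR polynomial, this expression has a single real solution by construction, which is the article's second point: MIRR finds only one value even when cash flows alternate in sign.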
https://en.wikipedia.org/wiki/Video%20Shader
Video Shader is a graphics card feature that ATI advertises on their R300 and R400 cards. The R500 card is advertised as having Video Shader HD. Video Shader is a feature that describes ATI's process of using pixel shaders to improve the quality of video playback. It can also be used to perform post-processing, like adding film grain or other special effects. One feature of Video Shader is ATI Full Stream. Full Stream detects the edges of video compression blocks and uses a pixel shader based filter to smooth the picture, hiding the blocks. Another is Video Soap, which reduces video noise. Other features are DXVA support for Hardware Motion Compensation, iDCT, DCT and color space conversion. Video Shader has been superseded by Unified Video Decoder (UVD) and Video Coding Engine (VCE). See also Unified Video Decoder (UVD) Video Coding Engine (VCE)
https://en.wikipedia.org/wiki/Lindsey%E2%80%93Fox%20algorithm
The Lindsey–Fox algorithm, named after Pat Lindsey and Jim Fox, is a numerical algorithm for finding the roots or zeros of a high-degree polynomial with real coefficients over the complex field. It is particularly designed for random coefficients but also works well on polynomials with coefficients from samples of speech, seismic signals, and other measured phenomena. A Matlab implementation of this has factored polynomials of degree over a million on a desktop computer. The Lindsey–Fox algorithm The Lindsey–Fox algorithm uses the FFT (fast Fourier transform) to very efficiently conduct a grid search in the complex plane to find accurate approximations to the N roots (zeros) of an Nth-degree polynomial. The power of this grid search allows a new polynomial factoring strategy that has proven to be very effective for a certain class of polynomials. This algorithm was conceived of by Pat Lindsey and implemented by Jim Fox in a package of computer programs created to factor high-degree polynomials. It was originally designed and has been further developed to be particularly suited to polynomials with real, random coefficients. In that form, it has proven to be very successful by factoring thousands of polynomials of degrees from one thousand to hundreds of thousand as well as several of degree one million and one each of degree two million and four million. In addition to handling very high degree polynomials, it is accurate, fast, uses minimum memory, and is programmed in the widely available language, Matlab. There are practical applications, often cases where the coefficients are samples of some natural signal such as speech or seismic signals, where the algorithm is appropriate and useful. However, it is certainly possible to create special, ill-conditioned polynomials that it cannot factor, even low degree ones. The basic ideas of the algorithm were first published by Lindsey and Fox in 1992 and reprinted in 1996.  After further development, other papers were publ
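The FFT-based grid search mentioned above rests on one identity: evaluating a polynomial at N equally spaced points on a circle of radius r is a single FFT of the radius-scaled coefficients. The sketch below shows only that grid-evaluation step, not the full Lindsey–Fox factoring strategy; function and parameter names are assumptions for illustration.

```python
# Hedged sketch of the core grid-evaluation idea behind Lindsey-Fox:
#   fft(a_j * r**j)[k] = sum_j a_j r**j exp(-2*pi*i*j*k/N)
#                      = p(r * exp(-2*pi*i*k/N)),
# so one length-N FFT evaluates p at N points on a circle of radius r,
# costing O(N log N) per circle instead of O(N**2).
import numpy as np

def eval_on_circle(coeffs, radius, n_points):
    """coeffs[j] multiplies z**j; returns p at the n_points grid points
    radius * exp(-2*pi*i*k/n_points), k = 0 .. n_points - 1."""
    a = np.zeros(n_points, dtype=complex)
    js = np.arange(len(coeffs))
    a[: len(coeffs)] = np.asarray(coeffs, dtype=complex) * radius ** js
    return np.fft.fft(a)

# p(z) = 1 - z**2 on the unit circle at 4 points: zeros at z = +/-1 show
# up as exact zeros on the grid.
values = eval_on_circle([1.0, 0.0, -1.0], radius=1.0, n_points=4)
```

In the actual algorithm, many concentric circles are swept and local minima of |p| on the grid seed a polishing iteration; this sketch covers only why one circle is cheap.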
https://en.wikipedia.org/wiki/Remote%20backup%20service
A remote, online, or managed backup service, sometimes marketed as cloud backup or backup-as-a-service, is a service that provides users with a system for the backup, storage, and recovery of computer files. Online backup providers are companies that provide this type of service to end users (or clients). Such backup services are considered a form of cloud computing. Online backup systems are typically built around a client software program that runs on a given schedule. Some systems run once a day, usually at night while computers aren't in use. Other, newer cloud backup services run continuously to capture changes to user systems nearly in real-time. The online backup system typically collects, compresses, encrypts, and transfers the data to the remote backup service provider's servers or off-site hardware. There are many products on the market – all offering different feature sets, service levels, and types of encryption. Providers of this type of service frequently target specific market segments. High-end LAN-based backup systems may offer services such as Active Directory, client remote control, or open file backups. Consumer online backup companies frequently have beta software offerings and/or free-trial backup services with fewer live support options. History In the mid-1980s, the computer industry was in a great state of change with modems at speeds of 1200 to 2400 baud, making transfers of large amounts of data slow (1 MB in 72 minutes). While faster modems and more secure network protocols were in development, tape backup systems gained in popularity. During that same period the need for an affordable, reliable online backup system was becoming clear, especially for businesses with critical data. More online/remote backup services came into existence during the heyday of the dot-com boom in the late 1990s. The initial years of these large industry service providers were about capturing market share and understanding the importance and the role that thes
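The collect-compress-encrypt-transfer pipeline described above can be sketched in miniature. The fragment below shows only the compression and integrity-check steps on in-memory bytes; real products also encrypt (use a vetted cryptography library for that), schedule runs, and upload deltas. All names and data here are illustrative assumptions.

```python
# Hedged sketch of one client-side step of an online backup pipeline:
# compress the data and record a checksum so a later restore can be
# verified. Encryption is deliberately omitted; a real service would
# encrypt with a vetted crypto library before transfer.
import hashlib
import zlib

def prepare_for_upload(data: bytes):
    compressed = zlib.compress(data, level=9)
    digest = hashlib.sha256(data).hexdigest()   # checked after restore
    return compressed, digest

def restore(compressed: bytes, digest: str) -> bytes:
    data = zlib.decompress(compressed)
    if hashlib.sha256(data).hexdigest() != digest:
        raise ValueError("restored data fails integrity check")
    return data

blob = b"example file contents\n" * 1000
packed, checksum = prepare_for_upload(blob)
```

Compressing before transfer is what made such services viable on slow links: the repetitive sample blob above shrinks by orders of magnitude, echoing the bandwidth constraints the History section describes.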
https://en.wikipedia.org/wiki/106%20%28number%29
106 (one hundred [and] six) is the natural number following 105 and preceding 107. In mathematics 106 is a centered pentagonal number, a centered heptagonal number, and a regular 19-gonal number. There are 106 mathematical trees with ten vertices. See also 106 (disambiguation)
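The figurate-number claims above are easy to verify from the standard closed forms: centered pentagonal numbers (5n^2 + 5n + 2)/2, centered heptagonal numbers (7n^2 - 7n + 2)/2, and s-gonal numbers P(s, n) = ((s - 2)n^2 - (s - 4)n)/2. A quick check:

```python
# Verifying that 106 appears in each of the three sequences named above,
# using the standard closed-form expressions for these figurate numbers.

centered_pentagonal = [(5 * n * n + 5 * n + 2) // 2 for n in range(10)]
centered_heptagonal = [(7 * n * n - 7 * n + 2) // 2 for n in range(1, 10)]
nineteen_gonal = [((19 - 2) * n * n - (19 - 4) * n) // 2 for n in range(1, 10)]

# 106 is the 7th centered pentagonal number (n = 6), the 6th centered
# heptagonal number (n = 6), and the 4th 19-gonal number (n = 4).
```

Each list is short enough to inspect by hand, and 106 shows up in all three.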
https://en.wikipedia.org/wiki/Emanuel%20Sperner
Emanuel Sperner (9 December 1905 – 31 January 1980) was a German mathematician, best known for two theorems. He was born in Waltdorf (near Neiße, Upper Silesia, now Nysa, Poland), and died in Sulzburg-Laufen, West Germany. He was a student at Carolinum in Nysa and then Hamburg University where his advisor was Wilhelm Blaschke. He was appointed Professor in Königsberg in 1934, and subsequently held posts in a number of universities until 1974. Sperner's theorem, from 1928, says that the size of an antichain in the power set of an n-set (a Sperner family) is at most the middle binomial coefficient(s). It has several proofs and numerous generalizations, including the Sperner property of a partially ordered set. Sperner's lemma, from 1928, states that every Sperner coloring of a triangulation of an n-dimensional simplex contains a cell colored with a complete set of colors. It was proven by Sperner to provide an alternate proof of a theorem of Lebesgue characterizing dimensionality of Euclidean spaces. It was later noticed that this lemma provides a direct proof of the Brouwer fixed-point theorem without explicit use of homology. Sperner's students included Kurt Leichtweiss and Gerhard Ringel.
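Sperner's theorem as stated above can be checked by brute force for a small ground set: the largest antichain in the power set of a 4-element set has size C(4, 2) = 6, the middle binomial coefficient. The exhaustive search below is an illustrative sketch, feasible only for tiny n.

```python
# Hedged brute-force check of Sperner's theorem for n = 4. Subsets of an
# n-set are encoded as bitmasks 0 .. 2**n - 1; a family is an antichain
# when no member is a subset of another. Depth-first search enumerates
# all antichains (there are few for n = 4).
from math import comb

def max_antichain_size(n):
    best = 0
    def extend(chosen, start):
        nonlocal best
        best = max(best, len(chosen))
        for s in range(start, 1 << n):
            # s must be incomparable to every subset already chosen:
            # neither s <= t nor t <= s may hold.
            if all((s & t) != s and (s & t) != t for t in chosen):
                extend(chosen + [s], s + 1)
    extend([], 0)
    return best

result = max_antichain_size(4)   # Sperner: equals comb(4, 2) = 6
```

The six 2-element subsets form one maximal antichain, and the search confirms no larger family of pairwise-incomparable subsets exists.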
https://en.wikipedia.org/wiki/Sensor%20node
A sensor node (also known as a mote in North America) consists of an individual node from a sensor network that is capable of performing a desired action such as gathering, processing or communicating information with other connected nodes in a network. History Although wireless sensor networks have existed for decades and been used for diverse applications such as earthquake measurements or warfare, the modern development of small sensor nodes dates back to the 1998 Smartdust project and the NASA Sensor Webs Project. One of the objectives of the Smartdust project was to create autonomous sensing and communication within a cubic millimeter of space. Though this project ended early on, it led to many more research projects and major research centres such as the Berkeley NEST and CENS. The researchers involved in these projects coined the term mote to refer to a sensor node. The equivalent term in the NASA Sensor Webs Project for a physical sensor node is pod, although the sensor node in a Sensor Web can be another Sensor Web itself. Physical sensor nodes have been able to increase their effectiveness and capability in conjunction with Moore's law. The chip footprint contains more complex and lower powered microcontrollers. Thus, for the same node footprint, more silicon capability can be packed into it. Nowadays, motes focus on providing the longest wireless range (dozens of km), the lowest energy consumption (a few µA) and the easiest development process for the user. Components The main components of a sensor node usually involve a microcontroller, transceiver, external memory, power source and one or more sensors. Sensors Sensors are used by wireless sensor nodes to capture data from their environment. They are hardware devices that produce a measurable response to a change in a physical condition like temperature or pressure. Sensors measure physical data of the parameter to be monitored and have specific characteristics such as accuracy, sensitivity etc. The cont
https://en.wikipedia.org/wiki/Spaced%20learning
Spaced learning is a learning method in which highly condensed learning content is repeated three times, with two 10-minute breaks during which distractor activities such as physical activities are performed by the students. It is based on the temporal pattern of stimuli for creating long-term memories reported by R. Douglas Fields in Scientific American in 2005. This 'temporal code' Fields used in his experiments was developed into a learning method for creating long-term memories by Paul Kelley, who led a team of teachers and scientists as reported in Making Minds in 2008. A paper on the method has been published in Frontiers in Human Neuroscience. This makes a substantial scientific case for this approach to learning based on research over many years in different species. The distinctive features of the approach are made clear: the speed of instruction being minutes (as opposed to hours, days or months), the spaces and their function, and why content is repeated three times. Spaced learning has been reported in other species as being required for long-term memory creation, a finding that gives considerable weight to its use in education. Background Spaced Learning was developed by Kelley and his team over a number of years and, rather confusingly, was not called 'Spaced Learning' at first. Earlier descriptions of Spaced Learning often led to its being misunderstood, and the scientific origins of the approach ignored. When the initial reports of outcomes were made public, the media seized upon the condensed learning content as the key element in the approach used, and the BBC national television news, The Sunday Times, The Independent, and The Economist reported the approach largely in those terms ('8-minute lessons'). This emphasis was misplaced, since Spaced Learning as a method depends on the length and number of the spaces (Fields' 'temporal code'), not the content presentation (which can vary). However, this misunderstanding was also included in reports in the edu
https://en.wikipedia.org/wiki/Altman%20Z-score
The Z-score formula for predicting bankruptcy was published in 1968 by Edward I. Altman, who was, at the time, an Assistant Professor of Finance at New York University. The formula may be used to predict the probability that a firm will go into bankruptcy within two years. Z-scores are used to predict corporate defaults and as an easy-to-calculate control measure for the financial distress status of companies in academic studies. The Z-score uses multiple corporate income and balance sheet values to measure the financial health of a company. The formula The Z-score is a linear combination of four or five common business ratios, weighted by coefficients. The coefficients were estimated by identifying a set of firms which had declared bankruptcy and then collecting a matched sample of firms which had survived, with matching by industry and approximate size (assets). Altman applied the statistical method of discriminant analysis to a dataset of publicly held manufacturers. The estimation was originally based on data from publicly held manufacturers, but has since been re-estimated based on other datasets for private manufacturing, non-manufacturing and service companies. The original data sample consisted of 66 firms, half of which had filed for bankruptcy under Chapter 7. All businesses in the database were manufacturers, and small firms with assets of < $1 million were eliminated. The original Z-score formula was as follows: Z = 1.2X1 + 1.4X2 + 3.3X3 + 0.6X4 + 1.0X5. X1 = ratio of working capital to total assets. Measures liquid assets in relation to the size of the company. X2 = ratio of retained earnings to total assets. Measures profitability that reflects the company's age and earning power. X3 = ratio of earnings before interest and taxes to total assets. Measures operating efficiency apart from tax and leveraging factors. It recognizes operating earnings as being important to long-term viability. X4 = ratio of market value of equity to book value of
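The original formula given above is a direct weighted sum of the five ratios, so it translates into one line of code. The sample ratio values below are illustrative assumptions, not real company data.

```python
# Hedged sketch of the original (publicly held manufacturer) Altman
# Z-score: Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 1.0*X5.
# The input ratios below are illustrative assumptions.

def altman_z(x1, x2, x3, x4, x5):
    """x1..x5 are the five balance-sheet/income ratios defined above."""
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

z = altman_z(x1=0.20, x2=0.20, x3=0.10, x4=1.00, x5=1.50)
# 1.2*0.20 + 1.4*0.20 + 3.3*0.10 + 0.6*1.00 + 1.0*1.50 = 2.95
```

The resulting score is then compared against Altman's distress/grey/safe zone cut-offs to classify the firm.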