https://en.wikipedia.org/wiki/Maurice%20Newman%20%28artist%29
|
Maurice Newman was a painter, sculptor, model maker and photographer. He was the son of Abraham Newman and Tobi Schmukler, and was born in Lithuania in 1898. He was married to Edythe Brenda Tichell from 1930 to his death in 1977. He had one daughter, Rachel Newman.
In his teens, Newman left Lithuania to live in Switzerland, and acted as a messenger, delivering messages between clandestine lovers; he spoke Russian, Lithuanian, Polish, German and Yiddish. He then lived in England and South Africa, attending the National School of Arts in Johannesburg. In the early 1920s, Newman migrated to the U.S., lived in Boston, and worked in the Newton offices of the Bachrach Studios. After a brief stint as a retoucher at the White Studios in New York City, he returned to Boston to work as a commercial artist while attending the Vesper George School of Art, the School of the Museum of Fine Arts (now the School of the Museum of Fine Arts at Tufts), and the Woodbury School of Art.
In 1940, Newman was employed as a model maker by the Federal Works of Art Passive Defense Project (Federal Art Project). In 1942 he relocated to Alexandria, Virginia, as a civilian Army employee to head the model shop in the United States Army Engineer Research and Development Laboratory at Fort Belvoir. During World War II, he constructed dioramas and topographical bombing maps. Following the war, his projects shifted to the Cold War and civil defense.
In retirement, Newman was able to devote his time fully to portrait painting, as well as to sculpture in wood and aluminum. His aluminum sculpture, one of the first U.S. memorials to the six million Jews martyred by Hitler, was unveiled in 1963 at the Kansas City, Missouri Jewish Community Center; the keynote speaker at the unveiling was former President Harry S. Truman. His dioramas and miniatures were exhibited at the Boston Children's Museum, the 1939 New York World's Fair and the Peabody Essex Museum.
|
https://en.wikipedia.org/wiki/Persistent%20binding
|
Host-based zoning can include WWN or LUN masking, and is typically known as “persistent binding.”
In storage networking, “persistent binding” is an option of zoning.
Host-based zoning is usually referred to as persistent binding or LUN mapping, and is perhaps the least implemented form of zoning. Because it requires the host configuration to be correct in order to avoid zoning conflicts, this form of zoning creates a greater opportunity for administrative errors and conflicting access to targets. Moreover, zoning interfaces vary among different host operating systems and HBAs, increasing the possibility of administrative errors. If a host is not configured with the zoning software, it can access all devices in the fabric, creating an even higher probability of data corruption. Host-based zoning is often used when clusters are implemented, to control the mapping of devices to specific target IDs. However, it should never be the only form of zoning. Augmenting host-based zoning with storage- and fabric-based zoning is the only acceptable method to reliably control device access and data security.
Basically, a given LUN has its SCSI ID assigned by its RAID device (typically a SAN array). But for some purposes it is useful to have the SCSI ID assigned by the host itself: that is persistent binding.
What is persistent binding for?
Without persistent binding, the SCSI ID of a LUN may change after every reboot. For example, under Linux, a LUN bound to /dev/sda could migrate to /dev/sdb after a reboot. The risk increases with multipathing. Consequently, much software may fail without persistent binding.
from http://www.storagesearch.com/datalink-art1.html
Operating systems and upper-level applications (such as backup software) typically require a static or predictable SCSI target ID for storage reliability, and persistent binding provides that.
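For comparison, on a modern Linux host the same goal, a device name that survives reboots, is often met with udev rules keyed to a stable identifier such as the device WWID rather than HBA-level persistent binding. A hypothetical rule (the WWID and symlink name below are invented for illustration) might look like:

```
# /etc/udev/rules.d/99-persistent-lun.rules  (hypothetical example)
# Give the LUN with this WWID a stable symlink, whichever /dev/sdX it lands on
KERNEL=="sd?", ENV{ID_WWN}=="0x600508b4000156d7", SYMLINK+="oracle/asm-disk1"
```

Software can then open the stable path under /dev/oracle/ instead of a raw /dev/sdX name.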
Types of zoning
A zone can include hosts and LUNs. The LUNs are exported by the disk array, the hosts
|
https://en.wikipedia.org/wiki/Flow%20control%20%28data%29
|
In data communications, flow control is the process of managing the rate of data transmission between two nodes to prevent a fast sender from overwhelming a slow receiver. Flow control should be distinguished from congestion control, which is used for controlling the flow of data when congestion has actually occurred. Flow control mechanisms can be classified by whether or not the receiving node sends feedback to the sending node.
Flow control is important because it is possible for a sending computer to transmit information at a faster rate than the destination computer can receive and process it. This can happen if the receiving computer has a heavy traffic load in comparison to the sending computer, or if the receiving computer has less processing power than the sending computer.
Stop-and-wait
Stop-and-wait flow control is the simplest form of flow control. In this method the message is broken into multiple frames, and the receiver indicates its readiness to receive a frame of data. The sender waits for a receipt acknowledgement (ACK) after every frame for a specified time (called a timeout). The receiver sends the ACK to let the sender know that the frame of data was received correctly. The sender sends the next frame only after it has received the ACK.
Operations
Sender: transmits a single frame at a time.
Sender: waits to receive an ACK within the timeout.
Receiver: transmits an acknowledgement (ACK) as it receives each frame.
Sender: returns to the first step when the ACK is received, or re-transmits the frame when the timeout expires.
If a frame or ACK is lost during transmission, the frame is re-transmitted. This re-transmission process is known as automatic repeat request (ARQ).
The problem with stop-and-wait is that only one frame can be transmitted at a time, which often leads to inefficient transmission: until the sender receives the ACK it cannot transmit a new frame, and during this time both the sender and the channel sit idle.
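The procedure above can be sketched as a small simulation in Python; the loss probabilities, retry cap and duplicate handling below are illustrative choices, not part of any specific protocol:

```python
import random

def stop_and_wait(frames, loss_rate=0.3, max_retries=50, seed=42):
    """Simulate stop-and-wait ARQ over a lossy channel.

    Each frame is sent and the sender waits for an ACK; if the frame
    (or its ACK) is lost, the timeout fires and the frame is re-sent.
    Assumes consecutive frames have distinct contents.
    """
    rng = random.Random(seed)
    delivered = []      # frames the receiver has accepted, in order
    transmissions = 0   # total sends, including re-transmissions
    for frame in frames:
        for _ in range(max_retries):
            transmissions += 1
            frame_lost = rng.random() < loss_rate  # lost on the way out
            ack_lost = rng.random() < loss_rate    # ACK lost on the way back
            if frame_lost:
                continue                  # timeout -> re-transmit
            if not delivered or delivered[-1] != frame:
                delivered.append(frame)   # receiver accepts the frame
            if not ack_lost:
                break                     # ACK received -> next frame
            # ACK lost: the sender times out and re-sends a duplicate,
            # which the receiver discards (the check above handles it)
        else:
            raise RuntimeError("gave up after max_retries")
    return delivered, transmissions

msgs = ["f0", "f1", "f2", "f3"]
received, sends = stop_and_wait(msgs)
```

Note that `sends` is at least the number of frames; every loss of a frame or an ACK costs one extra transmission while the channel sits idle, which is exactly the inefficiency described above.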
Pros and cons of stop and wait
Pros
The only advantage of thi
|
https://en.wikipedia.org/wiki/Brian%20Goodwin
|
Brian Carey Goodwin (25 March 1931 – 15 July 2009) (Sainte-Anne-de-Bellevue, Quebec, Canada - Dartington, Totnes, Devon, UK) was a Canadian mathematician and biologist, a Professor Emeritus at the Open University and a founder of theoretical biology and biomathematics. He introduced the use of complex systems and generative models in developmental biology. He suggested that a reductionist view of nature fails to explain complex features, controversially proposing the structuralist theory that morphogenetic fields might substitute for natural selection in driving evolution. He was also a visible member of the Third Culture movement.
Biography
Brian Goodwin was born in Montreal, Quebec, Canada in 1931. He studied biology at McGill University and then emigrated to the UK on a Rhodes Scholarship to study mathematics at Oxford. He received his PhD from the University of Edinburgh with the thesis "Studies in the general theory of development and evolution", under the supervision of Conrad Hal Waddington. He then moved to Sussex University, where he remained until 1983, when he became a full professor at the Open University in Milton Keynes, staying there until his retirement in 1992. He became a major figure in the early development of mathematical biology, and was one of the attendees at the famous meetings that took place between 1965 and 1968 at Villa Serbelloni, hosted by the Rockefeller Foundation, under the topic "Towards a Theoretical Biology".
Thereafter, he taught at Schumacher College in Devon, UK, where he was instrumental in starting the college's MSc in Holistic Science. He was made a Founding Fellow of Schumacher College shortly before his death. Goodwin also held a research position at MIT and was a long-time visitor at several institutions, including UNAM in Mexico City. He was a founding member of the Santa Fe Institute in New Mexico, where he also served as a member of the science board for several years.
Brian Goodwin died in hospital in 2009, aft
|
https://en.wikipedia.org/wiki/QxBranch
|
QxBranch, Inc. (QxBranch) is a data analysis and quantum computing software company, based in Washington, D.C. The company provides data analytics services and research and development for quantum computing technology. On July 11, 2019, QxBranch announced that it had been acquired by Rigetti Computing, a developer of quantum integrated circuits used for quantum computers.
Services
QxBranch provides predictive analytics services to firms in the banking and finance industries. The company also develops software products for quantum computing technologies, including developer tools and interfaces for quantum computers, as well as quantum computing simulators. Additionally, the company provides consulting and research and development for businesses that may be improved through quantum computing methods, including in the development of adiabatic quantum computing methods for machine learning applications.
History
QxBranch was founded in 2014 as a joint spin-off of Shoal Group and The Tauri Group to commercialize quantum computing technology. Shoal Group (named Aerospace Concepts at the time) had a research agreement with Lockheed Martin to access a D-Wave Two quantum computer, and transitioned the access and associated technology to help found QxBranch.
In August 2014, QxBranch was selected as one of eight participants for Accenture's FinTech Innovation Lab program in Hong Kong.
In May 2015, Dr. Ray O. Johnson, former Chief Technology Officer of Lockheed Martin Corporation, joined QxBranch as executive director.
In January 2016, Australian Prime Minister Malcolm Turnbull toured QxBranch's facilities in Washington, D.C. for a demonstration of quantum computing applications.
In November 2016, QxBranch, in partnership with UBS, was announced as a winning bid under the Innovate UK's Quantum Technologies Innovation Fund under the UK National Quantum Technologies Programme. The partnership is working on developing quantum algorithms for foreign exchange market tradin
|
https://en.wikipedia.org/wiki/MAVLink
|
MAVLink, or Micro Air Vehicle Link, is a protocol for communicating with small unmanned vehicles. It is designed as a header-only message-marshaling library. MAVLink was first released in early 2009 by Lorenz Meier under the LGPL license.
Applications
It is used mostly for communication between a ground control station (GCS) and unmanned vehicles, and for intercommunication among a vehicle's subsystems. It can be used to transmit the orientation of the vehicle, its GPS location and its speed.
Packet Structure
In version 1.0 the packet structure is the following:
After Version 2, the packet structure was expanded into the following:
CRC field
To ensure message integrity, a cyclic redundancy check (CRC) is computed over every message and stored in its last two bytes. The CRC field also ensures that the sender and receiver agree on the definition of the message being transferred. It is computed using an ITU X.25/SAE AS-4 hash of the bytes in the packet, excluding the start-of-frame indicator (so 6+n+1 bytes are evaluated, where the extra +1 is the seed value).
Additionally, a seed value is appended to the end of the data when computing the CRC. The seed is generated for every new message set of the protocol, and it is hashed from each message's specification in a similar way to the packets. Systems using the MAVLink protocol can use a precomputed array for this purpose.
The CRC algorithm of MAVLink has been implemented in many languages, like Python and Java.
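As a sketch, the per-byte X.25 accumulation can be written in a few lines of Python. The table-less update below is the classic X.25 form (equivalent to CRC-16/MCRF4XX: reflected polynomial 0x1021, initial value 0xFFFF, no final inversion); the `mavlink_checksum` helper showing the appended seed byte is illustrative rather than a copy of any particular implementation:

```python
def x25_crc(data: bytes, crc: int = 0xFFFF) -> int:
    """Accumulate an X.25 CRC-16 over data (no final inversion)."""
    for byte in data:
        tmp = byte ^ (crc & 0xFF)
        tmp = (tmp ^ (tmp << 4)) & 0xFF
        crc = ((crc >> 8) ^ (tmp << 8) ^ (tmp << 3) ^ (tmp >> 4)) & 0xFFFF
    return crc

def mavlink_checksum(packet_body: bytes, seed_byte: int) -> int:
    """Illustrative helper: the message-specific seed byte is appended
    to the checked bytes before the final CRC is taken."""
    return x25_crc(packet_body + bytes([seed_byte]))
```

For the standard check string "123456789" this accumulation yields 0x6F91, the published check value for CRC-16/MCRF4XX.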
Messages
The payload of the packets described above consists of MAVLink messages. Every message is identifiable by the ID field of the packet, and the payload contains the data of the message. An XML document in the MAVLink source defines the data stored in this payload.
Below is the message with ID 24 extracted from the XML document.
<message id="24" name="GPS_RAW_INT">
<description>The global position, as returned by the Global Positioning System (GPS). This is NOT the global position estimate of
|
https://en.wikipedia.org/wiki/Edwin%20Smith%20Papyrus
|
The Edwin Smith Papyrus is an ancient Egyptian medical text, named after Edwin Smith who bought it in 1862, and the oldest known surgical treatise on trauma. From a cited quotation in another text, it may have been known to ancient surgeons as the "Secret Book of the Physician".
This document, which may have been a manual of military surgery, describes 48 cases of injuries, fractures, wounds, dislocations and tumors. It dates to Dynasties 16–17 of the Second Intermediate Period in ancient Egypt, c. 1600 BCE. The papyrus is unique among the four principal medical papyri that survive today. While other papyri, such as the Ebers Papyrus and the London Medical Papyrus, are medical texts based in magic, the Edwin Smith Papyrus presents a rational and scientific approach to medicine in ancient Egypt, one in which medicine and magic do not conflict. Magic would have been more prevalent had the cases of illness been mysterious, such as internal disease.
The Edwin Smith papyrus is a scroll 4.68 meters or 15.3 feet in length. The recto (front side) has 377 lines in 17 columns, while the verso (backside) has 92 lines in five columns. Aside from the fragmentary outer column of the scroll, the remainder of the papyrus is intact, although it was cut into one-column pages some time in the 20th century. It is written right-to-left in hieratic, the Egyptian cursive form of hieroglyphs, in black ink with explanatory glosses in red ink. The vast majority of the papyrus is concerned with trauma and surgery, with short sections on gynaecology and cosmetics on the verso. On the recto side, there are 48 cases of injury. Each case details the type of the injury, examination of the patient, diagnosis and prognosis, and treatment. The verso side consists of eight magic spells and five prescriptions. The spells of the verso side and two incidents in Case 8 and Case 9 are the exceptions to the practical nature of this medical text. Generic spells and incantations may have been used as a last
|
https://en.wikipedia.org/wiki/Cloud-based%20design%20and%20manufacturing
|
Cloud-based design and manufacturing (CBDM) refers to a service-oriented networked product development model in which service consumers are able to configure products or services and reconfigure manufacturing systems through Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), Hardware-as-a-Service (HaaS), and Software-as-a-Service (SaaS).
Adapted from the original cloud computing paradigm and introduced into the realm of computer-aided product development, Cloud-Based Design and Manufacturing is gaining significant momentum and attention from both academia and industry.
Cloud-based design and manufacturing comprises two aspects: cloud-based design and cloud-based manufacturing. A related, more general and more widely used concept is cloud manufacturing.
Cloud-Based Design (CBD) refers to a networked design model that leverages cloud computing, service-oriented architecture (SOA), Web 2.0 (e.g., social network sites), and semantic web technologies to support cloud-based engineering design services in distributed and collaborative environments.
Cloud-Based Manufacturing (CBM) refers to a networked manufacturing model that exploits on-demand access to a shared collection of diversified and distributed manufacturing resources to form temporary, reconfigurable production lines which enhance efficiency, reduce product lifecycle costs, and allow for optimal resource allocation in response to variable-demand customer generated tasking.
The enabling technologies for Cloud-Based Design and Manufacturing include cloud computing, Web 2.0, Internet of Things (IoT), and service-oriented architecture (SOA).
History
The term cloud-based design and manufacturing (CBDM) was initially coined by Dazhong Wu, David Rosen, and Dirk Schaefer at Georgia Tech in 2012 for the purpose of articulating a new paradigm for digital manufacturing and design innovation in distributed and collaborative settings. The main objective of CBDM is to further reduce time and cost assoc
|
https://en.wikipedia.org/wiki/UCL%20Faculty%20of%20Mathematical%20and%20Physical%20Sciences
|
The UCL Faculty of Mathematical and Physical Sciences is one of the 11 constituent faculties of University College London (UCL). The Faculty, the UCL Faculty of Engineering Sciences and the UCL Faculty of the Built Environment (The Bartlett) together form the UCL School of the Built Environment, Engineering and Mathematical and Physical Sciences.
Departments
The Faculty currently comprises the following departments:
UCL Department of Chemistry
UCL Department of Earth Sciences
UCL Department of Mathematics
Chalkdust is an online mathematics interest magazine that has been published by Department of Mathematics students since 2015
UCL Department of Natural Sciences
UCL Department of Physics & Astronomy
UCL Department of Science and Technology Studies
UCL Department of Space & Climate Physics (Mullard Space Science Laboratory)
UCL Department of Statistical Science
London Centre for Nanotechnology - a joint venture between UCL and Imperial College London established in 2003 following the award of a £13.65m higher education grant under the Science Research Infrastructure Fund.
Research centres and institutes
The Faculty is closely involved with the following research centres and institutes:
UCL Centre for Materials Research
UCL Centre for Mathematics and Physics in the Life Sciences and Experimental Biology (CoMPLEX) - an interdisciplinary virtual centre that seeks to bring together mathematicians, physical scientists, computer scientists and engineers to work on the problems posed by complexity in biology and biomedicine. The centre works with 29 departments and institutes across UCL. It has an MRes/PhD programme that requires its students also to belong to at least one of these departments or institutes. The centre is based in the Physics Building on the UCL main campus.
Centre for Planetary Science at UCL/Birkbeck
UCL Clinical Operational Research Unit (CORU) - CORU sits within the Department of Mathematics and is a team of researchers dedicated to applying operational research,
|
https://en.wikipedia.org/wiki/Loudness%20compensation
|
Loudness compensation, or simply loudness, is a setting found on some hi-fi equipment that increases the level of the high and low frequencies. This is intended to be used while listening at low volume levels, to compensate for the fact that as the loudness of audio decreases, the ear's lower sensitivity to extreme high and low frequencies may cause these signals to fall below the threshold of hearing. As a result, audio material may become thin-sounding at low volumes, losing bass and treble. The loudness compensation feature applies equalization and is intended to rectify this situation.
Calibration
Correct loudness compensation requires a calibrated system with known listening level. Audio level at a listener's ears depends on the listening environment, listener position, speaker sensitivity as well as amplifier gain. For loudness compensation to work correctly the playback system must also accurately assume what volume level was used in mastering. For movie soundtracks this reference volume level is an industry standard and can be used by manufacturers to provide a loudness feature that works with a reasonable degree of accuracy. A home theater product that provides a reference level indication on the volume control can be expected to work well with movie soundtracks.
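As a rough sketch of the idea (not any standard's curve), a compensation stage might map the gap between the listening level and the assumed mastering reference into a shelf boost for the frequency extremes. The `reference_db` and `slope` values below are invented for illustration:

```python
def loudness_boost_db(listening_db, reference_db=85.0, slope=0.4):
    """Hypothetical shelf boost (in dB) for low and high frequencies
    as a function of listening level.  At or above the assumed
    mastering reference no compensation is applied; below it, the
    boost grows with the deficit.  reference_db and slope are
    illustrative values, not taken from any standard."""
    deficit = max(0.0, reference_db - listening_db)
    return slope * deficit
```

With these invented values, listening 20 dB below the reference would yield an 8 dB boost, and listening at or above the reference yields none, which is the calibration dependence the paragraph above describes.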
|
https://en.wikipedia.org/wiki/Hemichrome
|
A hemichrome (FeIII) is a form of low-spin methemoglobin (metHb).
Hemichromes, which precede the denaturation of hemoglobin (Hb), are mainly produced by partially denatured hemoglobins and form histidine complexes.
Hemichromes are usually associated with blood disorders.
Types of hemichromes
Hemichromes can be classified in two main categories: reversible and irreversible.
Reversible hemichromes (Hch-1) are able to return to their native form (hemoglobin). Some hemichromes can be reduced to the high-spin state of deoxyhemoglobin, while others are first reduced to hemochromes (FeII) and then to deoxyhemoglobin through anaerobic dialysis. Photolysis of CO in the presence of oxygen, following its reaction with the hemochrome, can quickly convert a hemichrome to oxyhemoglobin (HbO2).
Irreversible hemichromes (Hch-2) cannot be converted to their native form.
Both reversible and irreversible hemichromes degrade at a similar rate during proteolysis, and both have a lower percentage of alpha helices.
Hemichrome in bloodstains
Upon blood exiting the body, the hemoglobin in it turns from bright red to dark brown, which is attributed to the oxidation of oxyhemoglobin (HbO2) to methemoglobin (met-Hb), ending up as hemichrome (HC). For forensic purposes, the fractions of HbO2, met-Hb and HC in a bloodstain, measured with reflectance spectroscopy, can be used to determine the age of the bloodstain.
Hemichrome stability
Hemichromes form an insoluble macromolecular aggregate by copolymerization with the cytoplasmic domain of band 3. Covalent bonds reinforce the aggregate interactions of the hemichromes, which accumulate on the surface of the membrane. However, hemichromes are less stable than their native form.
Normal formation
Hemoglobin A in humans can form hemichromes even under physiological conditions as a result of pH and temperature alterations, and the autoxidation of oxyhemoglobin. Hemichrome formation,
|
https://en.wikipedia.org/wiki/Semipredicate%20problem
|
In computer programming, a semipredicate problem occurs when a subroutine intended to return a useful value can fail, but the signalling of failure uses an otherwise valid return value. The problem is that the caller of the subroutine cannot tell what the result means in this case.
Example
The division operation yields a real number, but fails when the divisor is zero. If we were to write a function that performs division, we might choose to return 0 on this invalid input. However, if the dividend is 0, the result is 0 too. This means there is no number we can return to uniquely signal attempted division by zero, since all real numbers are in the range of division.
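The ambiguity, and the simplest escape from it, can be sketched in Python (the function names here are ours, not from any library):

```python
def bad_div(a, b):
    """Signals failure with 0, a value the function can also return
    legitimately: the semipredicate problem."""
    return 0.0 if b == 0 else a / b

def safe_div(a, b):
    """Signals failure with None, which lies outside the function's
    normal range, so success and failure stay distinguishable."""
    return None if b == 0 else a / b

# bad_div cannot distinguish a zero dividend from a zero divisor:
assert bad_div(0, 5) == bad_div(5, 0) == 0.0
# safe_div can:
assert safe_div(0, 5) == 0.0
assert safe_div(5, 0) is None
```

Returning a sentinel outside the normal range is one of the standard solutions discussed below; exceptions and option types are others.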
Practical implications
Early programmers handled potentially exceptional cases such as division using a convention requiring the calling routine to verify the inputs before calling the division function. This had two problems: first, it greatly encumbered all code that performed division (a very common operation); second, it violated the Don't repeat yourself and encapsulation principles, the former of which calls for eliminating duplicated code, and the latter for keeping data-associated code in one place (in this division example, the verification of input was done separately). For a computation more complicated than division, it could be difficult for the caller to recognize invalid input; in some cases, determining input validity may be as costly as performing the entire computation. The target function could also be modified and would then expect different preconditions than would the caller; such a modification would require changes in every place where the function was called.
Solutions
The semipredicate problem is not universal among functions that can fail.
Using a custom convention to interpret return values
If the range of a function does not cover the entire space corresponding to the data type of the function's return value, a value known to be impossible under norm
|
https://en.wikipedia.org/wiki/Digital%20multicast%20television%20network
|
A digital multicast television network, also known as a diginet or multichannel, is a type of national television service designed to be broadcast terrestrially as a supplementary service to other stations on their digital subchannels. Made possible by the conversion from analog to digital television broadcasting, which left room for additional services to be broadcast from an individual transmitter, regional and national broadcasters alike have introduced such channels since the 2000s. By March 2022, 54 such services existed in the United States.
Typically run on a lesser budget, national multicast services often rely on archive and imported content and are tailored to allow advertisers to reach specific demographics. Most of their revenue is derived from national advertising.
Digital multicast services by country
Australia
The first multichannel broadcast in Australia was ABC Kids, which broadcast from 2001 to 2003; in the succeeding years, the country's commercial broadcasters also launched secondary services to compete against DVDs and online piracy. However, their ability to do so was hampered at first by a ban on adding channels, with a focus on such services as datacasting and high-definition. It was not until 2009 that commercial broadcasters were allowed to add multichannels; in that year, the three major networks all did so, bringing the number of channels they offered from three to eleven.
The original commercial multichannels were generalist in nature, which made it difficult for advertisers to target specific demographics and therefore made them less lucrative. The shift to specifically targeted services and their reliance on existing programming has allowed these channels to survive despite drawing comparatively low shares of the audience: in 2018, 7mate led the group with an audience share of 4.1 percent among metropolitan audiences. However, after the Australian Communications and Media Authority permitted the commercial broadcasters to move requ
|
https://en.wikipedia.org/wiki/Integrated%20nested%20Laplace%20approximations
|
Integrated nested Laplace approximations (INLA) is a method for approximate Bayesian inference based on Laplace's method. It is designed for a class of models called latent Gaussian models (LGMs), for which it can be a fast and accurate alternative to Markov chain Monte Carlo (MCMC) methods for computing posterior marginal distributions. Due to its relative speed, even with large data sets, for certain problems and models, INLA has been a popular inference method in applied statistics, in particular spatial statistics, ecology, and epidemiology. It is also possible to combine INLA with a finite element method solution of a stochastic partial differential equation to study e.g. spatial point processes and species distribution models. The INLA method is implemented in the R-INLA R package.
Latent Gaussian models
Let y denote the response variable (that is, the observations), which belongs to an exponential family, with the mean of y_i being linked to a linear predictor η_i via an appropriate link function. The linear predictor can take the form of a (Bayesian) additive model. All latent effects (the linear predictor, the intercept, coefficients of possible covariates, and so on) are collectively denoted by the vector x. The hyperparameters of the model are denoted by θ. As per Bayesian statistics, x and θ are random variables with prior distributions.
The observations are assumed to be conditionally independent given x and θ:

π(y | x, θ) = ∏_{i ∈ I} π(y_i | η_i, θ),

where I is the set of indices for observed elements of y (some elements may be unobserved, and for these INLA computes a posterior predictive distribution). Note that the linear predictor η is part of x.
For the model to be a latent Gaussian model, it is assumed that x | θ is a Gaussian Markov random field (GMRF) (that is, a multivariate Gaussian with additional conditional independence properties) with probability density

π(x | θ) ∝ |Q(θ)|^{1/2} exp( −(1/2) xᵀ Q(θ) x ),

where Q(θ) is a θ-dependent sparse precision matrix and |Q(θ)| is its determinant. The precision matrix is sparse due to the GMRF assumption. The prior distribu
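As a toy illustration of the Laplace approximations that INLA nests (this is only the elementary building block, not the full nested scheme), one can fit a Gaussian N(mode, −1/f″) at the mode of a unimodal log-density. The Gamma "posterior" below is an invented example with a known mode and curvature:

```python
import math

def laplace_approx(log_post, x0, steps=50, h=1e-5):
    """Laplace's method for a 1-D unimodal log-density f: find the
    mode by Newton's method (numeric derivatives), then return the
    Gaussian approximation N(mode, -1/f''(mode))."""
    x = x0
    for _ in range(steps):
        g = (log_post(x + h) - log_post(x - h)) / (2 * h)                   # f'(x)
        H = (log_post(x + h) - 2 * log_post(x) + log_post(x - h)) / h**2    # f''(x)
        x -= g / H                                                          # Newton step
    H = (log_post(x + h) - 2 * log_post(x) + log_post(x - h)) / h**2
    return x, math.sqrt(-1.0 / H)   # mode and approximate standard deviation

# Invented example: unnormalised Gamma(shape=10, rate=2) log-density
alpha, beta = 10.0, 2.0
mode, sd = laplace_approx(lambda t: (alpha - 1) * math.log(t) - beta * t, x0=3.0)
# Exact values for comparison: mode = (alpha - 1) / beta = 4.5,
# curvature gives sd = sqrt(alpha - 1) / beta = 1.5
```

INLA applies this idea to the high-dimensional GMRF x, exploiting the sparsity of Q(θ), and nests such approximations to obtain posterior marginals.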
|
https://en.wikipedia.org/wiki/Suberin
|
Suberin, cutin and lignins are complex, higher plant epidermis and periderm cell-wall macromolecules, forming a protective barrier. Suberin, a complex polyester biopolymer, is lipophilic, and composed of long chain fatty acids called suberin acids, and glycerol. Suberins and lignins are considered covalently linked to lipids and carbohydrates, respectively, and lignin is covalently linked to suberin, and to a lesser extent, to cutin. Suberin is a major constituent of cork, and is named after the cork oak, Quercus suber. Its main function is as a barrier to movement of water and solutes.
Anatomy and physiology
Suberin is highly hydrophobic and a somewhat 'rubbery' material. In roots, suberin is deposited in the radial and transverse/tangential cell walls of the endodermal cells. This structure, known as the Casparian strip or Casparian band, functions to prevent water and nutrients taken up by the root from entering the stele through the apoplast. Instead, water must bypass the endodermis via the symplast. This allows the plant to select the solutes that pass further into the plant. It thus forms an important barrier to harmful solutes. For example, mangroves use suberin to minimize salt intake from their littoral habitat.
Suberin is found in the phellem layer of the periderm (or cork). This is the outermost layer of the bark. The cells in this layer are dead and abundant in suberin, preventing water loss from the tissues below. Suberin can also be found in various other plant structures. For example, it is present in the lenticels on the stems of many plants, and the net structure in the rind of a netted melon is composed of suberised cells.
Structure and biosynthesis
Suberin consists of two domains, a polyaromatic and a polyaliphatic domain. The polyaromatics are predominantly located within the primary cell wall, and the polyaliphatics are located between the primary cell wall and the cell membrane. The two domains are thought to be cross-linked. The exact quali
|
https://en.wikipedia.org/wiki/Distance%20sampling
|
Distance sampling is a widely used group of closely related methods for estimating the density and/or abundance of populations. The main methods are based on line transects or point transects. In this method of sampling, the data collected are the distances of the objects being surveyed from these randomly placed lines or points, and the objective is to estimate the average density of the objects within a region.
Basic line transect methodology
A common approach to distance sampling is the use of line transects. The observer traverses a straight line (placed randomly or following some planned distribution). Whenever they observe an object of interest (e.g., an animal of the type being surveyed), they record the distance from their current position to the object (r), as well as the angle between the line of detection and the transect line (θ). The perpendicular distance of the object from the transect can then be calculated as x = r * sin(θ). These distances x are the detection distances that will be analyzed in further modeling.
Objects are detected out to a pre-determined maximum detection distance w. Not all objects within w will be detected, but a fundamental assumption is that all objects at zero distance (i.e., on the line itself) are detected. Overall detection probability is thus expected to be 1 on the line, and to decrease with increasing distance from the line. The distribution of the observed distances is used to estimate a "detection function" that describes the probability of detecting an object at a given distance. Given that various basic assumptions hold, this function allows the estimation of the average probability P of detecting an object given that it is within width w of the line. Object density can then be estimated as D = n / (Pa), where n is the number of objects detected and a is the size of the region covered (the total length of the transect, L, multiplied by 2w).
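The calculation can be sketched as follows. The half-normal detection function and all parameter values are illustrative; in a real analysis the scale parameter σ would be fitted to the observed distances rather than assumed known:

```python
import math

def perpendicular_distance(r, theta_deg):
    """x = r * sin(theta): perpendicular distance of a detected object
    from the transect line, from sighting distance r and angle theta."""
    return r * math.sin(math.radians(theta_deg))

def half_normal_avg_p(sigma, w):
    """Average detection probability P over [0, w] for a half-normal
    detection function g(x) = exp(-x^2 / (2 sigma^2)), using the
    closed form of its integral (via the error function)."""
    return sigma * math.sqrt(math.pi / 2) * math.erf(w / (sigma * math.sqrt(2))) / w

def density_estimate(n, L, w, p_hat):
    """D = n / (P * a), with covered area a = 2 * w * L."""
    return n / (p_hat * 2 * w * L)

x = perpendicular_distance(r=10.0, theta_deg=30.0)  # perpendicular distance 5.0
p = half_normal_avg_p(sigma=10.0, w=20.0)           # average detectability, about 0.60
d = density_estimate(n=120, L=1000.0, w=20.0, p_hat=p)
```

Dividing the raw count n by the average detectability P is exactly the correction for objects missed at larger distances that the paragraph above describes.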
In summary, modeling how detectability drops off with increasing distance from the transect allows estimating how man
|
https://en.wikipedia.org/wiki/National%20Synchrotron%20Radiation%20Research%20Center
|
The National Synchrotron Radiation Research Center (NSRRC) is a synchrotron radiation facility at the Hsinchu Science Park in East District, Hsinchu City, Taiwan, operating as an agency under the Ministry of Science and Technology of the Republic of China.
It houses the Taiwan Light Source (TLS) and Taiwan Photon Source (TPS). Additionally, the NSRRC also operates two beamlines at SPring-8 in Japan and the Sika neutron scattering instrument at the OPAL research reactor in Australia.
Instruments
Taiwan Light Source
The TLS is Taiwan's first synchrotron; it opened in 1993 as a third-generation synchrotron with a beam energy of 1.5 GeV. The storage ring has a circumference of 120 m. There are twenty-six operational beamlines. They cover a wide range of functionality, from IR microscopy to X-ray lithography.
Taiwan Photon Source
The TPS is a 3-GeV third-generation synchrotron light source, built at a cost of approximately NT$7 billion (US$224 million). Construction began under a seven-year plan launched in 2007, and the TPS delivered first light on December 31, 2014. Projected to be 10,000 times brighter than the TLS, the TPS is considered one of the world's brightest light sources. It has a storage ring circumference of 518.4 m. The facility was expected to have 48 experimental stations fully operational by 2016. The synchrotron is intended to benefit biomedical and nanotechnology research. The TPS is located adjacent to the TLS, and the two light sources are intended to be complementary, providing a wide range of the photon spectrum, from IR to x-rays greater than 10 keV, for researchers' needs.
Organizational structure
Light Source Division
Instrumentation Development Division
Experimental Facility Division
Scientific Research Division
Administration Division
Radiation and Operation Safety Division
|
https://en.wikipedia.org/wiki/59th%20meridian%20west
|
The meridian 59° west of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, North America, the Atlantic Ocean, South America, the Southern Ocean, and Antarctica to the South Pole.
The 59th meridian west forms a great circle with the 121st meridian east.
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 59th meridian west passes through:
{| class="wikitable plainrowheaders"
! scope="col" width="120" | Co-ordinates
! scope="col" | Country, territory or sea
! scope="col" | Notes
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arctic Ocean
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Lincoln Sea
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
|Nyeboe Land
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Newman Bugt
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
|Hall Land
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Baffin Bay
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Davis Strait
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Atlantic Ocean
| style="background:#b0e0e6;" | Labrador Sea
|-valign="top"
|
! scope="row" |
| Newfoundland and Labrador — Labrador Quebec — from
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Gulf of Saint Lawrence
| style="background:#b0e0e6;" |
|-valign="top"
|
! scope="row" |
| Newfoundland and Labrador — Port au Port Peninsula on the island of Newfoundland
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Bay St. George
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Newfoundland and Labrador — island of Newfoundland
|-
| style="background:#b0e0e6;" |
! scope="row" style="backgro
|
https://en.wikipedia.org/wiki/Filociclovir
|
Filociclovir (cyclopropavir, MBX-400) is an antiviral drug which was developed for the treatment of cytomegalovirus infection and also shows some activity against other double-stranded DNA viruses. It has reached Phase II human clinical trials.
|
https://en.wikipedia.org/wiki/Swept-plane%20display
|
Swept-plane display is a structure from motion technique with which one can create the optical illusion of a volume of light, due to the persistence of vision property of human visual perception.
The principle is to have a 2D lighted surface sweep in a circle, creating a volume. The image on the 2D surface changes as the surface rotates. The lighted surface needs to be translucent.
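As a rough back-of-envelope sketch of what the sweep implies (all figures hypothetical), the number of addressable volume points and the rate at which the 2D surface must change its image can be computed as:

```python
# Hypothetical swept-plane display: a W x H panel rotated about its axis,
# showing a different image at each of N angular positions per revolution.
W, H = 256, 192            # pixel resolution of the 2D lighted surface
N = 360                    # distinct angular slices per revolution
REV_PER_SECOND = 24        # assumed sweep rate for persistence of vision

voxels_per_volume = W * H * N            # addressable points in the swept volume
images_per_second = N * REV_PER_SECOND   # 2D images the panel must show per second
print(voxels_per_volume, images_per_second)
```

The image rate, not the voxel count, is usually the limiting factor: the panel must redraw thousands of times per second for the volume to appear continuous.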
Perception
Optical illusions
Psychophysics
|
https://en.wikipedia.org/wiki/Matrix%20calculus
|
In mathematics, matrix calculus is a specialized notation for doing multivariable calculus, especially over spaces of matrices. It collects the various partial derivatives of a single function with respect to many variables, and/or of a multivariate function with respect to a single variable, into vectors and matrices that can be treated as single entities. This greatly simplifies operations such as finding the maximum or minimum of a multivariate function and solving systems of differential equations. The notation used here is commonly used in statistics and engineering, while the tensor index notation is preferred in physics.
Two competing notational conventions split the field of matrix calculus into two separate groups. The two groups can be distinguished by whether they write the derivative of a scalar with respect to a vector as a column vector or a row vector. Both of these conventions are possible even when the common assumption is made that vectors should be treated as column vectors when combined with matrices (rather than row vectors). A single convention can be somewhat standard throughout a single field that commonly uses matrix calculus (e.g. econometrics, statistics, estimation theory and machine learning). However, even within a given field different authors can be found using competing conventions. Authors of both groups often write as though their specific conventions were standard. Serious mistakes can result when combining results from different authors without carefully verifying that compatible notations have been used. Definitions of these two conventions and comparisons between them are collected in the layout conventions section.
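A minimal sketch of the two layouts for the simplest case, the derivative of a scalar y = a · x with respect to a vector x (the values are arbitrary):

```python
# Scalar-valued function y(x) = a . x of a 3-vector x; the partial derivatives
# dy/dx_i are simply the components a_i.
a = [1.0, 2.0, 3.0]
x = [0.5, -1.0, 2.0]
y = sum(ai * xi for ai, xi in zip(a, x))

grad = list(a)  # the collected partials dy/dx_i

# Numerator layout writes the derivative of a scalar by an n-vector as a
# 1 x n row vector; denominator layout writes it as an n x 1 column vector.
row_layout = [grad]               # shape (1, 3)
col_layout = [[g] for g in grad]  # shape (3, 1)
```

The two layouts hold identical numbers; the danger described above arises when results stated in one layout are chained with results stated in the other without transposing.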
Scope
Matrix calculus refers to a number of different notations that use matrices and vectors to collect the derivative of each component of the dependent variable with respect to each component of the independent variable. In general, the independent variable can be a scalar, a vector, or a matrix while the d
|
https://en.wikipedia.org/wiki/International%20Symposium%20on%20Symbolic%20and%20Algebraic%20Computation
|
ISSAC, the International Symposium on Symbolic and Algebraic Computation, is an academic conference in the field of computer algebra. ISSAC has been organized annually since 1988, typically in July. The conference is regularly sponsored by the Association for Computing Machinery special interest group SIGSAM, and the proceedings since 1989 have been published by ACM. ISSAC is considered one of the most influential conferences for the publication of scientific computing research.
History
The first ISSAC took place in Rome on 4–8 July 1988. It succeeded a series of meetings held between 1966 and 1987 under the names SYMSAM, SYMSAC, EUROCAL, EUROSAM and EUROCAM.
ISSAC Awards
The Richard D. Jenks Memorial Prize for excellence in software engineering applied to computer algebra has been awarded at ISSAC every other year since 2004.
The ISSAC Distinguished Paper Award has been awarded since 2002 to authors who display excellence in areas that include, but are not limited to, algebraic computation, symbolic-numeric computation, and system design and implementation.
The ISSAC Distinguished Student Author Award has been awarded since 2004 to authors who were students at the time their paper was submitted.
Conference topics
Typical topics include:
exact linear algebra;
polynomial system solving;
symbolic summation;
symbolic integration and computational differential algebra;
computational group theory;
symbolic-numeric algorithms;
the design and implementation of computer algebra systems;
applications of computer algebra.
See also
Journal of Symbolic Computation
|
https://en.wikipedia.org/wiki/Shepherd%20Building%20Group
|
Shepherd Building Group Ltd is a family owned business, based in York, that manufactures, leases and sells modular buildings in the UK and Europe. Its Portakabin and Portaloo brands are frequently treated as generic terms for modular buildings and toilets.
The company was one of the largest privately owned building contractors in the UK, but sold that business to Wates Group in 2015.
History
In 1890, 35-year-old joiner Frederick Shepherd started the business in York. His younger son Frederick Welton Shepherd joined and expanded the firm, known from 1910 as F Shepherd and Son. They diversified from house building to general contracting, and incorporated in 1924 as F Shepherd and Son Ltd. By the late 1930s there was a workforce of 700 that operated throughout Yorkshire and the North East of England.
Main contractor
The firm undertook extensive work at military sites up to, and during, the Second World War. Post-war contracts were predominantly public sector, often incorporating prefabricated concrete panel systems developed by CLASP, Wates, and Yorkshire Development Group.
In 1962, F Shepherd and Son Ltd reorganised under Shepherd Group Ltd, and land was purchased at Huntington. Initially it was used for manufacturing, but in 1995 the headquarters moved there from Blue Bridge Lane, near the River Ouse. The old headquarters site became a Mecca Bingo hall before that too was demolished to make way for the construction of Frederick House student accommodation in 2022.
By 1968, the group employed 6,788 staff. That fell to 3,587 in 1971, and by 2009 there were 3,200.
Shepherd Group was a main contractor for leisure, commercial, industrial, residential, healthcare, education, retail, and research buildings. The construction division also included companies engaged in mechanical and electrical services, facility management and housebuilding.
Manufactured products
In 1951 Donald Shepherd, grandson of the founder, developed a bulk cement silo, for use i
|
https://en.wikipedia.org/wiki/Reliability%2C%20availability%20and%20serviceability
|
Reliability, availability and serviceability (RAS), also known as reliability, availability, and maintainability (RAM), is a computer hardware engineering term involving reliability engineering, high availability, and serviceability design. The phrase was originally used by International Business Machines (IBM) as a term to describe the robustness of their mainframe computers.
Computers designed with higher levels of RAS have many features that protect data integrity and help them stay available for long periods of time without failure. This data integrity and uptime are particular selling points for mainframes and fault-tolerant systems.
Definitions
While RAS originated as a hardware-oriented term, systems thinking has extended the concept of reliability-availability-serviceability to systems in general, including software.
Reliability can be defined as the probability that a system will produce correct outputs up to some given time t. Reliability is enhanced by features that help to avoid, detect and repair hardware faults. A reliable system does not silently continue and deliver results that include uncorrected corrupted data. Instead, it detects and, if possible, corrects the corruption, for example: by retrying an operation for transient (soft) or intermittent errors, or else, for uncorrectable errors, isolating the fault and reporting it to higher-level recovery mechanisms (which may failover to redundant replacement hardware, etc.), or else by halting the affected program or the entire system and reporting the corruption. Reliability can be characterized in terms of mean time between failures (MTBF), with reliability = exp(-t/MTBF).
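A small sketch of this characterization, using an assumed MTBF figure:

```python
import math

MTBF = 100_000.0  # assumed mean time between failures, in hours
t = 8_760.0       # mission time: one year of continuous operation, in hours

# Probability of producing correct outputs up to time t,
# per reliability = exp(-t / MTBF)
reliability = math.exp(-t / MTBF)
print(f"P(no failure within one year) = {reliability:.4f}")
```

Even a six-figure MTBF leaves a several-percent chance of at least one failure over a year of continuous operation, which is why detection and recovery features matter alongside raw MTBF.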
Availability means the probability that a system is operational at a given time, i.e. the amount of time a device is actually operating as the percentage of total time it should be operating. High-availability systems may report availability in terms of minutes or hours of downtime per year. Availability features allow the
|
https://en.wikipedia.org/wiki/Lung-on-a-chip
|
The lung-on-a-chip is a complex, three-dimensional model of a living, breathing human lung on a microchip. The device is made using human lung and blood vessel cells and it can predict absorption of airborne nanoparticles and mimic the inflammatory response triggered by microbial pathogens. It can be used to test the effects of environmental toxins, absorption of aerosolized therapeutics, and the safety and efficacy of new drugs. It is expected to become an alternative to animal testing.
The lung-on-a-chip places two layers of living tissues—the lining of the lung's air sacs and the blood vessels that surround them—across a porous, flexible boundary. Air is delivered to the lung lining cells, a rich culture medium flows in the capillary channel to mimic blood, and cyclic mechanical stretching is generated by a vacuum applied to the chambers adjacent to the cell culture channels to mimic breathing.
The research findings for lung-on-a-chip were published in the June 25, 2010, issue of Science, the academic journal of the American Association for the Advancement of Science. The research was funded by the National Institutes of Health, the American Heart Association, and the Wyss Institute for Biologically Inspired Engineering at Harvard University.
Inventors
The technology was developed by Donald E. Ingber, M.D., Ph.D., an American cell biologist who is the Founding Director of the Wyss Institute for Biologically Inspired Engineering at Harvard University, and Dan Dongeun Huh, Ph.D., who was a Technology Development Fellow at the Wyss Institute and is now Wilf Family Term Chair Assistant Professor in Bioengineering at the University of Pennsylvania. The device was created using a microfabrication strategy known as soft lithography that was pioneered by George M. Whitesides, an American chemist, who is a professor of chemistry at Harvard, as well as a Wyss Institute core faculty member.
Testing
The response of the lung-on-a-chip to inhaled living pathogens was test
|
https://en.wikipedia.org/wiki/Genital%20trauma
|
Genital trauma is trauma to the genitalia.
History of studying genital trauma
Doctors and nurses have been conducting sexual assault examinations and collecting evidence for victims of assault for 20 years, but the amount of scientific data collected on genital injuries following sexual assault is still minimal. There is therefore no available evidence showing specific patterns of injury resulting from sexual assault. The motivation for investigating and collecting data on genital injuries has primarily been within the context of the legal system, such as proving or disproving sexual assault, rather than for medical purposes. The studies done in the past 25 years in relation to sexual assault cases in the judicial system have laid the groundwork for interpreting sexual assault injuries. It is important for there to be research on genital injuries more broadly relating to sexual activity (and not just sexual assault) to improve medical knowledge on the subject. Methods of studying and documenting genital injury have greatly improved through the use of tissue staining dyes and colposcopy. The first studies that used these newer methods were retrospective chart reviews done in a hospital by a doctor or nurse. These studies used several different methods to identify and document injuries, such as direct visualization, colposcopy, and/or tissue staining dyes. Earlier studies used only direct visualization for their data.
Vaginal trauma from consensual and non-consensual intercourse
Vaginal trauma is possible during and after consensual and non-consensual intercourse so it is difficult to determine the circumstances in which the trauma occurs only based on a physical examination. It can be difficult to differentiate between injuries from consensual sex and injuries from sexual assault in adolescents. Women are three times more likely to have vaginal injuries and intercourse-related injuries from a forced assault than from a consensual sexual experience. Vag
|
https://en.wikipedia.org/wiki/Hussein%20bin%20Ali%2C%20King%20of%20Hejaz
|
Hussein bin Ali al-Hashimi (1 May 1854 – 4 June 1931) was an Arab leader from the Banu Qatadah branch of the Banu Hashim clan who was the Sharif and Emir of Mecca from 1908 and, after proclaiming the Great Arab Revolt against the Ottoman Empire, King of the Hejaz (although he refused that title) from 1916 to 1924. He proclaimed himself Caliph after the abolition of the Ottoman Caliphate in 1924 and stayed in power until 1925, when the Hejaz was invaded by the Saudis. He is usually considered the father of modern pan-Arabism.
In 1908, in the aftermath of the Young Turk Revolution, Hussein was appointed Sharif of Mecca by the Ottoman sultan Abdul Hamid II. In 1916, with the promise of British support for Arab independence, he proclaimed the Arab Revolt against the Ottoman Empire, accusing the Committee of Union and Progress of violating tenets of Islam and limiting the power of the sultan-caliph. In the aftermath of World War I, Hussein refused to ratify the Treaty of Versailles, in protest of the Balfour Declaration and the establishment of British and French mandates in Syria, Iraq, and Palestine. He later refused to sign the Anglo-Hashemite Treaty and thus deprived himself of British support when his kingdom was attacked by Ibn Saud.
In March 1924, when the Ottoman Caliphate was abolished, Hussein proclaimed himself "Caliph of all Muslims". His sons Faisal and Abdullah were made rulers of Iraq and Transjordan respectively in 1921. In October 1924, facing defeat by Ibn Saud, he abdicated and was succeeded as king by his eldest son Ali. After the Kingdom of Hejaz was invaded by the Al Saud-Wahhabi armies of the Ikhwan, on 23 December 1925 King Hussein bin Ali surrendered to the Saudis, bringing the Kingdom of Hejaz, the Sharifate of Mecca and the Sharifian Caliphate to an end. His Caliphate was opposed by the British Empire, the Zionists and the Wahhabis alike. However, he received support from a large part of the Muslim population of that time and from Mehmed VI.
Hus
|
https://en.wikipedia.org/wiki/Plantronics%20Colorplus
|
The Plantronics Colorplus is a graphics card for IBM PC computers, first sold in 1982. It is a superset of the then-current CGA standard, using the same monitor standard (4-bit digital TTL RGBI monitor) and providing the same pixel resolutions. It was produced by Frederick Electronics (of Frederick, Maryland), a subsidiary of Plantronics since 1968, and sold by Plantronics' Enhanced Graphics Products division.
The Colorplus has twice the memory of a standard CGA board (32k, compared to 16k). The additional memory can be used in graphics modes to double the color depth, giving two additional graphics modes—16 colors at resolution, or 4 colors at resolution.
It uses the same Motorola MC6845 display controller as the previous MDA and CGA adapters.
The original card also includes a parallel printer port.
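The memory arithmetic behind the doubled color depth can be checked directly; the sketch below assumes the standard CGA graphics resolutions of 320×200 and 640×200:

```python
def mode_bytes(width, height, bits_per_pixel):
    """Frame buffer size in bytes for a packed-pixel graphics mode."""
    return width * height * bits_per_pixel // 8

CGA_RAM = 16 * 1024
COLORPLUS_RAM = 32 * 1024

# Standard CGA graphics modes fit in 16k:
assert mode_bytes(320, 200, 2) <= CGA_RAM        # 320x200, 4 colors (2 bpp)
assert mode_bytes(640, 200, 1) <= CGA_RAM        # 640x200, 2 colors (1 bpp)

# Doubling the RAM allows double the bits per pixel at the same resolutions:
assert mode_bytes(320, 200, 4) <= COLORPLUS_RAM  # 320x200, 16 colors (4 bpp)
assert mode_bytes(640, 200, 2) <= COLORPLUS_RAM  # 640x200, 4 colors (2 bpp)
```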
Output capabilities
CGA compatible modes:
16 color mode (actually a text mode using , ▌, ▐ and █)
in 4 colors from a 16 color hardware palette. Pixel aspect ratio of 1:1.2.
in 2 colors. Pixel aspect ratio of 1:2.4
with pixel font text mode (effective resolution of )
with pixel font text mode (effective resolution of )
In addition to the CGA modes, it offers:
with 16 colors
with 4 colors
"New high-resolution" text font, selectable by hardware jumper
The "new" font was actually the unused "thin" font already present in the IBM CGA ROMs, with 1-pixel wide vertical strokes. This offered greater clarity on RGB monitors, versus the default "thick" / 2-pixel font more suitable for output to composite monitors and over RF to televisions but, contrary to Plantronics' advertising claims, was drawn at the same pixel resolution.
Software support
Little software made use of the enhanced Plantronics modes, for which there was no BIOS support.
A 1984 advertisement listed the following software as compatible:
Color-It
UCSD P-system
Peachtree Graphics Language
Business Graphics System
Graph Power
The Draftsman
Videogram
Stock View
GSX
CompuShow ( mode)
Some contempora
|
https://en.wikipedia.org/wiki/Toroidal%20moment
|
In electromagnetism, a toroidal moment is an independent term in the multipole expansion of electromagnetic fields besides magnetic and electric multipoles. In the electrostatic multipole expansion, all charge and current distributions can be expanded into a complete set of electric and magnetic multipole coefficients. However, additional terms arise in an electrodynamic multipole expansion. The coefficients of these terms are given by the toroidal multipole moments as well as time derivatives of the electric and magnetic multipole moments. While electric dipoles can be understood as separated charges and magnetic dipoles as circular currents, axial (or electric) toroidal dipoles describe toroidal (donut-shaped) charge arrangements, whereas polar (or magnetic) toroidal dipoles (also called anapoles) correspond to the field of a solenoid bent into a torus.
Classical toroidal dipole moment
A complex expression allows the current density J to be written as a sum of electric, magnetic, and toroidal moments using Cartesian or spherical differential operators. The lowest order toroidal term is the toroidal dipole. Its magnitude along direction i is given by
Since this term arises only in an expansion of the current density to second order, it generally vanishes in a long-wavelength approximation.
However, a recent study concludes that the toroidal multipole moments are not a separate multipole family, but rather higher-order terms of the electric multipole moments.
Quantum toroidal dipole moment
In 1957, Yakov Zel'dovich found that because the weak interaction violates parity symmetry, a spin-1/2 Dirac particle must have a toroidal dipole moment, also known as an anapole moment, in addition to the usual electric and magnetic dipoles. The interaction of this term is most easily understood in the non-relativistic limit, where the Hamiltonian is
where , , and are the electric, magnetic, and anapole moments, respectively, and is the vector of Pauli matri
|
https://en.wikipedia.org/wiki/BAITSSS
|
BAITSSS (Backward-Averaged Iterative Two-Source Surface temperature and energy balance Solution) is a biophysical evapotranspiration (ET) computer model that determines water use, primarily in agricultural landscapes, using remote sensing-based information. It has been developed and refined by Ramesh Dhungel and the water resources group at the University of Idaho's Kimberly Research and Extension Center since 2010. It has been used in different areas in the United States, including Southern Idaho, Northern California, northwest Kansas, Texas, and Arizona.
History of development
BAITSSS originated from the research of Ramesh Dhungel, a graduate student at the University of Idaho, who joined a project called "Producing and integrating time series of gridded evapotranspiration for irrigation management, hydrology and remote sensing applications" under professor Richard G. Allen.
In 2012, the initial version of the landscape model was developed in the Python IDLE environment using NARR weather data (~32 kilometers). Dhungel submitted his PhD dissertation in 2014, where the model was called BATANS (backward averaged two source accelerated numerical solution). The model was first published in the journal Meteorological Applications in 2016 under the name BAITSSS, as a framework to interpolate ET between satellite overpasses when thermal-based surface temperature is unavailable. The overall concept of backward averaging was introduced to expedite the convergence of the iteratively solved surface energy balance components, which can be time-consuming and can frequently suffer non-convergence, especially at low wind speed.
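The backward-averaging idea can be illustrated as damped fixed-point iteration; the function f below is purely a made-up stand-in and not part of BAITSSS, which solves the full surface energy balance:

```python
import math

def f(T):
    """Toy update returning a new surface-temperature estimate from the
    previous one (illustrative only, not the BAITSSS energy balance)."""
    return 300.0 + 10.0 * math.tanh((305.0 - T) / 5.0)

def plain_iteration(T, steps):
    # Direct fixed-point iteration: oscillates when the update is too steep
    for _ in range(steps):
        T = f(T)
    return T

def backward_averaged(T, steps):
    # Backward averaging: blend the new estimate with the previous iterate,
    # damping the oscillation and restoring convergence
    for _ in range(steps):
        T = 0.5 * (T + f(T))
    return T

T_plain = plain_iteration(290.0, 200)
T_avg = backward_averaged(290.0, 200)
print(abs(T_plain - f(T_plain)) > 1.0)   # True: plain iteration stuck oscillating
print(abs(T_avg - f(T_avg)) < 1e-6)      # True: averaged iteration converged
```

Averaging the new estimate with the previous iterate halves the effective slope of the update, which is why it tames the non-convergence the text describes.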
In 2017, the landscape BAITSSS model was scripted in Python shell, together with GDAL and NumPy libraries using NLDAS weather data (~ 12.5 kilometers). The detailed independent model was evaluated against weighing lysimeter measured ET, infrared temperature (IRT) and net radiometer of drought‐tolerant corn and sorghum at Conservation and Production Research Labor
|
https://en.wikipedia.org/wiki/Cholecystokinin%20A%20receptor
|
The Cholecystokinin A receptor is a human protein, also known as CCKAR or CCK1, with CCK1 now being the IUPHAR-recommended name.
Function
This gene encodes a G-protein coupled receptor that binds sulfated members of the cholecystokinin (CCK) family of peptide hormones. This receptor is a major physiologic mediator of pancreatic enzyme secretion and smooth muscle contraction of the gallbladder and stomach. In the central and peripheral nervous system this receptor regulates satiety and the release of beta-endorphin and dopamine.
The extracellular, N-terminal, domain of this protein adopts a tertiary structure consisting of a few helical turns and a disulfide-cross linked loop. It is required for interaction of the cholecystokinin A receptor with its corresponding hormonal ligand.
Selective ligands
Agonists
Cholecystokinin
CCK-4
SR-146,131
A-71623 - modified tetrapeptide, potent and selective CCKA agonist, IC50 3.7nM, 1200x selectivity over CCKB, CAS# 130408-77-4
Antagonists
Proglumide
Lorglumide
Devazepide
Dexloxiglumide
Asperlicin
SR-27897
IQM-95333
JNJ-17156516
See also
Cholecystokinin receptor
Cholecystokinin antagonist
|
https://en.wikipedia.org/wiki/Electronic%20oscillation
|
Electronic oscillation is a repeating cyclical variation in voltage or current in an electrical circuit, resulting in a periodic waveform. The frequency of the oscillation in hertz is the number of times the cycle repeats per second.
The recurrence may be in the form of a varying voltage or a varying current. The waveform may be sinusoidal or some other shape when its magnitude is plotted against time. Electronic oscillation may be intentionally caused, as in devices designed as oscillators, or it may be the result of unintentional positive feedback from the output of an electronic device to its input. The latter appears often in feedback amplifiers (such as operational amplifiers) that do not have sufficient gain or phase margins. In this case, the oscillation often interferes with or compromises the amplifier's intended function, and is known as parasitic oscillation.
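Since frequency in hertz is the number of cycles per second, it can be recovered from a sampled waveform by counting zero crossings; a minimal sketch with a synthetic 50 Hz sinusoid:

```python
import math

F0 = 50.0      # oscillation frequency in hertz
FS = 10_000    # samples per second

# One second of a sinusoidal oscillation (a small phase offset keeps the
# samples off the exact zeros of the waveform)
samples = [math.sin(2 * math.pi * F0 * k / FS + 0.1) for k in range(FS)]

# Frequency in hertz = cycles per second: count upward zero crossings in 1 s
cycles = sum(1 for i in range(len(samples) - 1)
             if samples[i] < 0 <= samples[i + 1])
print(cycles)  # 50
```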
|
https://en.wikipedia.org/wiki/Hessian%20%28Web%20service%20protocol%29
|
Hessian is a binary Web service protocol that makes Web services usable without requiring a large framework, and without learning a new set of protocols. Because it is a binary protocol, it is well-suited to sending binary data without any need to extend the protocol with attachments.
Hessian was developed by Caucho Technology, Inc. The company has released Java, Python and ActionScript for Adobe Flash implementations of Hessian under an open source license (the Apache license). Third-party implementations in several other languages (C++, C#, JavaScript, Perl, PHP, Ruby, Objective-C, D, and Erlang) are also available as open-source.
Adaptations
Although Hessian is primarily intended for Web services, it can be adapted for TCP traffic by using the HessianInput and HessianOutput classes in Caucho's Java implementation.
Implementations
Cotton (Erlang)
HessDroid (Android)
Hessian (on Rubyforge) (Ruby)
Hessian.js (JavaScript)
Hessian4J (Java)
HessianC# (C#)
HessianCPP (C++)
HessianD (D)
HessianKit (Objective-C 2.0)
HessianObjC (Objective-C)
HessianPHP (PHP)
HessianPy (Python)
HessianRuby (Ruby)
Hessian-Translator (Perl)
See also
Abstract Syntax Notation One
SDXF
Apache Thrift
Etch (protocol)
Protocol Buffers
Internet Communications Engine
|
https://en.wikipedia.org/wiki/Kuratowski%27s%20closure-complement%20problem
|
In point-set topology, Kuratowski's closure-complement problem asks for the largest number of distinct sets obtainable by repeatedly applying the set operations of closure and complement to a given starting subset of a topological space. The answer is 14. This result was first published by Kazimierz Kuratowski in 1922. It gained additional exposure in Kuratowski's fundamental monograph Topologie (first published in French in 1933; the first English translation appeared in 1966) before achieving fame as a textbook exercise in John L. Kelley's 1955 classic, General Topology.
Proof
Letting A denote an arbitrary subset of a topological space, write kA for the closure of A and cA for the complement of A. The following three identities imply that no more than 14 distinct sets are obtainable:
(1) kkA = kA. (The closure operation is idempotent.)
(2) ccA = A. (The complement operation is an involution.)
(3) kckckckA = kckA. (Or equivalently kckckckcA = kckcA, using identity (2).)
The first two are trivial. The third follows from the identity kikiA = kiA, where iA = ckcA is the interior of A, equal to the complement of the closure of the complement of A. (The operation ki is idempotent.) Substituting cA for A in kikiA = kiA and applying identity (2) yields identity (3).
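The bound of 14 can also be checked mechanically: treating closure (k) and complement (c) as letters and the identities as rewrite rules, the monoid of operations they generate has exactly 14 elements. A small Python sketch:

```python
from itertools import product

# Rewrite rules encoding the identities: kk = k, cc = (identity),
# and kckckck = kck.
RULES = [("kk", "k"), ("cc", ""), ("kckckck", "kck")]

def normal_form(word):
    """Repeatedly apply the rewrite rules until the word is irreducible."""
    changed = True
    while changed:
        changed = False
        for pattern, replacement in RULES:
            if pattern in word:
                word = word.replace(pattern, replacement, 1)
                changed = True
    return word

# Enumerate every composition of closure (k) and complement (c) up to
# length 8; longer words always reduce, so this exhausts the monoid.
forms = {normal_form("".join(w)) for n in range(9) for w in product("kc", repeat=n)}
print(len(forms))  # 14 distinct operations, hence at most 14 distinct sets
```

Each rule strictly shortens the word, so the reduction always terminates; the 14 surviving normal forms are the identity plus the 13 nontrivial alternating words.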
A subset realizing the maximum of 14 is called a 14-set. The space of real numbers under the usual topology contains 14-sets. Here is one example:
A = (0, 1) ∪ (1, 2) ∪ {3} ∪ ([4, 5] ∩ Q), where (a, b) denotes an open interval, [a, b] denotes a closed interval, and Q is the set of rational numbers. Let A denote this set. Then the following 14 distinct sets are obtainable: A, kA, ckA, kckA, ckckA, kckckA, ckckckA, cA, kcA, ckcA, kckcA, ckckcA, kckckcA, and ckckckcA.
Further results
Despite its origin within the context of a topological space, Kuratowski's closure-complement problem is actually more algebraic than topological. A surprising abundance of closely related problems and results have appeared since 1960, many of which have little or nothing to do with point-set topology.
The closure-complement operations yield a monoid that can be used to classify topological spaces.
|
https://en.wikipedia.org/wiki/Point%20groups%20in%20three%20dimensions
|
In geometry, a point group in three dimensions is an isometry group in three dimensions that leaves the origin fixed, or correspondingly, an isometry group of a sphere. It is a subgroup of the orthogonal group O(3), the group of all isometries that leave the origin fixed, or correspondingly, the group of orthogonal matrices. O(3) itself is a subgroup of the Euclidean group E(3) of all isometries.
Symmetry groups of geometric objects are isometry groups. Accordingly, analysis of isometry groups is analysis of possible symmetries. All isometries of a bounded (finite) 3D object have one or more common fixed points. We follow the usual convention by choosing the origin as one of them.
The symmetry group of an object is sometimes also called its full symmetry group, as opposed to its proper symmetry group, the intersection of its full symmetry group with E+(3), which consists of all direct isometries, i.e., isometries preserving orientation. For a bounded object, the proper symmetry group is called its rotation group. It is the intersection of its full symmetry group with SO(3), the full rotation group of the 3D space. The rotation group of a bounded object is equal to its full symmetry group if and only if the object is chiral.
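The direct/indirect distinction can be checked numerically via the determinant, which is +1 for rotations in SO(3) and -1 for improper isometries; a small sketch:

```python
# A 90-degree rotation about the z-axis (direct isometry) and a mirror in
# the xy-plane (indirect isometry), both leaving the origin fixed.
rotation = [[0.0, -1.0, 0.0],
            [1.0,  0.0, 0.0],
            [0.0,  0.0, 1.0]]
mirror = [[1.0, 0.0, 0.0],
          [0.0, 1.0, 0.0],
          [0.0, 0.0, -1.0]]

def det3(M):
    """Determinant of a 3x3 matrix, expanded along the first row."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# Orthogonal matrices in O(3) have determinant +1 (direct, i.e. in SO(3))
# or -1 (indirect, orientation-reversing).
print(det3(rotation), det3(mirror))
```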
The point groups that are generated purely by a finite set of reflection mirror planes passing through the same point are the finite Coxeter groups, represented by Coxeter notation.
The point groups in three dimensions are heavily used in chemistry, especially to describe the symmetries of a molecule and of molecular orbitals forming covalent bonds, and in this context they are also called molecular point groups.
3D isometries that leave origin fixed
The symmetry group operations (symmetry operations) are the isometries of three-dimensional space R3 that leave the origin fixed, forming the group O(3). These operations can be categorized as:
The direct (orientation-preserving) symmetry operations, which form the group SO(3):
The identity op
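The defining conditions are easy to check numerically. The sketch below (plain Python, with illustrative example matrices) tests whether a 3×3 matrix is orthogonal and, via its determinant, whether it represents a direct isometry in SO(3) or an indirect one:

```python
# Classifying a 3x3 matrix as a direct (SO(3)) or indirect isometry.
# Matrices are plain nested lists; no external dependencies.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(a):
    return [[a[j][i] for j in range(3)] for i in range(3)]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def is_orthogonal(m, tol=1e-9):
    # M is in O(3) iff  M^T M = I.
    p = matmul(transpose(m), m)
    return all(abs(p[i][j] - (1 if i == j else 0)) < tol
               for i in range(3) for j in range(3))

def classify(m):
    if not is_orthogonal(m):
        return "not an isometry fixing the origin"
    # det = +1: rotation; det = -1: reflection or rotoreflection.
    return "direct (rotation, in SO(3))" if det3(m) > 0 else \
           "indirect (reflection/rotoreflection)"

rot_z_90 = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]   # rotation by 90 deg about z
mirror_xy = [[1, 0, 0], [0, 1, 0], [0, 0, -1]]  # reflection in the xy-plane

print(classify(rot_z_90))   # direct (rotation, in SO(3))
print(classify(mirror_xy))  # indirect (reflection/rotoreflection)
```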
|
https://en.wikipedia.org/wiki/Andrej%20Dujella
|
Andrej Dujella (born May 21, 1966 in Pula) is a Croatian professor of mathematics at the University of Zagreb and a fellow of the Croatian Academy of Sciences and Arts.
Life
Born in Pula, a native of Zadar, Dujella took part in the International Mathematical Olympiad, where he won a bronze medal in 1984.
He received his M.Sc. and Ph.D. in mathematics from the University of Zagreb with a dissertation titled "Generalized Diophantine–Davenport problem". His main area of research is number theory, in particular Diophantine equations, elliptic curves, and applications of number theory in cryptography. Dujella is the author of the monograph "Number Theory" (translated from Croatian).
Dujella's main contribution to number theory is in connection to Diophantine m-tuples.
Dujella has shown that there exists no Diophantine 6-tuple and that there exist at most finitely many Diophantine 5-tuples. He applied Diophantine tuples to construct elliptic curves with high rank. In 1998, Dujella and Attila Pethő introduced a congruence method to obtain a lower bound on the number of Diophantine 5-tuples.
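A Diophantine m-tuple is a set of m positive integers such that the product of any two of them increased by 1 is a perfect square. This property is straightforward to verify, as the sketch below shows on Fermat's classical quadruple:

```python
from itertools import combinations
from math import isqrt

def is_square(n):
    return n >= 0 and isqrt(n) ** 2 == n

def is_diophantine_tuple(nums):
    """True if a*b + 1 is a perfect square for every pair a < b."""
    return all(is_square(a * b + 1) for a, b in combinations(nums, 2))

# Fermat's classical Diophantine quadruple:
print(is_diophantine_tuple([1, 3, 8, 120]))  # True
```

Dujella's sextuple result says that no such check can ever succeed for six positive integers.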
|
https://en.wikipedia.org/wiki/Fast%20and%20Secure%20Protocol
|
The Fast Adaptive and Secure Protocol (FASP) is a proprietary data transfer protocol. FASP is a network-optimized transfer protocol created by Michelle C. Munson and Serban Simu, productized by Aspera, and now owned by IBM following its acquisition of Aspera. The associated client/server software packages are also commonly called Aspera. The technology is patented under US Patent #8085781, "Bulk Data Transfer"; US Patent Application #20090063698, "Method and system for aggregate bandwidth control"; and others.
Built upon the connectionless UDP protocol, FASP does not expect any feedback on every packet sent, yet provides fully reliable data transfer over best-effort IP networks. Only the packets identified as actually lost must be requested again by the recipient. As a result, it does not suffer as much loss of throughput as TCP does on networks with high latency or high packet loss, and it avoids the overhead of naive "UDP data blaster" protocols. The protocol improves on naive "data blaster" protocols through an optimal control-theoretic retransmission algorithm and implementation that achieves maximum goodput and avoids redundant retransmission of data. Its control model is designed to fill the available bandwidth of the end-to-end path over which the transfer occurs with only "good" and needed data.
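The retransmission idea can be illustrated with a toy sketch. This is not Aspera's proprietary algorithm, just the general negative-acknowledgement principle it builds on: the receiver reports only the gaps in the sequence numbers it has seen, so only genuinely lost packets travel again.

```python
# Toy negative-acknowledgement bookkeeping: instead of acknowledging every
# packet (as TCP effectively does), the receiver asks only for the gaps.

def missing_blocks(received, total):
    """Return the sequence numbers in [0, total) that never arrived."""
    got = set(received)
    return [seq for seq in range(total) if seq not in got]

# 10 packets sent; numbers 3 and 7 are lost in transit, the rest arrive
# (arrival order is irrelevant).
arrived = [0, 1, 2, 4, 5, 6, 8, 9]
print(missing_blocks(arrived, 10))  # [3, 7] -- only these are re-requested
```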
Large organizations like the European Nucleotide Archive, the US National Institutes of Health National Center for Biotechnology Information and others use the protocol. The technology was recognized with many awards including an Engineering Emmy from the Academy of Film and Television.
Security
FASP has built-in security mechanisms that do not affect the transmission speed. The encryption algorithms used are based exclusively on open standards. Some product implementations use secure key exchange and authentication such as SSH.
The data is optionally encrypted or decrypted immediately before sending and receiving using AES-128. To counteract attacks by monitoring the encrypted info
|
https://en.wikipedia.org/wiki/Disk%20array
|
A disk array is a disk storage system which contains multiple disk drives. It is differentiated from a disk enclosure, in that an array has cache memory and advanced functionality, like RAID, deduplication, encryption and virtualization.
Components of a disk array include:
Disk array controllers
Cache in the form of both volatile random-access memory and non-volatile flash memory.
Disk enclosures for both magnetic rotational hard disk drives and electronic solid-state drives.
Power supplies
Typically a disk array provides increased availability, resiliency, and maintainability by using additional redundant components (controllers, power supplies, fans, etc.), often up to the point where all single points of failure (SPOFs) are eliminated from the design. Additionally, disk array components are often hot-swappable.
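As a hedged illustration of how redundancy removes a single point of failure at the data level (real arrays implement this in the controller, e.g. as RAID 5), a parity block computed as the XOR of the data blocks lets any one lost block be rebuilt from the survivors:

```python
# RAID-style single-parity sketch: parity = XOR of all data blocks, so
# any one missing block equals the XOR of the remaining blocks and parity.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

data = [b"disk0data", b"disk1data", b"disk2data"]
parity = xor_blocks(data)

# Simulate losing disk 1 and rebuilding it from the rest plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])  # True
```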
Traditionally disk arrays were divided into categories:
Network attached storage (NAS) arrays
Storage area network (SAN) arrays:
Modular SAN arrays
Monolithic SAN arrays
Utility Storage Arrays
Storage virtualization
Primary vendors of storage systems include Coraid, Inc., DataDirect Networks, Dell EMC, Fujitsu, Hewlett Packard Enterprise, Hitachi Data Systems, Huawei, IBM, Infortrend, NetApp, Oracle Corporation, Panasas, Pure Storage and other companies that often act as OEM for the above vendors and do not themselves market the storage components they manufacture.
|
https://en.wikipedia.org/wiki/Wess%E2%80%93Zumino%20gauge
|
In particle physics, the Wess–Zumino gauge is a particular choice of gauge transformation in a gauge theory with supersymmetry. In this gauge, the supersymmetrized gauge transformation is chosen so that most components of the vector superfield vanish, leaving only the usual physical ones when the superfield is expanded in components.
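Concretely, in Wess–Zumino gauge the vector superfield reduces to the following form (in one common convention, e.g. that of Wess and Bagger; signs and factors of i vary between texts):

```latex
V_{\mathrm{WZ}}(x,\theta,\bar\theta)
  = -\,\theta\sigma^\mu\bar\theta\, A_\mu(x)
    + i\,\theta\theta\,\bar\theta\bar\lambda(x)
    - i\,\bar\theta\bar\theta\,\theta\lambda(x)
    + \tfrac{1}{2}\,\theta\theta\,\bar\theta\bar\theta\, D(x)
```

The surviving components are the gauge field A_μ, the gaugino λ, and the auxiliary field D; the lower components of the superfield have been gauged away.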
See also
Supersymmetric gauge theory
Supersymmetric quantum field theory
Gauge theories
|
https://en.wikipedia.org/wiki/Generative%20grammar
|
Generative grammar, or generativism, is a linguistic theory that regards linguistics as the study of a hypothesised innate grammatical structure. It is a biological or biologistic modification of earlier structuralist theories of linguistics, deriving from logical syntax and glossematics. Generative grammar considers grammar as a system of rules that generates exactly those combinations of words that form grammatical sentences in a given language. It is a system of explicit rules that may apply repeatedly to generate an indefinite number of sentences, which can be as long as one wants them to be. The difference from structural and functional models is that the object is base-generated within the verb phrase in generative grammar. This purportedly cognitive structure is thought of as being a part of a universal grammar, a syntactic structure which is caused by a genetic mutation in humans.
Generativists have created numerous theories to make the NP VP (NP) analysis work in natural language description. That is, the subject and the verb phrase appear as independent constituents, with the object placed within the verb phrase. A main point of interest remains how to appropriately analyse Wh-movement and other cases where the subject appears to separate the verb from the object. Although claimed by generativists to be a cognitively real structure, neuroscience has found no evidence for it. In other words, generative grammar encompasses proposed models of linguistic cognition, but there is still no specific indication that these are correct. Recent arguments have been made that the success of large language models undermines key claims of generative syntax, because they are based on markedly different assumptions, including gradient probability and memorized constructions, and out-perform generative theories both in syntactic structure and in integration with cognition and neuroscience.
Frameworks
There are a number of different approaches to generative grammar.
|
https://en.wikipedia.org/wiki/Ritz%20ballistic%20theory
|
Ritz ballistic theory is a theory in physics, first published in 1908 by Swiss physicist Walther Ritz. In 1908, Ritz published Recherches critiques sur l'Électrodynamique générale, a lengthy criticism of Maxwell-Lorentz electromagnetic theory, in which he contended that the theory's connection with the luminiferous aether (see Lorentz ether theory) made it "essentially inappropriate to express the comprehensive laws for the propagation of electrodynamic actions."
Ritz proposed a new equation, derived from the principles of the ballistic theory of electromagnetic waves, a theory competing with the special theory of relativity. The equation relates the force between two charged particles with a radial separation r relative velocity v and relative acceleration a, where k is an undetermined parameter from the general form of Ampere's force law as proposed by Maxwell. The equation obeys Newton's third law and forms the basis of Ritz's electrodynamics.
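For orientation, one form in which Ritz's force law is often quoted is reproduced below; conventions for signs and numerical factors vary between presentations, so this should be read as indicative rather than authoritative:

```latex
\mathbf{F} = \frac{e_1 e_2}{r^2}
\left[
\left(1 + \frac{3-k}{4}\,\frac{v^2}{c^2}
        - \frac{3(1-k)}{4}\,\frac{(\mathbf{v}\cdot\hat{\mathbf{r}})^2}{c^2}
        - \frac{r\,(\mathbf{a}\cdot\hat{\mathbf{r}})}{2c^2}\right)\hat{\mathbf{r}}
\;-\; \frac{k+1}{2c^2}\,(\mathbf{v}\cdot\hat{\mathbf{r}})\,\mathbf{v}
\;-\; \frac{r}{2c^2}\,\mathbf{a}
\right]
```

Here r is the separation, v and a the relative velocity and acceleration, and k the undetermined parameter mentioned above; the expression is symmetric under interchange of the charges, consistent with Newton's third law.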
Derivation of Ritz's equation
On the assumption of an emission theory, the force acting between two moving charges should depend on the density of the messenger particles emitted by the charges, the radial distance between the charges (ρ), the velocity of the emission relative to the receiver (its x and r components), and the acceleration of the particles relative to each other. This gives us an equation of the form:
.
where the coefficients are independent of the coordinate system and are functions of the relative velocity and separation. The stationary coordinates of the observer relate to the moving frame of the charge as follows
Developing the terms in the force equation, we find that the density of particles is given by
The tangent plane of the shell of emitted particles in the stationary coordinates is given by the Jacobian of the coordinate transformation:
We can also develop expressions for the retarded radius and velocity using Taylor series expansions
With these substitutions, we find that the forc
|
https://en.wikipedia.org/wiki/Matrix%20%28mass%20spectrometry%29
|
In mass spectrometry, a matrix is a compound that promotes the formation of ions. Matrix compounds are used in matrix-assisted laser desorption/ionization (MALDI), matrix-assisted ionization (MAI), and fast atom bombardment (FAB).
Matrix-assisted laser desorption/ionization
MALDI is an ionization technique where laser energy is absorbed by a matrix to create ions from large molecules without fragmentation. The matrix, typically in excess, is mixed with the analyte molecule and deposited on a target. A table of matrix compounds, their structures, laser wavelengths typically used, and typical application is shown below.
Matrix-assisted ionization
Matrix-assisted ionization is an ionization method in mass spectrometry that creates ions via the creation of particles at atmospheric pressure and transfer to the vacuum of the mass analyzer.
Fast atom bombardment
FAB uses a high energy beam of atoms directed at a surface to create ions. FAB matrix compounds are typically liquids.
See also
Desorption/ionization on silicon
|
https://en.wikipedia.org/wiki/RNA%20polymerase
|
In molecular biology, RNA polymerase (abbreviated RNAP or RNApol), or more specifically DNA-directed/dependent RNA polymerase (DdRP), is an enzyme that catalyzes the chemical reactions that synthesize RNA from a DNA template.
Using the enzyme helicase, RNAP locally opens the double-stranded DNA so that one strand of the exposed nucleotides can be used as a template for the synthesis of RNA, a process called transcription. A transcription factor and its associated transcription mediator complex must be attached to a DNA binding site called a promoter region before RNAP can initiate the DNA unwinding at that position. RNAP not only initiates RNA transcription, it also guides the nucleotides into position, facilitates attachment and elongation, has intrinsic proofreading and replacement capabilities, and termination recognition capability. In eukaryotes, RNAP can build chains as long as 2.4 million nucleotides.
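Template-directed synthesis can be sketched as a simple base-pairing map. This is a toy model that ignores initiation, elongation kinetics, proofreading and termination, and only shows the complementarity rule (U pairs with A on the template):

```python
# Toy sketch of template-directed RNA synthesis: the template DNA strand
# is read 3'->5' and the complementary RNA is built 5'->3'.

PAIR = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3to5):
    """RNA complementary to a DNA template strand given 3'->5'."""
    return "".join(PAIR[base] for base in template_3to5)

# Template 3'-TACGGT-5' -> mRNA 5'-AUGCCA-3' (which begins with the
# AUG start codon).
print(transcribe("TACGGT"))  # AUGCCA
```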
RNAP produces RNA that, functionally, is either for protein coding, i.e. messenger RNA (mRNA); or non-coding (so-called "RNA genes"). At least four functional types of RNA genes exist:
Transfer RNA (tRNA) Transfers specific amino acids to growing polypeptide chains at the ribosomal site of protein synthesis during translation;
Ribosomal RNA (rRNA) Incorporates into ribosomes;
Micro RNA (miRNA) Regulates gene activity through RNA silencing; and
Catalytic RNA (ribozyme) Functions as an enzymatically active RNA molecule.
RNA polymerase is essential to life, and is found in all living organisms and many viruses. Depending on the organism, an RNA polymerase can be a protein complex (multi-subunit RNAP) or consist of only one subunit (single-subunit RNAP, ssRNAP), each representing an independent lineage. The former is found in bacteria, archaea, and eukaryotes alike, sharing a similar core structure and mechanism. The latter is found in phages as well as eukaryotic chloroplasts and mitochondria, and is related to modern DNA polymerases. Eukaryotic and archaeal RNAP
|
https://en.wikipedia.org/wiki/Self-anointing%20in%20animals
|
Self-anointing in animals, sometimes called anointing or anting, is a behaviour whereby a non-human animal smears odoriferous substances over themselves. These substances are often the secretions, parts, or entire bodies of other animals or plants. The animal may chew these substances and then spread the resulting saliva mixture over their body, or they may apply the source of the odour directly with an appendage, tool or by rubbing their body on the source.
The functions of self-anointing differ between species, but it may act as self-medication, repel parasites, provide camouflage, aid in communication, or make the animal poisonous.
Primates
Several primate species self-anoint with various items such as millipedes, leaves and fruit. They sometimes drool while doing this. Both capuchin monkeys and squirrel monkeys perform urine washing, when they deposit a small quantity of urine onto the palm of a hand and then rub it on the sole of the opposite foot. It is thought to have multiple functions including hygiene, thermoregulation and response to irritation from biting ectoparasites (such as ticks and botfly). Some strepsirrhines and New World monkeys also self-anoint the body with urine to communicate.
Capuchins
Wild wedge-capped capuchin monkeys (Cebus olivaceus) self-anoint with millipedes (Orthoporus dorsovittatus). Chemical analysis revealed these millipedes secrete two benzoquinones, compounds known to be potently repellent to insects and the secretions are thought to provide protection against insects, particularly mosquitoes (and the bot flies they transmit) during the rainy season. Millipede secretion is so avidly sought by the monkeys that up to four of them will share a single millipede. The anointment must also involve risks, since benzoquinones are toxic and carcinogenic; however, it is likely that for capuchins, the immediate benefits of self-anointment outweigh the long-term costs. Secretions from these millipedes also elicit self-anointing in c
|
https://en.wikipedia.org/wiki/Enterprise%20Storage%20OS
|
Enterprise Storage OS, also known as ESOS, is a Linux distribution that serves as a block-level storage server in a storage area network (SAN). ESOS is composed of open-source software projects that are required for a Linux distribution and several proprietary build and install time options. The SCST project is the core component of ESOS; it provides the back-end storage functionality.
Platform
ESOS is a niche Linux distribution. ESOS is intended to run on a USB flash drive, or some other type of removable media such as Secure Digital or CompactFlash. ESOS is a memory-resident operating system: at boot, a tmpfs file system is initialized as the root file system and the USB flash drive image is copied onto it. Configuration files and logs are written back to the USB flash drive (persistent storage) periodically, or by user intervention when configuration changes occur.
Interface
ESOS utilizes a text-based user interface (TUI) for system management, network configuration, and storage provisioning functions. The TUI used in ESOS is written in C; the ncurses and CDK libraries are used.
Front-end connectivity
ESOS supports connectivity on several different front-end storage area network technologies. These core functions are supported by SCST and third-party target drivers that vendors have developed for SCST:
Fibre Channel: QLogic HBAs are natively supported, and Emulex OneConnect FC HBAs can be supported by a build time option (requiring the Emulex OCS SDK)
InfiniBand: Mellanox, QLogic, and Chelsio IB HCAs, among others, are supported
Fibre Channel over Ethernet (FCoE): A software target implementation supports NICs with DCB/DCBX capabilities, or build time options exist for supporting Emulex OneConnect FCoE CNAs (requires the Emulex OCS SDK) and Chelsio Uwire FCoE CNAs.
iSCSI: Will work over any IP communication method supported by ESOS (Ethernet, IPoIB).
Back-end storage
Open-source software projects and commodity computing server hardware are us
|
https://en.wikipedia.org/wiki/Gubernaculum%20testis
|
In the inguinal crest a peculiar structure, the gubernaculum testis, makes its appearance. This is at first a slender band, extending from that part of the skin of the groin which afterward forms the scrotum, through the inguinal canal, to the body and epididymis of the testis. The gubernaculum testis is homologous to the round ligament of the uterus in females.
|
https://en.wikipedia.org/wiki/Orientation%20%28mental%29
|
Orientation is a function of the mind involving awareness of three dimensions: time, place and person. Problems with orientation lead to disorientation, which can be due to various conditions. Levels range from an inability to coherently identify person, place, time, and situation, up to complete orientation.
Assessment
Assessment of a person's mental orientation is frequently designed to evaluate the need for focused diagnosis and treatment of conditions leading to altered mental status (AMS). A variety of basic prompts and tests are available to determine a person's level of orientation. Within EMS, these tests primarily assess the ability of the person to perform basic functions of life (see: airway, breathing, circulation); many assessments then gauge the level of amnesia, awareness of surroundings, concept of time and place, and response to verbal and sensory stimuli.
Causes of mental disorientation
Disorientation has a variety of causes, physiological and mental in nature. Physiological disorientation is frequently caused by an underlying or acute condition. Disease or injury that impairs the delivery of essential nutrients such as glucose, oxygen, fluids, or electrolytes can impair homeostasis, and therefore neurological function causing mental disorientation. Other causes are psycho-neurological in nature (see also Cognitive disorder) stemming from chemical imbalances in the brain, deterioration of the structure of the brain, or psychiatric states or illnesses that result in disorientation.
Mental orientation is frequently affected by shock, including physiological shock (see: shock, circulatory) and mental shock (see: acute stress reaction, a psychological condition in response to acute stressful stimuli).
Areas within the precuneus, posterior cingulate cortex, inferior parietal lobe, medial prefrontal cortex, and lateral frontal and lateral temporal cortices are believed to be responsible for situational orientation.
See also
Mental confusion
Mental status
|
https://en.wikipedia.org/wiki/Quinapyramine
|
Quinapyramine is a trypanocidal agent for veterinary use.
|
https://en.wikipedia.org/wiki/Regulatory%20sequence
|
A regulatory sequence is a segment of a nucleic acid molecule which is capable of increasing or decreasing the expression of specific genes within an organism. Regulation of gene expression is an essential feature of all living organisms and viruses.
Description
In DNA, regulation of gene expression normally happens at the level of RNA biosynthesis (transcription). It is accomplished through the sequence-specific binding of proteins (transcription factors) that activate or inhibit transcription. Transcription factors may act as activators, repressors, or both. Repressors often act by preventing RNA polymerase from forming a productive complex with the transcriptional initiation region (promoter), while activators facilitate formation of a productive complex. Furthermore, DNA motifs have been shown to be predictive of epigenomic modifications, suggesting that transcription factors play a role in regulating the epigenome.
In RNA, regulation may occur at the level of protein biosynthesis (translation), RNA cleavage, RNA splicing, or transcriptional termination. Regulatory sequences are frequently associated with messenger RNA (mRNA) molecules, where they are used to control mRNA biogenesis or translation. A variety of biological molecules may bind to the RNA to accomplish this regulation, including proteins (e.g., translational repressors and splicing factors), other RNA molecules (e.g., miRNA) and small molecules, in the case of riboswitches.
Activation and implementation
A regulatory DNA sequence does not regulate unless it is activated. Different regulatory sequences are activated and then implement their regulation by different mechanisms.
Enhancer activation and implementation
Expression of genes in mammals can be upregulated when signals are transmitted to the promoters associated with the genes. Cis-regulatory DNA sequences that are located in DNA regions distant from the promoters of genes can have very large effects on gene expression, with some genes
|
https://en.wikipedia.org/wiki/Radio%20over%20fiber
|
Radio over fiber (RoF) or RF over fiber (RFoF) refers to a technology whereby light is modulated by a radio frequency signal and transmitted over an optical fiber link. Main technical advantages of using fiber optical links are lower transmission losses and reduced sensitivity to noise and electromagnetic interference compared to all-electrical signal transmission.
Applications range from the transmission of mobile radio signals (3G, 4G, 5G and WiFi) and the transmission of cable television signals (CATV) to the transmission of RF L-Band signals in ground stations for satellite communications.
General Advantage
Low attenuation
Signals transmitted on optical fiber attenuate much less than those carried over other media such as metal cables or wireless links. By using optical fiber, the radio signals can span larger transmission distances, reducing the need for additional repeaters or amplifiers.
Applications
Wireless Communications
In the area of wireless communications, one main application is to facilitate wireless access, such as 5G and WiFi, simultaneously from the same antenna. In other words, radio signals are carried over fiber-optic cable. Thus, a single antenna can receive any and all radio signals (5G, WiFi, cellular, etc.) carried over a single fiber cable to a central location, where equipment then converts the signals; this is opposed to the traditional way, where each protocol type (5G, WiFi, cellular) requires separate equipment at the location of the antenna.
Although radio transmission over fiber is used for multiple purposes, such as in cable television (CATV) networks and in satellite base stations, the term RoF is usually applied when this is done for wireless access.
In RoF systems, wireless signals are transported in optical form between a central station and a set of base stations before being radiated through the air. Each base station is adapted to communicate over a radio link with at least one user's mobile station located within the radio range of said b
|
https://en.wikipedia.org/wiki/Attribute-based%20access%20control
|
Attribute-based access control (ABAC), also known as policy-based access control for IAM, defines an access control paradigm whereby a subject's authorization to perform a set of operations is determined by evaluating attributes associated with the subject, object, requested operations, and, in some cases, environment attributes.
ABAC is a method of implementing access control policies that is highly adaptable and can be customized using a wide range of attributes, making it suitable for use in distributed or rapidly changing environments. The only limitations on the policies that can be implemented with ABAC are the capabilities of the computational language and the availability of relevant attributes. ABAC policy rules are generated as Boolean functions of the subject's attributes, the object's attributes, and the environment attributes.
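A minimal sketch of such a Boolean policy function follows; the attribute names are invented for illustration and are not drawn from any standard or product:

```python
# ABAC in miniature: a policy is a Boolean function over subject, object
# (here "resource"), action, and environment attributes.

def policy(subject, resource, action, environment):
    return (
        subject["department"] == resource["owning_department"]
        and action in resource["allowed_actions"]
        and subject["clearance"] >= resource["sensitivity"]
        and environment["channel"] == "secure"
    )

request = dict(
    subject={"department": "radiology", "clearance": 3},
    resource={"owning_department": "radiology", "sensitivity": 2,
              "allowed_actions": {"read", "annotate"}},
    action="read",
    environment={"channel": "secure"},
)
print(policy(**request))  # True: all four attribute conditions hold
```

Note that no user or role is enumerated anywhere: changing any attribute (say, the environment's channel) flips the decision without touching an access list, which is the property the paragraph above describes.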
Unlike role-based access control (RBAC), which defines roles that carry a specific set of privileges and to which subjects are assigned, ABAC can express complex rule sets that evaluate many different attributes. By defining consistent subject and object attributes in security policies, ABAC eliminates the need for the explicit authorizations to individual subjects required in a non-ABAC access method, reducing the complexity of managing access lists and groups.
Attribute values can be set-valued or atomic-valued. Set-valued attributes contain more than one atomic value. Examples are role and project. Atomic-valued attributes contain only one atomic value. Examples are clearance and sensitivity. Attributes can be compared to static values or to one another, thus enabling relation-based access control.
Although the concept itself existed for many years, ABAC is considered a "next generation" authorization model because it provides dynamic, context-aware and risk-intelligent access control to resources allowing access control policies that include specific attributes from many different information sy
|
https://en.wikipedia.org/wiki/Conference%20on%20Web%20and%20Internet%20Economics
|
The Conference on Web and Internet Economics (WINE) (prior to 2013, the Workshop on Internet & Network Economics) is an interdisciplinary conference devoted to the analysis of algorithmic and economic problems arising in the Internet and the World Wide Web. Submissions are peer-reviewed, and the proceedings of the conference are published by Springer-Verlag. The conference has been held every year since 2005. Previous sessions include:
WINE 2005: Hong Kong, China : Proceedings: Lecture Notes in Computer Science 3828 Springer 2005,
WINE 2006: Patras, Greece : Proceedings: Lecture Notes in Computer Science 4286 Springer 2006,
WINE 2007: San Diego, CA, USA : Proceedings: Lecture Notes in Computer Science 4858 Springer 2007,
WINE 2008: Shanghai, China : Proceedings: Lecture Notes in Computer Science 5385 Springer 2008,
WINE 2009: Rome, Italy : Proceedings: Lecture Notes in Computer Science 5929 Springer 2009,
WINE 2010: Stanford, CA, USA : Proceedings: Lecture Notes in Computer Science 6484 Springer 2010,
WINE 2011: Singapore : Proceedings: Lecture Notes in Computer Science 7090 Springer 2011,
WINE 2012: Liverpool, UK : Proceedings: Lecture Notes in Computer Science 7695 Springer 2012,
WINE 2013: Cambridge, MA, USA : Proceedings: Lecture Notes in Computer Science 8289 Springer 2013,
WINE 2014: Beijing, China : Proceedings: Lecture Notes in Computer Science 8877 Springer 2014,
WINE 2015: Amsterdam, the Netherlands : Proceedings: Lecture Notes in Computer Science 9470 Springer 2015,
WINE 2016: Montreal, Canada : Proceedings: Lecture Notes in Computer Science 10123 Springer 2016,
WINE 2017: Bangalore, India : Proceedings: Lecture Notes in Computer Science 10660 Springer 2017,
WINE 2018: Oxford, UK : Proceedings: Lecture Notes in Computer Science 11316 Springer 2018,
WINE 2019: New York, NY, USA : Proceedings: Lecture Notes in Computer Science 11920 Springer 2019,
WINE 2020: Beijing, China : Proceedings: Lecture Notes in Computer Science 12495 S
|
https://en.wikipedia.org/wiki/Anti-CRISPR
|
Anti-CRISPR (Anti-Clustered Regularly Interspaced Short Palindromic Repeats, or Acr) is a group of proteins, found in phages, that inhibit the normal activity of CRISPR-Cas, the immune system of certain bacteria. CRISPR consists of genomic sequences found in prokaryotic organisms that derive from bacteriophages that previously infected the bacteria, and are used to defend the cell from further viral attacks. Anti-CRISPR results from an evolutionary process that occurred in phages to avoid having their genomes destroyed by the prokaryotic cells they infect.
Before the discovery of this family of proteins, the acquisition of mutations was the only known way for phages to avoid CRISPR-Cas-mediated destruction, by reducing the binding affinity between the phage and CRISPR. Nonetheless, bacteria have mechanisms to retarget the mutant bacteriophage, a process called "priming adaptation". So, as far as researchers currently know, anti-CRISPR is the most effective way to ensure the survival of phages throughout the infection process.
History
Anti-CRISPR systems were first seen in Pseudomonas aeruginosa prophages, which disabled the type I-F CRISPR–Cas system characteristic of some strains of these bacteria. After analysing the genomic sequences of these phages, genes encoding five different anti-CRISPR proteins (also named Acrs) were discovered: AcrF1, AcrF2, AcrF3, AcrF4 and AcrF5. Research found that none of these proteins disrupted the expression of Cas genes or the assembly of CRISPR molecules, so it was thought that these type I-F proteins directly affected the CRISPR–Cas interference.
Further investigation confirmed this hypothesis with the discovery of four other proteins (AcrE1, AcrE2, AcrE3 and AcrE4), which were shown to impede Pseudomonas aeruginosa's CRISPR-Cas system. Furthermore, the locus of the genes encoding these type I-E proteins was very close to the one responsible for the type
|
https://en.wikipedia.org/wiki/Mandibular%20symphysis
|
In human anatomy, in the facial skeleton of the skull, the external surface of the mandible is marked in the median line by a faint ridge, indicating the mandibular symphysis (Latin: symphysis menti), the line of junction where the two lateral halves of the mandible typically fuse at an early period of life (1–2 years). It is not a true symphysis, as there is no cartilage between the two sides of the mandible.
This ridge divides below and encloses a triangular eminence, the mental protuberance, the base of which is depressed in the center but raised on either side to form the mental tubercle. The lowest (most inferior) end of the mandibular symphysis — the point of the chin — is called the "menton".
It serves as the origin for the geniohyoid and the genioglossus muscles.
Other animals
Solitary mammalian carnivores that rely on a powerful canine bite to subdue their prey have a strong mandibular symphysis, while pack hunters delivering shallow bites have a weaker one. When filter feeding, the baleen whales, of the suborder Mysticeti, can dynamically expand their oral cavity in order to accommodate enormous volumes of sea water. This is made possible thanks to its mandibular skull joints, especially the elastic mandibular symphysis which permits both dentaries to be rotated independently in two planes. This flexible jaw, which made the titanic body sizes of baleen whales possible, is not present in early whales and most likely evolved within Mysticeti.
|
https://en.wikipedia.org/wiki/Systema%20Naturae
|
Systema Naturae (originally in Latin written with the ligature æ, as Systema Naturæ) is one of the major works of the Swedish botanist, zoologist and physician Carl Linnaeus (1707–1778) and introduced the Linnaean taxonomy. Although the system, now known as binomial nomenclature, was partially developed by the Bauhin brothers, Gaspard and Johann, Linnaeus was first to use it consistently throughout his book. The first edition was published in 1735. The full title of the 10th edition (1758), which was the most important one, was Systema naturae per regna tria naturae, secundum classes, ordines, genera, species, cum characteribus, differentiis, synonymis, locis, translated: "System of nature through the three kingdoms of nature, according to classes, orders, genera and species, with characters, differences, synonyms, places".
The tenth edition of this book (1758) is considered the starting point of zoological nomenclature. In 1766–1768 Linnaeus published the much enhanced 12th edition, the last under his authorship. Another, again enhanced, work in the same style, also titled Systema Naturae, was published by Johann Friedrich Gmelin between 1788 and 1793. Since at least the early 20th century, zoologists have commonly recognized this as the last edition belonging to this series.
Overview
Linnaeus (later known as "Carl von Linné", after his ennoblement in 1761) published the first edition of Systema Naturae in the year 1735, during his stay in the Netherlands. As was customary for the scientific literature of its day, the book was published in Latin. In it, he outlined his ideas for the hierarchical classification of the natural world, dividing it into the animal kingdom (Regnum animale), the plant kingdom (Regnum vegetabile), and the "mineral kingdom" (Regnum lapideum).
Linnaeus's Systema Naturae lists only about 10,000 species of organisms, of which about 6,000 are plants and 4,236 are animals. According to the historian of botany William T. Stearn, "Even in 1753 he believed that the number of species of plants in the whole world would hardly reach 10,000; in his whole career he named about 7,700 species of flowering plants."
Linnaeus developed his classification of the plant kingdom in an attempt to
|
https://en.wikipedia.org/wiki/ObjectiveFS
|
ObjectiveFS is a distributed file system developed by Objective Security Corp. It is a POSIX-compliant file system built on an object store backend. It was initially released with an AWS S3 backend, and later added support for Google Cloud Storage and other object store devices. A beta was released in early 2013, and the first version was officially released on August 11, 2013.
Design
ObjectiveFS implements a log structured file system on top of object stores (such as Amazon S3, Google Cloud Storage and other object store devices). It is a POSIX compliant file system and supports features such as dynamic file system size, soft and hard links, unix attributes, extended attributes, Unix timestamps, users and permissions, no limit on file size, atomic renames, atomic file creation, directory renames, read and write anywhere in a file, named pipes, sockets, etc.
It implements client-side encryption and uses the NaCl crypto library, with algorithms such as Salsa20 and Poly1305. This approach has no data-dependent branches or data-dependent array indices, which protects against cache-timing attacks. Data is encrypted before leaving the client, and stays encrypted at rest and in motion.
One main difference between ObjectiveFS and GlusterFS/CephFS is that it offloads the storage cluster management to cloud providers (Amazon/Google).
Usage
ObjectiveFS software runs on the server and talks to the object store using the S3 API. The software itself handles the metadata. When multiple servers share the same files, it handles the negotiation with the other sharing servers (also running ObjectiveFS).
Some use cases are scaling web servers, mail servers, content management systems (CMS), hybrid cloud storage, and hybrid development environments spanning laptop and cloud.
See also
Distributed file system
List of file systems, the distributed fault-tolerant file system section
Ceph
Lustre
GlusterFS
|
https://en.wikipedia.org/wiki/Velocity%20potential
|
A velocity potential is a scalar potential used in potential flow theory. It was introduced by Joseph-Louis Lagrange in 1788.
It is used in continuum mechanics, when a continuum occupies a simply-connected region and is irrotational. In such a case,
\nabla \times \mathbf{u} = 0,
where \mathbf{u} denotes the flow velocity. As a result, \mathbf{u} can be represented as the gradient of a scalar function \varphi:
\mathbf{u} = \nabla \varphi.
\varphi is known as a velocity potential for \mathbf{u}.
A velocity potential is not unique. If \varphi is a velocity potential, then \varphi + f(t) is also a velocity potential for \mathbf{u}, where f(t) is a scalar function of time and can be constant. In other words, velocity potentials are unique up to a constant, or a function solely of the temporal variable.
The Laplacian of a velocity potential is equal to the divergence of the corresponding flow, \nabla^2 \varphi = \nabla \cdot \mathbf{u}. Hence if a velocity potential satisfies the Laplace equation \nabla^2 \varphi = 0, the flow is incompressible.
Unlike a stream function, a velocity potential can exist in three-dimensional flow.
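The facts above (a gradient field is irrotational, and the Laplacian of the potential equals the divergence of the flow) can be checked symbolically. The sketch below uses SymPy with an arbitrary, made-up potential; the particular function chosen is only an illustration.

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
# An arbitrary velocity potential (hypothetical example field)
phi = sp.sin(x) * sp.cosh(y) + z**2 * t

# Velocity is the gradient of the potential: u = grad(phi)
u = [sp.diff(phi, v) for v in (x, y, z)]

# The curl of u vanishes identically: the flow is irrotational
curl = [
    sp.diff(u[2], y) - sp.diff(u[1], z),
    sp.diff(u[0], z) - sp.diff(u[2], x),
    sp.diff(u[1], x) - sp.diff(u[0], y),
]
assert all(sp.simplify(c) == 0 for c in curl)

# The Laplacian of phi equals the divergence of u
div_u = sum(sp.diff(ui, v) for ui, v in zip(u, (x, y, z)))
lap_phi = sum(sp.diff(phi, v, 2) for v in (x, y, z))
assert sp.simplify(lap_phi - div_u) == 0
```

The same check works for any smooth potential, since mixed partial derivatives commute.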
Usage in acoustics
In theoretical acoustics, it is often desirable to work with the acoustic wave equation of the velocity potential \varphi instead of pressure p and/or particle velocity \mathbf{u}.
Solving the wave equation for either the p field or the \mathbf{u} field does not necessarily provide a simple answer for the other field. On the other hand, when \varphi is solved for, not only is \mathbf{u} found as given above, but p is also easily found, from the (linearised) Bernoulli equation for irrotational and unsteady flow, as
p = -\rho \frac{\partial \varphi}{\partial t}.
See also
Vorticity
Hamiltonian fluid mechanics
Potential flow
Potential flow around a circular cylinder
Notes
External links
Joukowski Transform Interactive WebApp
Continuum mechanics
Physical quantities
|
https://en.wikipedia.org/wiki/List%20of%20cosmologists
|
This is a list of people who have made noteworthy contributions to cosmology (the study of the history and large-scale structure of the universe) and their cosmological achievements.
A
Tom Abel (1970–) studied primordial star formation
Roberto Abraham (1965–) studied the shapes of early galaxies
Andreas Albrecht studied the formation of the early universe, cosmic structure, and dark energy
Hannes Alfvén (1908–1995) theorized that galactic magnetic fields could be generated by plasma currents
Ralph A. Alpher (1921–2007) argued that observed proportions of hydrogen and helium in the universe could be explained by the big bang model, predicted cosmic background radiation
Aristarchus of Samos (310–230 BC) early proponent of heliocentrism
Aristotle (circa 384–322 BC) posited a geocentric cosmology that was widely accepted for many centuries
Aryabhata (476–550) described a geocentric model with slow and fast epicycles
B
Ja'far ibn Muhammad Abu Ma'shar al-Balkhi (787–886) conveyed Aristotle's theories from Persia to Europe
James M. Bardeen (1939–2022) studied the mathematics of black holes and of vacua under general relativity
John D. Barrow (1952–2020) popularized the anthropic cosmological principle
Charles L. Bennett (1956–) studied the large-scale structure of the universe by mapping irregularities in microwave background radiation
Orfeu Bertolami (1959–) studied the cosmological constant, inflation, dark energy-dark matter unification and interaction, alternative gravity theories
Somnath Bharadwaj (1964–) studied large-scale structure formation
James Binney (1950–) studied galactic dynamics and supernova disruption of galactic gases
Martin Bojowald (1973–) studied loop quantum gravity and established loop quantum cosmology
Hermann Bondi (1919–2005) developed the steady-state model
Mustapha Ishak Boushaki (1967–) physicist and researcher in cosmology
Tycho Brahe (1546–1601) promoted a geo-heliocentric system of epicycles
Robert Brandenberger (1956–)
|
https://en.wikipedia.org/wiki/Bus%20functional%20model
|
A Bus Functional Model or BFM (also known as a Transaction Verification Model or TVM) is a non-synthesizable software model of an integrated circuit component having one or more external buses. The emphasis of the model is on simulating system bus transactions prior to building and testing the actual hardware. BFMs are usually defined as tasks in Hardware description languages (HDLs), which apply stimuli to the design under verification via complex waveforms and protocols. A BFM is typically implemented using hardware description languages such as Verilog, VHDL, SystemC, or SystemVerilog.
Typically, BFMs offer a two-sided interface: One interface side drives and samples low-level signals according to the bus protocol. On its other side, tasks are available to create and respond to bus transactions. BFMs are often used as reusable building blocks to create simulation test benches, in which the bus interface ports of a design under test are connected to appropriate BFMs.
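The two-sided structure can be illustrated with a small Python sketch. The "SimpleBus" handshake below is hypothetical, not any real HDL interface or bus standard: test benches call atomic transaction tasks on one side, while the model records cycle-by-cycle signal values on the other.

```python
# Illustrative bus-functional model: transaction-level tasks on top,
# low-level signal activity underneath (hypothetical 'SimpleBus' protocol).

class SimpleBusBFM:
    """Drives a made-up valid/write/addr/wdata handshake, one cycle per call."""

    def __init__(self):
        # Signal-level side: the pin values a testbench would sample.
        self.signals = {"valid": 0, "write": 0, "addr": 0, "wdata": 0}
        self.memory = {}   # stand-in for the design under test
        self.log = []      # recorded cycle-by-cycle signal activity

    def _cycle(self, **values):
        # Apply the pin values for one clock cycle and record them.
        self.signals.update(values)
        self.log.append(dict(self.signals))

    # Transaction-level side: test benches call these atomic tasks.
    def write(self, addr, data):
        self._cycle(valid=1, write=1, addr=addr, wdata=data)
        self.memory[addr] = data
        self._cycle(valid=0, write=0)

    def read(self, addr):
        self._cycle(valid=1, write=0, addr=addr)
        data = self.memory.get(addr, 0)
        self._cycle(valid=0)
        return data

bfm = SimpleBusBFM()
bfm.write(0x10, 0xCAFE)
assert bfm.read(0x10) == 0xCAFE   # one transaction spans several signal cycles
assert len(bfm.log) == 4
```

In a real verification environment the `_cycle` method would drive HDL signals through a simulator interface; here it only records them, which is enough to show how one transaction task expands into protocol-level activity.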
Another common application of BFMs is the provision of substitute models for IP components: Instead of a netlist or RTL design of an IP component, a 3rd party IP supplier might provide only a BFM suitable for verification purposes. The actual IP component in the form of a gate-level netlist can be directly provided to the foundry by the IP provider.
In the past, BFMs were treated as non-synthesizable entities; recently, however, BFMs have also become available as synthesizable models.
Transaction Verification Models
BFMs are sometimes referred to as TVMs or Transaction Verification Models. This is to emphasize that bus operations of the model have been bundled into atomic bus transactions to make it easier to issue and view bus transactions. Visualizations of the bus transactions modeled by TVMs are similar to the output of a protocol analyzer or bus sniffer.
|
https://en.wikipedia.org/wiki/SPEAR
|
SPEAR (originally Stanford Positron Electron Asymmetric Rings, now simply a name) was a collider at the SLAC National Accelerator Laboratory. It began running in 1972, colliding electrons and positrons. During the 1970s, experiments at the accelerator played a key role in particle physics research, including the discovery of the J/ψ meson (awarded the 1976 Nobel Prize in Physics), of many charmonium states, and of the tau lepton (awarded the 1995 Nobel Prize in Physics).
Today, SPEAR is used as a synchrotron radiation source for the Stanford Synchrotron Radiation Lightsource (SSRL). The latest major upgrade of the ring, finished in 2004, gave it its current name, SPEAR3.
|
https://en.wikipedia.org/wiki/United%20States%20Armed%20Forces%20nude%20photo%20scandal
|
In March 2017, a nude photo scandal in the United States Armed Forces was uncovered after it was reported by the Center for Investigative Reporting and The War Horse. Early reporting suggested the scandal was confined to the Marine Corps, but it was subsequently revealed to involve the rest of the military.
Incident
In a closed Facebook group called "Marines United," which consisted of 30,000 active duty and retired members of the United States Armed Forces and British Royal Marines, hundreds of photos of female servicemembers from every branch of the military were distributed. The page included links to Dropbox and Google Drive with even more images. After the Facebook group was shut down, members of the original group were redirected to other groups. In a post on the original group page, a member wrote, "It would be hilarious if one of these FBI or (Naval Criminal Investigative Service) fucks found their wife on here." In one instance, a group called "Marines United 2" was created and had 3,000 members. In the MU2 group, a user identified as Garret Bailey wrote, "If you add the fuck that snitches… I will blast you on every goddamn page from here to fucking the sandbox and back. Understand this: I will not accept a request until I can see that the person has served. If they haven’t, DON’T FUCKING ADD THEM!!! If you see someone and know they are a fucking snitch, let an admin know. This shit should have never made it to the national fucking news."
Investigation
The Naval Criminal Investigative Service launched an investigation into the incident.
Reactions
US Department of Defense
Jim Mattis, the Secretary of Defense, said, "The purported actions of civilian and military personnel on social media websites, including some associated with the Marines United group and possibly others, represent egregious violations of the fundamental values we uphold at the Department of Defense." Robert Neller, the Commandant of the Marine Corps, said, "For any
|
https://en.wikipedia.org/wiki/Inversion%20%28music%29
|
In music theory, an inversion is a type of change to intervals, chords, voices (in counterpoint), and melodies. In each of these cases, "inversion" has a distinct but related meaning. The concept of inversion also plays an important role in musical set theory.
Intervals
An interval is inverted by raising or lowering either of the notes by one or more octaves so that the positions of the notes reverse (i.e. the higher note becomes the lower note and vice versa). For example, the inversion of an interval consisting of a C with an E above it (the third measure below) is an E with a C above it – to work this out, the C may be moved up, the E may be lowered, or both may be moved.
The tables to the right show the changes in interval quality and interval number under inversion. Thus, perfect intervals remain perfect, major intervals become minor and vice versa, and augmented intervals become diminished and vice versa. (Doubly diminished intervals become doubly augmented intervals, and vice versa.)
Traditional interval numbers add up to nine: seconds become sevenths and vice versa, thirds become sixths and vice versa, and so on. Thus, a perfect fourth becomes a perfect fifth, an augmented fourth becomes a diminished fifth, and a simple interval (that is, one that is narrower than an octave) and its inversion, when added together, equal an octave. See also complement (music).
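The two rules above (interval numbers sum to nine, and quality flips) are mechanical enough to sketch in a few lines of Python; the function below handles simple intervals only:

```python
# Inversion of simple intervals: numbers sum to nine, and quality flips
# (perfect stays perfect, major <-> minor, augmented <-> diminished).

QUALITY_INVERSE = {
    "perfect": "perfect",
    "major": "minor",
    "minor": "major",
    "augmented": "diminished",
    "diminished": "augmented",
}

def invert_interval(quality, number):
    """Invert a simple interval (number 1..8)."""
    return QUALITY_INVERSE[quality], 9 - number

print(invert_interval("perfect", 4))     # ('perfect', 5)
print(invert_interval("augmented", 4))   # ('diminished', 5)
print(invert_interval("major", 3))       # ('minor', 6)
```

Compound intervals (wider than an octave) would first be reduced to their simple equivalents before applying the same rule.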
Chords
A chord's inversion describes the relationship of its lowest note to the other notes in the chord. For instance, a C major triad contains the tones C, E and G; its inversion is determined by which of these tones is the lowest note (or bass note) in the chord.
The term inversion often categorically refers to the different possibilities, though it may also be restricted to only those chords where the lowest note is not also the root of the chord. Texts that follow this restriction may use the term position instead, to refer to all of the possibilities as a category.
Root position and inver
|
https://en.wikipedia.org/wiki/Dead%20water
|
Dead water is the nautical term for a phenomenon which can occur when there is strong vertical density stratification due to salinity or temperature or both. It is common where a layer of fresh or brackish water rests on top of denser salt water, without the two layers mixing. The phenomenon is frequently, but not exclusively, observed in fjords where glacier runoff flows into salt water without much mixing. The effect arises because the vessel's energy goes into producing internal waves at the density interface, which act on the vessel. The effect can also be found at density boundaries between subsurface layers.
In the better known surface phenomenon a ship traveling in a fresh water layer with a depth approximately equal to the vessel's draft will expend energy creating and maintaining internal waves between the layers. The vessel may be hard to maneuver or can even slow down almost to a standstill and "stick". An increase in speed by a few knots can overcome the effect. Experiments have shown the effect can be even more pronounced in the case of submersibles encountering such stratification at depth.
The phenomenon, long considered sailor's yarns, was first described for science by Fridtjof Nansen, the Norwegian Arctic explorer. Nansen wrote the following from his ship Fram in August 1893 in the Nordenskiöld Archipelago near the Taymyr Peninsula:
"When caught in dead water Fram appeared to be held back, as if by some mysterious force, and she did not always answer the helm. In calm weather, with a light cargo, Fram was capable of 6 to 7 knots. When in dead water she was unable to make 1.5 knots. We made loops in our course, turned sometimes right around, tried all sorts of antics to get clear of it, but to very little purpose."
Nansen's experience led him to request physicist and meteorologist Vilhelm Bjerknes to study it scientifically. Bjerknes had his student, Vagn Walfrid Ekman, investigate. Ekman, who later described the effect now bearing his name as the Ekman spiral, demonstrated the
|
https://en.wikipedia.org/wiki/Barlow%27s%20formula
|
Barlow's formula (called "Kesselformel" in German) relates the internal pressure that a pipe can withstand to its dimensions and the strength of its material.
This approximate formula is named after Peter Barlow, an English mathematician.
P = 2St/D,
where
P: internal pressure,
S: allowable stress,
t: wall thickness,
D: outside diameter.
This formula (DIN 2413) figures prominently in the design of autoclaves and other pressure vessels.
Other formulations
The design of a complex pressure containment system involves much more than the application of Barlow's formula. For example, in 100 countries the ASME BPVC code stipulates the requirements for the design and testing of pressure vessels.
The formula is also common in the pipeline industry to verify that pipe used for gathering, transmission, and distribution lines can safely withstand operating pressures. The design factor is multiplied by the resulting pressure, which gives the maximum allowable operating pressure (MAOP) for the pipeline. In the United States, this design factor depends on class locations, which are defined in DOT Part 192. There are four class locations corresponding to four design factors:
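The pipeline calculation described above can be sketched as follows. The pipe dimensions, the X52 yield strength, and the Class 1 design factor of 0.72 are illustrative values chosen for the example, not figures from this article:

```python
# Sketch of Barlow's formula (P = 2*S*t / D) and the MAOP step.
# All numeric inputs below are illustrative, not a design reference.

def barlow_pressure(stress_psi, wall_in, outside_diameter_in):
    """Internal pressure a pipe can withstand: P = 2*S*t / D."""
    return 2.0 * stress_psi * wall_in / outside_diameter_in

def maop(stress_psi, wall_in, outside_diameter_in, design_factor):
    """Maximum allowable operating pressure = design factor * Barlow pressure."""
    return design_factor * barlow_pressure(stress_psi, wall_in, outside_diameter_in)

# 12.75 in OD, 0.25 in wall, 52,000 psi yield, assumed design factor 0.72
p = barlow_pressure(52_000, 0.25, 12.75)
print(round(p))                                  # 2039 psi
print(round(maop(52_000, 0.25, 12.75, 0.72)))    # 1468 psi
```

Note that the formula uses the outside diameter; some related thin-wall formulas use the mean or inside diameter instead, which changes the result slightly.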
External links
Barlow's Formula Calculator
Barlow's Equation and Calculator
Barlow's Formula Solver
Barlow's Formula Calculator for Copper Tubes
|
https://en.wikipedia.org/wiki/Denso%20mapcode
|
The Denso MapCode system is a spatial reference system (not to be confused with the international mapcode system, a different spatial reference system). Denso MapCodes are 7- to 10-digit codes identifying specific 900-square-meter areas in Japan.
History
The Denso MapCode system was developed in 1997 by Denso Corporation for easy identification of any location in Japan by Japanese navigation systems. Car navigation systems are unable to identify locations for which addresses or telephone numbers are not available or house numbers, like in Japan, are not sequential. The Denso MapCode system enables accurate pinpointing by number, predetermined according to latitude and longitude.
The use of MapCodes is free to end users but corporations wanting to commercialise it will need to sign a contract with Denso and pay a fee. In Japan, car navigation system suppliers such as Denso itself, Clarion, Kenwood, Fujitsu Ten, Sony, Panasonic, Pioneer and Alpine Electronics have adopted the system and Toyota, Honda, Nissan, Fuji Heavy Industries, BMW Japan, General Motors Japan, Jaguar Japan and Land Rover Japan have introduced it in their vehicles.
Design principles
The Denso MapCode system divides Japan into 1162 zones, each zone into 900 blocks, and each block into 900 areas. A Denso MapCode number consists of the zone number (up to 4 digits), the block number (always 3 digits) and the area number (always 3 digits), forming a numeric code of up to 10 digits.
As the MapCode numbers proved too coarse for certain situations (this first version identified an area with a radius of about 100 meters), the system was extended in 2004: by adding an asterisk and two extra digits, a specific cell of 9 square meters can be identified within an area. The design of the division into blocks is explained graphically on the website of Denso.
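The digit layout described above can be sketched as a simple splitter. The splitting logic (last three digits are the area, the three before that the block, the remainder the zone, with an optional "*NN" extension) is inferred from the description here and is not Denso's official parsing code:

```python
# Hedged sketch of the MapCode digit layout: zone (up to 4 digits),
# block (always 3), area (always 3), optional "*NN" 9-square-meter cell.

def split_mapcode(code):
    """Split a MapCode string like '1234 567 890*12' into its fields."""
    code = code.replace(" ", "")
    ext = None
    if "*" in code:
        code, ext = code.split("*")
    zone, block, area = code[:-6], code[-6:-3], code[-3:]
    return {"zone": zone, "block": block, "area": area, "extension": ext}

print(split_mapcode("1234 567 890*12"))
# {'zone': '1234', 'block': '567', 'area': '890', 'extension': '12'}

# Capacity implied by the scheme: 1162 zones x 900 blocks x 900 areas
assert 1162 * 900 * 900 == 941_220_000
```

Since the block and area fields are fixed-width, only the zone field varies in length, which is what keeps the codes unambiguous without separators.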
Application
Unfortunately, neither Garmin nor TomTom publish their own car navigation maps for Japan, although a third-party map for Garmin is available from
|
https://en.wikipedia.org/wiki/Lead%20shielding
|
Lead shielding refers to the use of lead as a form of radiation protection to shield people or objects from radiation so as to reduce the effective dose. Lead can effectively attenuate certain kinds of radiation because of its high density and high atomic number; principally, it is effective at stopping gamma rays and x-rays.
Operation
Lead's high density stems from the combination of its high atomic number and its relatively short bond lengths and small atomic radius. The high atomic number means that many electrons are needed to maintain a neutral charge, and the short bond lengths and small atomic radius mean that many atoms can be packed into a given volume of lead.
Because of lead's density and large number of electrons, it is well suited to scattering x-rays and gamma rays. These rays are composed of photons, a type of boson, which impart energy to electrons when they come into contact. Without a lead shield, the electrons within a person's body would absorb this energy, which could damage their DNA. When the radiation attempts to pass through lead, lead's electrons absorb and scatter the energy. Eventually, though, the lead will degrade from the energy to which it is exposed. However, lead is not effective against all types of radiation. High-energy electrons (including beta radiation) incident on lead may create bremsstrahlung radiation, which is potentially more dangerous to tissue than the original radiation. Furthermore, lead is not a particularly effective absorber of neutron radiation.
Types
Lead is used for shielding in x-ray machines, nuclear power plants, labs, medical facilities, military equipment, and other places where radiation may be encountered. There is great variety in the types of shielding available both to protect people and to shield equipment and experiments. In gamma-spectroscopy for example, lead castles are constructed to shield the probe from environmental radiation. Personal shielding includes lead aprons (such as the familiar garment used d
|
https://en.wikipedia.org/wiki/Sort%20%28Unix%29
|
In computing, sort is a standard command line program of Unix and Unix-like operating systems, that prints the lines of its input or concatenation of all files listed in its argument list in sorted order. Sorting is done based on one or more sort keys extracted from each line of input. By default, the entire input is taken as sort key. Blank space is the default field separator. The command supports a number of command-line options that can vary by implementation. For instance the "-r" flag will reverse the sort order.
History
A command that invokes a general sort facility was first implemented within Multics. Later, it appeared in Version 1 Unix. This version was originally written by Ken Thompson at AT&T Bell Laboratories. By Version 4 Thompson had modified it to use pipes, but sort retained an option to name the output file because it was used to sort a file in place. In Version 5, Thompson invented "-" to represent standard input.
The version of sort bundled in GNU coreutils was written by Mike Haertel and Paul Eggert. This implementation employs the merge sort algorithm.
Similar commands are available on many other operating systems; for example, a sort command is part of ASCII's MSX-DOS2 Tools for MSX-DOS version 2.
The command has also been ported to the IBM i operating system.
Syntax
sort [OPTION]... [FILE]...
With no FILE, or when FILE is -, the command reads from standard input.
Parameters
Examples
Sort a file in alphabetical order
$ cat phonebook
Smith, Brett 555-4321
Doe, John 555-1234
Doe, Jane 555-3214
Avery, Cory 555-4132
Fogarty, Suzie 555-2314
$ sort phonebook
Avery, Cory 555-4132
Doe, Jane 555-3214
Doe, John 555-1234
Fogarty, Suzie 555-2314
Smith, Brett 555-4321
Sort by number
The -n option makes the program sort according to numerical value. The du command produces output that starts with a number, the file size, so its output can be piped to sort to produce a list of files sorted by (ascending) fil
|
https://en.wikipedia.org/wiki/Comparison%20gallery%20of%20image%20scaling%20algorithms
|
This gallery shows the results of numerous image scaling algorithms.
Scaling methods
An image size can be changed in several ways. Consider resizing a 160x160 pixel photo to the following 40x40 pixel thumbnail and then scaling the thumbnail to a 160x160 pixel image. Also consider doubling the size of the following image containing text.
|
https://en.wikipedia.org/wiki/Alan%20Haberman
|
Alan Haberman (July 27, 1929 in Worcester, Massachusetts – June 12, 2011 in Newton, Massachusetts) was an American supermarket executive who is credited with popularizing the use of the barcode in commerce internationally. Haberman was a founder and board member of the Uniform Code Council. He graduated from Harvard College and Harvard Business School.
See also
George J. Laurer, U.P.C. creator
|
https://en.wikipedia.org/wiki/Heliotropism
|
Heliotropism, a form of tropism, is the diurnal or seasonal motion of plant parts (flowers or leaves) in response to the direction of the Sun.
The habit of some plants to move in the direction of the Sun, a form of tropism, was already known by the Ancient Greeks. They named one of those plants after that property Heliotropium, meaning "sun turn". The Greeks assumed it to be a passive effect, presumably the loss of fluid on the illuminated side, that did not need further study. Aristotle's logic that plants are passive and immobile organisms prevailed. In the 19th century, however, botanists discovered that growth processes in the plant were involved, and conducted increasingly in-depth experiments. A. P. de Candolle called this phenomenon in any plant heliotropism (1832). It was renamed phototropism in 1892, because it is a response to light rather than to the sun, and because the phototropism of algae in lab studies at that time strongly depended on the brightness (positive phototropic for weak light, and negative phototropic for bright light, like sunlight). A botanist studying this subject in the lab, at the cellular and subcellular level, or using artificial light, is more likely to employ the more abstract word phototropism, a term which includes artificial light as well as natural sunlight. The French scientist Jean-Jacques d'Ortous de Mairan was one of the first to study heliotropism when he experimented with the Mimosa pudica plant. The phenomenon was studied by Charles Darwin and published in his penultimate 1880 book The Power of Movement in Plants, a work which included other stimuli to plant movement such as gravity, moisture and touch.
Floral heliotropism
Heliotropic flowers track the Sun's motion across the sky from east to west. Daisies or Bellis perennis close their petals at night but open in the morning light and then follow the sun as the day progresses. During the night, the flowers may assume a random orientation, while at dawn they turn ag
|
https://en.wikipedia.org/wiki/Acoziborole
|
Acoziborole (SCYX-7158) is an antiprotozoal drug invented by Anacor Pharmaceuticals in 2009, and now under development by the Drugs for Neglected Diseases Initiative for the treatment of African trypanosomiasis (sleeping sickness).
It is a structurally novel drug described as a benzoxaborole derivative, and is a one-day, one-dose oral treatment. Phase I human clinical trials were completed successfully in 2015. A single-arm phase II/III trial, with no control group, was conducted from 2016 to 2019 in the Democratic Republic of the Congo and Guinea, involving 208 eligible patients with trypanosomiasis caused by Trypanosoma brucei gambiense. The results of the study, published in The Lancet on 29 November 2022, found the treatment regimen had an efficacy greater than 95%. Two follow-up studies, one comparing acoziborole to nifurtimox/eflornithine and a double-blind, randomized trial of the drug based on WHO recommendations with 1,200 total participants, are underway as of November 2022.
As the regimen is significantly easier to administer compared to existing treatment options, some commentators expressed hope that acoziborole could significantly slow down or even eliminate the transmission of African trypanosomiasis in humans.
See also
Tavaborole
|
https://en.wikipedia.org/wiki/Weibull%20modulus
|
The Weibull modulus is a dimensionless parameter of the Weibull distribution which is used to describe variability in measured material strength of brittle materials.
For ceramics and other brittle materials, the maximum stress that a sample can be measured to withstand before failure may vary from specimen to specimen, even under identical testing conditions. This is related to the distribution of physical flaws present in the surface or body of the brittle specimen, since brittle failure processes originate at these weak points. When flaws are consistent and evenly distributed, samples will behave more uniformly than when flaws are clustered inconsistently. This must be taken into account when describing the strength of the material, so strength is best represented as a distribution of values rather than as one specific value. The Weibull modulus is a shape parameter for the Weibull distribution model which, in this case, maps the probability of failure of a component at varying stresses.
Consider strength measurements made on many small samples of a brittle ceramic material. If the measurements show little variation from sample to sample, the calculated Weibull modulus will be high and a single strength value would serve as a good description of the sample-to-sample performance. It may be concluded that its physical flaws, whether inherent to the material itself or resulting from the manufacturing process, are distributed uniformly throughout the material. If the measurements show high variation, the calculated Weibull modulus will be low; this reveals that flaws are clustered inconsistently and the measured strength will be generally weak and variable. Products made from components of low Weibull modulus will exhibit low reliability and their strengths will be broadly distributed.
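The relationship described above, between scatter in measured strengths and the fitted modulus, can be sketched numerically. The sketch below generates synthetic strength data from a known Weibull distribution and recovers the modulus with the common median-rank, linear-regression approach; the sample size, scale, and true modulus are illustrative choices:

```python
# Estimating the Weibull modulus m from a set of measured strengths,
# using median ranks and a least-squares fit on the Weibull plot.
# Synthetic, illustrative data: true modulus 10, scale 400 MPa.
import math
import random

random.seed(0)
m_true, sigma0 = 10.0, 400.0
strengths = sorted(
    sigma0 * (-math.log(1 - random.random())) ** (1 / m_true)
    for _ in range(200)
)

n = len(strengths)
xs, ys = [], []
for i, s in enumerate(strengths, start=1):
    f = (i - 0.5) / n                      # median-rank failure probability
    xs.append(math.log(s))
    ys.append(math.log(-math.log(1 - f)))  # Weibull plot ordinate

# The least-squares slope of the Weibull plot is the Weibull modulus.
mx, my = sum(xs) / n, sum(ys) / n
m_est = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(round(m_est, 1))   # close to the true modulus of 10
```

A tighter spread of strengths steepens the Weibull plot and yields a higher modulus; widely scattered strengths flatten it, matching the qualitative description above.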
Test procedures for determining the Weibull modulus are specified in DIN EN 843-5 and DIN 51 110-3.
A further method to determine the strength of brittle materials has been descri
|
https://en.wikipedia.org/wiki/2012%20phenomenon
|
The 2012 phenomenon was a range of eschatological beliefs that cataclysmic or transformative events would occur on or around 21 December 2012. This date was regarded as the end-date of a 5,126-year-long cycle in the Mesoamerican Long Count calendar, and festivities took place on 21 December 2012 to commemorate the event in the countries that were part of the Maya civilization (Mexico, Guatemala, Honduras, and El Salvador), with main events at Chichén Itzá in Mexico and Tikal in Guatemala.
Various astronomical alignments and numerological formulae were proposed for this date. A New Age interpretation held that the date marked the start of a period during which Earth and its inhabitants would undergo a positive physical or spiritual transformation, and that 21 December 2012 would mark the beginning of a new era. Others suggested that the date marked the end of the world or a similar catastrophe. Scenarios suggested for the end of the world included the arrival of the next solar maximum, an interaction between Earth and Sagittarius A*, a supermassive black hole at the center of the galaxy, the Nibiru cataclysm in which Earth would collide with a mythical planet called Nibiru, or even the heating of Earth's core.
Scholars from various disciplines quickly dismissed predictions of cataclysmic events as they arose. Mayan scholars stated that no classic Mayan accounts forecast impending doom, and the idea that the Long Count calendar ends in 2012 misrepresented Mayan history and culture. Astronomers rejected the various proposed doomsday scenarios as pseudoscience, having been refuted by elementary astronomical observations.
Mesoamerican Long Count calendar
December 2012 marked the conclusion of a bʼakʼtun—a time period in the Mesoamerican Long Count calendar, used in Mesoamerica prior to the arrival of Europeans. Although the Long Count was most likely invented by the Olmec, it has become closely associated with the Maya civilization, whose classic period lasted from
|
https://en.wikipedia.org/wiki/Margaret%20L.%20Kripke
|
Margaret L. Kripke is an American immunologist. She is an expert in photoimmunology and the immunology of skin cancers. She earned a BS and MS in bacteriology, and a Ph.D in immunology, at the University of California at Berkeley.
She founded the department of immunology at The University of Texas M. D. Anderson Cancer Center in 1983, and served as the cancer center's executive vice president and chief academic officer until her retirement in 2007. After her retirement, Kripke served as special advisor to the provost.
From 1993 to 1994, Kripke served as president of the American Association for Cancer Research.
In 2008, M. D. Anderson established the Margaret Kripke Legend Award "to honor individuals who have enhanced the careers of women in cancer medicine and cancer science".
She served on the President's Cancer Panel from 2003 to 2011. The panel's 2006-2007 report, Promoting Healthy Lifestyles, urged "that the influence of the tobacco industry – particularly on America's children – be weakened through strict Federal regulation of tobacco product sales and marketing". The panel's 2008-2009 report, Reducing Environmental Cancer Risk: What We Can Do Now, "for the first time highlights the contribution of environmental contaminants to the development of cancer". A 2021 video describes how Dr. Kripke came to rethink her assumptions about the causes of cancer.
In 2013, she was named a Fellow of the American Association for Cancer Research Academy.
From 2012 through 2016, she was the chief scientific officer of the Cancer Prevention and Research Institute of Texas.
She has served on the board of directors of Silent Spring Institute.
In 2020, Kripke called upon the National Cancer Institute to publish information about cancer risks from exposure to chemicals in the environment.
Bibliography
Publication Lists
JSTOR.org
Researchgate.net
Google Scholar
Books
Google Books list
|
https://en.wikipedia.org/wiki/Tellimagrandin%20I
|
Tellimagrandin I is an ellagitannin found in plants such as Cornus canadensis, Eucalyptus globulus, Melaleuca styphelioides, Rosa rugosa, and walnut. It is composed of two galloyl groups and one hexahydroxydiphenoyl group bound to a glucose residue. It differs from tellimagrandin II only by having a hydroxyl group in place of a third galloyl group. It is also structurally similar to punigluconin and pedunculagin, two other ellagitannin monomers.
Tellimagrandin I has been shown to restore antioxidant enzyme activity in glucose- and oxalate-challenged rat cells and affects Cu(II)- and Fe(II)-dependent DNA strand breaks. It has hepatoprotective effects on carbon tetrachloride- and d-galactosamine-stressed HepG2 cells and enhances peroxisomal fatty acid beta-oxidation in liver, increasing mRNA expression of PPAR alpha, ACOX1, and CPT1A. It enhances gap junction communication and reduces tumor phenotype in HeLa cells and inhibits invasion of HSV-1 and HCV similar to eugeniin and casuarictin.
See also
Ellagitannin
Pedunculagin
Punigluconin
Tellimagrandin II
|
https://en.wikipedia.org/wiki/Gray%20goo
|
Gray goo (also spelled as grey goo) is a hypothetical global catastrophic scenario involving molecular nanotechnology in which out-of-control self-replicating machines consume all biomass (and perhaps also everything else) on Earth while building many more of themselves, a scenario that has been called ecophagy. The original idea assumed machines were designed to have this capability, while popularizations have assumed that machines might somehow gain this capability by accident.
Self-replicating machines of the macroscopic variety were originally described by mathematician John von Neumann, and are sometimes referred to as von Neumann machines or clanking replicators.
The term gray goo was coined by nanotechnology pioneer K. Eric Drexler in his 1986 book Engines of Creation. In 2004, he stated "I wish I had never used the term 'gray goo'." Engines of Creation mentions "gray goo" as a thought experiment in two paragraphs and a note, while the popularized idea of gray goo was first publicized in a mass-circulation magazine, Omni, in November 1986.
Definition
The term was first used by molecular nanotechnology pioneer K. Eric Drexler in Engines of Creation (1986). In Chapter 4, Engines Of Abundance, Drexler illustrates both exponential growth and inherent limits (not gray goo) by describing "dry" nanomachines that can function only if given special raw materials:
According to Drexler, the term was popularized by an article in science fiction magazine Omni, which also popularized the term "nanotechnology" in the same issue. Drexler says arms control is a far greater issue than gray goo "nanobugs".
Drexler describes gray goo in Chapter 11 of Engines of Creation:
Drexler notes that the geometric growth made possible by self-replication is inherently limited by the availability of suitable raw materials. Drexler used the term "gray goo" not to indicate color or texture, but to emphasize the difference between "superiority" in terms of human values and "superiority"
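Drexler's point about geometric growth is easy to make concrete with a doubling calculation. The sketch below is illustrative only: the replicator mass and raw-material budget are assumed round numbers, not figures from Engines of Creation.

```python
import math

def generations_to_consume(replicator_mass_kg: float, budget_kg: float) -> int:
    """Number of doubling generations before the total mass of
    replicators exceeds the available raw-material budget.
    After g generations, one replicator has become 2**g copies."""
    return math.ceil(math.log2(budget_kg / replicator_mass_kg))

# Assumed figures: a 1e-15 kg nanomachine vs. a ~1e15 kg biomass budget.
gens = generations_to_consume(1e-15, 1e15)
print(gens)  # 100 doublings already outgrow the entire budget
```

The exponential outruns any fixed budget in a logarithmic number of generations, which is exactly why Drexler stresses that raw-material availability, not time, is the binding limit.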
|
https://en.wikipedia.org/wiki/External%20spermatic%20fascia
|
The external spermatic fascia (intercrural or intercolumnar fascia) is a thin membrane, prolonged downward around the surface of the spermatic cord and testis. It is separated from the dartos tunic by loose areolar tissue. It is occasionally referred to as 'Le Fascia de Webster' after an anatomist who once described it.
Structure
The external spermatic fascia is derived from the aponeurosis of the abdominal external oblique muscle. It is acquired by the spermatic cord at the superficial inguinal ring.
|
https://en.wikipedia.org/wiki/Treatise%20on%20Light
|
Treatise on Light: In Which Are Explained the Causes of That Which Occurs in Reflection & Refraction (Traité de la Lumière: Où Sont Expliquées les Causes de ce qui Luy Arrive Dans la Reflexion & Dans la Refraction) is a book written by Dutch polymath Christiaan Huygens that was published in French in 1690. The book describes Huygens's conception of the nature of light propagation which makes it possible to explain the laws of geometrical optics shown in Descartes's Dioptrique, which Huygens aimed to replace.
Unlike Newton's corpuscular theory, which was presented in the Opticks, Huygens conceived of light as an irregular series of shock waves which proceeds with very great, but finite, velocity through the aether, similar to sound waves. Moreover, he proposed that each point of a wavefront is itself the origin of a secondary spherical wave, a principle known today as the Huygens–Fresnel principle. The book is considered a pioneering work of theoretical and mathematical physics and the first mechanistic account of an unobservable physical phenomenon.
Overview
Huygens worked on the mathematics of light rays and the properties of refraction in his work Dioptrica, which began in 1652 but remained unpublished, and which predated his lens grinding work. In 1672, the problem of the strange refraction of the Iceland crystal created a puzzle regarding the physics of refraction that Huygens wanted to solve. Huygens eventually was able to solve this problem by means of elliptical waves in 1677 and confirmed his theory by experiments mostly after critical reactions in 1679.
His explanation of birefringence was based on three hypotheses: (1) There are inside the crystal two media in which light waves proceed, (2) one medium behaves as ordinary ether and carries the normally refracted ray, and (3) the velocity of the waves in the other medium is dependent on direction, so that the waves do not expand in spherical form, but rather as ellipsoids of revolution; this second medium carries the abnorm
|
https://en.wikipedia.org/wiki/IBall%20%28company%29
|
iBall is an Indian electronics retailer headquartered in Mumbai. It imports computer peripherals, smartphones and tablets from original equipment manufacturers (OEMs).
Products
The company sold consumer electronics products in 28 different product categories.
In 2014, iBall launched the Andi Uddaan smartphone for women. An SOS button located at the back of the phone sounds a loud siren and automatically sends text messages (SMS) to five pre-selected contacts when pressed.
In May 2015, iBall launched the iBall Slide i701 in collaboration with Intel and Microsoft.
In May 2016, iBall, in a strategic partnership with Intel and Microsoft, claimed to have launched India's most affordable Windows 10 laptop, the iBall CompBook, at ₹9,999.
Awards
In 2020, iBall won the MEA Award for Innovative Use of Ambient Media.
|
https://en.wikipedia.org/wiki/Air%20preheater
|
An air preheater is any device designed to heat air before another process (for example, combustion in a boiler), with the primary objective of increasing the thermal efficiency of the process. They may be used alone, or as a replacement for a recuperative heat system or a steam coil.
In particular, this article describes the combustion air preheaters used in large boilers found in thermal power stations producing electric power from, e.g., fossil fuels, biomass or waste. The Ljungström air preheater, for instance, has been credited with worldwide fuel savings estimated at 4,960,000,000 tons of oil; noting that "few inventions have been as successful in saving fuel as the Ljungström Air Preheater", the American Society of Mechanical Engineers designated it the 44th International Historic Mechanical Engineering Landmark.
The purpose of the air preheater is to recover the heat from the boiler flue gas which increases the thermal efficiency of the boiler by reducing the useful heat lost in the flue gas. As a consequence, the flue gases are also conveyed to the flue gas stack (or chimney) at a lower temperature, allowing simplified design of the conveyance system and the flue gas stack. It also allows control over the temperature of gases leaving the stack (to meet emissions regulations, for example). It is installed between the economizer and chimney.
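The heat recovered from the flue gas can be estimated from the sensible-heat relation Q = ṁ·cp·ΔT. The sketch below uses assumed, illustrative values (the flue-gas flow rate, specific heat and temperatures are not taken from any particular boiler):

```python
def recovered_heat_kw(mass_flow_kg_s: float, cp_kj_kg_k: float,
                      t_in_c: float, t_out_c: float) -> float:
    """Sensible heat recovered from flue gas (kW) as it is cooled
    from t_in_c to t_out_c across the air preheater: Q = m * cp * dT."""
    return mass_flow_kg_s * cp_kj_kg_k * (t_in_c - t_out_c)

# Assumed figures: 100 kg/s of flue gas with cp ~ 1.05 kJ/(kg*K),
# cooled from 350 C to 150 C across the preheater.
q = recovered_heat_kw(100.0, 1.05, 350.0, 150.0)
print(round(q))  # 21000 kW returned to the combustion air
```

Every kilowatt recovered this way is heat that would otherwise leave through the stack, which is the efficiency argument made above.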
Types
There are two types of air preheaters for use in steam generators in thermal power stations: One is a tubular type built into the boiler flue gas ducting, and the other is a regenerative air preheater. These may be arranged so the gas flows horizontally or vertically across the axis of rotation.
Another type of air preheater is the regenerator used in iron or glass manufacture.
Tubular type
Construction features
Tubular preheaters consist of straight tube bundles which pass through the outlet ducting of the boiler and open at each end outside of the ducting. Inside the ducting, the hot furnace gases pass around the preheater t
|
https://en.wikipedia.org/wiki/Generalized-strain%20mesh-free%20formulation
|
The generalized-strain mesh-free (GSMF) formulation is a local meshfree method in the field of numerical analysis, completely integration free, working as a weighted-residual weak-form collocation. This method was first presented by Oliveira and Portela (2016), in order to further improve the computational efficiency of meshfree methods in numerical analysis. Local meshfree methods are derived through a weighted-residual formulation which leads to a local weak form that is the well known work theorem of the theory of structures. In an arbitrary local region, the work theorem establishes an energy relationship between a statically-admissible stress field and an independent kinematically-admissible strain field. Based on the independence of these two fields, this formulation results in a local form of the work theorem that is reduced to regular boundary terms only, integration-free and free of volumetric locking.
Advantages over finite element methods are that GSMF does not rely on a mesh, and is more precise and faster when solving two-dimensional problems. When compared with other meshless methods, such as the rigid-body displacement mesh-free (RBDMF) formulation, the element-free Galerkin (EFG) method and the meshless local Petrov-Galerkin finite volume method (MLPG FVM), GSMF proved superior not only in computational efficiency but also in accuracy.
The moving least squares (MLS) approximation of the elastic field is used on this local meshless formulation.
Formulation
In the local form of the work theorem, equation:
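The displayed equation is missing from this copy. In its standard form (notation assumed here, not reproduced from Oliveira and Portela's paper), the work theorem over a local domain $\Omega_Q$ with boundary $\Gamma_Q$ equates the work of the statically-admissible field on the kinematically-admissible one:

```latex
\int_{\Gamma_Q} \mathbf{t}^{T}\,\mathbf{u}^{*}\, d\Gamma
\;+\; \int_{\Omega_Q} \mathbf{b}^{T}\,\mathbf{u}^{*}\, d\Omega
\;=\; \int_{\Omega_Q} \boldsymbol{\sigma}^{T}\,\boldsymbol{\varepsilon}^{*}\, d\Omega
```

where $\mathbf{t}$ and $\mathbf{b}$ are the tractions and body forces of the statically-admissible stress field $\boldsymbol{\sigma}$, and $\mathbf{u}^{*}$, $\boldsymbol{\varepsilon}^{*}$ are the displacements and strains of the independent kinematically-admissible field.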
The displacement field was assumed to be a continuous function, leading to a regular integrable function that is the kinematically-admissible strain field. However, this continuity assumption, enforced in the local form of the work theorem, is not absolutely required and can be relaxed by convenience, provided the strain field can be treated as a generalized function in the sense of the theory of distributions; see Gelfand and Shilov. Hence, t
|
https://en.wikipedia.org/wiki/Vacuum%20flask
|
A vacuum flask (also known as a Dewar flask, Dewar bottle or thermos) is an insulating storage vessel that greatly lengthens the time over which its contents remain hotter or cooler than the flask's surroundings. Invented by Sir James Dewar in 1892, the vacuum flask consists of two flasks, placed one within the other and joined at the neck. The gap between the two flasks is partially evacuated of air, creating a near-vacuum which significantly reduces heat transfer by conduction or convection. When used to hold cold liquids, this also virtually eliminates condensation on the outside of the flask.
Vacuum flasks are used domestically to keep beverages hot or cold for extended periods of time, and for keeping cooked food hot. They are also used for thermal cooking. Vacuum flasks are also used for many purposes in industry.
History
The vacuum flask was designed and invented by Scottish scientist Sir James Dewar in 1892 as a result of his research in the field of cryogenics and is sometimes called a Dewar flask in his honour. While performing experiments in determining the specific heat of the element palladium, Dewar made a brass chamber that he enclosed in another chamber to keep the palladium at its desired temperature. He evacuated the air between the two chambers, creating a partial vacuum to keep the temperature of the contents stable. Dewar refused to patent his invention; this allowed others to develop the flask using new materials such as glass and aluminium, and it became a significant tool for chemical experiments and also a common household item.
Dewar's design was quickly transformed into a commercial item in 1904 as two German glassblowers, Reinhold Burger and Albert Aschenbrenner, discovered that it could be used to keep cold drinks cold and warm drinks warm and invented a more robust flask design, which was suited for everyday use. The Dewar flask design had never been patented but the German men who discovered the commercial use for the product name
|
https://en.wikipedia.org/wiki/Cryptlib
|
cryptlib is an open-source cross-platform software security toolkit library. It is distributed under the Sleepycat License, a free software license compatible with the GNU General Public License. Alternatively, cryptlib is available under a proprietary license for those preferring to use it under proprietary terms.
Features
cryptlib is a security toolkit library that allows programmers to incorporate encryption and authentication services into software. It provides a high-level interface, so strong security capabilities can be added to an application without needing to know many of the low-level details of encryption or authentication algorithms. It comes with a programming manual of over 400 pages.
At the highest level, cryptlib provides implementations of complete security services such as S/MIME and PGP/OpenPGP secure enveloping, SSL/TLS and SSH secure sessions, CA services such as CMP, SCEP, RTCS, and OCSP, and other security operations such as secure timestamping. Since cryptlib uses industry-standard X.509, S/MIME, PGP/OpenPGP, and SSH/SSL/TLS data formats, the resulting encrypted or signed data can be easily transported to other systems and processed there, and cryptlib itself runs on many operating systems—all Windows versions and most Unix/Linux systems. This allows email, files, and EDI transactions to be authenticated with digital signatures and encrypted in an industry-standard format.
cryptlib provides other capabilities including full X.509/PKIX certificate handling (all X.509 versions from X.509v1 to X.509v4) with support for SET, Microsoft AuthentiCode, Identrus, SigG, S/MIME, SSL, and Qualified certificates, PKCS #7 certificate chains, handling of certification requests and CRLs (certificate revocation lists) including automated checking of certificates against CRLs and online checking using RTCS and OCSP, and issuing and revoking certificates using CMP and SCEP. It also implements a full range of certification authority (CA) functions provides complet
|
https://en.wikipedia.org/wiki/Cc%3AMail
|
cc:Mail is a discontinued store-and-forward LAN-based email system originally developed on Microsoft's MS-DOS platform by Concentric Systems, Inc. in the 1980s. The company, founded by Robert Plummer, Hubert Lipinski, and Michael Palmer, later changed its name to PCC Systems, Inc., and then to cc:Mail, Inc. At the height of its popularity, cc:Mail had about 14 million users, and won various awards for being the top email software package of the mid-1990s.
Architecture overview
In the 1980s and 1990s, it became common in office environments to have a personal computer on every desk, all connected via a local area network (LAN). Typically, (at least) one computer is set up as a file server, so that any computer on the LAN can store and access files on the server as if they were local files. cc:Mail was designed to operate in that environment.
The central point of focus in the cc:Mail architecture is the cc:Mail "post office," which is a collection of files located on the file server and consisting of the message store and related data. However, no cc:Mail software needs to be installed or run on the file server itself. The cc:Mail application is installed on the user desktops. It provides a user interface, and reads and writes to the post office files directly in order to send, access, and manage email messages. This arrangement is called a "shared-file mail system" (which was also implemented later in competing products such as Microsoft Mail). This is in contrast to a "client/server mail system" which involves a mail client application interacting with a mail server application (the latter then being the focal point of message handling). Client/server mail was added later to the cc:Mail product architecture (see below), and also became available in competing offerings (such as Microsoft Exchange).
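The shared-file pattern is easy to illustrate: every client reads and writes message files in a shared directory, and no server-side mail process mediates access. This toy sketch shows the idea only; it is not cc:Mail's actual (proprietary) post office format.

```python
import json
import tempfile
from pathlib import Path

class SharedFilePostOffice:
    """Toy shared-file mail store: clients write directly to a
    directory on the file server; there is no mail server process."""

    def __init__(self, root: Path):
        self.root = root
        self.root.mkdir(parents=True, exist_ok=True)

    def send(self, sender: str, recipient: str, body: str) -> None:
        # Each message is one file in the recipient's inbox directory.
        inbox = self.root / recipient
        inbox.mkdir(exist_ok=True)
        n = len(list(inbox.glob("*.json")))
        (inbox / f"{n:06d}.json").write_text(
            json.dumps({"from": sender, "body": body}))

    def read(self, user: str) -> list[dict]:
        inbox = self.root / user
        if not inbox.exists():
            return []
        return [json.loads(p.read_text())
                for p in sorted(inbox.glob("*.json"))]

po = SharedFilePostOffice(Path(tempfile.mkdtemp()))
po.send("alice", "bob", "status report attached")
print(po.read("bob")[0]["from"])  # alice
```

The design trade-off is visible even in the toy: the store is just files, so any client can reach it through ordinary file sharing, but every client must be trusted to manipulate the message store correctly, which is what client/server systems later centralized.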
Other than the cc:Mail desktop application, key software elements of the cc:Mail architecture include cc:Mail Router (for transferring messages between post offices,
|
https://en.wikipedia.org/wiki/Pyrolysis%E2%80%93gas%20chromatography%E2%80%93mass%20spectrometry
|
Pyrolysis–gas chromatography–mass spectrometry is a method of chemical analysis in which the sample is heated to decomposition to produce smaller molecules that are separated by gas chromatography and detected using mass spectrometry.
How it works
Pyrolysis is the thermal decomposition of materials in an inert atmosphere or a vacuum. The sample is put into direct contact with a platinum wire, or placed in a quartz sample tube, and rapidly heated to 600–1000 °C; depending on the application, even higher temperatures are used. Three different heating techniques are used in actual pyrolyzers: isothermal furnace, inductive heating (Curie-point filament), and resistive heating using platinum filaments. Large molecules cleave at their weakest bonds, producing smaller, more volatile fragments. These fragments can be separated by gas chromatography. Pyrolysis GC chromatograms are typically complex because a wide range of different decomposition products is formed. The data can either be used as a fingerprint to prove material identity, or the GC/MS data can be used to identify individual fragments to obtain structural information. To increase the volatility of polar fragments, various methylating reagents can be added to a sample before pyrolysis.
Besides the usage of dedicated pyrolyzers, pyrolysis GC of solid and liquid samples can be performed directly inside programmable temperature vaporizer (PTV) injectors that provide quick heating (up to 60 °C/s) and high maximum temperatures of 600-650 °C. This is sufficient for many pyrolysis applications. The main advantage is that no dedicated instrument has to be purchased and pyrolysis can be performed as part of routine GC analysis. In this case quartz GC inlet liners can be used. Quantitative data can be acquired, and good results of derivatization inside the PTV injector are published as well.
Applications
Pyrolysis gas chromatography is useful for the identification of involatile compounds. These materials include polymeric mat
|
https://en.wikipedia.org/wiki/Break-in%20%28mechanical%20run-in%29
|
Break-in or breaking in, also known as run-in or running in, is the procedure of conditioning a new piece of equipment by giving it an initial period of running, usually under light load, but sometimes under heavy load or normal load. It is generally a process of moving parts wearing against each other to produce the last small bit of size and shape adjustment that will settle them into a stable relationship for the rest of their working life.
One of the most common examples of break-in is engine break-in for petrol engines and diesel engines.
Engine break-in
A new engine is broken in by following specific driving guidelines during the first few hours of its use. The focus of breaking in an engine is on the contact between the piston rings of the engine and the cylinder wall. There is no universal preparation or set of instructions for breaking in an engine. Most importantly, experts disagree on whether it is better to start engines on high or low power to break them in. While there are still consequences to an unsuccessful break-in, they are harder to quantify on modern engines than on older models. In general, people no longer break in the engines of their own vehicles after purchasing a car or motorcycle, because the process is done in production. It is still common, even today, to find that an owner's manual recommends gentle use at first (often specified as the first 500 or 1000 kilometres or miles). But it is usually only normal use without excessive demands that is specified, as opposed to light/limited use. For example, the manual will specify that the car be driven normally, but not in excess of the highway speed limit.
Goal
The goal of modern engine break-ins is the settling of piston rings into an engine's cylinder wall. A cylinder wall is not perfectly smooth but has a deliberate slight roughness to help oil adhesion. As the engine is powered up, the piston rings between the pistons and cylinder wall will begin to seal against the wall's small ridges
|
https://en.wikipedia.org/wiki/Riesz%20potential
|
In mathematics, the Riesz potential is a potential named after its discoverer, the Hungarian mathematician Marcel Riesz. In a sense, the Riesz potential defines an inverse for a power of the Laplace operator on Euclidean space. They generalize to several variables the Riemann–Liouville integrals of one variable.
Definition
If 0 < α < n, then the Riesz potential Iαf of a locally integrable function f on Rn is the function defined by
where the constant is given by
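The displayed formulas are missing from this copy; under the standard normalization they read:

```latex
(I_\alpha f)(x) \;=\; \frac{1}{H_\alpha} \int_{\mathbb{R}^n}
\frac{f(y)}{|x-y|^{\,n-\alpha}}\, dy,
\qquad
H_\alpha \;=\; \pi^{n/2}\, 2^{\alpha}\,
\frac{\Gamma(\alpha/2)}{\Gamma\!\left(\tfrac{n-\alpha}{2}\right)} .
```

The choice of the constant $H_\alpha$ is what makes the Fourier-multiplier and semigroup identities mentioned below come out without extra factors.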
This singular integral is well-defined provided f decays sufficiently rapidly at infinity, specifically if f ∈ Lp(Rn) with 1 ≤ p < n/α. In fact, for any 1 ≤ p (the case p > 1 is classical, due to Sobolev, while the case p = 1 requires a separate argument), the rate of decay of f and that of Iαf are related in the form of an inequality (the Hardy–Littlewood–Sobolev inequality)
where R denotes the vector-valued Riesz transform. More generally, the operators Iα are well-defined for complex α such that 0 < Re α < n.
The Riesz potential can be defined more generally in a weak sense as the convolution
where Kα is the locally integrable function:
The Riesz potential can therefore be defined whenever f is a compactly supported distribution. In this connection, the Riesz potential of a positive Borel measure μ with compact support is chiefly of interest in potential theory because Iαμ is then a (continuous) subharmonic function off the support of μ, and is lower semicontinuous on all of Rn.
Consideration of the Fourier transform reveals that the Riesz potential is a Fourier multiplier.
In fact, one has
and so, by the convolution theorem,
The Riesz potentials satisfy the following semigroup property on, for instance, rapidly decreasing continuous functions
provided
Furthermore, if , then
One also has, for this class of functions,
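The displayed formulas omitted in the passage above are standard and can be restored as follows (with the normalization of the constant $H_\alpha$ assumed as usual): the kernel transforms as a power of $|\xi|$, so by the convolution theorem the Riesz potential is a Fourier multiplier, and the semigroup and Laplacian identities follow:

```latex
\widehat{K_\alpha}(\xi) \;=\; |2\pi\xi|^{-\alpha},
\qquad
\widehat{I_\alpha f}(\xi) \;=\; |2\pi\xi|^{-\alpha}\,\hat{f}(\xi),
```
```latex
I_\alpha I_\beta \;=\; I_{\alpha+\beta}
\quad (\alpha,\beta>0,\ \alpha+\beta<n),
\qquad
\Delta\, I_{\alpha+2} f \;=\; -\, I_\alpha f .
```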
See also
Bessel potential
Fractional integration
Sobolev space
Notes
|
https://en.wikipedia.org/wiki/Samsung%20Display
|
Samsung Display (Hangul: 삼성디스플레이) is a company selling display devices with OLED and QD-OLED technology. Display markets include smartphones, TVs, laptops, computer monitors, smartwatches, VR, game consoles, and automotive applications.
Headquartered in South Korea, Samsung Display has production plants in China, Vietnam, and India, and operates sales offices in six countries. Samsung Display enabled the first mass-production of OLED and quantum dot display and aims to develop next-generation technology such as slidable, rollable and stretchable panels.
As the LCD business spun off from Samsung Electronics, Samsung Display Corporation was established on April 1, 2012. The company launched on July 1 by merging Samsung Electronics' LCD business, S-LCD Corporation (manufacturer of amorphous TFT LCD panels) and Samsung Mobile Display (Samsung's OLED arm). By combining the OLED and LCD businesses, Samsung Display became the world's largest display company.
History
January 1991: Samsung Electronics launched TFT-LCD business
February 1995: Operated TFT-LCD line for the first time domestically
November 2003: Invested for 4.5 generation AMOLED mass-production for the first time in the world
July 2004: A joint venture S-LCD Corporation between Samsung Electronics and Sony Corporation was established.
April 2005: S-LCD begins shipment of seventh-generation TFT LCD panels for LCD TVs.
August 2007: S-LCD begins shipment of eighth-generation TFT LCD panels for LCD TVs.
October 2007: Started to mass produce AMOLED for the first time in the world
March 2009: Exceeded monthly AMOLED production of one million units
December 2011: The company's partners announce that Samsung will acquire Sony's entire stake in the joint venture, making S-LCD Corporation a wholly owned subsidiary of Samsung Electronics.
July 1, 2012: S-LCD and Samsung Mobile Display merge to create Samsung Display
August 2014: Samsung Display mass-produced the world’s first curved edge display panel, featured in the Galax
|
https://en.wikipedia.org/wiki/PointCast
|
PointCast was a dot-com company founded in 1992 by Christopher R. Hassett in Sunnyvale, California.
PointCast Network
The company's initial product amounted to a screensaver that displayed news and other information, delivered live over the Internet. The PointCast Network used push technology, which was a hot concept at the time, and received enormous press coverage when it launched in beta form on February 13, 1996.
The product did not perform as well as expected, often believed to be because its traffic burdened corporate networks with excessive bandwidth use, and it was banned in many workplaces. It demanded more bandwidth than the home dial-up Internet connections of the day could provide, and people also objected to the large number of advertisements pushed over the service. PointCast offered corporations a proxy server that would dramatically reduce the bandwidth used, but even this did not save the company. A more likely reason than bandwidth was the increasing popularity of portal websites: when PointCast first started, Yahoo offered little more than a hierarchical directory of the internet (broken down by subject, much like DMOZ), but it soon introduced its portal, which was customizable and offered a much more convenient way to read the news.
News Corporation purchase offer and change of CEO
At its height in January 1997, News Corporation made an offer of $450 million to purchase the company. However, the offer was withdrawn in March. While there were rumors that it was withdrawn due to issues with the price and revenue projections, James Murdoch said it was due to PointCast's inaction.
Shortly after the purchase offer fell through, the board of directors decided to replace Christopher Hassett as CEO. Reasons included turning down the recent purchase offer, software performance problems (using too much corporate bandwidth) and declining market share (lost to the then-emerging Web portal sites). After five months, David Dorman was
|
https://en.wikipedia.org/wiki/Variant%20%28biology%29
|
In microbiology and virology, the term variant or genetic variant is used to describe a subtype of a microorganism that is genetically distinct from a main strain, but not sufficiently different to be termed a distinct strain. A similar distinction is made in botany between different cultivated varieties of a species of plant, termed cultivars.
Viruses
SARS-CoV-2
It was said in 2013 that "there is no universally accepted definition for the terms 'strain', 'variant', and 'isolate' in the virology community, and most virologists simply copy the usage of terms from others". The lack of precise definition continued in 2020; in the context of the Variant of Concern 202012/01 version of the SARS-CoV-2 virus, the website of the US Centers for Disease Control and Prevention (CDC) states, "For the time being in the context of this variant, the [terms "variant", "strain", and "lineage"] are generally being used interchangeably by the scientific community".
|
https://en.wikipedia.org/wiki/Magnetotactic%20bacteria
|
Magnetotactic bacteria (or MTB) are a polyphyletic group of bacteria that orient themselves along the magnetic field lines of Earth's magnetic field. Discovered in 1963 by Salvatore Bellini and rediscovered in 1975 by Richard Blakemore, this alignment is believed to aid these organisms in reaching regions of optimal oxygen concentration. To perform this task, these bacteria have organelles called magnetosomes that contain magnetic crystals. The biological phenomenon of microorganisms tending to move in response to the environment's magnetic characteristics is known as magnetotaxis. However, this term is misleading in that every other application of the term taxis involves a stimulus-response mechanism. In contrast to the magnetoreception of animals, the bacteria contain fixed magnets that force the bacteria into alignment—even dead cells are dragged into alignment, just like a compass needle.
Introduction
The first description of magnetotactic bacteria was in 1963 by Salvatore Bellini of the University of Pavia. While observing bog sediments under his microscope, Bellini noticed a group of bacteria that evidently oriented themselves in a unique direction. He realized these microorganisms moved according to the direction of the North Pole, and hence called them "magnetosensitive bacteria". The publications were academic (peer-reviewed by the Istituto di Microbiologia's editorial committee under the responsibility of the Institute's director, Prof. L. Bianchi, as was usual in European universities at the time) and communicated in Italian, with short English, French and German summaries, in the official journal of a well-known institution, yet inexplicably seem to have attracted little attention until they were brought to the attention of Richard Frankel in 2007. Frankel translated them into English, and the translations were published in the Chinese Journal of Oceanography and Limnology.
Richard Blakemore, then a microbiology graduate student at the University of Massachus
|
https://en.wikipedia.org/wiki/Histone%20octamer
|
In molecular biology, a histone octamer is the eight-protein complex found at the center of a nucleosome core particle. It consists of two copies of each of the four core histone proteins (H2A, H2B, H3, and H4). The octamer assembles when a tetramer, containing two copies of H3 and two of H4, complexes with two H2A/H2B dimers. Each histone has both an N-terminal tail and a C-terminal histone-fold. Each of these key components interacts with DNA in its own way through a series of weak interactions, including hydrogen bonds and salt bridges. These interactions keep the DNA and the histone octamer loosely associated, and ultimately allow the two to re-position or to separate entirely.
History of research
Histone post-translational modifications were first identified and listed as having a potential regulatory role on the synthesis of RNA in 1964. Since then, over several decades, chromatin theory has evolved. Chromatin subunit models and the notion of the nucleosome were established in 1973 and 1974, respectively. Richmond and his research group were able to elucidate the crystal structure of the histone octamer with DNA wrapped around it at a resolution of 7 Å in 1984. The structure of the octameric core complex was revisited seven years later, and a resolution of 3.1 Å was obtained for its crystal at a high salt concentration. Though sequence similarity is low between the core histones, each of the four has a repeated element consisting of a helix-loop-helix called the histone fold motif. Furthermore, the details of protein-protein and protein-DNA interactions were fine-tuned by X-ray crystallography studies at 2.8 and 1.9 Å, respectively, in the 2000s.
The histone octamer in molecular detail
Core histones are four proteins called H2A, H2B, H3 and H4, and they are all found in equal parts in the cell. All four of the core histone amino acid sequences contain between 20 and 24% lysine and arginine, and the size of each protein ranges between
|
https://en.wikipedia.org/wiki/Methyl%20green
|
Methyl green (CI 42585) is a cationic (positively charged) stain related to Ethyl Green that has been used for staining DNA since the 19th century. It has been used ever since for staining cell nuclei, either as part of the classical Unna-Pappenheim stain or as a nuclear counterstain.
In recent years, its fluorescent properties, when bound to DNA, have positioned it as useful for far-red imaging of live cell nuclei.
Fluorescent DNA staining is routinely used in cancer prognosis.
Methyl green also emerges as an alternative stain for DNA in agarose gels, fluorometric assays, and flow cytometry. It has also been shown that it can be used as an exclusion viability stain for cells.
Its interaction with DNA has been shown to be non-intercalating, in other words, not inserting itself into the DNA, but instead electrostatic with the DNA major groove. It is used in combination with pyronin in the methyl green–pyronin stain, which stains and differentiates DNA and RNA.
When excited at 244 or 388 nm in a neutral aqueous solution, methyl green produces a fluorescent emission at 488 or 633 nm, respectively. The presence or absence of DNA does not affect these fluorescence behaviors. When binding DNA under neutral aqueous conditions, methyl green also becomes fluorescent in the far red with an excitation maximum of 633 nm and an emission maximum of 677 nm.
Commercial methyl green preparations are often contaminated with crystal violet, which can be removed by chloroform extraction.
|
https://en.wikipedia.org/wiki/Pacific%20Symposium%20on%20Biocomputing
|
The Pacific Symposium on Biocomputing (PSB) is an annual multidisciplinary scientific meeting co-founded in 1996 by Dr. Teri Klein, Dr. Lawrence Hunter and Sharon Surles. The conference is dedicated to the presentation and discussion of research in the theory and application of computational methods for biology. Papers and presentations are peer-reviewed and published.
PSB brings together researchers from the US and the Asian Pacific nations, to exchange research results and address open issues in all aspects of computational biology. PSB is a forum for the presentation of work in databases, algorithms, interfaces, visualization, modeling, and other computational methods, as applied to biological problems, with emphasis on applications in data-rich areas of molecular biology.
The PSB aims for "critical mass" in sub-disciplines within biocomputing. For that reason, it is the only meeting whose sessions are defined dynamically each year in response to specific proposals. PSB sessions are organized by leaders in the emerging areas and are targeted to provide a forum for the publication and discussion of research in emerging topics in biocomputing.
Since 2017, the Research Parasite Award has been announced and presented annually at the Symposium to recognize scientists who study previously published data in ways not anticipated by the researchers who first generated it. An endowment supports the award, and sponsorship is provided for the Junior Parasite award winner to attend the symposium and present.
|
https://en.wikipedia.org/wiki/Mosaic%20virus
|
A mosaic virus is any virus that causes infected plant foliage to have a mottled appearance. Such viruses come from a variety of unrelated lineages and consequently there is no taxon that unites all mosaic viruses.
Examples
Virus species that contain the word "mosaic" in their English-language common name are listed below, following the nomenclature and taxonomy of the ICTV 2022 release. However, not all viruses that may cause a mottled appearance belong to species that include the word "mosaic" in the name.
|
https://en.wikipedia.org/wiki/Run-time%20infrastructure%20%28simulation%29
|
In simulation, run-time infrastructure (RTI) is a middleware that is required when implementing the High Level Architecture (HLA). RTI is the fundamental component of HLA. It provides a set of software services that are necessary to support federates in coordinating their operations and data exchange during a runtime execution. In other words, it is the implementation of the HLA interface specification, but it is not itself part of the specification. Modern RTI implementations conform to the IEEE 1516 and/or HLA 1.3 API specifications. These specifications do not include a network protocol for the RTI; it is up to the implementors of an RTI to create one. Because of this, interoperability between RTI products, and often between RTI versions, should not be assumed unless the vendor specifies interoperability with other products or versions.
Known implementations
Middleware
Simulation software
|
https://en.wikipedia.org/wiki/Barrel%20shifter
|
A barrel shifter is a digital circuit that can shift a data word by a specified number of bits without the use of any sequential logic, using only pure combinational logic; i.e., it inherently implements a binary operation, taking the data word and the shift amount as operands. It can, however, also be used to implement unary operations, such as a logical shift left by a fixed amount (e.g., in an address generation unit). One way to implement a barrel shifter is as a sequence of multiplexers, where the output of one multiplexer is connected to the input of the next multiplexer in a way that depends on the shift distance. A barrel shifter is often used to shift and rotate n bits in modern microprocessors, typically within a single clock cycle.
For example, take a four-bit barrel shifter, with inputs A, B, C and D. The shifter can cycle the order of the bits ABCD as DABC, CDAB, or BCDA; in this case, no bits are lost. That is, it can shift all of the outputs up to three positions to the right (and thus make any cyclic combination of A, B, C and D). The barrel shifter has a variety of applications, including being a useful component in microprocessors (alongside the ALU).
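The cyclic behavior described above can be sketched in software. The following is a minimal illustrative Python model of an n-bit rotate (the function name and defaults are assumptions for the sketch, not taken from any hardware description):

```python
def rotate_right(word, k, n=4):
    """Cyclically rotate an n-bit word right by k positions; no bits are lost."""
    k %= n
    mask = (1 << n) - 1
    return ((word >> k) | (word << (n - k))) & mask

# With bits A,B,C,D = 1,0,1,1 (0b1011):
#   rotate by 1 -> DABC = 0b1101
#   rotate by 2 -> CDAB = 0b1110
#   rotate by 3 -> BCDA = 0b0111
```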
Implementation
The very fastest shifters are implemented as full crossbars, in a manner similar to the 4-bit shifter described above, only larger. These incur the least delay, with the output always a single gate delay behind the input to be shifted (after allowing the small time needed for the shift count decoder to settle; this penalty, however, is only incurred when the shift count changes). Such crossbar shifters, however, require n² gates for n-bit shifts. Because of this, the barrel shifter is often implemented as a cascade of parallel 2×1 multiplexers instead, which allows a large reduction in gate count, now growing only with n log n; the propagation delay, however, is larger, growing with log n (instead of being constant as with the crossbar shifter).
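The multiplexer cascade can be modeled behaviorally as log₂(n) stages, each of which either passes the word through unchanged or rotates it by a power of two, selected by one bit of the shift count. This is only a software sketch of the hardware structure, with hypothetical names:

```python
def barrel_rotate_right(word, shift, n=8):
    """Rotate an n-bit word right using a cascade of log2(n) mux stages.

    Stage i either passes the word unchanged or rotates it by 2**i bits,
    selected by bit i of the shift count -- mirroring the cascade of 2x1
    multiplexers whose gate count grows with n log n.
    """
    mask = (1 << n) - 1
    amount = 1
    while amount < n:
        if shift & amount:  # this stage's mux select line
            word = ((word >> amount) | (word << (n - amount))) & mask
        amount <<= 1
    return word
```

For an 8-bit word there are three stages (rotate-by-1, rotate-by-2, rotate-by-4), and any shift count from 0 to 7 is realized as a combination of them.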
For an 8-bit barrel shifter, two intermediate signals are used which shifts
|
https://en.wikipedia.org/wiki/Natural%20fertility
|
Natural fertility is the fertility that exists without birth control. Controlled fertility, by contrast, depends on the number of children already born: parents modify their behavior once family size reaches a desired maximum. Natural fertility tends to decrease as a society modernizes. Women in a pre-modernized society have typically given birth to a large number of children by the time they are 50 years old, while women in a post-modernized society bear only a small number by the same age. During modernization, however, natural fertility first rises, before family planning is practiced.
Historical populations have traditionally honored the idea of natural fertility by displaying fertility symbols.
Birth control
Natural fertility is a concept developed by the French historical demographer Louis Henry to refer to the level of fertility that would prevail in a population that makes no conscious effort to limit, regulate, or control fertility, so that fertility depends only on physiological factors affecting fecundity. In contrast, populations that practice birth control will have lower fertility levels as a result of delaying first births (a lengthened interval between menarche and first pregnancy), extended intervals between births, or stopping child-bearing at a certain age. Such control does not assume the use of artificial means of fertility regulation or modern contraceptive methods but can result from the use of traditional means of contraception or pregnancy prevention (e.g., coitus interruptus). Many social norms or practices affect fertility regulation including celibacy, the age at marriage and the timing and frequency of sexual intercourse, including periods of prescribed sexual abstinence. Breastfeeding has also been used to space births in areas without birth control. Ansley Coale and other demographers have developed several methods for measuring the extent of such fertility control, in which the idea of a natural level of fertility is an essential component.
When women have access to birth
|
https://en.wikipedia.org/wiki/Morphism%20of%20algebraic%20varieties
|
In algebraic geometry, a morphism between algebraic varieties is a function between the varieties that is given locally by polynomials. It is also called a regular map. A morphism from an algebraic variety to the affine line is also called a regular function.
A regular map whose inverse is also regular is called biregular, and the biregular maps are the isomorphisms of algebraic varieties. Because regular and biregular are very restrictive conditions – there are no non-constant regular functions on projective varieties – the concepts of rational and birational maps are widely used as well; they are partial functions that are defined locally by rational fractions instead of polynomials.
An algebraic variety has naturally the structure of a locally ringed space; a morphism between algebraic varieties is precisely a morphism of the underlying locally ringed spaces.
Definition
If X and Y are closed subvarieties of the affine spaces A^n and A^m (so they are affine varieties), then a regular map f: X → Y is the restriction of a polynomial map A^n → A^m. Explicitly, it has the form:
f = (f_1, ..., f_m),
where the f_i are in the coordinate ring of X:
k[X] = k[x_1, ..., x_n] / I,
where I is the ideal defining X (note: two polynomials f and g define the same function on X if and only if f − g is in I). The image f(X) lies in Y, and hence the components of f satisfy the defining equations of Y. That is, a regular map f: X → Y is the same as the restriction of a polynomial map A^n → A^m whose components satisfy the defining equations of Y.
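As a concrete illustration of this affine definition, consider the parabola (a standard textbook example, not taken from this article):

```latex
% The parabola C = V(y - x^2) \subset \mathbb{A}^2 is an affine variety.
\[
  f \colon \mathbb{A}^1 \to C, \qquad t \mapsto (t,\ t^2)
\]
% f is regular: its components t and t^2 are polynomials.
\[
  f^{-1} \colon C \to \mathbb{A}^1, \qquad (x, y) \mapsto x
\]
% f^{-1} is the restriction of the polynomial projection (x, y) -> x,
% hence also regular; so f is biregular and \mathbb{A}^1 \cong C
% as algebraic varieties.
```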
More generally, a map f:X→Y between two varieties is regular at a point x if there is a neighbourhood U of x and a neighbourhood V of f(x) such that f(U) ⊂ V and the restricted function f:U→V is regular as a function on some affine charts of U and V. Then f is called regular, if it is regular at all points of X.
Note: It is not immediately obvious that the two definitions coincide: if X and Y are affine varieties, then a map f:X→Y is regular in the first sense if and only if it is so in the second sense. Also, it is not immediately clear whether regularity depen
|
https://en.wikipedia.org/wiki/Neo-Hookean%20solid
|
A neo-Hookean solid is a hyperelastic material model, similar to Hooke's law, that can be used for predicting the nonlinear stress-strain behavior of materials undergoing large deformations. The model was proposed by Ronald Rivlin in 1948. In contrast to linear elastic materials, the stress-strain curve of a neo-Hookean material is not linear. Instead, the relationship between applied stress and strain is initially linear, but at a certain point the stress-strain curve will plateau. The neo-Hookean model does not account for the dissipative release of energy as heat while straining the material and perfect elasticity is assumed at all stages of deformation.
The neo-Hookean model is based on the statistical thermodynamics of cross-linked polymer chains and is usable for plastics and rubber-like substances. Cross-linked polymers will act in a neo-Hookean manner because initially the polymer chains can move relative to each other when a stress is applied. However, at a certain point the polymer chains will be stretched to the maximum point that the covalent cross links will allow, and this will cause a dramatic increase in the elastic modulus of the material. The neo-Hookean material model does not predict that increase in modulus at large strains and is typically accurate only for strains less than 20%. The model is also inadequate for biaxial states of stress and has been superseded by the Mooney-Rivlin model.
The strain energy density function for an incompressible neo-Hookean material in a three-dimensional description is
W = C1 (I1 − 3),
where C1 is a material constant, and I1 is the first invariant (trace) of the right Cauchy-Green deformation tensor C = F^T F, i.e.,
I1 = λ1² + λ2² + λ3²,
where the λi are the principal stretches.
For a compressible neo-Hookean material, the strain energy density function is given by
W = C1 (I1 − 3 − 2 ln J) + D1 (J − 1)², with J = det F,
where C1 and D1 are material constants and F is the deformation gradient. It can be shown that in 2D, the strain energy density function is
W = C1 (I1 − 2 − 2 ln J) + D1 (J − 1)².
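The incompressible formulas above can be checked numerically for uniaxial extension, where the principal stretches are (λ, 1/√λ, 1/√λ) so that J = 1. The sketch below uses the standard uniaxial result σ = 2 C1 (λ² − 1/λ); the function name and parameters are illustrative assumptions:

```python
def neo_hookean_uniaxial(stretch, C1):
    """Incompressible neo-Hookean solid under uniaxial extension.

    Principal stretches are (lam, 1/sqrt(lam), 1/sqrt(lam)), so J = 1.
    Returns (W, sigma): the strain energy density W = C1 * (I1 - 3) and
    the uniaxial Cauchy stress sigma = 2 * C1 * (lam**2 - 1/lam).
    """
    lam = stretch
    I1 = lam**2 + 2.0 / lam          # first invariant of C = F^T F
    W = C1 * (I1 - 3.0)
    sigma = 2.0 * C1 * (lam**2 - 1.0 / lam)
    return W, sigma

# For small strains, sigma ~ 6 * C1 * eps, i.e. Young's modulus E ~ 6 * C1
# (consistent with shear modulus mu = 2 * C1 and incompressibility, E = 3 mu).
```

The initially linear response and its eventual departure from linearity at large stretch can both be read off from this expression.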
Several alternative formulations exist for compressible neo-Hoo
|