https://en.wikipedia.org/wiki/MIoTy
|
mioty is a low-power wide-area network (LPWAN) protocol. It uses telegram splitting, a standardized LPWAN technology in the license-free spectrum. This technology splits a data telegram into multiple sub-packets and, after applying error-correcting codes, sends them in a partly predefined time and frequency pattern. This makes a transmission robust against interference and packet collisions. It is standardized in ETSI TS 103 357. Its uplink operates in the 868 MHz band, which is license-free in Europe, and in the 916 MHz band in North America. It requires a bandwidth of 200 kHz for two channels (e.g. uplink and downlink).
Technology attributes
Long range: The operating range of LPWAN technology varies from a few kilometers in urban areas to over 10 km in rural settings. It can also enable effective data communication in previously infeasible indoor and underground locations.
Low power: Optimized for power consumption, LPWAN transceivers can run on small, inexpensive batteries for up to 20 years.
Telegram splitting (or TSMA, telegram splitting multiple access): splits the data packets under transport into small sub-packets at the sensor level. The sub-packets are then transmitted over varying frequencies and time slots.
High capacity: more than a million packets a day can be received.
Mobility: it can serve data from clients moving at up to 120 km/h.
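The telegram-splitting idea can be sketched in a few lines. This toy model (the function names and slot pattern below are illustrative, not the standardized TS 103 357 coding) shows only the split-and-reassemble bookkeeping; the real protocol adds forward error correction so that receiving a subset of the sub-packets suffices:

```python
import random

def split_telegram(payload: bytes, n_subpackets: int, n_channels: int, seed: int = 0):
    """Cut a telegram into sub-packets, each assigned a slot in a partly
    predefined time/frequency pattern known to both transmitter and receiver."""
    chunk = -(-len(payload) // n_subpackets)  # ceiling division
    rng = random.Random(seed)                 # seeded: pattern reproducible at the receiver
    subpackets = []
    for i in range(n_subpackets):
        subpackets.append({
            "index": i,                               # for reassembly
            "time_slot": i * 2 + rng.randrange(2),    # partly predefined timing
            "channel": rng.randrange(n_channels),     # frequency hop
            "data": payload[i * chunk:(i + 1) * chunk],
        })
    return subpackets

def reassemble(subpackets):
    """Receiver side: order sub-packets by index and concatenate."""
    return b"".join(p["data"] for p in sorted(subpackets, key=lambda p: p["index"]))

parts = split_telegram(b"sensor reading 42", n_subpackets=6, n_channels=4)
assert reassemble(parts) == b"sensor reading 42"
```

Because each sub-packet occupies a different time slot and channel, a collision or a burst of interference typically destroys only a few sub-packets rather than the whole telegram.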
Applications
It is intended to be used for monitoring devices in large areas.
See also
Internet of Things
|
https://en.wikipedia.org/wiki/Breeding%20program
|
A breeding programme is the planned breeding of a group of animals or plants, usually involving at least several individuals and extending over several generations. Breeding may be artificial (directed by humans) or natural (occurring without human intervention).
According to the Dutch State Secretary of Economic Affairs, the delivery of young animals is important for the natural behavior of the mother and the herd, and is desirable from a veterinary perspective.
Breeding programs are commonly employed in several fields where humans wish to change the characteristics of their animals' offspring through careful selection of breeding partners:
Dog and cat fanciers may coordinate a breeding program to raise the probability of an animal's litter producing a championship-caliber animal.
Horse breeders try to produce fast racehorses through breeding programs.
Conservationists use breeding programs to try to help the recovery of endangered species by preserving the existing gene pool and preventing inbreeding.
There also can be breeding programs for plants. For instance, a winery owner, seeking to find a better tasting wine, could design a breeding program so that only the vines whose grapes make the very best wine are allowed to breed.
|
https://en.wikipedia.org/wiki/IEEE%20802.20
|
IEEE 802.20 or Mobile Broadband Wireless Access (MBWA) was a specification by the standard association of the Institute of Electrical and Electronics Engineers (IEEE) for mobile broadband networks. The main standard was published in 2008. MBWA is no longer being actively developed.
This wireless broadband technology is also known and promoted as iBurst (or HC-SDMA, High Capacity Spatial Division Multiple Access). It was originally developed by ArrayComm and optimizes the use of its bandwidth with the help of smart antennas. Kyocera is the manufacturer of iBurst devices.
Description
iBurst is a mobile broadband wireless access system that was first developed by ArrayComm, and announced with partner Sony in April 2000.
It was adopted as the High Capacity – Spatial Division Multiple Access (HC-SDMA) radio interface standard (ATIS-0700004-2005) by the Alliance for Telecommunications Industry Solutions (ATIS).
The standard was prepared by ATIS’ Wireless Technology and Systems Committee's Wireless Wideband Internet Access subcommittee and accepted as an American National Standard in 2005.
HC-SDMA was announced as being considered by ISO TC204 WG16 for the continuous communications standards architecture, known as Communications, Air-interface, Long and Medium range (CALM), which ISO is developing for intelligent transport systems (ITS). ITS may include applications for public safety, network congestion management during traffic incidents, automatic toll booths, and more. An official liaison between WTSC and ISO TC204 WG16 was established for this purpose in 2005.
The HC-SDMA interface provides wide-area broadband wireless data-connectivity for fixed, portable and mobile computing devices and appliances. The protocol is designed to be implemented with smart antenna array techniques (called MIMO for multiple-input multiple-output) to substantially improve the radio frequency (RF) coverage, capacity and performance for the system.
In January 2006, the IEEE 802.20 Mobile Broadband W
|
https://en.wikipedia.org/wiki/Narsingh%20Deo
|
Narsingh Deo (January 2, 1936 – January 13, 2023) was an Indian-American computer scientist. He served as a professor and the Charles N. Millican Endowed Chair of the Department of Computer Science, University of Central Florida. Deo received his Ph.D. for his dissertation 'Topological Analysis of Active Networks and Generalization of Hamiltonian tree' from Northwestern University, Illinois, in 1965; S. L. Hakimi was his adviser. He was earlier a professor at the Indian Institute of Technology, Kanpur. Deo died in Winter Park, Florida, on January 13, 2023, at the age of 87.
Books
Graph Theory with Application to Engineering and Computer Science, Prentice-Hall, Englewood Cliffs, N.J., 1974, 480 pages.
Combinatorial Algorithms: Theory and Practice (with E.M. Reingold and J. Nievergelt), Prentice-Hall, Englewood Cliffs, NJ., 1977, 433 pages.
System Simulation with Digital Computers, Prentice-Hall, Englewood Cliffs, N.J., 1979, 200 pages.
Discrete Optimization Algorithms: With Pascal Programs (with M.M. Syslo and J. S. Kowalik), Prentice-Hall, Englewood Cliffs, N.J., 1983, 542 pages.
|
https://en.wikipedia.org/wiki/Hjulstr%C3%B6m%20curve
|
The Hjulström curve, named after Filip Hjulström (1902–1982), is a graph used by hydrologists and geologists to determine whether a river will erode, transport, or deposit sediment. It was originally published in his doctoral thesis "Studies of the morphological activity of rivers as illustrated by the River Fyris." in 1935. The graph takes sediment particle size and water velocity into account.
The upper curve shows the critical erosion velocity in cm/s as a function of particle size in mm, while the lower curve shows the deposition velocity as a function of particle size. Note that the axes are logarithmic.
The plot shows several key concepts about the relationships between erosion, transportation, and deposition. For particle sizes where friction is the dominating force preventing erosion, the curves follow each other closely and the required velocity increases with particle size. However, for cohesive sediment, mostly clay but also silt, the erosion velocity increases with decreasing grain size, as the cohesive forces are relatively more important when the particles get smaller. The critical velocity for deposition, on the other hand, depends on the settling velocity, and that decreases with decreasing grain size. The Hjulström curve shows that sand particles of a size around 0.1 mm require the lowest stream velocity to erode.
The curve was expanded by Åke Sundborg in 1956. He significantly improved the level of detail in the cohesive part of the diagram, and added lines for different modes of transportation. The result is called the Sundborg diagram, or the Hjulström-Sundborg Diagram, in the academic literature.
This curve dates back to early 20th-century research on river geomorphology and is now mainly of historical value, although its simplicity is still attractive. Among the drawbacks of this curve are that it does not take the water depth into account and, more importantly, that it does not show that sedimentation is caused by flow velocity
|
https://en.wikipedia.org/wiki/Charles%20Sabine
|
Charles Edward Sabine (born 20 April 1960, British Army Battalion HQ, Rinteln, West Germany) is an Emmy Award-winning television journalist who worked for the US network NBC News for twenty-six years before becoming a global spokesman for patients and families suffering from degenerative brain diseases. He is active throughout the advocacy and charity sectors across four continents and is the founder of the Hidden No More Foundation. He has two children, Roman and Sabrina.
Early life and career
Sabine was educated at Brentwood School, England, then obtained a first class honours degree in Media Studies from Westminster University, where he was tutored by BBC Radio Producer Charles Parker.
Sabine joined NBC in 1982 in London, and worked as a writer at 30 Rock in Manhattan, New York, in 1987. He then transitioned to field production in conflicts and according to NBC Universal, “Sabine participated in most of the major international news stories of the next two decades”.
As producer of the NBC Nightly News with Tom Brokaw team coverage of the Romanian Revolution, Sabine received an Emmy Award for his program segments which aired in December 1989, in the Outstanding General Coverage of a Single Breaking News Story category of the News & Documentary Emmy Awards.
During his time in the field, Sabine conducted three tours on US Navy nuclear-powered aircraft carriers at battle stations: the USS George Washington (Adriatic), the USS Theodore Roosevelt (Mediterranean) and the USS Enterprise (Arabian Sea).
He was the last western journalist to interview the founder of Hamas, Sheik Ahmed Yassin in a secret location in Gaza.
The thirty-five countries and territories from which Sabine reported conflicts for NBC News, include the allied Gulf Wars in Iraq, Saudi Arabia and Kuwait; wars in Bosnia, Croatia, Serbia, Kosovo and Chechnya; the US invasion of Haiti; the Genocide in Rwanda; the Ebola outbreak in Zaire; revolutions in Poland, Romania, Hungary and Czechoslovakia and sectarian con
|
https://en.wikipedia.org/wiki/Scholz%20conjecture
|
In mathematics, the Scholz conjecture is a conjecture on the length of certain addition chains.
It is sometimes also called the Scholz–Brauer conjecture or the Brauer–Scholz conjecture, after Arnold Scholz who formulated it in 1937 and Alfred Brauer who studied it soon afterward and proved a weaker bound.
Statement
The conjecture states that
l(2^n − 1) ≤ n − 1 + l(n),
where l(n) is the length of the shortest addition chain producing n.
Here, an addition chain is defined as a sequence of numbers, starting with 1, such that every number after the first can be expressed as a sum of two earlier numbers (which are allowed to both be equal). Its length is the number of sums needed to express all its numbers, which is one less than the length of the sequence of numbers (since there is no sum of previous numbers for the first number in the sequence, 1). Computing the length of the shortest addition chain that contains a given number n can be done by dynamic programming for small numbers, but it is not known whether it can be done in polynomial time measured as a function of the length of the binary representation of n. Scholz's conjecture, if true, would provide short addition chains for numbers of a special form, the Mersenne numbers 2^n − 1.
Example
As an example, l(5) = 3: it has a shortest addition chain
1, 2, 4, 5
of length three, determined by the three sums
1 + 1 = 2,
2 + 2 = 4,
4 + 1 = 5.
Also, l(31) = 7: it has a shortest addition chain
1, 2, 3, 6, 12, 24, 30, 31
of length seven, determined by the seven sums
1 + 1 = 2,
2 + 1 = 3,
3 + 3 = 6,
6 + 6 = 12,
12 + 12 = 24,
24 + 6 = 30,
30 + 1 = 31.
Both l(2^5 − 1) = l(31) and 5 − 1 + l(5) = 4 + 3 equal 7.
Therefore, these values obey the inequality (which in this case is an equality) and the Scholz conjecture is true for the case n = 5.
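The chains above can be checked mechanically. As an illustrative sketch (the function name and pruning below are our own; this brute force is feasible only for small targets), an iterative-deepening search over strictly increasing addition chains computes l(n):

```python
def shortest_chain_length(n):
    """Length l(n) of the shortest addition chain ending at n (number of sums).
    A shortest chain can always be taken strictly increasing with elements <= n."""
    if n == 1:
        return 0

    def extend(chain, depth):
        last = chain[-1]
        if last == n:
            return True
        # Prune: even repeated doubling cannot reach n in the remaining depth.
        if depth == 0 or last << depth < n:
            return False
        seen = set()
        for a in reversed(chain):
            for b in reversed(chain):
                s = a + b
                if s <= last or s > n or s in seen:
                    continue  # keep the chain increasing and bounded by n
                seen.add(s)
                if extend(chain + [s], depth - 1):
                    return True
        return False

    depth = 1
    while not extend([1], depth):  # iterative deepening: try lengths 1, 2, ...
        depth += 1
    return depth

# l(5) = 3 via 1,2,4,5 and l(31) = 7 via 1,2,3,6,12,24,30,31
print(shortest_chain_length(5), shortest_chain_length(31))  # prints: 3 7
```

With these values, the conjecture's inequality for n = 5 reads l(31) ≤ 5 − 1 + l(5), i.e. 7 ≤ 7, matching the worked example.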
Partial results
By using a combination of computer search techniques and mathematical characterizations of optimal addition chains, the conjecture has been shown to hold for all n up to a large computationally verified bound. Additionally, for a range of small n, the inequality of the conjecture has been verified to actually be an equality.
|
https://en.wikipedia.org/wiki/Adjunction%20formula
|
In mathematics, especially in algebraic geometry and the theory of complex manifolds, the adjunction formula relates the canonical bundle of a variety and a hypersurface inside that variety. It is often used to deduce facts about varieties embedded in well-behaved spaces such as projective space or to prove theorems by induction.
Adjunction for smooth varieties
Formula for a smooth subvariety
Let X be a smooth algebraic variety or smooth complex manifold and Y be a smooth subvariety of X. Denote the inclusion map by i and the ideal sheaf of Y in X by I. The conormal exact sequence for i is
0 → I/I² → i*Ω_X → Ω_Y → 0,
where Ω denotes a cotangent bundle. The determinant of this exact sequence is a natural isomorphism
ω_Y = i*ω_X ⊗ det(I/I²)^∨,
where ^∨ denotes the dual of a line bundle.
The particular case of a smooth divisor
Suppose that D is a smooth divisor on X. Its normal bundle extends to a line bundle O(D) on X, and the ideal sheaf of D corresponds to its dual O(−D). The conormal bundle I/I² is i*O(−D), which, combined with the formula above, gives
ω_D = i*(ω_X ⊗ O(D)).
In terms of canonical classes, this says that
K_D = (K_X + D)|_D.
Both of these two formulas are called the adjunction formula.
Examples
Degree d hypersurfaces
Given a smooth degree d hypersurface X ⊂ P^n, we can compute its canonical and anti-canonical bundles using the adjunction formula. This reads as
ω_X ≅ O_X(d − n − 1),
which is isomorphic to O_{P^n}(d − n − 1)|_X.
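Two standard specializations of the hypersurface formula (worked here as a sanity check, not taken from the text above) are:

```latex
% A smooth plane cubic (n = 2, d = 3):
\omega_X \cong \mathcal{O}_X(3 - 2 - 1) = \mathcal{O}_X,
% so the canonical bundle is trivial: an elliptic curve.
% A smooth quartic surface in P^3 (n = 3, d = 4):
\omega_X \cong \mathcal{O}_X(4 - 3 - 1) = \mathcal{O}_X,
% again trivial: a K3 surface.
```

In both cases the degree exactly cancels the twist −n − 1 coming from the canonical bundle of projective space.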
Complete intersections
For a smooth complete intersection X ⊂ P^n of degrees (d_1, d_2), the conormal bundle I/I² is isomorphic to O_X(−d_1) ⊕ O_X(−d_2), so the determinant bundle is O_X(−d_1 − d_2) and its dual is O_X(d_1 + d_2), showing
ω_X = O_X(−n − 1) ⊗ O_X(d_1 + d_2) = O_X(d_1 + d_2 − n − 1).
This generalizes in the same fashion for all complete intersections.
Curves in a quadric surface
P^1 × P^1 embeds into P^3 as a quadric surface given by the vanishing locus of a quadratic polynomial coming from a non-singular symmetric matrix. We can then restrict our attention to curves on X = P^1 × P^1. We can compute the cotangent bundle of X using the direct sum of the cotangent bundles on each P^1, so it is O(−2, 0) ⊕ O(0, −2). Then, the canonical sheaf is given by O(−2, −2), which can be found using the decomposition of wedges of direct sums of vector bundles. Then, usi
|
https://en.wikipedia.org/wiki/Net%20present%20value
|
The net present value (NPV) or net present worth (NPW) applies to a series of cash flows occurring at different times. The present value of a cash flow depends on the interval of time between now and the cash flow. It also depends on the annual effective discount rate. NPV accounts for the time value of money. It provides a method for evaluating and comparing capital projects or financial products with cash flows spread over time, as in loans, investments, payouts from insurance contracts, and many other applications.
Time value of money dictates that time affects the value of cash flows. For example, a lender may offer 99 cents for the promise of receiving $1.00 a month from now, but the promise to receive that same dollar 20 years in the future would be worth much less today to that same person (lender), even if the payback in both cases was equally certain. This decrease in the current value of future cash flows is based on a chosen rate of return (or discount rate). If for example there exists a time series of identical cash flows, the cash flow in the present is the most valuable, with each future cash flow becoming less valuable than the previous cash flow. A cash flow today is more valuable than an identical cash flow in the future because a present flow can be invested immediately and begin earning returns, while a future flow cannot.
NPV is determined by calculating the costs (negative cash flows) and benefits (positive cash flows) for each period of an investment. After the cash flow for each period is calculated, the present value (PV) of each one is achieved by discounting its future value (see Formula) at a periodic rate of return (the rate of return dictated by the market). NPV is the sum of all the discounted future cash flows.
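The discounting described above can be written directly as code. A minimal sketch (the figures in the example are made up for illustration):

```python
def npv(rate, cashflows):
    """Net present value: each cashflows[t] occurs at the end of period t
    (cashflows[0] is today's flow, typically a negative initial cost) and
    is discounted back to the present at the periodic rate of return."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Example: invest 1000 today, receive 400 at the end of each of 3 years,
# discounted at 8% per year.
flows = [-1000, 400, 400, 400]
print(round(npv(0.08, flows), 2))  # 30.84 -> positive NPV, net profit
```

Note that with a zero discount rate the NPV reduces to the plain sum of the cash flows (here 200), while a higher rate shrinks the value of the later inflows.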
Because of its simplicity, NPV is a useful tool to determine whether a project or investment will result in a net profit or a loss. A positive NPV results in profit, while a negative NPV results in a loss. The NPV measures
|
https://en.wikipedia.org/wiki/Injury%20in%20humans
|
An injury is any physiological damage to living tissue caused by immediate physical stress. Injuries to humans can occur intentionally or unintentionally and may be caused by blunt trauma, penetrating trauma, burning, toxic exposure, asphyxiation, or overexertion. Injuries can occur in any part of the body, and different symptoms are associated with different injuries.
Treatment of a major injury is typically carried out by a health professional and varies greatly depending on the nature of the injury. Traffic collisions are the most common cause of accidental injury and injury-related death among humans. Injuries are distinct from chronic conditions, psychological trauma, infections, or medical procedures, though injury can be a contributing factor to any of these.
Several major health organizations have established systems for the classification and description of human injuries.
Occurrence
Injuries may be intentional or unintentional. Intentional injuries may be acts of violence against others or self-inflicted against one's own person. Accidental injuries may be unforeseeable, or they may be caused by negligence. In order, the most common types of unintentional injuries are traffic accidents, falls, drowning, burns, and accidental poisoning. Certain types of injuries are more common in developed countries or developing countries. Traffic injuries are more likely to kill pedestrians than drivers in developing countries. Scalding burns are more common in developed countries, while open-flame injuries are more common in developing countries.
As of 2021, approximately 4.4 million people are killed due to injuries each year worldwide, constituting nearly 8% of all deaths. 3.16 million of these injuries are unintentional, and 1.25 million are intentional. Traffic accidents are the most common form of deadly injury, causing about one-third of injury-related deaths. One-sixth are caused by suicide, and one-tenth are caused by homicide. Tens of millions of individ
|
https://en.wikipedia.org/wiki/Economy%20monetization
|
Economy monetization is a metric of the national economy reflecting its saturation with liquid assets. The level of monetization is determined both by the development of the national financial system and by the economy as a whole. The monetization of an economy also determines the freedom of capital movement. Scientists long ago recognized the important role played by the money supply; nevertheless, only about 50 years ago did Milton Friedman convincingly prove that a change in the money quantity can have a very serious effect on GDP. Monetization is especially important in low- to middle-income countries, in which it is substantially correlated with per-capita GDP and real interest rates. This fact suggests that supporting an upward monetization trend can be an important policy objective for governments.
The reverse concept is called economy demonetization.
Monetization coefficient
The monetization coefficient (or ratio) of the economy is an indicator that is equal to the ratio of the money supply aggregate M2 to the gross domestic product (GDP)—both nominated in current prices. The coefficient reflects the proportion of the total of goods and services of an economy that is monetized—being actually paid for in money by the purchaser—to substitute bartering. This is one of the most important characteristics of the level and course of economic development. The ratio can be as low as 10–20% for the emerging economies and as high as 100%+ for the developed countries.
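The coefficient is a straightforward ratio; as a minimal sketch (the figures below are hypothetical, purely for illustration):

```python
def monetization_ratio(m2, gdp):
    """Monetization coefficient: money supply aggregate M2 over nominal GDP,
    both in current prices and in the same currency units."""
    return m2 / gdp

# Hypothetical figures: M2 = 1.2 trillion, nominal GDP = 2.0 trillion.
print(f"{monetization_ratio(1.2e12, 2.0e12):.0%}")  # prints: 60%
```

A result near 10–20% would be typical of an emerging economy, while values above 100% occur in developed countries, per the ranges quoted above.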
Formula
The ratio is, in fact, based on the money demand function of Milton Friedman.
This coefficient gives an idea of the degree of financial security of the economy. Many scientific publications calculate not only the indicator of M2/GDP but also M3/GDP and M1/GDP. The higher the M3/GDP compared to M1/GDP, the more developed and elaborated the system of non-cash payments and the financial potential of the economy. A small difference indicates that in this country a significant
|
https://en.wikipedia.org/wiki/Systematic%20%26%20Applied%20Acarology%20Society
|
The Systematic & Applied Acarology Society is an international society dedicated to promoting the development of the scientific discipline of acarology (the study of mites and ticks) and facilitating collaboration among acarologists around the world. Founded in 1996, the society publishes three serial publications: the Acarology Bulletin, a newsletter of the society; Systematic & Applied Acarology, a refereed research journal; and Systematic and Applied Acarology Special Publications, a rapid publication for short papers and monographic works which was merged with the journal in 2012. It also occasionally issues books of special interest to members, and maintains an online acarological e-reprint library for acarologists around the world.
|
https://en.wikipedia.org/wiki/Characteristic%20based%20product%20configurator
|
A characteristic-based product configurator is a product configurator extension which uses a set of discrete variables, called characteristics (or features), to define all possible product variations.
Characteristics
There are two characteristic types:
binary variables, which describe the presence or not of a specific feature,
n-valued variables, which describe a selection among n possible values for a specific product feature.
Constraints
The range of characteristic-value combinations is reduced by a variety of constraints that define which combinations can, cannot, and must occur alongside each other. These constraints can be reflective of technological or commercial constraints in the real world.
The constraints can represent:
incompatibility: they indicate the mutual exclusivity between some product feature-values;
implication: they indicate that the presence of a specific feature-value requires the presence of another feature-value.
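The two constraint types can be sketched concretely. This is a toy model (the characteristic names and values are invented for illustration, not part of any real configurator):

```python
# Characteristics: a binary one and an n-valued one.
characteristics = {
    "sunroof": {"yes", "no"},             # binary characteristic
    "engine": {"petrol", "diesel", "ev"}, # n-valued characteristic
}

# Incompatibility: these two characteristic-values exclude each other.
incompatible = [(("sunroof", "yes"), ("engine", "ev"))]
# Implication: the first characteristic-value requires the second.
implies = [(("engine", "diesel"), ("sunroof", "no"))]

def is_valid(config):
    """Check a full assignment of characteristic-values against the constraints."""
    for char, value in config.items():
        if value not in characteristics[char]:
            return False
    for (c1, v1), (c2, v2) in incompatible:
        if config.get(c1) == v1 and config.get(c2) == v2:
            return False
    for (c1, v1), (c2, v2) in implies:
        if config.get(c1) == v1 and config.get(c2) != v2:
            return False
    return True

print(is_valid({"sunroof": "yes", "engine": "petrol"}))  # True
print(is_valid({"sunroof": "yes", "engine": "ev"}))      # False (incompatible)
```

The set of valid configurations is exactly the characteristic-value combinations left after the constraints prune the full Cartesian product.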
Characteristic filters
The use of characteristics permits the user to abstract the finished product by describing filter conditions, which describe subsets of product variations using boolean functions on the characteristics:
The AND, NOR and OR logical operators ease and simplify the boolean function definitions, because they permit the user to group together the characteristic-values which must be present (AND), absent (NOR), or not all absent (OR);
Thanks to the decoupling introduced by the use of characteristics, it is not necessary to redefine the boolean functions when new commercial codes are introduced which can be mapped to some product-variation already covered by some characteristics combination.
Closed or open configuration
Using a characteristic-based configurator, it is possible to define a product variation in two ways:
Open configuration: the user simply assigns values to all the characteristics while complying with the technological and commercial constraints, without having a set of base values to work from;
Cl
|
https://en.wikipedia.org/wiki/Vestibular%20fold
|
The vestibular fold (ventricular fold, superior or false vocal cord) is one of two thick folds of mucous membrane, each enclosing a narrow band of fibrous tissue, the vestibular ligament, which is attached in front to the angle of the thyroid cartilage immediately below the attachment of the epiglottis, and behind to the antero-lateral surface of the arytenoid cartilage, a short distance above the vocal process.
The lower border of this ligament, enclosed in mucous membrane, forms a free crescentic margin, which constitutes the upper boundary of the ventricle of the larynx.
They are lined with respiratory epithelium, while true vocal cords have stratified squamous epithelium.
Function
The vestibular folds of the larynx play a significant role in the maintenance of the laryngeal functions of breathing and in preventing food and drink from entering the airway during swallowing. They aid phonation (speech) by suppressing dysphonia. In some ethnic singing and chanting styles, such as in Tuva, Sardinia, Mongolia, South Africa and Tibet, the vestibular folds may be used in co-oscillation with the vocal folds, producing very low or high pitched sounds (most of the time, one octave higher).
Conversely, people who have had their epiglottis removed because of cancer do not choke any more than when it was present.
Society and culture
They have a minimal role in normal phonation, but are often used to produce deep sonorous tones in Tuvan throat singing, as well as in musical screaming and the death growl singing style used in various forms of metal. Simultaneous voicing with the vocal and vestibular folds is diplophonia. Some voice actors occasionally employ small amounts of this phonation for its dark, growling quality while portraying a "villainous" or antagonistic voice.
See also
Vocal folds
Larynx
|
https://en.wikipedia.org/wiki/Smart%20transducer
|
A smart transducer is an analog or digital transducer, actuator or sensor combined with a processing unit and a communication interface.
As sensors and actuators become more complex, they provide support for various modes of operation and interfacing. Some applications additionally require fault tolerance and distributed computing. Such high-level functionality can be achieved by adding an embedded microcontroller to the classical sensor/actuator, which increases the ability to cope with complexity at a fair price. Typically, these on-board technologies in smart sensors are used for digital processing, either frequency-to-code or analog-to-digital conversions, interfacing functions and calculations. Interfacing functions include decision-making tools like self-adaptation, self-diagnostics and self-identification functions, but also control of how long and when the sensor will be fully awake, to minimize power consumption and to decide when to dump and store data.
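The combination of a classical sensing element with on-board processing and a communication interface can be sketched as a small class. This is a hypothetical interface of our own (the names, the 10-bit ADC range and the simulated readings are assumptions, not any standardized smart-transducer API):

```python
class SmartTransducer:
    def __init__(self, sensor_id, read_raw, scale=1.0, offset=0.0):
        self.sensor_id = sensor_id  # self-identification
        self.read_raw = read_raw    # the classical analog sensing element
        self.scale, self.offset = scale, offset

    def self_test(self):
        """Self-diagnostics: flag readings outside the plausible raw range."""
        raw = self.read_raw()
        return 0 <= raw <= 1023     # e.g. a 10-bit ADC

    def measure(self):
        """On-board conversion plus linear calibration of the raw reading."""
        return self.read_raw() * self.scale + self.offset

    def to_message(self):
        """Communication interface: report identity, value and health."""
        return {"id": self.sensor_id, "value": self.measure(), "ok": self.self_test()}

# Simulated raw ADC counts standing in for a real analog front end.
t = SmartTransducer("temp-01", read_raw=lambda: 512, scale=0.5, offset=0.0)
print(t.to_message())  # {'id': 'temp-01', 'value': 256.0, 'ok': True}
```

The point of the wrapper is that the consumer of `to_message()` never sees raw counts: conversion, calibration and diagnostics all happen at the transducer.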
They are often made using CMOS, VLSI technology and may contain MEMS devices leading to lower cost. They may provide full digital outputs for easier interface or they may provide quasi-digital outputs like pulse-width modulation. In the machine vision field, a single compact unit that combines the imaging functions and the complete image processing functions is often called a smart sensor.
Smart sensors are a crucial element in the Internet of Things (IoT). Within such a network, multiple physical vehicles and devices are embedded with sensors, software and electronics. Data are collected and shared for better integration between digital environments and the physical world. The connectivity between sensors is an important requirement for an IoT innovation to perform well. Interoperability can therefore be seen as a consequence of connectivity. The sensors work together and complement each other.
Improvement over traditional sensors
The key features of smart sensors as part of the IoT that differ
|
https://en.wikipedia.org/wiki/Rastrigin%20function
|
In mathematical optimization, the Rastrigin function is a non-convex function used as a performance test problem for optimization algorithms. It is a typical example of a non-linear multimodal function. It was first proposed in 1974 by Rastrigin as a 2-dimensional function and has been generalized by Rudolph. The generalized version was popularized by Hoffmeister & Bäck and by Mühlenbein et al. Finding the minimum of this function is a fairly difficult problem due to its large search space and its large number of local minima.
On an n-dimensional domain it is defined by:
f(x) = A n + Σ_{i=1}^{n} [x_i² − A cos(2π x_i)],
where A = 10 and x_i ∈ [−5.12, 5.12]. There are many extrema:
The global minimum is at x = 0, where f(x) = 0.
The maximum function value for x_i ∈ [−5.12, 5.12] is located around x_i = ±4.52299366...:
Here are the values at 0.5 intervals listed for the 2D Rastrigin function (n = 2):
The abundance of local minima underlines the necessity of a global optimization algorithm when the global minimum is sought. Local optimization algorithms are likely to get stuck in a local minimum.
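The definition above translates directly into code; a minimal sketch following the formula with A = 10:

```python
import math

def rastrigin(x, A=10.0):
    """Rastrigin function on an n-dimensional point x; global minimum 0 at x = 0."""
    return A * len(x) + sum(xi * xi - A * math.cos(2 * math.pi * xi) for xi in x)

print(rastrigin([0.0, 0.0]))               # 0.0 -- the global minimum
print(round(rastrigin([1.0, 1.0]), 6))     # 2.0 -- a nearby local minimum
```

The cosine term carves a local minimum near every integer lattice point, which is what makes the function such a stern test for optimizers.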
See also
Test functions for optimization
Notes
Mathematical optimization
|
https://en.wikipedia.org/wiki/Geobacter%20toluenoxydans
|
Geobacter toluenoxydans is a bacterium from the genus of Geobacter which has been isolated from sludge from an aquifer in Stuttgart in Germany.
See also
List of bacterial orders
List of bacteria genera
|
https://en.wikipedia.org/wiki/30th%20meridian%20east
|
The meridian 30° east of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Europe, Turkey, Africa, the Indian Ocean, the Southern Ocean, and Antarctica to the South Pole.
The 30th meridian east forms a great circle with the 150th meridian west.
The meridian is the mid point of Eastern European Time.
The 1992 BBC travel documentary Pole to Pole followed Michael Palin's journey along the 30° east meridian, which was selected as his travel axis as it covered the most land.
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 30th meridian east passes through:
{| class="wikitable plainrowheaders"
! scope="col" width="125" | Co-ordinates
! scope="col" width="235" | Country, territory or sea
! scope="col" | Notes
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arctic Ocean
| style="background:#b0e0e6;" |
|-valign="top"
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Barents Sea
| style="background:#b0e0e6;" | Passing between the islands of Kongsøya and Abel Island, Svalbard,
|-
|
! scope="row" |
|
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Varangerfjord
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Passing just west of Kirkenes
|-
|
! scope="row" |
|
|-
|
! scope="row" |
| For about 8 km
|-
|
! scope="row" |
|
|-
|
! scope="row" |
|
|-
|
! scope="row" |
|
|-
|
! scope="row" |
|
|-
|
! scope="row" |
| For about 2 km
|-
|
! scope="row" |
|
|-
|
! scope="row" |
|
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Baltic Sea
| style="background:#b0e0e6;" | Gulf of Finland, 300 meters of Lisy Nos, Saint Petersburg coast
|-
|
! scope="row" |
| Passing just west of Saint Petersburg proper
|-
|
! scope="row" |
|
|-
|
! scope="row" |
|
|-
|
! scope="row" |
| For about 12 km
|-
|
! scope="row" |
|
|-
| style="background:#b0e0e6;" |
! sco
|
https://en.wikipedia.org/wiki/Triggertrap
|
Triggertrap was a company that created hardware and software products centred on triggering SLR cameras. Products included several Arduino-based camera triggers, along with mobile apps which interfaced with cameras using a device that plugs into the headphone socket of the smartphone or tablet. In May 2012, Triggertrap introduced Triggertrap Mobile for iOS, followed by a version for Android in September 2012. Triggertrap Mobile utilized the sensors and processing power of a smartphone or tablet running iOS to trigger cameras based on sound, motion, vibration, or location, in addition to timelapse, bulb ramping, and other features. Triggertrap ceased trading on 31 January 2017. The founder and CEO was the Dutch photographer Haje Jan Kamps.
Background
The story of Triggertrap started in July 2011, when Haje Jan Kamps launched a Kickstarter campaign aiming to raise support for a new type of camera trigger. The project asked for $25,000, but within a month nearly 900 supporters had pledged more than $77,000 in exchange for more than 950 Triggertrap v1 products, nearly three times the original goal.
Arduino-based products
The Triggertrap v1 is a programmable trigger based on Arduino open-source architecture, and the source-code for the product is downloadable from GitHub. It has a built-in ambient light sensor, laser sensor, and sound sensor. In addition, it has an auxiliary port, which enables Triggertrap v1 to trigger a camera based on anything that generates an electric signal.
The Triggertrap v1 is classed as a high-speed device, able to use the ambient light sensor to respond and fire the external flash such that it would correctly sync at shutter speeds down to 1/640th of a second – a response time of less than 1.6 milliseconds.
In addition to the Triggertrap v1, the Triggertrap company marketed a Triggertrap Shield for Arduino. This was a feature-compatible version of the Triggertrap v1. After a user-configurab
|
https://en.wikipedia.org/wiki/Campus%20Party
|
Campus Party (CP) is a conference and hackathon.
Founded in 1997 as a technology festival and LAN party, the event was first held in Málaga, Spain, and has since been run in Argentina, Brazil, Canada, Colombia, Costa Rica, Ecuador, El Salvador, Germany, Italy, Mexico, Paraguay, Singapore, Spain, the Netherlands, Uruguay and USA.
The event has evolved into an annual week-long, 24-hour-a-day festival involving online communities, gamers, programmers, bloggers, governments, universities, companies and students and covers technology innovation and electronic entertainment, with an emphasis on free software, programming, astronomy, social media, gaming, green technology, robotics, security networks and computer modeling.
History
In December 1996 EnRED, a Spanish youth organization, held a small, private LAN party at the Benalmádena Youth Center in Andalucía, Spain. Paco Regageles, then director of Channel 100, suggested they expand the event, and promoted it as a LAN party under the original name, the "Ben-Al Party", in reference to the event's location in Benalmádena.
In April 1998 the second Ben-Al Party was held, attracting 5 times the number of participants and national media attention to the gaming event. EnRED abandoned the project as it grew, and in April 1999 Paco Regageles along with Belinda Galiano, Yolanda Rueda, Charles Pinto, Pablo Antón, Juanma Moreno and Rafa Revert founded the non-profit organization E3 Futura, with the broader objective of making technology in all forms more accessible to society. Asociación E3 Futura founded Futura Networks to organize the Campus Party festivals, Campus IT Summer University and the Cibervoluntariado digital inclusion movement.
In 2000 Manuel Toharia, a speaker at previous Campus Parties and director of the Príncipe Felipe Museum of Sciences in Valencia's City of Arts and Sciences, suggested that Ragageles expand and make the event more international by moving it to the famous museum. That year, Campu
|
https://en.wikipedia.org/wiki/Relative%20volatility
|
Relative volatility is a measure comparing the vapor pressures of the components in a liquid mixture of chemicals. This quantity is widely used in designing large industrial distillation processes. In effect, it indicates the ease or difficulty of using distillation to separate the more volatile components from the less volatile components in a mixture. By convention, relative volatility is usually denoted as α.
Relative volatilities are used in the design of all types of distillation processes as well as other separation or absorption processes that involve the contacting of vapor and liquid phases in a series of equilibrium stages.
Relative volatilities are not used in separation or absorption processes that involve components reacting with each other (for example, the absorption of gaseous carbon dioxide in aqueous solutions of sodium hydroxide).
Definition
For a liquid mixture of two components (called a binary mixture) at a given temperature and pressure, the relative volatility is defined as α = (y1/x1)/(y2/x2) = K1/K2, where y is the mole fraction of a component in the vapor phase, x is its mole fraction in the liquid phase, and K = y/x is that component's vapor–liquid equilibrium ratio.
When their liquid concentrations are equal, more volatile components have higher vapor pressures than less volatile components. Thus, a K value (= y/x) for a more volatile component is larger than a K value for a less volatile component. That means that α ≥ 1, since the larger K value of the more volatile component is in the numerator and the smaller K value of the less volatile component is in the denominator.
α is a unitless quantity. When the volatilities of both key components are equal, α = 1 and separation of the two by distillation would be impossible under the given conditions because the compositions of the liquid and the vapor phase are the same (azeotrope). As the value of α increases above 1, separation by distillation becomes progressively easier.
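As a quick numeric illustration of the definition above (the K-values below are hypothetical, chosen only to show the arithmetic):

```python
def relative_volatility(k_light, k_heavy):
    """alpha = K_light / K_heavy, where K = y/x for each component."""
    return k_light / k_heavy

# Hypothetical vapor-liquid equilibrium ratios for a binary mixture:
K1 = 2.0   # more volatile component (y1/x1)
K2 = 0.8   # less volatile component (y2/x2)

alpha = relative_volatility(K1, K2)
print(alpha)  # 2.5, comfortably above 1, so distillation is feasible
```

When α is close to 1 the number of required equilibrium stages grows rapidly; at α = 1 no separation is possible, matching the azeotrope case above.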
A liquid mixture containing two components is called a binary mixture. When a binary mixture is distilled, complete separation of the two components is rarely achieved. Typically, the overhead fraction from the distillation
|
https://en.wikipedia.org/wiki/Pathosystem
|
A pathosystem is a subsystem of an ecosystem and is defined by the phenomenon of parasitism. A plant pathosystem is one in which the host species is a plant. The parasite is any species in which the individual spends a significant part of its lifespan inhabiting one host individual and obtaining nutrients from it. The parasite may thus be an insect, mite, nematode, parasitic Angiosperm, fungus, bacterium, mycoplasma, virus or viroid. Other consumers, however, such as mammalian and avian herbivores, which graze populations of plants, are normally considered to be outside the conceptual boundaries of the plant pathosystem.
A host has the property of resistance to a parasite, and a parasite has the property of parasitic ability on a host. Parasitism is the interaction of these two properties. The main feature of the pathosystem concept is that it concerns parasitism, and it is not concerned with the study of either the host or parasite on its own. Another feature of the pathosystem concept is that the parasitism is studied in terms of populations, at the higher levels and in ecologic aspects of the system. The pathosystem concept is also multidisciplinary. It brings together various crop science disciplines such as entomology, nematology, plant pathology, and plant breeding. It also applies to wild populations and to agricultural, horticultural, and forest crops, in tropical and subtropical regions, and to both subsistence and commercial farming.
In a wild plant pathosystem, both the host and the parasite populations exhibit genetic diversity and genetic flexibility. Conversely, in a crop pathosystem, the host population normally exhibits genetic uniformity and genetic inflexibility (i.e., clones, pure lines, hybrid varieties), and the parasite population assumes a comparable uniformity. This distinction means that a wild pathosystem can respond to selection pressures, but that a crop pathosystem does not. It also means that a system of locking (see below) can function
|
https://en.wikipedia.org/wiki/Indian%20states%20ranking%20by%20prevalence%20of%20open%20defecation
|
This is a list of Indian states and territories by the percentage of households which are open defecation free, that is those that have access to sanitation facilities, in both urban and rural areas along with data from the Swachh Bharat Mission (under the Ministry of Drinking Water and Sanitation), National Family Health Survey, and the National Sample Survey (under the Ministry of Statistics and Programme Implementation). The reliability of this information can be questioned, as it has been observed that there is still open defecation in some states claimed "ODF".
More Indians living in villages owned a latrine in 2018 than four years ago, yet 44% of them still defecate in the open, according to a survey covering Rajasthan, Bihar, Madhya Pradesh and Uttar Pradesh that was released on January 4, 2019. These four states together contain two-fifths of India's rural population and reported high open defecation rates, over 87% in 2016.
By 2016, three states/UTs namely Sikkim, Himachal Pradesh, Kerala had been declared ODF.
Rajasthan and Madhya Pradesh, two states that had declared themselves open defecation-free, are yet to achieve that goal. In Rajasthan and Madhya Pradesh 63% and 35% respectively were estimated to be defecating in the open.
Household toilet construction increased from 43.79% in 2014, to 65.74% in 2016, to 98.53% in 2018. On 2 October 2019, all 35 states and union territories were declared open defecation free.
List
Notes
|
https://en.wikipedia.org/wiki/Chinese%20Journal%20of%20Physics
|
The Chinese Journal of Physics is a bimonthly peer-reviewed scientific journal covering all aspects of physics. It is published by Elsevier on behalf of the Physical Society of Taiwan. The journal publishes reviews, articles, and refereed conference papers in all the major areas of physics. The editor-in-chief is Fu-Jen Kao.
|
https://en.wikipedia.org/wiki/Lone%20Signal
|
Lone Signal was a crowdfunded active SETI project designed to send interstellar messages from Earth to a possible extraterrestrial civilization. Founded by businessman Pierre Fabre and supported by several entrepreneurs, Lone Signal was based at the Jamesburg Earth Station in Carmel, California.
The project's beacon, which commenced continuous operations on June 17, 2013, transmitted short, 144-character messages by citizens of Earth to the red dwarf star Gliese 526, located 17.6 light-years away from Earth in the constellation Boötes. The Lone Signal team hoped to earn million to construct a network of satellite dishes across the Earth's surface, which could beam messages to many regions of the Milky Way galaxy. The project ceased transmission shortly after it began, due to lack of funding.
Message components
Lone Signal's message consists of two key components: a background hailing component and a more complex message component. The hailing component, designed by planetary scientist Michael W. Busch, uses a universal binary encoding system, which goes through a base-4 intermediary with 8-quad "words", representing numbers, mathematical operators, or other symbols. Each value corresponds to a single unique frequency. The offsets between those frequencies are set to be much larger than the bit rate (i.e. if transmitting at 100 Hz, the offsets between adjacent frequencies will be ~300 Hz). Using these code blocks, coherent mathematical statements about the laws of physics and Earth's location in the galaxy are produced. The hailing message repeats on average three times in order to allow the recipient to decode it at any time when observation begins, with some parts repeating more often than others. The hailing component was designed to be easily decoded by an extraterrestrial civilization, which was confirmed by a double-blind experiment.
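The scheme described above might be sketched as follows; note that the `quads` helper, the digit-to-frequency mapping, and the base frequency are illustrative assumptions for this sketch, not Lone Signal's published parameters:

```python
def quads(n, width=8):
    """Render an integer as a fixed-width base-4 'word' of 8 quads."""
    digits = []
    for _ in range(width):
        digits.append(n % 4)   # least significant quad first
        n //= 4
    return digits[::-1]        # most significant quad first

BIT_RATE = 100            # Hz, the example rate given in the text
OFFSET = 3 * BIT_RATE     # adjacent-frequency spacing, much larger than the bit rate
BASE_FREQ = 1000.0        # hypothetical lowest tone

def tone_for(value):
    """Map one quad value (0-3) to a unique frequency (hypothetical mapping)."""
    return BASE_FREQ + value * OFFSET

# A symbol is then transmitted as a sequence of tones, one per quad:
tones = [tone_for(q) for q in quads(27)]
```

The key property the text describes is preserved here: tones for different quad values are separated by ~3× the bit rate, which makes them easy to distinguish at the receiver.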
The hailing component is not an end in itself, but is designed to be a "Rosetta Stone" toward understanding the more complex
|
https://en.wikipedia.org/wiki/Revolving%20stage
|
A revolving stage is a mechanically controlled platform within a theatre that can be rotated in order to speed up the changing of a scene within a show. A fully revolving set was an innovation constructed by the hydraulics engineer Tommaso Francini for an elaborately produced pageant, Le ballet de la délivrance de Renaud, which was presented for Marie de Medici in January 1617 at the Palais du Louvre and noted with admiration by contemporaries. Such a stage is also commonly referred to as a turntable.
Kabuki theatre development
Background
Kabuki theatre began in Japan around 1603 when Okuni, a Shinto priestess of the Izumo shrine, traveled with a group of priestesses to Kyoto to become performers. Okuni and her priestesses danced sensualized versions of Buddhist and Shinto ritual dances, using the shows as a shop window for their services at night. They originally performed in the dry river bed of the River Kamo on a makeshift wooden stage, but as Okuni’s shows gained popularity they began to tour, performing at the imperial court at least once. Eventually, they were able to build a permanent theatre in 1604, modeled after Japan's aristocratic Nōh theatre which had dominated the previous era. Kabuki, with its origins in popular entertainment, drew crowds of common folk, along with high-class samurai looking to win their favorite performer for the night. This mixing of social classes troubled the Tokugawa Shogunate, which stressed the strict separation of different classes. When rivalries between Okuni’s samurai clients grew too intense, the shogunate took advantage of the conflict and banned women from performing onstage in 1629. The women were replaced by beautiful teenage boys who took part in the same after-dark activities, leading Kabuki to be banned from the stage completely in 1652. An actor-manager in Kyoto, Murayama Matabei, went to the authorities responsible and staged a hunger strike outside their offices. In 1654 Kabuki was allowed to return with restrictions.
|
https://en.wikipedia.org/wiki/List%20of%20features%20removed%20in%20Windows%20Vista
|
While Windows Vista contains many new features, a number of capabilities and certain programs that were a part of previous Windows versions up to Windows XP were removed or changed – some of which were later re-introduced in Windows 7 and later versions.
The following is a list of features that were present in Windows XP and earlier versions but were removed in Windows Vista.
Windows Explorer
Windows Briefcase no longer allows synchronizing items across multiple computers and a removable media device.
Windows Briefcase cannot sync files or folders in locations protected by User Account Control. This removes the ability to sync many locations.
Grouping items by name in Explorer no longer groups them under each individual letter of the alphabet (A, B, C... Z) like in Windows XP. When using Group By Name, items are always combined into just a few groups (A-H, I-P, Q-Z). This removes the ability to locate items by their first letter.
If hidden files are not allowed to be shown in Windows Explorer, the Status bar does not report how many hidden files are present. In addition, if all items within a folder are selected at once (by pressing or Select all), the user is not alerted to hidden files being selected.
Even after setting the ForceCopyAclwithFile and MoveSecurityAttributes values as documented in KB310316, permissions are not retained/copied when Windows Explorer is used to copy or move objects across volumes or in the same volume. A hotfix is available (KB2617058) to restore the MoveSecurityAttributes value but not ForceCopyAclwithFile.
Thumbnails can no longer be forced to regenerate by right-clicking the image and selecting Refresh thumbnail.
Thumbnail support for .HTM, .HTML, .MHT and .URL files has been removed in Windows Vista.
The Explorer thumbnail handler and metadata property handler for .AVI and .WAV files (Shmedia.dll) has been removed.
Ctrl+Enter on the selected folder no longer opens it in a new Explorer window.
Tiles view only shows the name, type
|
https://en.wikipedia.org/wiki/Niobium%E2%80%93tin
|
Niobium–tin is an intermetallic compound of niobium (Nb) and tin (Sn), used industrially as a type-II superconductor. This intermetallic compound has a simple structure: A3B. It is more expensive than niobium–titanium (NbTi), but remains superconducting up to a magnetic flux density of , compared to a limit of roughly 15 T for NbTi.
Nb3Sn was discovered to be a superconductor in 1954. The material's ability to support high currents and magnetic fields was discovered in 1961 and started the era of large-scale applications of superconductivity.
The critical temperature is . Application temperatures are commonly around , the boiling point of liquid helium at atmospheric pressure.
In April 2008 a record non-copper current density of 2,643 A/mm² at 12 T and 4.2 K was claimed.
History
Nb3Sn was discovered to be a superconductor in 1954, one year after the discovery of V3Si, the first example of an A3B superconductor. In 1961 it was discovered that niobium–tin still exhibits superconductivity at large currents and strong magnetic fields, thus becoming the first known material to support the high currents and fields necessary for making useful high-power magnets and electric power machinery.
Notable uses
The central solenoid and toroidal field superconducting magnets for the planned experimental ITER fusion reactor use niobium–tin as a superconductor. The central solenoid coil will produce a field of . The toroidal field coils will operate at a maximum field of 11.8 T. Estimated use is of Nb3Sn strands and 250 metric tonnes of NbTi strands.
At the Large Hadron Collider at CERN, extra-strong quadrupole magnets (for focussing beams) made with niobium–tin are being installed in key points of the accelerator between late 2018 and early 2020. Niobium tin had been proposed in 1986 as an alternative to niobium–titanium, since it allowed coolants less complex than superfluid helium, but this was not pursued in order to avoid delays while competing with the then-planned US
|
https://en.wikipedia.org/wiki/Manifold%20%28magazine%29
|
Manifold was a mathematical magazine published at the University of Warwick. It was established in 1968. Its philosophy was "It is possible to be serious about mathematics, without being solemn." Its best known editor was the mathematician Ian Stewart who edited the magazine in the late 1960s.
A 1969 edition of the magazine mentioned a game called "Finchley Central", which became the basis for the game of Mornington Crescent as popularised by the BBC Radio 4 panel game I'm Sorry I Haven't a Clue.
In 1983 the magazine was reincarnated as 2-Manifold.
|
https://en.wikipedia.org/wiki/Smooth%20infinitesimal%20analysis
|
Smooth infinitesimal analysis is a modern reformulation of the calculus in terms of infinitesimals. Based on the ideas of F. W. Lawvere and employing the methods of category theory, it views all functions as being continuous and incapable of being expressed in terms of discrete entities. As a theory, it is a subset of synthetic differential geometry.
The nilsquare or nilpotent infinitesimals are numbers ε where ε² = 0 is true, but ε = 0 need not be true at the same time.
Overview
This approach departs from the classical logic used in conventional mathematics by denying the law of the excluded middle, e.g., NOT (a ≠ b) does not imply a = b. In particular, in a theory of smooth infinitesimal analysis one can prove for all infinitesimals ε, NOT (ε ≠ 0); yet it is provably false that all infinitesimals are equal to zero. One can see that the law of excluded middle cannot hold from the following basic theorem (again, understood in the context of a theory of smooth infinitesimal analysis):
Every function whose domain is R, the real numbers, is continuous and infinitely differentiable.
Despite this fact, one could attempt to define a discontinuous function f(x) by specifying that f(x) = 1 for x = 0, and f(x) = 0 for x ≠ 0. If the law of the excluded middle held, then this would be a fully defined, discontinuous function. However, there are plenty of x, namely the infinitesimals, such that neither x = 0 nor x ≠ 0 holds, so the function is not defined on the real numbers.
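The behaviour described above is usually axiomatised by the Kock–Lawvere axiom; a standard formulation (supplied here as background, not quoted from this article) is:

```latex
% Kock-Lawvere axiom: every function on the nilsquare infinitesimals
% D = { \varepsilon \in R : \varepsilon^2 = 0 } is affine in \varepsilon
\forall f\colon D \to R,\ \exists!\, b \in R \ \text{such that}\
\forall \varepsilon \in D:\quad f(\varepsilon) = f(0) + b\,\varepsilon .
```

The unique coefficient b plays the role of the derivative f′(0), which is why every function is automatically differentiable.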
In typical models of smooth infinitesimal analysis, the infinitesimals are not invertible, and therefore the theory does not contain infinite numbers. However, there are also models that include invertible infinitesimals.
Other mathematical systems exist which include infinitesimals, including nonstandard analysis and the surreal numbers. Smooth infinitesimal analysis is like nonstandard analysis in that (1) it is meant to serve as a foundation for analysis, and (2) the infinitesimal quantities do no
|
https://en.wikipedia.org/wiki/User-mode%20Linux
|
User-mode Linux (UML) is a virtualization system for the Linux operating system based on an architectural port of the Linux kernel to its own system call interface, which enables multiple virtual Linux kernel-based operating systems (known as guests) to run as an application within a normal Linux system (known as the host). A Linux kernel compiled for the um architecture can then boot as a process under another Linux kernel, entirely in user space, without affecting the host environment's configuration or stability.
This method gives the user a way to run many virtual Linux machines on a single piece of hardware, allowing some isolation, typically without changing the configuration or stability of the host environment because each guest is just a regular application running as a process in user space.
Applications
Numerous things become possible through the use of UML. One can run network services from a UML environment and remain totally sequestered from the main Linux system in which the UML environment runs. Administrators can use UML to set up honeypots, which allow one to test the security of one's computers or network. UML can serve to test and debug new software without adversely affecting the host system. UML can also be used for teaching and research, providing a realistic Linux networked environment with a high degree of safety.
In UML environments, host and guest kernel versions don't need to match, so it is entirely possible to test a "bleeding edge" version of Linux in User-mode on a system running a much older kernel. UML also allows kernel debugging to be performed on one machine, where other kernel debugging tools (such as kgdb) require two machines connected with a null modem cable.
Some web hosting providers offer UML-powered virtual servers for lower prices than true dedicated servers. Each customer has root access on what appears to be their own system, while in reality one physical computer is shared between many people.
libguestfs has su
|
https://en.wikipedia.org/wiki/Neptune%20Bank%20Power%20Station
|
Neptune Bank Power Station was a coal-fired power station situated on the River Tyne at Wallsend near Newcastle upon Tyne. Commissioned in 1901 by the Newcastle upon Tyne Electric Supply Company, the station was the first in the world to provide electricity for purposes other than domestic and street lighting. It was also the first in the world to generate electricity using three-phase electrical power distribution at a voltage of 5,500 volts.
The station had an initial generating capacity of 2,800 kW, which was increased to 3,000 kW a year after the station opened, with the introduction of two 1,500 kW Parsons turbo alternators, the largest ever built at that time. The station closed in 1915, following the completion of an extension to Carville Power Station and the opening of Dunston Power Station.
History
At the beginning of the twentieth century, the use of electricity for general purposes began to be considered, and the Newcastle upon Tyne Electric Supply Company (NESCo) realised the potential it offered for development. In June 1899, the Walker and Wallsend Union Gas Company (WWUGC) acquired parliamentary powers for the supply of electricity to the area around Wallsend. In January 1900 they erected Neptune Bank power station near the North Eastern Railway's (NER) North Tyneside Loop, midway between Wallsend and Walker. In October 1900, NESCo acquired the entire power station from the WWUGC, with the exception of the cables and sub-station machinery installed for the purpose of supplying the works in the area in which the WWUGC had obtained parliamentary powers. The WWUGC continued to buy electricity in bulk from NESCo.
The station was officially opened on 18 June 1901 by Lord Kelvin. At the opening he said:
We have seen at work what many have not seen before – a system realised in which a central station generates power by steam engines and delivers electricity to consumers at distances varying, I think, from a quarter of a mile to over three and a half mi
|
https://en.wikipedia.org/wiki/Winkler%20vine
|
The Winkler Vine was an example of large-vine grape culture. The vine was named after Albert J. Winkler, Chair of the Department of Viticulture and Enology (1935–1957) at University of California, Davis. Planted in 1979, the Winkler vine was a Vitis vinifera cv. Mission, grafted on to a Vitis rupestris St George rootstock. A great deal of research has been carried out on pruning, training and vine size; by researchers such as: Ravaz, Winkler, Shaulis, Koblet, Howell, Carbonneau, Smart & Clingeleffer. Minimal pruning is a pruning/training system adapted from the principles of big-vine theory.
History
The Mission grape cultivar was originally brought to South America from Spain by Catholic missionaries and was introduced into California in the 18th century. Historically Mission was the most widely planted cultivar in California, up to the 1850s.
The Winkler Vine was trained on a 60x60ft steel arbour, covering 1/12 of an acre and was capable of producing over a tonne of fruit. The Winkler vine was damaged by a tractor 'early in its life' and carried a large canker; eventually the vine became infected with the wood rot, Eutypa, causing its death in spring 2008.
Big vine viticulture
The vine was the classic example of big vine culture, where vines are trained and pruned to allow the plant to express its natural vigour, rather than forcing a vine into a restricted trellis system by heavy pruning. Eventually a 'big-vine' will reach its maximum size and yield compensation will result in 'balanced' cropping. Big vine production of grapes is not economically feasible (due to the time taken to establish the plant); however it demonstrates the natural ability of a vine to produce in a sustainable manner (yield vs. fruit ripeness vs. carbohydrate storage).
|
https://en.wikipedia.org/wiki/Galactosylceramide
|
A galactosylceramide, or galactocerebroside is a type of cerebroside consisting of a ceramide with a galactose residue at the 1-hydroxyl moiety.
The galactose is cleaved by galactosylceramidase.
Galactosylceramide is a marker for oligodendrocytes in the brain, whether or not they form myelin.
Additional images
See also
Alpha-Galactosylceramide
Krabbe disease
Myelin
|
https://en.wikipedia.org/wiki/Singleton%20pattern
|
In software engineering, the singleton pattern is a software design pattern that restricts the instantiation of a class to a singular instance. One of the well-known "Gang of Four" design patterns, which describe how to solve recurring problems in object-oriented software, the pattern is useful when exactly one object is needed to coordinate actions across a system.
More specifically, the singleton pattern allows objects to:
Ensure they only have one instance
Provide easy access to that instance
Control their instantiation (for example, hiding the constructors of a class)
The term comes from the mathematical concept of a singleton.
Common uses
Singletons are often preferred to global variables because they do not pollute the global namespace (or their containing namespace). Additionally, they permit lazy allocation and initialization, whereas global variables in many languages will always consume resources.
The singleton pattern can also be used as a basis for other design patterns, such as the abstract factory, factory method, builder and prototype patterns. Facade objects are also often singletons because only one facade object is required.
Logging is a common real-world use case for singletons, because all objects that wish to log messages require a uniform point of access and conceptually write to a single source.
Implementations
Implementations of the singleton pattern ensure that only one instance of the singleton class ever exists and typically provide global access to that instance.
Typically, this is accomplished by:
Declaring all constructors of the class to be private, which prevents it from being instantiated by other objects
Providing a static method that returns a reference to the instance
The instance is usually stored as a private static variable; the instance is created when the variable is initialized, at some point before the static method is first called.
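The two steps above can be sketched in Python; since Python has no truly private constructors, the `__new__` hook below stands in for them (an illustrative sketch only):

```python
class Logger:
    """Minimal singleton sketch: one shared, lazily created instance."""
    _instance = None  # the "private static variable" holding the instance

    def __new__(cls):
        # First call creates and caches the instance; later calls reuse it.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.messages = []
        return cls._instance

    def log(self, msg):
        self.messages.append(msg)

a = Logger()
b = Logger()
assert a is b  # both names refer to the single shared instance
```

This also shows the logging use case mentioned earlier: every caller obtains the same `Logger` and writes to the same message list.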
This C++11 implementation is based on the pre C++98 implementation in th
|
https://en.wikipedia.org/wiki/Pseudoautosomal%20region
|
The pseudoautosomal regions, PAR1 and PAR2, are homologous sequences of nucleotides on the X and Y chromosomes.
The pseudoautosomal regions get their name because any genes within them (so far at least 29 have been found for humans) are inherited just like any autosomal genes. PAR1 comprises 2.6 Mbp of the short-arm tips of both X and Y chromosomes in humans and great apes (X and Y are 154 Mbp and 62 Mbp in total). PAR2 is at the tips of the long arms, spanning 320 kbp.
Location
The locations of the PARs within GRCh38 are:
The locations of the PARs within GRCh37 are:
Inheritance and function
Normal male mammals have two copies of these genes: one in the pseudoautosomal region of their Y chromosome, the other in the corresponding portion of their X chromosome. Normal females also possess two copies of pseudoautosomal genes, as each of their two X chromosomes contains a pseudoautosomal region. Crossing over between the X and Y chromosomes is normally restricted to the pseudoautosomal regions; thus, pseudoautosomal genes exhibit an autosomal, rather than sex-linked, pattern of inheritance. So, females can inherit an allele originally present on the Y chromosome of their father.
The function of these pseudoautosomal regions is that they allow the X and Y chromosomes to pair and properly segregate during meiosis in males.
Genes
Pseudoautosomal genes are found in two different locations: PAR1 and PAR2. These are believed to have evolved independently.
PAR1
pseudoautosomal PAR1
AKAP17A
ASMT
ASMTL
CD99
CRLF2
CSF2RA
DHRSX
GTPBP6
IL3RA
P2RY8
PLCXD1
PPP2R3B
SHOX
SLC25A6
XG, which straddles the PAR1 region boundary
ZBED1
In mice, some PAR1 genes have transferred to autosomes.
PAR2
pseudoautosomal PAR2
IL9R
SPRY3
VAMP7, also known as SYBL1
CXYorf1, also known as FAM39A and now mapped to the pseudogene WASH6P, but of interest due to its proximity to the telomere.
Pathology
Pairing (synapsis) of the X and Y chromosomes and crossing over (recombinat
|
https://en.wikipedia.org/wiki/Jordan%20operator%20algebra
|
In mathematics, Jordan operator algebras are real or complex Jordan algebras with the compatible structure of a Banach space. When the coefficients are real numbers, the algebras are called Jordan Banach algebras. The theory has been extensively developed only for the subclass of JB algebras. The axioms for these algebras were devised by . Those that can be realised concretely as subalgebras of self-adjoint operators on a real or complex Hilbert space with the operator Jordan product and the operator norm are called JC algebras. The axioms for complex Jordan operator algebras, first suggested by Irving Kaplansky in 1976, require an involution and are called JB* algebras or Jordan C* algebras. By analogy with the abstract characterisation of von Neumann algebras as C* algebras for which the underlying Banach space is the dual of another, there is a corresponding definition of JBW algebras. Those that can be realised using ultraweakly closed Jordan algebras of self-adjoint operators with the operator Jordan product are called JW algebras. The JBW algebras with trivial center, so-called JBW factors, are classified in terms of von Neumann factors: apart from the exceptional 27 dimensional Albert algebra and the spin factors, all other JBW factors are isomorphic either to the self-adjoint part of a von Neumann factor or to its fixed point algebra under a period two *-anti-automorphism. Jordan operator algebras have been applied in quantum mechanics and in complex geometry, where Koecher's description of bounded symmetric domains using Jordan algebras has been extended to infinite dimensions.
Definitions
JC algebra
A JC algebra is a real subspace of the space of self-adjoint operators on a real or complex Hilbert space, closed under the operator Jordan product a ∘ b = ½(ab + ba) and closed in the operator norm.
JC* algebra
A JC* algebra is a norm-closed self-adjoint subspace of the space of operators on a complex Hilbert space, closed under the operator Jordan product a ∘
|
https://en.wikipedia.org/wiki/Categorical%20trace
|
In category theory, a branch of mathematics, the categorical trace is a generalization of the trace of a matrix.
Definition
The trace is defined in the context of a symmetric monoidal category C, i.e., a category equipped with a suitable notion of a product . (The notation reflects that the product is, in many cases, a kind of a tensor product.) An object X in such a category C is called dualizable if there is another object playing the role of a dual object of X. In this situation, the trace of a morphism is defined as the composition of the following morphisms:
where 1 is the monoidal unit and the extremal morphisms are the coevaluation and evaluation, which are part of the definition of dualizable objects.
The same definition applies, to great effect, also when C is a symmetric monoidal ∞-category.
Examples
If C is the category of vector spaces over a fixed field k, the dualizable objects are precisely the finite-dimensional vector spaces, and the trace in the sense above is the morphism
which is the multiplication by the trace of the endomorphism f in the usual sense of linear algebra.
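For the vector-space example, the composite coevaluation → f ⊗ id → evaluation can be checked numerically. The sketch below is not part of the article and identifies V* with V via the standard basis:

```python
import numpy as np

def categorical_trace(f):
    """Trace of an endomorphism f of a finite-dimensional space,
    computed as the composite 1 → V ⊗ V* → V ⊗ V* → 1."""
    n = f.shape[0]
    coev = np.eye(n).reshape(n * n)      # coevaluation 1 → V ⊗ V*: the identity tensor Σ eᵢ ⊗ eᵢ
    f_tensor_id = np.kron(f, np.eye(n))  # f ⊗ id acting on V ⊗ V*
    ev = np.eye(n).reshape(n * n)        # evaluation V ⊗ V* → 1: the pairing v ⊗ φ ↦ φ(v)
    return ev @ (f_tensor_id @ coev)     # the result is a scalar

f = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.isclose(categorical_trace(f), np.trace(f))  # both give 5.0
```

The composite lands in the monoidal unit k, i.e. it is a scalar, and it agrees with the sum of the diagonal entries of f.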
If C is the ∞-category of chain complexes of modules (over a fixed commutative ring R), dualizable objects V in C are precisely the perfect complexes. The trace in this setting captures, for example, the Euler characteristic, which is the alternating sum of the ranks of its terms:
Further applications
have used categorical trace methods to prove an algebro-geometric version of the Atiyah–Bott fixed point formula, an extension of the Lefschetz fixed point formula.
|
https://en.wikipedia.org/wiki/Oncovirus
|
An oncovirus or oncogenic virus is a virus that can cause cancer. This term originated from studies of acutely transforming retroviruses in the 1950–60s, when the term "oncornaviruses" was used to denote their RNA virus origin. With the letters "RNA" removed, it now refers to any virus with a DNA or RNA genome causing cancer and is synonymous with "tumor virus" or "cancer virus". The vast majority of human and animal viruses do not cause cancer, probably because of longstanding co-evolution between the virus and its host. Oncoviruses have been important not only in epidemiology, but also in investigations of cell cycle control mechanisms such as the retinoblastoma protein.
The World Health Organization's International Agency for Research on Cancer estimated that in 2002, infection caused 17.8% of human cancers, with 11.9% caused by one of seven viruses. A 2020 study of 2,658 samples from 38 different types of cancer found that 16% were associated with a virus. These cancers might be easily prevented through vaccination (e.g., papillomavirus vaccines), diagnosed with simple blood tests, and treated with less-toxic antiviral compounds.
Causality
Generally, tumor viruses cause little or no disease after infection in their hosts, or cause non-neoplastic diseases such as acute hepatitis for hepatitis B virus or mononucleosis for Epstein–Barr virus. A minority of persons (or animals) will go on to develop cancers after infection. This has complicated efforts to determine whether or not a given virus causes cancer. The well-known Koch's postulates, 19th-century constructs developed by Robert Koch to establish the likelihood that Bacillus anthracis will cause anthrax disease, are not applicable to viral diseases. Firstly, this is because viruses cannot truly be isolated in pure culture—even stringent isolation techniques cannot exclude undetected contaminating viruses with similar density characteristics, and viruses must be grown on cells. Secondly, asymptomatic virus in
|
https://en.wikipedia.org/wiki/C2orf72
|
C2orf72 (Chromosome 2, Open Reading Frame 72) is a gene in humans (Homo sapiens) that encodes a protein currently named after its gene, C2orf72. It is also designated LOC257407 and can be found under GenBank accession code NM_001144994.2. The protein can be found under UniProt accession code A6NCS6.
This gene is primarily expressed in the liver, brain, placental, and small intestine tissues. C2orf72 is an intracellular protein that has been predicted to reside within the nucleus, cytosol, and plasma membrane of cells. The function of C2orf72 is unknown, but it is predicted to be involved in very-low-density lipoprotein particle assembly and also involved in the regulation of cholesterol esterification. This prediction also matches with the fact that both estradiol and testosterone have been reported to upregulate expression of C2orf72.
Gene
Locus
C2orf72 is a protein-coding gene found on the forward (+) strand of chromosome 2 at the locus 2q37.1, on the long arm of the chromosome.
mRNA
C2orf72's mRNA transcript is reported to be about 3,629 base pairs long. It appears to have two polyadenylation sites near the 3′ end of the mRNA transcript, each preceded by its respective regulatory sequence, such as ATTAAA or AATAAA.
There are three predicted exons reported for human C2orf72.
Expression pattern
C2orf72 is preferentially expressed in brain, liver, placenta, colon, small intestine, gallbladder, stomach, and prostate, and to a lesser extent in adrenal gland, appendix, pancreas, lung, kidney, testis, and urinary bladder.
Predicted biological functions
It is predicted via Archs4 (July 16, 2022) that the function of this gene may be related to very-low-density lipoprotein particle assembly and also involved in the regulation of cholesterol esterification.
Regulation
Gene-level regulation
Gene perturbation data
In a study of embryonic liver samples lacking hepatocyte nuclear factor 4 alpha (HNF4α), the expression of C2orf72 was downregulated.
Both est
|
https://en.wikipedia.org/wiki/Consistency
|
In classical deductive logic, a consistent theory is one that does not lead to a logical contradiction. The lack of contradiction can be defined in either semantic or syntactic terms. The semantic definition states that a theory is consistent if it has a model, i.e., there exists an interpretation under which all formulas in the theory are true. This is the sense used in traditional Aristotelian logic, although in contemporary mathematical logic the term satisfiable is used instead. The syntactic definition states a theory T is consistent if there is no formula φ such that both φ and its negation ¬φ are elements of the set of consequences of T. Let A be a set of closed sentences (informally "axioms") and ⟨A⟩ the set of closed sentences provable from A under some (specified, possibly implicitly) formal deductive system. The set of axioms A is consistent when there is no formula φ with φ ∈ ⟨A⟩ and ¬φ ∈ ⟨A⟩.
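For propositional theories the semantic definition is directly checkable: a theory is consistent iff a brute-force search over truth assignments finds a model. A minimal illustrative sketch (function names are my own; real provers use far better algorithms):

```python
from itertools import product

def has_model(theory, atoms):
    """Semantic consistency check for a propositional theory.
    The theory is consistent iff some truth assignment (a model)
    makes every formula true. Formulas are predicates on assignments."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(phi(v) for phi in theory):
            return True   # found a model: the theory is consistent
    return False          # no model: the theory is inconsistent

# Consistent: {p → q, p} has the model {p: True, q: True}.
t1 = [lambda v: (not v["p"]) or v["q"], lambda v: v["p"]]
assert has_model(t1, ["p", "q"])

# Inconsistent: {p, ¬p} has no model.
t2 = [lambda v: v["p"], lambda v: not v["p"]]
assert not has_model(t2, ["p"])
```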
If there exists a deductive system for which these semantic and syntactic definitions are equivalent for any theory formulated in a particular deductive logic, the logic is called complete. The completeness of the sentential calculus was proved by Paul Bernays in 1918 and Emil Post in 1921, while the completeness of predicate calculus was proved by Kurt Gödel in 1930, and consistency proofs for arithmetics restricted with respect to the induction axiom schema were proved by Ackermann (1924), von Neumann (1927) and Herbrand (1931). Stronger logics, such as second-order logic, are not complete.
A consistency proof is a mathematical proof that a particular theory is consistent. The early development of mathematical proof theory was driven by the desire to provide finitary consistency proofs for all of mathematics as part of Hilbert's program. Hilbert's program was strongly impacted by the incompleteness theorems, which showed that sufficiently strong proof theories cannot prove their consistency (provided that they are consistent).
Although consistency can be proved using model theory, it is often done in
|
https://en.wikipedia.org/wiki/Michels%20syndrome
|
Michels syndrome is a syndrome characterised by intellectual disability, craniosynostosis, blepharophimosis, ptosis, epicanthus inversus, highly arched eyebrows, and hypertelorism. Other features vary among affected individuals, including asymmetry of the skull, eyelid and anterior chamber anomalies, cleft lip and palate, umbilical anomalies, and differences in growth and cognitive development.
See also
Malpuech facial clefting syndrome, another condition in the 3MC spectrum
|
https://en.wikipedia.org/wiki/Old%20age
|
Old age is the range of ages for persons nearing and surpassing life expectancy. People of old age are also referred to as: old people, elderly, elders, seniors, senior citizens, or older adults.
Old age is not a definite biological stage: the chronological age denoted as "old age" varies culturally and historically. Some disciplines and domains focus on the aging and the aged, such as the organic processes of aging (senescence), medical studies of the aging process (gerontology), diseases that afflict older adults (geriatrics), technology to support the aging society (gerontechnology), and leisure and sport activities adapted to older people (such as senior sport).
Old people often have limited regenerative abilities and are more susceptible to illness and injury than younger adults. They face social problems related to retirement, loneliness, and ageism.
In 2011, the United Nations proposed a human-rights convention to protect old people.
Definitions
Definitions of old age include official definitions, sub-group definitions, and four dimensions as follows.
Official definitions
Most developed Western countries set the retirement age around the age of 65; this is also generally considered to mark the transition from middle to old age. Reaching this age is commonly a requirement to become eligible for senior social programs.
Old age cannot be universally defined because it is context-sensitive. The United Nations, for example, considers old age to be 60 years or older. In contrast, a 2001 joint report by the U.S. National Institute on Aging and the World Health Organization (WHO) Regional Office for Africa set the beginning of old age in Sub-Saharan Africa at 50. This lower threshold stems primarily from a different way of thinking about old age in developing nations. Unlike in the developed world, where chronological age determines retirement, societies in developing countries determine old age according to a person's ability to make active contributions to
|
https://en.wikipedia.org/wiki/Sodium%20stearoyl%20lactylate
|
Sodium stearoyl-2-lactylate (sodium stearoyl lactylate or SSL) is a versatile, FDA approved food additive used to improve the mix tolerance and volume of processed foods. It is one type of a commercially available lactylate. SSL is non-toxic, biodegradable, and typically manufactured using biorenewable feedstocks. Because SSL is a safe and highly effective food additive, it is used in a wide variety of products ranging from baked goods and desserts to pet foods.
As described by the Food Chemicals Codex 7th edition, SSL is a cream-colored powder or brittle solid. SSL is currently manufactured by the esterification of stearic acid with lactic acid and partially neutralized with either food-grade soda ash (sodium carbonate) or caustic soda (concentrated sodium hydroxide). Commercial grade SSL is a mixture of sodium salts of stearoyl lactylic acids and minor proportions of other sodium salts of related acids. The hydrophilic-lipophilic balance (HLB) for SSL is 10–12. SSL is slightly hygroscopic, soluble in ethanol and in hot oil or fat, and dispersible in warm water. These properties are the reason that SSL is an excellent emulsifier for fat-in-water emulsions and can also function as a humectant.
Food labeling requirements
To be labeled as SSL for sale within the United States, the product must conform to the specifications detailed in 21 CFR 172.846 and the most recent edition of the Food Chemical Codex. In the EU, the product must conform to the specifications detailed in Regulation (EC) No 96/77. For the 7th edition of the FCC and Regulation (EC) No 96/77, these specifications are:
To be labeled as SSL for sale in other regions, the product must conform to the specifications detailed in that region's codex.
Food applications and maximum use levels
SSL finds widespread application in baked goods, pancakes, waffles, cereals, pastas, instant rice, desserts, icings, fillings, puddings, toppings, sugar confectionaries, powdered beverage mixes, creamers, cream liqueurs, dehydrated potatoes,
|
https://en.wikipedia.org/wiki/Transverse%20Mercator%3A%20Bowring%20series
|
The Bowring series of the transverse Mercator, published in 1989 by Bernard Russel Bowring, gave formulas for the transverse Mercator that are simpler to program but retain millimeter accuracy.
Bowring rewrote the fourth order Redfearn series (after discarding small terms) in a more compact notation by replacing the spherical terms, i.e. those independent of ellipticity, by the exact expressions used in the spherical transverse Mercator projection. There was no gain in accuracy since the elliptic terms were still truncated at the 1mm level. Such modifications were of possible use when computing resources were minimal.
Notation
= radius of the equator of the chosen spheroid (e.g. 6378137 m for GRS80/WGS84)
= polar semi-axis of the spheroid
= scale factor along the central meridian (e.g. 0.9996 for UTM)
= latitude
= difference in longitude from the central meridian, in radians, positive eastward
= meridian distance, measured on the spheroid from the equator to (see below)
E = distance east of the central meridian, measured on the Transverse Mercator projection
N = distance north of the equator, measured on the Transverse Mercator projection
where r is the reciprocal of the flattening for the chosen spheroid (for WGS84, r = 298.257223563 exactly).
Convert Lat-Lon to Transverse Mercator
(prime vertical radius of curvature)
where is in radians and refers to arctanh).
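Since the article's formulas were lost in extraction, the following sketch shows only the exact spherical core of the projection, which Bowring's approach keeps in closed form; the ellipsoidal correction terms of the actual series are omitted. Symbol names follow the notation above:

```python
import math

def sphere_tm(lat, dlon, a=6378137.0, k0=0.9996):
    """Spherical transverse Mercator: latitude and longitude offset from
    the central meridian (both in radians) to easting/northing in metres.
    This is the exact spherical part only; Bowring's series adds the
    ellipsoidal corrections that are not reproduced here."""
    b = math.cos(lat) * math.sin(dlon)
    E = k0 * a * math.atanh(b)                              # easting
    N = k0 * a * math.atan2(math.tan(lat), math.cos(dlon))  # northing
    return E, N

# On the central meridian, easting is 0 and northing is k0 · a · latitude.
E, N = sphere_tm(0.5, 0.0)
assert abs(E) < 1e-9
assert abs(N - 0.9996 * 6378137.0 * 0.5) < 1e-6
```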
Transverse Mercator to Lat-Lon
To convert Transverse Mercator coordinates to lat-lon, first calculate , the footprint latitude— i.e. the latitude of the point on the central meridian that has the same N as the point to be converted; i.e. the latitude that has a meridian distance on the spheroid equal to N/. Bowring's formulas below seem quickest, but traditional formulas will suffice. Then
(, and must of course be in radians, and and will be.)
Meridian distance
Bowring gave formulas for meridian distance (the distance from the equator to the given l
|
https://en.wikipedia.org/wiki/Lyophyllum%20littoralis
|
Lyophyllum littoralis is a grey edible mushroom of the genus Lyophyllum. It is closely related to Lyophyllum decastes. It is a hot-climate fungus species common in Mediterranean coniferous woodlands, often growing in clusters.
Taxonomy
Lyophyllum littoralis was first described as a separate species in 1998 by Italian mycologist Marco Contu.
Description
The smooth cap is 5–15 cm (2–6 in) wide, mottled grey to brownish-grey in colour, with darker splotches tending to be around the edge. It is often covered in a whitish powder. The cap of younger specimens tends to be round and becomes convoluted with age, with the cap edge pointing downwards.
The stipe is 1.5–4 cm (0.6–1.6 in) high and 0.4–1.5 cm wide, and has neither ring nor volva. Often hollow, the stipe ranges from white at the top to grey at the bottom, with older specimens being grey.
The gills are white, tending to pale yellow-grey in older specimens. They are densely packed and can be free (unattached to the stipe) or slightly decurrent on the stipe.
The flesh is thin and rubbery, and does not break easily, especially in the stipe. Older specimens (especially after sporulation) exude a distinct coconut scent. Younger specimens have no discernible scent or taste prior to cooking.
The spore print is white. Spores measure 4.5–5.5 (6) × 4.5–5.5 μm; they are globular, smooth, cyanophilic, and often have a central guttule. Basidia are four-spored and unremarkable. Clamp connections (GAF) are present in all tissues.
Distribution and habitat
Lyophyllum littoralis is found in Mediterranean woodlands, where fruiting bodies appear under conifers, particularly pine, from November to January. They generally appear in clumps connected to the same base, but can also appear individually.
Edibility
With a taste reminiscent of fried chicken and texture that retains its firmness after frying, the species is a good edible.
|
https://en.wikipedia.org/wiki/Academy%20Color%20Encoding%20System
|
The Academy Color Encoding System (ACES) is a color image encoding system created under the auspices of the Academy of Motion Picture Arts and Sciences. ACES is characterised by a color accurate workflow, with "seamless interchange of high quality motion picture images regardless of source".
The system defines its own color primaries based on the spectral locus as defined by the CIE xyY specification. The white point approximates the chromaticity of CIE Daylight with a correlated color temperature (CCT) of 6000 K. Most ACES-compliant image files are encoded as 16-bit half-floats, allowing ACES OpenEXR files to encode 30 stops of scene information. The ACESproxy format uses integers with a log encoding. ACES supports both high dynamic range (HDR) and wide color gamut (WCG).
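The 30-stop figure can be sanity-checked from the IEEE binary16 constants (a back-of-envelope sketch, counting only the normal range):

```python
import math

# IEEE 754 half-float (binary16) constants.
MAX_NORMAL = 65504.0     # (2 - 2**-10) * 2**15, the largest finite half-float
MIN_NORMAL = 2.0 ** -14  # ≈ 6.10e-5, the smallest positive normal half-float

# One photographic "stop" is a factor of two, so the number of stops
# spanned by the normal half-float range is the base-2 log of the ratio.
stops = math.log2(MAX_NORMAL / MIN_NORMAL)
assert round(stops) == 30  # matches the ~30 stops quoted for ACES OpenEXR
```

Subnormal half-floats extend the range further at reduced precision, which is why only the normal range is counted here.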
The version 1.0 release occurred in December 2014. ACES received a Primetime Engineering Emmy Award in 2012. The system is standardized in part by the Society of Motion Picture and Television Engineers (SMPTE) standards body.
History
Background
The ACES project began its development in 2004 in collaboration with 50 industry technologists. The project began due to the recent incursion of digital technologies into the motion picture industry. The traditional motion picture workflow had been based on film negatives, and with the digital transition, scanning of negatives and digital camera acquisition. The industry lacked a color management scheme for diverse sources coming from a variety of digital motion picture cameras and film. The ACES system is designed to control the complexity inherent in managing a multitude of file formats, image encoding, metadata transfer, color reproduction, and image interchanges that are present in the current motion picture workflow.
Versions
The following versions are available for the reference implementation:
A number of pre-release versions were tagged from 0.1 (March 1, 2012) to 0.7.1 (February 26, 2014).
ACES 1.0 (December 2014) is first release ve
|
https://en.wikipedia.org/wiki/IQue%20Player
|
The iQue Player (, stylised as iQue PLAYER) is a handheld TV game version of the Nintendo 64 console that was manufactured by iQue, a joint venture between Nintendo and Taiwanese-American scientist Wei Yen after China had banned the sale of home video games. Its Chinese name is Shén Yóu Ji (), literally "God Gaming Machine". Shényóu () is a double entendre as "to make a mental journey". It was never released in any English-speaking countries, but the name "iQue Player" appears in the instruction manual. The console and its controller are one unit, plugging directly into the television. A box accessory allows multiplayer gaming.
History
Development
China has a large black market for video games and usually only a few games officially make it to the Chinese market. Many Chinese gamers tend to purchase pirated cartridge or disc copies or download copied game files to play via emulator. Nintendo wanted to curb software piracy in China, and bypass the ban that the Chinese government has implemented on home game consoles since 2000. Nintendo partnered with Wei Yen, who had led past Nintendo product development such as the Nintendo 64 console and the Wii Remote. Originally, the system would support games released on Nintendo consoles prior to the GameCube, including the NES, Super NES, and Nintendo 64, but it was decided only to include Nintendo 64 games. Additionally, The Legend of Zelda: Majora's Mask was planned and is shown on the back of the box, but was later cancelled.
The iQue Player was announced at Tokyo Game Show 2003. It was originally planned to play Super NES in addition to Nintendo 64 games, and had a release date set for mid-October with debut markets including Shanghai, Guangzhou, and Chengdu, expanding into the rest of China by the following spring. The system missed its mid-October launch. By November 21, it was available for purchase at Lik Sang.
Release
The iQue Player was released on November 18, 2003 with few launch games. Nintendo's strategy to
|
https://en.wikipedia.org/wiki/Von%20Zeipel%20theorem
|
In astrophysics, the von Zeipel theorem states that the radiative flux in a uniformly rotating star is proportional to the local effective gravity. The theorem is named after Swedish astronomer Edvard Hugo von Zeipel.
The theorem is:
where the luminosity and mass are evaluated on a surface of constant pressure . The effective temperature can then be found at a given colatitude from the local effective gravity:
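The formulas referred to above were stripped during extraction. In the standard textbook statement (which may differ in notation from the original article) they read:

```latex
% Standard statement of the von Zeipel theorem; a reconstruction, not
% necessarily the exact notation of the original article.
\[
  \mathbf{F}(\theta) \;=\; -\,\frac{L(P)}{4\pi G\,M_{*}(P)}\,
      \mathbf{g}_{\mathrm{eff}}(\theta),
  \qquad
  T_{\mathrm{eff}}(\theta) \;\propto\; g_{\mathrm{eff}}^{1/4}(\theta),
\]
% where L(P) and M_*(P) are the luminosity and mass evaluated on the
% surface of constant pressure P, and \theta is the colatitude.
```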
This relation ignores the effect of convection in the envelope, so it primarily applies to early-type stars.
According to the theory of rotating stars, if the rotational velocity of a star depends only on the radius, it cannot simultaneously be in thermal and hydrostatic equilibrium. This is called the von Zeipel paradox. The paradox is resolved, however, if the rotational velocity also depends on height, or there is a meridional circulation. A similar situation may arise in accretion disks.
|
https://en.wikipedia.org/wiki/Tax-benefit%20model
|
A tax-benefit model is a form of microsimulation model. It is usually based on a representative or administrative data set and certain policy rules. These models are used to cost certain policy reforms and to determine the winners and losers of reform. One example is EUROMOD, which models taxes and benefits for 27 EU states, and its post-Brexit offshoot, UKMOD.
Overview
Tax-benefit models are used by policy makers and researchers to examine the effects of proposed or hypothetical policy changes on income inequality, poverty and the government budget. Their primary advantage over the conventional cross-country comparison method is that they are very powerful at evaluating policy changes not only ex post, but also ex ante.
Generally, tax-benefit models can simulate income taxes, property taxes, social contributions, social assistance, income benefits and other benefits.
The underlying micro-data are obtained mainly through household surveys. These data include information about households' income, expenditure and family composition.
Most of the tax-benefit models are operated by governments or research institutions. Very few models are publicly available.
Depending on their purpose, tax-benefit models may or may not ignore behavioral responses of individuals.
General framework
The basic steps in conducting research using a simple tax-benefit model are:
Gross micro-data describing households' income, expenditure and family composition are collected and processed;
These data enter a tax-benefit model;
First simulation takes place;
Disposable income of each household is calculated and the results of the simulation are summarized;
A set of rules describing the changed policies enters the model and the second simulation takes place;
Disposable income of each household is calculated and the results of the simulation are summarized;
The impact of the set of policy changes is evaluated by comparing the results from the two simulations.
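The workflow above can be sketched with a toy model; all policy rules and figures below are hypothetical, purely for illustration:

```python
def disposable_income(gross, tax_rule, benefit_rule):
    """Disposable income = gross income − taxes + benefits."""
    return gross - tax_rule(gross) + benefit_rule(gross)

# Hypothetical policy rules (not from any real tax-benefit system):
baseline_tax = lambda y: 0.20 * y                     # flat 20% income tax
reform_tax   = lambda y: 0.20 * max(0.0, y - 10_000)  # reform adds a 10k allowance
benefit      = lambda y: 5_000.0 if y < 20_000 else 0.0

# A toy micro-dataset of household gross incomes.
households = [12_000.0, 25_000.0, 60_000.0]

# First simulation (baseline rules), then second simulation (reform rules).
baseline = [disposable_income(y, baseline_tax, benefit) for y in households]
reform   = [disposable_income(y, reform_tax,   benefit) for y in households]

# Comparing the two simulations identifies winners and losers of the reform.
impact = [r - b for b, r in zip(baseline, reform)]
assert all(d >= 0 for d in impact)  # every household gains from the allowance
assert sum(reform) > sum(baseline)  # ...at a cost to the government budget
```

Real tax-benefit models such as EUROMOD apply far richer rule sets to representative survey micro-data, but the comparison logic is the same.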
A dynamic tax-benefit model PoliSim's webpage p
|
https://en.wikipedia.org/wiki/Supernova%20nucleosynthesis
|
Supernova nucleosynthesis is the nucleosynthesis of chemical elements in supernova explosions.
In sufficiently massive stars, the nucleosynthesis by fusion of lighter elements into heavier ones occurs during sequential hydrostatic burning processes called helium burning, carbon burning, oxygen burning, and silicon burning, in which the byproducts of one nuclear fuel become, after compressional heating, the fuel for the subsequent burning stage. In this context, the word "burning" refers to nuclear fusion and not a chemical reaction.
During hydrostatic burning these fuels synthesize overwhelmingly the alpha nuclides (), nuclei composed of integer numbers of helium-4 nuclei. A rapid final explosive burning is caused by the sudden temperature spike owing to passage of the radially moving shock wave that was launched by the gravitational collapse of the core. W. D. Arnett and his Rice University colleagues demonstrated that the final shock burning would synthesize the non-alpha-nucleus isotopes more effectively than hydrostatic burning was able to do, suggesting that the expected shock-wave nucleosynthesis is an essential component of supernova nucleosynthesis. Together, shock-wave nucleosynthesis and hydrostatic-burning processes create most of the isotopes of the elements carbon (), oxygen (), and elements with (from neon to nickel). As a result of the ejection of the newly synthesized isotopes of the chemical elements by supernova explosions, their abundances steadily increased within interstellar gas. That increase became evident to astronomers from the initial abundances in newly born stars exceeding those in earlier-born stars.
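The definition of the alpha nuclides (Z = 2k protons, A = 4k nucleons, i.e. whole numbers of helium-4 nuclei) can be made concrete with a short sketch; the range below, carbon-12 through nickel-56, is an illustrative choice:

```python
# Standard element symbols for the even-Z elements along the alpha chain.
SYMBOLS = {6: "C", 8: "O", 10: "Ne", 12: "Mg", 14: "Si", 16: "S",
           18: "Ar", 20: "Ca", 22: "Ti", 24: "Cr", 26: "Fe", 28: "Ni"}

def alpha_nuclides(k_min=3, k_max=14):
    """List the alpha nuclides from carbon-12 (k=3) to nickel-56 (k=14),
    where each nuclide is k helium-4 nuclei: Z = 2k, A = 4k."""
    return [f"{SYMBOLS[2 * k]}-{4 * k}" for k in range(k_min, k_max + 1)]

nuclides = alpha_nuclides()
assert nuclides[0] == "C-12" and nuclides[-1] == "Ni-56"
assert "Si-28" in nuclides  # silicon burning's fuel is itself an alpha nuclide
```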
Elements heavier than nickel are comparatively rare owing to the decline with atomic weight of their nuclear binding energies per nucleon, but they too are created in part within supernovae. Of greatest interest historically has been their synthesis by rapid capture of neutrons during the r-process, reflecting the common belief that sup
|
https://en.wikipedia.org/wiki/Superior%20vestibular%20nucleus
|
The superior vestibular nucleus (Bechterew's nucleus) is the dorso-lateral part of the vestibular nucleus and receives collaterals and terminals from the ascending branches of the vestibular nerve.
It sends uncrossed fibers to cranial nerves 3 and 4 via the medial longitudinal fasciculus (MLF).
|
https://en.wikipedia.org/wiki/Agglomerin
|
Agglomerins are bacterial natural products, identified as metabolites of Pantoea agglomerans, which was isolated in 1989 from river water in Kobe, Japan. They belong to the class of tetronate antibiotics, which include tetronomycin, tetronasin, and abyssomicin C. The members of the agglomerins differ only in the composition of the acyl chain attached to the tetronate ring. They possess antibiotic activity against anaerobic bacteria and weak activity against aerobic bacteria in vitro. The structures were solved in 1990. Agglomerin A is the major component (38%), followed by agglomerin B (30%), agglomerin C (24%), and agglomerin D (8%).
Biosynthesis
The biosynthetic gene cluster for agglomerins is 12 kb, and codes for 7 open reading frames. The glyceryl-S-ACP is derived from D-1,3-bisphosphoglycerate by Agg2 (glyceryl-S-ACP synthase) and Agg3 (acyl carrier protein). The acyl chain is taken from primary metabolism as a 3-oxoacyl-CoA thioester. The glyceryl-S-ACP and 3-oxoacyl-CoA thioester are joined by Agg1, a FabH-like ketosynthase, forming new C-C and C-O bonds. The primary alcohol of the intermediate 4 is then acylated by Agg4, using acetyl-CoA, before the abstraction of a proton and concomitant loss of acetate catalyzed by Agg5 to generate the exocyclic double bond.
|
https://en.wikipedia.org/wiki/Calcium%3Acation%20antiporter
|
The Ca2+:cation antiporter (CaCA) family (TC# 2.A.19) is a member of the cation diffusion facilitator (CDF) superfamily. This family should not be confused with the Ca2+:H+ Antiporter-2 (CaCA2) Family (TC# 2.A.106) which belongs to the Lysine Exporter (LysE) Superfamily. Proteins of the CaCA family are found ubiquitously, having been identified in animals, plants, yeast, archaea and divergent bacteria. Members of this family facilitate the antiport of calcium ion with another cation.
Homology
Members of the CaCA family exhibit widely divergent sequences. Several homologues appear to have arisen by a tandem intragenic duplication event. The most conserved portions of this repeat element, α1 and α2, are found in putative TMSs 2-3 and TMSs 7-8. These conserved sequences are important for transport function and may form an intramembranous pore/loop-like structure. These carriers function primarily in Ca2+ extrusion.
The phylogenetic tree for the CaCA family reveals at least six major branches. Two clusters consist exclusively of animal proteins, a third contains several bacterial and archaeal proteins, a fourth possesses yeast, plant and blue green bacterial homologues, the fifth contains only the ChaA Ca2+:H+ antiporter of E. coli and the sixth contains only one distant S. cerevisiae homologue of unknown function. Several homologues may be present in a single organism. This fact and the shape of the tree suggest either that isoforms of these proteins arose by gene duplication before the three domains of life split off from each other or that horizontal gene transfer has occurred between these domains.
Homologues from several cyanobacteria have been characterized. They play important roles in salt tolerance.
Subfamilies
The CaCA family is composed of at least five subfamilies:
K+-independent exchangers
Na+/Ca2+ exchangers (NCXs)
Cation/Ca2+ exchangers (CCXs)
YRBG transporters
Cation exchangers (CAXs)
A representative list of CaCA family members can be found
|
https://en.wikipedia.org/wiki/Manfredo%20do%20Carmo
|
Manfredo Perdigão do Carmo (15 August 1928, Maceió – 30 April 2018, Rio de Janeiro) was a Brazilian mathematician. He spent most of his career at IMPA and is seen as the doyen of differential geometry in Brazil.
Education and career
Do Carmo studied civil engineering at the University of Recife from 1947 to 1951. After working a few years as an engineer, he accepted a teaching position at the newly created Institute of Physics and Mathematics at Recife.
At the suggestion of Elon Lima, in 1959 he went to the Instituto Nacional de Matemática Pura e Aplicada to improve his background, and in 1960 he moved to the US to pursue a Ph.D. in mathematics at the University of California, Berkeley under the supervision of Shiing-Shen Chern. He defended his thesis, entitled "The Cohomology Ring of Certain Kahlerian Manifolds", in 1963.
After working again at University of Recife and at the University of Brasilia, in 1966 he became professor at Instituto Nacional de Matemática Pura e Aplicada (IMPA) in Rio de Janeiro. From 2003 to his death he was emeritus professor at the same institution.
Do Carmo was a Guggenheim Fellow in 1965 and 1968. In 1978 he was invited speaker at the International Congress of Mathematicians held in Helsinki. In 1991 he obtained a Doctorate honoris causa from Federal University of Alagoas and in 2012 from University of Murcia and from Federal University of Amazonas.
He served as president of the Brazilian Mathematical Society in the term 1971–1973. He was elected a member of the Brazilian Academy of Sciences in 1970, a member of The World Academy of Sciences (TWAS) in 1997 and a fellow of the American Mathematical Society in 2013.
Among his awards, he received the Prêmio Almirante Álvaro Alberto from the National Council for Scientific and Technological Development in 1984, the TWAS Prize in Mathematics in 1992, the National Order of Scientific Merit in 1995 and the Comenda Graciliano Ramos from the municipality of Maceió in 2000.
Do Carmo died on 30 April
|
https://en.wikipedia.org/wiki/Disk%20mirroring
|
In data storage, disk mirroring is the replication of logical disk volumes onto separate physical hard disks in real time to ensure continuous availability. It is most commonly used in RAID 1. A mirrored volume is a complete logical representation of separate volume copies.
In a disaster recovery context, mirroring data over long distance is referred to as storage replication. Depending on the technologies used, replication can be performed synchronously, asynchronously, semi-synchronously, or point-in-time. Replication is enabled via microcode on the disk array controller or via server software. It is typically a proprietary solution, not compatible between various data storage device vendors.
Mirroring is typically only synchronous. Synchronous writing typically achieves a recovery point objective (RPO) of zero lost data. Asynchronous replication can achieve an RPO of just a few seconds while the remaining methodologies provide an RPO of a few minutes to perhaps several hours.
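The synchronous-mirroring guarantee can be sketched in a few lines: a write is acknowledged only once every replica has it, so a single-disk failure loses no acknowledged data (a toy model, not a real RAID implementation):

```python
class MirroredVolume:
    """Toy synchronous RAID 1 mirror: a write is acknowledged only after
    it has been applied to every replica, giving an RPO of zero."""

    def __init__(self, n_replicas=2):
        self.replicas = [dict() for _ in range(n_replicas)]

    def write(self, block, data):
        for r in self.replicas:  # synchronous: update all copies before the ack
            r[block] = data
        return True              # acknowledgement to the caller

    def read(self, block, failed=()):
        for i, r in enumerate(self.replicas):
            if i not in failed and block in r:
                return r[block]  # any surviving replica can serve the read
        raise IOError("all replicas failed")

vol = MirroredVolume()
vol.write(0, b"payload")
# Even if replica 0 fails, the data survives on the mirror — zero data loss.
assert vol.read(0, failed={0}) == b"payload"
```

Asynchronous replication would instead acknowledge after the first copy and propagate to the others later, which is why its RPO is non-zero.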
Disk mirroring differs from file shadowing that operates on the file level, and disk snapshots where data images are never re-synced with their origins.
Overview
Typically, mirroring is provided in either hardware solutions such as disk arrays, or in software within the operating system (such as Linux mdadm and device mapper). Additionally, file systems like Btrfs or ZFS provide integrated data mirroring. There are additional benefits from Btrfs and ZFS, which maintain both data and metadata integrity checksums, making themselves capable of detecting bad copies of blocks, and using mirrored data to pull up data from correct blocks.
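As a sketch of software mirroring with Linux mdadm (the device names /dev/sdb and /dev/sdc and the array name /dev/md0 are assumptions for illustration; the create command overwrites the named devices):

```shell
# Create a two-device RAID 1 mirror; each member holds a full copy of the data.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# The kernel resynchronises the mirror in the background; progress is visible here.
cat /proc/mdstat
```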
There are several scenarios for what happens when a disk fails. In a hot swap system, in the event of a disk failure, the system itself typically diagnoses a disk failure and signals a failure. Sophisticated systems may automatically activate a hot standby disk and use the remaining active disk to copy live data onto this disk. Alternatively, a new disk is
|
https://en.wikipedia.org/wiki/DEPBT
|
DEPBT (3-(diethoxyphosphoryloxy)-1,2,3-benzotriazin-4(3H)-one) is a peptide coupling reagent used in peptide synthesis. It shows remarkable resistance to racemization.
Fmoc-Dab(Mtt)-OH, a commercially available amino acid building block for solid-phase peptide synthesis (SPPS), was proven to undergo rapid lactamization, instead of reacting with the N-terminal end of the peptide. Compared with other commercially available coupling reagents, DEPBT has shown superior performance in coupling Fmoc-Dab(Mtt)-OH to the N-terminal end of peptide during SPPS, though the approach was regarded as 'costly and tedious'.
See also
BOP
PyBOP
|
https://en.wikipedia.org/wiki/Microconnect%20distributed%20antenna
|
Microconnect distributed antennas (MDA) are small-cell local area (100 metre radius) transmitter-receivers usually fitted to lampposts and other street furniture in order to provide Wireless LAN, GSM and GPRS connectivity. They are therefore less obtrusive than the usual masts and antennas used for these purposes and meet with less public opposition.
Service provided
The service provided by microconnect distributed antennas covers heavily populated urban areas, addressing mobile and radio connectivity. MDA is also suited to bustling cities and historical areas where mobile coverage is impaired. Many low-power, small antennas together perform as well as or better than a traditional macrocellular site over the same area. A centrally located radio base station connects to the antennas by optical fibre cable. Each antenna point contains a 63–65 GHz wireless unit alongside a large memory store providing proxy and cache services. Users can obtain 64 kbit/s uplink / 384 kbit/s downlink service. Multiple operators can share this infrastructure, so that different service providers can use the technology to benefit their customers.
Four part MDA System
The four-part MDA system comprises the DAS (Distributed Antenna System) master unit, the access network optical fibre, the remote Radio over Fibre (RoF) units (remote antenna points), and the supervisory and management facilities. The system is compatible with the GSM (2G and 2.5G) and 3G network requirements of mobile users.
The MDA is an economical device offering a low-cost way to give more people access to mobile and broadband connectivity. The solution also has a low environmental impact and avoids cluttering historical parts of an urban area. As communities become more and more dependent on technology, solutions like the MDA system are well suited to preserving an area's natural beauty.
See also
Distributed antenna system
In-Building Cellular Enhancement System
|
https://en.wikipedia.org/wiki/Neural%20substrate
|
A neural substrate is a term used in neuroscience to indicate the part of the central nervous system (i.e., brain and spinal cord) that underlies a specific behavior, cognitive process, or psychological state. Neural is an adjective relating to "a nerve or the nervous system", while a substrate is an "underlying substance or layer".
Some examples are the neural substrates of language acquisition, memory, prediction and reward, pleasure, facial recognition, envisioning the future, intentional empathy, religious experience, spontaneous musical performance, and anxiety.
See also
Neural correlate
Neural substrates of visual imagery
|
https://en.wikipedia.org/wiki/StAR-related%20transfer%20domain
|
START (StAR-related lipid-transfer) is a lipid-binding domain in StAR, HD-ZIP and signalling proteins. The archetypical domain is found in StAR (Steroidogenic acute regulatory protein), a mitochondrial protein that is synthesized in steroid-producing cells. StAR initiates steroid production by mediating the delivery of cholesterol to the first enzyme in the steroidogenic pathway. The START domain is critical for this activity, perhaps through the binding of cholesterol. Following the discovery of StAR, 15 START-domain-containing proteins (termed STARD1 through STARD15) were subsequently identified in vertebrates, as well as other related proteins.
Thousands of proteins containing at least one START domain have been identified in invertebrates, bacteria and plants; together they form a larger superfamily, variously known as START, Bet v1-like or SRPBCC (START/RHOalphaC/PITP/Bet v1/CoxG/CalC) domain proteins, all of which bind hydrophobic ligands. In the case of plants, many of the START proteins fall into the category of putative lipid/sterol-binding homeodomain (HD) transcription factors or HD-START proteins.
Representatives of the START domain family bind different substances or ligands such as sterols (e.g., StAR or STARD1) and lipids like phosphatidylcholine (phosphatidylcholine transfer protein, also called PCTP or STARD2) and have enzymatic activities. Ligand binding by the START domain in multidomain proteins can also regulate the activities of the other domains, such as the RhoGAP domain, the homeodomain and the thioesterase domain.
Structure
The crystal structure of START domain of human MLN64 shows an alpha/beta fold built around a U-shaped incomplete beta-barrel. Most importantly, the interior of the protein encompasses a 26 × 12 × 11-Angstrom hydrophobic tunnel that is apparently large enough to bind a single cholesterol molecule. The START domain structure revealed an unexpected similarity to that of the birch pollen allergen Bet v 1 and to bacterial polyketide
|
https://en.wikipedia.org/wiki/Umlaut%20%28diacritic%29
|
The umlaut is the diacritical mark used to indicate in writing (as part of the letters ä, ö, and ü) the result of the historical sound shift due to which former back vowels are now pronounced as front vowels (for example a, o, and u as ä, ö, and ü). (The term [Germanic] umlaut is also used for the underlying historical sound shift process.)
In its contemporary printed form, the mark consists of two dots placed over the letter to represent the changed vowel sound. It looks identical to the diaeresis mark used in other European languages and is represented by the same Unicode code point. The word trema, used in linguistics and also classical scholarship, describes the form of both the umlaut diacritic and the diaeresis rather than their function and is used in those contexts to refer to either.
German origin and current usage
(literally "changed sound") is the German name of the sound shift phenomenon also known as i-mutation. In German, this term is also used for the corresponding letters ä, ö, and ü (and the diphthong äu) and the sounds that these letters represent. In German, the combination of a letter with the diacritical mark is called , while the marks themselves are called (literally "umlaut sign").
In German, the umlaut diacritic indicates that the short back vowels and the diphthong are pronounced ("shifted forward in the mouth") as follows:
→
→
→
→
And the long back vowels are pronounced in the front of the mouth as follows:
→ very formal/old fashioned , in most speakers (resulting in a merger with )
→
→
In modern German orthography, the affected graphemes , , , and are written as , , , and , i.e. they are written with the umlaut diacritic, which looks identical to the diaeresis mark used in other European languages and is represented by the same Unicode character.
History
The Germanic umlaut is a specific historical phenomenon of vowel-fronting in German and other Germanic languages, including English. English examples are
|
https://en.wikipedia.org/wiki/Teichm%C3%BCller%20space
|
In mathematics, the Teichmüller space of a (real) topological (or differential) surface is a space that parametrizes complex structures on up to the action of homeomorphisms that are isotopic to the identity homeomorphism. Teichmüller spaces are named after Oswald Teichmüller.
Each point in a Teichmüller space may be regarded as an isomorphism class of "marked" Riemann surfaces, where a "marking" is an isotopy class of homeomorphisms from to itself. It can be viewed as a moduli space for marked hyperbolic structure on the surface, and this endows it with a natural topology for which it is homeomorphic to a ball of dimension for a surface of genus . In this way Teichmüller space can be viewed as the universal covering orbifold of the Riemann moduli space.
The Teichmüller space has a canonical complex manifold structure and a wealth of natural metrics. The study of geometric features of these various structures is an active body of research.
The sub-field of mathematics that studies the Teichmüller space is called Teichmüller theory.
History
Moduli spaces for Riemann surfaces and related Fuchsian groups have been studied since the work of Bernhard Riemann (1826-1866), who knew that parameters were needed to describe the variations of complex structures on a surface of genus . The early study of Teichmüller space, in the late nineteenth–early twentieth century, was geometric and founded on the interpretation of Riemann surfaces as hyperbolic surfaces. Among the main contributors were Felix Klein, Henri Poincaré, Paul Koebe, Jakob Nielsen, Robert Fricke and Werner Fenchel.
The main contribution of Teichmüller to the study of moduli was the introduction of quasiconformal mappings to the subject. They allow us to give much more depth to the study of moduli spaces by endowing them with additional features that were not present in the previous, more elementary works. After World War II the subject was developed further in this analytic vein, in particular by L
|
https://en.wikipedia.org/wiki/Nuclear%20or%20Not%3F
|
Nuclear or Not? Does Nuclear Power Have a Place in a Sustainable Energy Future? is a 2007 book edited by Professor David Elliott. The book offers various views and perspectives on nuclear power. Authors include:
Paul Allen from the Centre for Alternative Technology
Dr Ian Fairlie, who served on the Committee Examining Radiation Risks of Internal Emitters (CERRIE)
Stephen Kidd of the World Nuclear Association
Professor Elliott calls for continued debate on the nuclear power issue. He has worked with the United Kingdom Atomic Energy Authority before moving to the Open University where he is Professor of Technology Policy and has developed courses on technological innovation, focusing in particular on renewable energy technology.
See also
List of books about nuclear issues
Nuclear Power and the Environment
Reaction Time
Contesting the Future of Nuclear Power
Non-Nuclear Futures
|
https://en.wikipedia.org/wiki/Gene%20regulatory%20circuit
|
Genetic regulatory circuits (also referred to as transcriptional regulatory circuits) are a concept that evolved from the operon model discovered by François Jacob and Jacques Monod. They are functional clusters of genes that impact each other's expression through inducible transcription factors and cis-regulatory elements.
Genetic regulatory circuits are analogous in many ways to electronic circuits in how they use signal inputs and outputs to determine gene regulation. Like electronic circuits, their organization determines their efficiency; circuits working in series, for example, have been demonstrated to give greater sensitivity of gene regulation. They also use inputs, such as trans- and cis-acting sequence regulators of genes, and outputs, such as gene expression level. Depending on the type of circuit, they respond constantly to outside signals, such as sugar and hormone levels, that determine how the circuit will return to its fixed-point or periodic equilibrium state. Genetic regulatory circuits can also be evolutionarily rewired without loss of the original transcriptional output level. This rewiring is defined by changes in regulatory-target gene interactions, while the regulatory factors and target genes themselves are conserved.
In-silico application
These circuits can be modelled in silico to predict the dynamics of a genetic system. Having constructed a computational model of the natural circuit of interest, one can use the model to make testable predictions about circuit performance. When designing a synthetic circuit for a specific engineering task, a model is useful for identifying necessary connections and parameter operating regimes that give rise to a desired functional output. Similarly, when studying a natural circuit, one can use the model to identify the parts or parameter values necessary for a desired biological outcome. In other words, computational modelling and experimental synthetic perturbations can be used to probe
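As a hedged illustration of such in-silico modelling, a minimal mutual-repression ("toggle switch") circuit of two genes can be integrated numerically; the function name and all parameter values below are illustrative assumptions, not taken from the text:

```python
def simulate_toggle(a=10.0, n=2, u0=2.0, v0=0.0, dt=0.01, steps=5000):
    """Euler integration of a two-gene mutual-repression ("toggle switch") circuit.

    du/dt = a / (1 + v**n) - u   (gene u is repressed by v)
    dv/dt = a / (1 + u**n) - v   (gene v is repressed by u)
    """
    u, v = u0, v0
    for _ in range(steps):
        du = a / (1.0 + v ** n) - u
        dv = a / (1.0 + u ** n) - v
        u, v = u + dt * du, v + dt * dv
    return u, v
```

Starting with a slight bias toward one gene, the circuit settles into the corresponding stable fixed point (one gene highly expressed, the other repressed), mimicking how such a circuit commits to one of two equilibrium states.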
|
https://en.wikipedia.org/wiki/Beam%20spoiler
|
A beam spoiler is a piece of material, placed into the path of the photon beam in radiotherapy. The purpose of the spoiler is to reduce the depth of the maximum radiation dosage.
Composition
The beam spoiler is composed of a sheet of material which has a low atomic number, typically lucite, the thickness of which is varied according to the beam energy and the distance by which the radiation dose must be shifted.
Action
As the primary photon beam passes through the plate, secondary electrons are generated. The beam exiting the spoiler is a combination of the spoiler-attenuated photons and the spoiler-generated electrons. The electron component alters the depth dose in the buildup region in a way that depends on the photon beam energy, the field size, and the distance of the spoiler from the treatment surface.
|
https://en.wikipedia.org/wiki/Semidefinite%20embedding
|
Maximum Variance Unfolding (MVU), also known as Semidefinite Embedding (SDE), is an algorithm in computer science that uses semidefinite programming to perform non-linear dimensionality reduction of high-dimensional vectorial input data.
It is motivated by the observation that kernel Principal Component Analysis (kPCA) does not reduce the data dimensionality, as it leverages the Kernel trick to non-linearly map the original data into an inner-product space.
Algorithm
MVU creates a mapping from the high dimensional input vectors to some low dimensional Euclidean vector space in the following steps:
A neighbourhood graph is created. Each input is connected with its k-nearest input vectors (according to Euclidean distance metric) and all k-nearest neighbors are connected with each other. If the data is sampled well enough, the resulting graph is a discrete approximation of the underlying manifold.
The neighbourhood graph is "unfolded" with the help of semidefinite programming. Instead of learning the output vectors directly, the semidefinite programming aims to find an inner product matrix that maximizes the pairwise distances between any two inputs that are not connected in the neighbourhood graph while preserving the nearest neighbors distances.
The low-dimensional embedding is finally obtained by application of multidimensional scaling on the learned inner product matrix.
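This final step can be sketched with NumPy (a minimal illustration; `gram_to_embedding` is a hypothetical helper name, and the inner-product matrix K is assumed to come from the preceding semidefinite-programming stage):

```python
import numpy as np

def gram_to_embedding(K, d):
    """Recover a d-dimensional embedding from a learned inner-product (Gram)
    matrix: the classical multidimensional-scaling step that concludes MVU."""
    K = (K + K.T) / 2.0                      # enforce symmetry
    vals, vecs = np.linalg.eigh(K)           # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:d]       # keep the d largest eigenpairs
    vals, vecs = vals[order], vecs[:, order]
    # Coordinates: eigenvectors scaled by the square roots of the eigenvalues.
    return vecs * np.sqrt(np.maximum(vals, 0.0))
```

If K is exactly the Gram matrix of some centered point set, the recovered coordinates reproduce that set up to rotation, so all pairwise distances are preserved.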
The steps of applying semidefinite programming followed by a linear dimensionality reduction step to recover a low-dimensional embedding into a Euclidean space were first proposed by Linial, London, and Rabinovich.
Optimization formulation
Let X_1, …, X_n be the original input vectors and Y_1, …, Y_n be the embedding. If i and j are two neighbors, then the local isometry constraint that needs to be satisfied is:

|Y_i − Y_j|² = |X_i − X_j|²

Let G and K be the Gram matrices of X and Y respectively (i.e. G_ij = X_i · X_j and K_ij = Y_i · Y_j). We can express the above constraint for every pair of neighboring points i, j in terms of K:

K_ii + K_jj − 2K_ij = G_ii + G_jj − 2G_ij

In addition, we also want to constrain the embedding to center at the origin:

Σ_i Y_i = 0, which is equivalent to Σ_ij K_ij = 0.
As describe
|
https://en.wikipedia.org/wiki/SSLeay
|
SSLeay is an open-source SSL implementation. It was developed by Eric Andrew Young and Tim J. Hudson as an SSL 3.0 implementation using RC2 and RC4 encryption. The recommended pronunciation is to say each letter, s-s-l-e-a-y; the "eay" comes from the initials of its first developer, Eric A. Young ("eay"). SSLeay also included an implementation of DES from earlier work by Eric Young, which was believed to be the first open-source implementation of DES. Development of SSLeay unofficially mostly ended, and volunteers forked the project under the OpenSSL banner around December 1998, when Tim and Eric both commenced working for RSA Security in Australia.
SSLeay
SSLeay was developed by Eric A. Young, starting in 1995. Windows support was added by Tim J. Hudson. Patches to open source applications to support SSL using SSLeay were produced by Tim Hudson. Development by Young and Hudson ceased in 1998. The SSLeay library and codebase is licensed under its own SSLeay License, a form of free software license. The SSLeay License is a BSD-style open-source license, almost identical to a four-clause BSD license.
SSLeay supports X.509v3 certificates and PKCS#10 certificate requests. It supports SSL2 and SSL3. Also supported is TLSv1.
The first secure FTP implementation was created under BSD using SSLeay by Tim Hudson.
The first open source Certifying Authority implementation was created with CGI scripts using SSLeay by Clifford Heath.
Forks
OpenSSL is a fork and successor project to SSLeay and has a similar interface to it. After Young and Hudson joined RSA Corporation, volunteers forked SSLeay and continued development as OpenSSL.
BSAFE SSL-C is a fork of SSLeay developed by Eric A. Young and Tim J. Hudson for RSA Corporation. It was released as part of BSAFE SSL-C.
|
https://en.wikipedia.org/wiki/RS-449
|
The RS-449 specification, also known as EIA-449 or TIA-449, defines the functional and mechanical characteristics of the interface between data terminal equipment, typically a computer, and data communications equipment, typically a modem or terminal server. The full title of the standard is EIA-449 General Purpose 37-Position and 9-Position Interface for Data Terminal Equipment and Data Circuit-Terminating Equipment Employing Serial Binary Data Interchange.
449 was part of an effort to replace RS-232C, offering much higher performance and longer cable lengths while using the same DB-25 connectors. This was initially split into two closely related efforts, RS-422 and RS-423. As feature creep set in, the number of required pins began to grow beyond what a DB-25 could handle, and the RS-449 effort started to define a new connector.
449 emerged as an unwieldy system using a large DC-37 connector along with a separate DE-9 connector if the 422 protocol was used. The resulting cable mess was already dismissed as hopeless before the standard was even finalized. The effort was eventually abandoned in favor of RS-530, which used a single DB-25 connector.
Background
During the late 1970s, the EIA began developing two new serial data standards to replace RS-232. RS-232 had a number of issues that limited its performance and practicality. Among these was the relatively large voltages used for signalling, +5 and -5V for mark and space. To supply these, a +12V power supply was typically required, which made it somewhat difficult to implement in a market that was rapidly being dominated by +5/0V transistor-transistor logic (TTL) circuitry and even lower-voltage CMOS implementations. These high voltages and unbalanced communications also resulted in relatively short cable lengths, nominally set to a maximum of , although in practice they could be somewhat longer if running at slower speeds.
The reason for the large voltages was due to ground voltages. RS-232 included both a pr
|
https://en.wikipedia.org/wiki/Neustar
|
Neustar, Inc. is an American technology company that provides real-time information and analytics for risk, digital performance, defense, telecommunications, entertainment, and marketing industries, and also provides clearinghouse and directory services to the global communications and Internet industries. Neustar was the domain name registry for a number of top-level domains, including .biz, .us (on behalf of United States Department of Commerce), .co, .nyc (on behalf of the city of New York), and .in (on behalf of the National Internet Exchange of India) until the sale of the division to GoDaddy in 2020.
Until the end of 2018, Neustar was also a North American Numbering Plan Administrator on behalf of the Federal Communications Commission, a role continued from its founder, Lockheed Martin. Its first contract was granted in 1997 and was renewed under its spin-off in 1999, 2004, and 2012. Since 2019, it has been replaced by Somos, Inc.
History
Neustar was founded in Delaware in 1998 as a business unit within Lockheed Martin Corporation. It was spun off to keep the neutrality that was essential to its original core contract with the nation's telecommunications providers. In November 2006, it bought Followap, Inc., a UK-based enabler of mobile instant messaging services.
In 2010, Lisa Hook was named the firm's President and Chief Operating Officer. In January 2010, The Washington Post reported that under Hook's leadership, Neustar was chosen by a consortium of Hollywood studios and technology executives to manage a system whereby consumers could access movies and other video entertainment from multiple digital devices. This system was named "UltraViolet". Over the next years, Neustar bought several companies: TARGUSInfo (2011), Aggregate Knowledge (2013) and .CO Internet (2014).
Neustar entered into an asset purchase agreement in 2015 with Transaction Network Services for their caller authentication assets. The following year, Neustar planned to split into t
|
https://en.wikipedia.org/wiki/Buchholz%20psi%20functions
|
Buchholz's psi-functions are a hierarchy of single-argument ordinal functions introduced by German mathematician Wilfried Buchholz in 1986. These functions are a simplified version of the -functions, but nevertheless have the same strength as those. Later on this approach was extended by Jäger and Schütte.
Definition
Buchholz defined his functions as follows. Define:
Ωξ = ℵξ if ξ > 0, Ω0 = 1
The functions ψv(α) for α an ordinal, v an ordinal at most ω, are defined by induction on α as follows:
ψv(α) is the smallest ordinal not in Cv(α)
where Cv(α) is the smallest set such that
Cv(α) contains all ordinals less than Ωv
Cv(α) is closed under ordinal addition
Cv(α) is closed under the functions ψu (for u≤ω) applied to arguments less than α.
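The closure clauses above can be restated compactly (a sketch in standard notation, following the list above):

```latex
\psi_v(\alpha) \;=\; \min\{\gamma \,:\, \gamma \notin C_v(\alpha)\},
\qquad \text{where } C_v(\alpha) \text{ is the least set such that:}
\begin{aligned}
&\text{(i)}\;\; \beta < \Omega_v \;\Rightarrow\; \beta \in C_v(\alpha),\\
&\text{(ii)}\;\; \beta,\gamma \in C_v(\alpha) \;\Rightarrow\; \beta+\gamma \in C_v(\alpha),\\
&\text{(iii)}\;\; u \le \omega,\ \xi \in C_v(\alpha),\ \xi < \alpha \;\Rightarrow\; \psi_u(\xi) \in C_v(\alpha).
\end{aligned}
```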
The limit of this notation is the Takeuti–Feferman–Buchholz ordinal.
Properties
Let be the class of additively principal ordinals. Buchholz showed the following properties of these functions:
Fundamental sequences and normal form for Buchholz's function
Normal form
The normal form for 0 is 0. If is a nonzero ordinal number then the normal form for is where and and each is also written in normal form.
Fundamental sequences
The fundamental sequence for an ordinal number with cofinality is a strictly increasing sequence with length and with limit , where is the -th element of this sequence. If is a successor ordinal then and the fundamental sequence has only one element . If is a limit ordinal then .
For nonzero ordinals , written in normal form, fundamental sequences are defined as follows:
If where then and
If , then and ,
If , then and ,
If then and (and note: ),
If and then and ,
If and then and where .
Explanation
Buchholz works in Zermelo–Fraenkel set theory, meaning that every ordinal is equal to the set of all smaller ordinals. Then the condition means that the set includes all ordinals less than , in other words .
The condition means that set includes:
all ordinals from previous set ,
all ordinals that
|
https://en.wikipedia.org/wiki/LILRB4
|
Leukocyte immunoglobulin-like receptor subfamily B member 4 is a protein that in humans is encoded by the LILRB4 gene.
This gene is a member of the leukocyte immunoglobulin-like receptor (LIR) family, which is found in a gene cluster at chromosomal region 19q13.4. The encoded protein belongs to the subfamily B class of LIR receptors which contain two or four extracellular immunoglobulin domains, a transmembrane domain, and two to four cytoplasmic immunoreceptor tyrosine-based inhibitory motifs (ITIMs). The receptor is expressed on monocytic cells and transduces a negative signal that inhibits stimulation of an immune response. The receptor can also function in antigen capture and presentation. It is thought to control inflammatory responses and cytotoxicity to help focus the immune response and limit autoreactivity. LILRB4 has also been proposed as a potential target for tumor immunotherapy. It has been shown to be expressed on tumor-associated macrophages and to negatively regulate the immune response in tumors. The expression of LILRB4 on monocytic myeloid leukemia cells supports infiltration and inhibits T cell proliferation. Multiple transcript variants encoding different isoforms have been found for this gene.
Interactions
LILRB4 has been shown to interact with PTPN6 and INPP5D (SHIP-1).
See also
Cluster of differentiation
Immunoglobulin superfamily
|
https://en.wikipedia.org/wiki/Uno%20Platform
|
Uno Platform () is an open source cross-platform graphical user interface that allows WinUI and Universal Windows Platform (UWP) - based code to run on iOS, macOS, Linux, Android, and WebAssembly. Uno Platform is released under the Apache 2.0 license.
Applications can be built by using the UWP tools in Visual Studio on Windows, including XAML and C# Edit and Continue, and run on iOS, Android or in WebAssembly in a web browser. A plug-in for Microsoft Visual Studio is available from Microsoft's Visual Studio Marketplace. The community surrounding the Uno Platform open source project comes together at its annual conference, UnoConf.
See also
WebAssembly
Blazor
.NET Multi-platform App UI (.NET MAUI)
Windows App SDK
|
https://en.wikipedia.org/wiki/IBM%207090
|
The IBM 7090 is a second-generation transistorized version of the earlier IBM 709 vacuum tube mainframe computer that was designed for "large-scale scientific and technological applications". The 7090 is the fourth member of the IBM 700/7000 series scientific computers. The first 7090 installation was in December 1959. In 1960, a typical system sold for $2.9 million or could be rented for $63,500 a month.
The 7090 uses a 36-bit word length, with an address space of 32,768 words (15-bit addresses). It operates with a basic memory cycle of 2.18 μs, using the IBM 7302 Core Storage core memory technology from the IBM 7030 (Stretch) project.
With a processing speed of around 100 Kflop/s, the 7090 is six times faster than the 709, and could be rented for half the price. An upgraded version, the 7094, was up to twice as fast. Both the 7090 and the 7094 were withdrawn from sale on July 14, 1969, but systems remained in service for more than a decade after.
Development and naming
Although the 709 was a superior machine to its predecessor, the 704, it was being built and sold at the time that transistor circuitry was supplanting vacuum tube circuits. Hence, IBM redeployed its 709 engineering group to the design of a transistorized successor. That project became known as the 709-T (for transistorized), which, because of the sound when spoken, quickly shifted to the nomenclature 7090 (i.e., seven-oh-ninety). Similarly, related machines such as the 7070 and other 7000 series equipment were sometimes called by names of digit-digit-decade (e.g., seven-oh-seventy).
IBM 7094
An upgraded version, the IBM 7094, was first installed in September 1962. It has seven index registers, instead of three on the earlier machines. The 7094 console has a distinctive box on top that displays lights for the four new index registers. The 7094 introduced double-precision floating point and additional instructions, but is largely backward compatib
|
https://en.wikipedia.org/wiki/Plant%20ontology
|
Plant ontology (PO) is a collection of ontologies developed by the Plant Ontology Consortium. These ontologies describe anatomical structures and growth and developmental stages across Viridiplantae. The PO is intended for multiple applications, including genetics, genomics, phenomics, and development, taxonomy and systematics, semantic applications and education.
Project Members
Oregon State University
New York Botanical Garden
L. H. Bailey Hortorium at Cornell University
Ensembl
SoyBase
SSWAP
SGN
Gramene
The Arabidopsis Information Resource (TAIR)
MaizeGDB
University of Missouri at St. Louis
Missouri Botanical Garden
See also
Generic Model Organism Database
Open Biomedical Ontologies
OBO Foundry
|
https://en.wikipedia.org/wiki/Volume%20boot%20record
|
A volume boot record (VBR) (also known as a volume boot sector, a partition boot record or a partition boot sector) is a type of boot sector introduced by the IBM Personal Computer. It may be found on a partitioned data storage device, such as a hard disk, or an unpartitioned device, such as a floppy disk, and contains machine code for bootstrapping programs (usually, but not necessarily, operating systems) stored in other parts of the device. On non-partitioned storage devices, it is the first sector of the device. On partitioned devices, it is the first sector of an individual partition on the device, with the first sector of the entire device being a Master Boot Record (MBR) containing the partition table.
The code in volume boot records is invoked either directly by the machine's firmware or indirectly by code in the master boot record or a boot manager. Code in the MBR and VBR is in essence loaded the same way.
Invoking a VBR via a boot manager is known as chain loading. Some dual-boot systems, such as NTLDR (the boot loader for all releases of Microsoft's Windows NT-derived operating systems up to and including Windows XP and Windows Server 2003), take copies of the bootstrap code that individual operating systems install into a single partition's VBR and store them in disc files, loading the relevant VBR content from file after the boot loader has asked the user which operating system to bootstrap.
In Windows Vista, Windows Server 2008 and newer versions, NTLDR was replaced; the boot-loader functionality is instead provided by two new components: WINLOAD.EXE and the Windows Boot Manager.
In file systems such as FAT12 (except for in DOS 1.x), FAT16, FAT32, HPFS and NTFS, the VBR also contains a BIOS Parameter Block (BPB) that specifies the location and layout of the principal on-disk data structures for the file system. (A detailed discussion of the sector layout of FAT VBRs, the various FAT BPB versions and their entries can be found in the FAT article.)
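As an illustration of reading such a structure, the classic DOS 2.0 BPB fields sit at fixed offsets within a FAT volume boot record; the sketch below (the function name is a hypothetical choice) parses a few of them:

```python
import struct

def parse_bpb(vbr: bytes) -> dict:
    """Parse the classic DOS 2.0 BIOS Parameter Block fields from a FAT
    volume boot record (fields live at fixed offsets in the sector)."""
    bytes_per_sector, = struct.unpack_from("<H", vbr, 11)   # little-endian u16
    sectors_per_cluster = vbr[13]                           # u8
    reserved_sectors, = struct.unpack_from("<H", vbr, 14)   # sectors before first FAT
    num_fats = vbr[16]                                      # usually 2
    root_entries, = struct.unpack_from("<H", vbr, 17)       # FAT12/FAT16 root dir size
    return {
        "bytes_per_sector": bytes_per_sector,
        "sectors_per_cluster": sectors_per_cluster,
        "reserved_sectors": reserved_sectors,
        "num_fats": num_fats,
        "root_entries": root_entries,
    }
```

These offsets are the ones shared by all FAT BPB versions; later versions (FAT32, NTFS) extend the block with additional fields at higher offsets.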
|
https://en.wikipedia.org/wiki/Jonathan%20Pila
|
Jonathan Solomon Pila (born 1962) FRS is an Australian mathematician at the University of Oxford.
Education
Pila earned his bachelor's degree at the University of Melbourne in 1984. He was awarded a PhD from Stanford University in 1988, for research supervised by Peter Sarnak. His dissertation was entitled "Frobenius Maps of Abelian Varieties and Finding Roots of Unity in Finite Fields". In 2010 he received an MA from Oxford.
Career and research
Pila's research interests lie in number theory and model theory. A focus has been applying the theory of o-minimality to Diophantine problems. This work began with an early paper with Enrico Bombieri, and developed through collaborations with Alex Wilkie and Umberto Zannier. The techniques obtained have led to advances in Diophantine problems, including Pila's unconditional proof of the André–Oort conjecture for powers of the modular curve. Work by Pila and Jacob Tsimerman, demonstrated the André–Oort conjecture in the case of the Siegel modular variety.
Pila has held posts at Columbia University, McGill University, the University of Bristol and (as a visiting member) the Institute for Advanced Study. Pila also took a substantial break from professional mathematics to work in his family's manufacturing business.
Pila has been the Editor of Proceedings of the Edinburgh Mathematical Society, and of Algebra and Number Theory.
Awards and honours
Pila was awarded the Clay Research Award in 2011 for his resolution of the André–Oort conjecture in the case of products of modular curves. In June 2011, he was awarded the Senior Whitehead Prize by the London Mathematical Society. This prize is "awarded in recognition of work in, influence on, and service to mathematics, or lecturing gifts." Specifically, the citation recognized "his startling recent work on the André–Oort and Manin–Mumford conjectures. The approach he and his collaborators have developed, which combines analytic ideas with model theory, is entirely new and
|
https://en.wikipedia.org/wiki/Secure%20messaging
|
Secure messaging is a server-based approach to protecting sensitive data when it is sent beyond corporate borders, and it provides compliance with industry regulations such as HIPAA, GLBA and SOX. Advantages over classical secure e-mail are that confidential and authenticated exchanges can be started immediately by any internet user worldwide, since there is no requirement to install any software nor to obtain or distribute cryptographic keys beforehand. Secure messages provide non-repudiation, as the recipients (similar to online banking) are personally identified and transactions are logged by the secure email platform.
Functionality
Secure messaging works as an online messaging service. First, users enroll in a secure messaging platform. The user then logs into their account with a username and password (or strong authentication), much as with a web-based email account. From a message center, messages can be sent over a secure SSL connection, or via other equally protective methods, to any recipient. If a recipient is contacted for the first time, a message unlock code (MUC, see below) is needed to authenticate the recipient. Alternatively, secure messaging can be used from within any standard email program without installing software.
Secure delivery
Secure messaging supports several types of delivery: a secured web interface, S/MIME- or PGP-encrypted communication, or TLS-secured connections to email domains or individual email clients. A single secure message can be sent to multiple recipients, each with a different delivery type, without the sender having to manage the differences.
Trust management
Secure messaging relies on a web of trust. This method synthesizes the authentication approach of the web of trust, known from PGP, with the advantages of hierarchical structures, known from centralized PKI systems. Combined with certificates, these provide high-quality electronic identities. This approach focuses on the user and allows for immediate an
|
https://en.wikipedia.org/wiki/Web%20development
|
Web development is the work involved in developing a website for the Internet (World Wide Web) or an intranet (a private network). Web development can range from developing a simple single static page of plain text to complex web applications, electronic businesses, and social network services. A more comprehensive list of tasks to which Web development commonly refers may include Web engineering, Web design, Web content development, client liaison, client-side/server-side scripting, Web server and network security configuration, and e-commerce development.
Among Web professionals, "Web development" usually refers to the main non-design aspects of building Web sites: writing markup and coding. Web development may use content management systems (CMS) to make content changes easier and available with basic technical skills.
For larger organizations and businesses, Web development teams can consist of hundreds of people (Web developers) and follow standard methods like Agile methodologies while developing Web sites. Smaller organizations may only require a single permanent or contracting developer, or secondary assignment to related job positions such as a graphic designer or information systems technician. Web development may be a collaborative effort between departments rather than the domain of a designated department. There are three kinds of Web developer specialization: front-end developer, back-end developer, and full-stack developer. Front-end developers are responsible for behavior and visuals that run in the user browser, while back-end developers deal with the servers. Since the commercialization of the Web, the industry has boomed and has become one of the most used technologies ever.
See also
Outline of web design and web development
Web design
Web development tools
Web application development
Web developer
|
https://en.wikipedia.org/wiki/Almost%20disjoint%20sets
|
In mathematics, two sets are almost disjoint if their intersection is small in some sense; different definitions of "small" will result in different definitions of "almost disjoint".
Definition
The most common choice is to take "small" to mean finite. In this case, two sets A and B are almost disjoint if their intersection is finite, i.e. if
|A ∩ B| < ∞.
(Here, '|X|' denotes the cardinality of X, and '< ∞' means 'finite'.) For example, the closed intervals [0, 1] and [1, 2] are almost disjoint, because their intersection is the finite set {1}. However, the unit interval [0, 1] and the set of rational numbers Q are not almost disjoint, because their intersection is infinite.
This definition extends to any collection of sets. A collection of sets is pairwise almost disjoint or mutually almost disjoint if any two distinct sets in the collection are almost disjoint. Often the prefix "pairwise" is dropped, and a pairwise almost disjoint collection is simply called "almost disjoint".
Formally, let I be an index set, and for each i in I, let Ai be a set. Then the collection of sets {Ai : i in I} is almost disjoint if for any distinct i and j in I,
|Ai ∩ Aj| < ∞.
For example, the collection of all lines through the origin in R2 is almost disjoint, because any two of them only meet at the origin. If {Ai} is an almost disjoint collection consisting of more than one set, then clearly its intersection is finite:
⋂i Ai ⊆ Ai ∩ Aj, which is finite for any distinct i and j.
However, the converse is not true—the intersection of the collection
is empty, but the collection is not almost disjoint; in fact, the intersection of any two distinct sets in this collection is infinite.
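One standard collection with exactly these properties (whether it is the elided example above, the excerpt does not say) is the family of final segments A_n = {n, n+1, n+2, ...} of the natural numbers: the intersection of all of them is empty, yet any two members overlap in an infinite set. A computational sketch over truncations:

```python
def final_segment(n, limit):
    # truncation of the infinite set {n, n+1, n+2, ...} to make it computable
    return set(range(n, limit))

# |A_i ∩ A_j| = limit - max(i, j): it grows without bound as the truncation
# limit grows, so the true (untruncated) pairwise intersections are infinite.
for limit in (100, 1000, 10000):
    print(len(final_segment(3, limit) & final_segment(7, limit)))  # limit - 7
```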
The possible cardinalities of a maximal almost disjoint family (commonly referred to as a MAD family) on the set of the natural numbers have been the object of intense study. The minimum infinite such cardinal is one of the classical cardinal characteristics of the continuum.
Other meanings
Sometimes "almost disjoint" is used in some other sense, or in the sense of measure theory or topological catego
|
https://en.wikipedia.org/wiki/TeamForge
|
TeamForge (formerly SourceForge Enterprise Edition or SFEE) is a proprietary collaborative application lifecycle management forge supporting version control and a software development management system.
Background
TeamForge provides a front-end to a range of software development lifecycle services and integrates with a number of free software / open source software applications (such as PostgreSQL and Subversion).
Its predecessor, SourceForge, started as open source software, but a version of it (based on the v2.5 prototype code) was eventually relicensed under a proprietary software license as SourceForge Enterprise Edition, which was re-written in Java and marketed for offshore outsourcing software development.
The original codebase of SourceForge (code-named "Alexandria") was forked by the GNU Project as GNU Savannah; then, Savannah was also modified at CERN and released as Savane. SourceForge was also later forked as GForge by one of the SourceForge programmers, and then GForge was itself forked as FusionForge by three GForge developers.
Originally sold by VA Software, SourceForge Enterprise Edition was acquired by CollabNet on April 24, 2007. CollabNet subsequently integrated SourceForge Enterprise Edition with its own CollabNet Enterprise Edition and product, taking architectural and product elements from both systems, and re-launched the enhanced product as TeamForge in 2008. Since 2007, TeamForge has continued to undergo development, adding in a series of application lifecycle management tools.
See also
Comparison of source-code-hosting facilities
|
https://en.wikipedia.org/wiki/Inharmonicity
|
In music, inharmonicity is the degree to which the frequencies of overtones (also known as partials or partial tones) depart from whole multiples of the fundamental frequency (harmonic series).
Acoustically, a note perceived to have a single distinct pitch in fact contains a variety of additional overtones. Many percussion instruments, such as cymbals, tam-tams, and chimes, create complex and inharmonic sounds.
Music harmony and intonation depend strongly on the harmonicity of tones. An ideal, homogeneous, infinitesimally thin or infinitely flexible string or column of air has exact harmonic modes of vibration. In any real musical instrument, the resonant body that produces the musical tone—typically a string, wire, or column of air—deviates from this ideal and has some small or large amount of inharmonicity. For instance, a very thick string behaves less as an ideal string and more like a cylinder (a tube of mass), which has natural resonances that are not whole number multiples of the fundamental frequency.
However, in stringed instruments such as the piano, violin, and guitar, and in some Indian drums such as the tabla, the overtones are close to—or in some cases, exactly—whole number multiples of the fundamental frequency. Any departure from this ideal harmonic series is known as inharmonicity. The less elastic the strings are (that is, the shorter, thicker, lower in tension, or stiffer they are), the more inharmonicity they exhibit.
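A common quantitative model of this effect for stiff strings (a standard acoustics formula, not stated in this excerpt) raises the kth partial to f_k = k·f0·sqrt(1 + B·k²), where B is an inharmonicity coefficient that grows with stiffness; a sketch:

```python
import math

def partials(f0, B, count):
    # f_k = k * f0 * sqrt(1 + B * k**2); B = 0 recovers the exact harmonic series
    return [k * f0 * math.sqrt(1 + B * k * k) for k in range(1, count + 1)]

ideal = partials(100.0, 0.0, 5)    # ideal string: exact whole-number multiples
stiff = partials(100.0, 0.001, 5)  # stiff string: every partial runs sharp
print(ideal)  # [100.0, 200.0, 300.0, 400.0, 500.0]
```

The deviation grows with the partial number k, which is why inharmonicity is most audible in the upper partials of thick, stiff strings.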
When a string is bowed, or a tone in a wind instrument is initiated by a vibrating reed or the lips, a phenomenon called mode-locking counteracts the natural inharmonicity of the string or air column and causes the overtones to lock precisely onto integer multiples of the fundamental pitch, even though these are slightly different from the natural resonance points of the instrument. For this reason, a single tone played by a bowed string instrument, brass instrument, or reed instrument does not necessarily exhibit inharmonicity.
However,
|
https://en.wikipedia.org/wiki/Parallel%20motion%20linkage
|
In kinematics, the parallel motion linkage is a six-bar mechanical linkage invented by the Scottish engineer James Watt in 1784 for the double-acting Watt steam engine. It allows a rod moving practically straight up and down to transmit motion to a beam moving in an arc, without putting significant sideways strain on the rod.
Description
In previous engines built by Newcomen and Watt, the piston pulled one end of the walking beam downwards during the power stroke using a chain, and the weight of the pump pulled the other end of the beam downwards during the recovery stroke using a second chain, the alternating forces producing the rocking motion of the beam. In Watt's new double-acting engine, the piston produced power on both the upward and downward strokes, so a chain could not be used to transmit the force to the beam. Watt designed the parallel motion to transmit force in both directions whilst keeping the piston rod very close to vertical. He called it "parallel motion" because both the piston and the pump rod were required to move vertically, parallel to one another.
In a letter to his son in 1808 describing how he arrived at the design, James Watt wrote "I am more proud of the parallel motion than of any other invention I have ever made." The sketch he included actually shows what is now known as Watt's linkage which was a linkage described in Watt's 1784 patent but it was immediately superseded by the parallel motion.
The parallel motion differed from Watt's linkage by having an additional pantograph linkage incorporated in the design. This did not affect the fundamental principle but it allowed the engine room to be smaller because the linkage was more compact.
The Newcomen engine's piston was propelled downward by the atmospheric pressure. Watt's device allowed live steam to be used for direct work on both sides of the piston, thus almost doubling the power, and also delivering the power more evenly through the cycle, an advantage when converting th
|
https://en.wikipedia.org/wiki/Guttural%20pouch
|
Guttural pouches are large, auditory-tube diverticula that contain between 300 and 600 ml of air. They are present in odd-toed mammals, some bats, hyraxes, and the American forest mouse. They are paired bilaterally just below the ears, behind the skull and connect to the nasopharynx.
Due to the general inaccessibility of the pouches in horses, they can be a site of infection by fungi and bacteria, and these infections can be extremely severe and hard to treat. The condition guttural pouch tympany affects several breeds, including the Arabian horse. It predisposes young horses to infection, often involves severe swelling, and often requires surgery to correct. The guttural pouch is also the site of infection in equine strangles.
Structure
The guttural pouches are located behind the cranial cavity, caudal to the skull and below the wings of the atlas (C1). They are enclosed by the parotid and mandibular salivary glands, and the pterygoid muscles. The ventral portion lies on the pharynx and beginning of the esophagus, with the retropharyngeal lymph nodes located between the ventral wall and pharynx. The left and right pouches are separated by the longus capitis and rectus capitis ventralis muscles dorsomedially. Below these muscles, the two pouches fuse to form a median septum.
The guttural pouches connect the middle ear to the pharynx. The opening into the pharynx is called the nasopharyngeal ostium, which is composed of the pharyngeal wall laterally and a fibrocartilaginous fold medially. This opening leads to a short soft tissue passageway into the respective guttural pouch. The openings are located rostrally to enable drainage of mucus when the head is lowered and prevent fluid build-up. The plica salpingopharyngea, a mucosal fold at the caudal portion of the Eustachian tube, forms an uninterrupted channel between the medial lamina of the Eustachian tube and the lateral wall of the pharynx. The plica salpingopharyngea can sometimes act as a one-
|
https://en.wikipedia.org/wiki/MALPAS%20Software%20Static%20Analysis%20Toolset
|
MALPAS is a software toolset that provides a means of investigating and proving the correctness of software by applying a rigorous form of static program analysis. The tool uses directed graphs and regular algebra to represent the program under analysis. Using the automated tools in MALPAS, an analyst can describe the structure of a program, classify the use made of data, and establish the relationships between input and output data. It also supports a formal proof that the code meets its specification.
MALPAS has been used to confirm the correctness of safety-critical applications in the nuclear, aerospace and defence industries. It has also been used to verify compiler correctness in the nuclear industry, on Sizewell B. Languages that have been analysed include Ada, C, PL/M and Intel assembler.
MALPAS is well suited to the independent static analysis required by the UK's Health and Safety Executive guidance for computer based protection systems for nuclear reactors due to its rigour and flexibility in handling many programming languages.
Technical Overview
The MALPAS toolset comprises five specific analysis tools that address various properties of a program. The input to the analysers needs to be written in MALPAS Intermediate Language (IL); this can be hand-written or produced by an automated translation tool from the original source code. Automatic translators exist for common high-level programming languages such as Ada, C and Pascal, as well as assembler languages such as Intel 80*86, PowerPC and 68000. The IL text is input into MALPAS via the "IL Reader", which constructs a directed graph and associated semantics for the program under analysis. The graph is reduced using a series of graph reduction techniques.
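The directed-graph representation can be sketched as follows (a toy model in Python, not MALPAS IL): nodes are program points, edges are possible transfers of control, and unreachable code is whatever a traversal from the entry point never visits.

```python
def reachable(graph, entry):
    # graph: dict mapping each node to a list of successor nodes
    seen, stack = set(), [entry]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, []))
    return seen

# A tiny control-flow graph: 'dead' has no path from the entry point.
cfg = {
    'entry': ['check'],
    'check': ['loop', 'exit'],
    'loop':  ['check'],
    'dead':  ['exit'],
    'exit':  [],
}
print(set(cfg) - reachable(cfg, 'entry'))  # {'dead'}
```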
The MALPAS toolset consists of 5 analysers:
Control Flow Analyser. This examines the program structure, identifying key features: Entry/Exit points, Loops, Branches and unreachable code. It provides a summary report drawing attention to u
|
https://en.wikipedia.org/wiki/Li%C3%A9nard%E2%80%93Chipart%20criterion
|
In control system theory, the Liénard–Chipart criterion is a stability criterion modified from the Routh–Hurwitz stability criterion, proposed by A. Liénard and M. H. Chipart. This criterion has a computational advantage over the Routh–Hurwitz criterion because it involves only about half the number of determinant computations.
Algorithm
The Routh–Hurwitz stability criterion says that a necessary and sufficient condition for all the roots of the polynomial with real coefficients
f(x) = a_0 x^n + a_1 x^{n-1} + ⋯ + a_n,  a_0 > 0,
to have negative real parts (i.e. for f to be Hurwitz stable) is that
Δ_1 > 0, Δ_2 > 0, …, Δ_n > 0,
where Δ_i is the i-th leading principal minor of the Hurwitz matrix associated with the polynomial.
Using the same notation as above, the Liénard–Chipart criterion is that the polynomial (with a_0 > 0) is Hurwitz stable if and only if any one of the four conditions is satisfied:
1. a_n > 0, a_{n-2} > 0, …; Δ_1 > 0, Δ_3 > 0, …
2. a_n > 0, a_{n-2} > 0, …; Δ_2 > 0, Δ_4 > 0, …
3. a_n > 0, a_{n-1} > 0, a_{n-3} > 0, …; Δ_1 > 0, Δ_3 > 0, …
4. a_n > 0, a_{n-1} > 0, a_{n-3} > 0, …; Δ_2 > 0, Δ_4 > 0, …
Hence one can see that by choosing one of these conditions, the number of determinants required to be evaluated is reduced.
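As an illustrative sketch (helper names are ours), the following builds the Hurwitz matrix entrywise via the standard rule H[i][j] = a_{2j−i} and tests one of the equivalent condition sets: positivity of a_n, a_{n−2}, … together with only the odd-indexed leading principal minors.

```python
import numpy as np

def hurwitz_matrix(a):
    # a = [a0, a1, ..., an], with a0 > 0 the leading coefficient
    n = len(a) - 1
    H = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            k = 2 * j - i           # 1-indexed rule: H[i][j] = a_{2j-i}
            if 0 <= k <= n:
                H[i - 1, j - 1] = a[k]
    return H

def lienard_chipart_stable(a):
    # check a_n > 0, a_{n-2} > 0, ... and Delta_1 > 0, Delta_3 > 0, ...
    n = len(a) - 1
    if any(a[n - k] <= 0 for k in range(0, n + 1, 2)):
        return False
    H = hurwitz_matrix(a)
    return all(np.linalg.det(H[:i, :i]) > 0 for i in range(1, n + 1, 2))

print(lienard_chipart_stable([1, 6, 11, 6]))  # True:  roots -1, -2, -3
print(lienard_chipart_stable([1, 1, 1, 2]))   # False: a root in the right half-plane
```

Only about half of the minors are ever evaluated, which is exactly the computational saving the criterion offers.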
Alternatively Fuller formulated this as follows for (noticing that is never needed to be checked):
This means if n is even, the second line ends in and if n is odd, it ends in and so this is just 1. condition for odd n and 4. condition for even n from above. The first line always ends in , but is also needed for even n.
|
https://en.wikipedia.org/wiki/Cassini%20and%20Catalan%20identities
|
Cassini's identity (sometimes called Simson's identity) and Catalan's identity are mathematical identities for the Fibonacci numbers. Cassini's identity, a special case of Catalan's identity, states that for the nth Fibonacci number,
F_{n-1} F_{n+1} − F_n^2 = (−1)^n.
Note here F_0 is taken to be 0, and F_1 is taken to be 1.
Catalan's identity generalizes this:
F_n^2 − F_{n-r} F_{n+r} = (−1)^{n-r} F_r^2.
Vajda's identity generalizes this:
F_{n+i} F_{n+j} − F_n F_{n+i+j} = (−1)^n F_i F_j.
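All three identities are easy to spot-check numerically (using the convention F_0 = 0, F_1 = 1):

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Cassini: F(n-1) F(n+1) - F(n)^2 = (-1)^n
for n in range(1, 12):
    assert fib(n - 1) * fib(n + 1) - fib(n) ** 2 == (-1) ** n

# Catalan: F(n)^2 - F(n-r) F(n+r) = (-1)^(n-r) F(r)^2
for n in range(2, 12):
    for r in range(1, n):
        assert fib(n) ** 2 - fib(n - r) * fib(n + r) == (-1) ** (n - r) * fib(r) ** 2

# Vajda: F(n+i) F(n+j) - F(n) F(n+i+j) = (-1)^n F(i) F(j)
for n in range(0, 8):
    for i in range(1, 5):
        for j in range(1, 5):
            assert fib(n + i) * fib(n + j) - fib(n) * fib(n + i + j) == (-1) ** n * fib(i) * fib(j)

print("Cassini, Catalan and Vajda all hold on the tested range")
```

Setting r = 1 in Catalan's identity recovers Cassini's, since F_1 = 1.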
History
Cassini's formula was discovered in 1680 by Giovanni Domenico Cassini, then director of the Paris Observatory, and independently proven by Robert Simson (1753). However, Johannes Kepler presumably already knew the identity in 1608.
Catalan's identity is named after Eugène Catalan (1814–1894). It can be found in one of his private research notes, entitled "Sur la série de Lamé" and dated October 1879. However, the identity did not appear in print until December 1886, as part of his collected works. This explains why some give 1879 and others 1886 as the date for Catalan's identity.
The Hungarian-British mathematician Steven Vajda (1901–95) published a book on Fibonacci numbers (Fibonacci and Lucas Numbers, and the Golden Section: Theory and Applications, 1989) which contains the identity carrying his name. However the identity was already published in 1960 by Dustan Everman as problem 1396 in The American Mathematical Monthly.
Proof of Cassini identity
Proof by matrix theory
A quick proof of Cassini's identity may be given by recognising the left side of the equation as a determinant of a 2×2 matrix of Fibonacci numbers. The result is almost immediate when the matrix is seen to be the nth power of a matrix with determinant −1:
F_{n-1} F_{n+1} − F_n^2 = det [[F_{n+1}, F_n], [F_n, F_{n-1}]] = det ([[1, 1], [1, 0]]^n) = (det [[1, 1], [1, 0]])^n = (−1)^n.
Proof by induction
Consider the induction statement: F_{n-1} F_{n+1} − F_n^2 = (−1)^n.
The base case n = 1 is true: F_0 F_2 − F_1^2 = 0·1 − 1 = −1 = (−1)^1.
Assume the statement is true for n. Then:
F_n F_{n+2} − F_{n+1}^2 = F_n (F_n + F_{n+1}) − F_{n+1}^2 = F_n^2 + F_{n+1} (F_n − F_{n+1}) = F_n^2 − F_{n-1} F_{n+1} = −(−1)^n = (−1)^{n+1},
so the statement is true for all integers n ≥ 1.
Proof of Catalan identity
We use Binet's formula, that F_n = (φ^n − ψ^n)/√5, where φ = (1+√5)/2 and ψ = (1−√5)/2.
Hence, and .
So,
Using ,
and again as ,
The Lucas number L_n is defined as L_n = φ^n + ψ^n, so
Because
Cancelling the 's gives the result
|
https://en.wikipedia.org/wiki/Scalar%20multiplication
|
In mathematics, scalar multiplication is one of the basic operations defining a vector space in linear algebra (or more generally, a module in abstract algebra). In common geometrical contexts, scalar multiplication of a real Euclidean vector by a positive real number multiplies the magnitude of the vector without changing its direction. Scalar multiplication is the multiplication of a vector by a scalar (where the product is a vector), and is to be distinguished from inner product of two vectors (where the product is a scalar).
Definition
In general, if K is a field and V is a vector space over K, then scalar multiplication is a function from K × V to V.
The result of applying this function to k in K and v in V is denoted kv.
Properties
Scalar multiplication obeys the following rules (vector in boldface):
Additivity in the scalar: (c + d)v = cv + dv;
Additivity in the vector: c(v + w) = cv + cw;
Compatibility of product of scalars with scalar multiplication: (cd)v = c(dv);
Multiplying by 1 does not change a vector: 1v = v;
Multiplying by 0 gives the zero vector: 0v = 0;
Multiplying by −1 gives the additive inverse: (−1)v = −v.
Here, + is addition either in the field or in the vector space, as appropriate; and 0 is the additive identity in either.
Juxtaposition indicates either scalar multiplication or the multiplication operation in the field.
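These rules can be checked numerically for the special case K = R and V = R³ (a sketch with NumPy; exact equality is replaced by a floating-point tolerance):

```python
import numpy as np

c, d = 2.0, -3.0
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, -1.0, 0.5])

assert np.allclose((c + d) * v, c * v + d * v)  # additivity in the scalar
assert np.allclose(c * (v + w), c * v + c * w)  # additivity in the vector
assert np.allclose((c * d) * v, c * (d * v))    # compatibility of products
assert np.allclose(1 * v, v)                    # multiplying by 1 changes nothing
assert np.allclose(0 * v, np.zeros(3))          # multiplying by 0 gives the zero vector
assert np.allclose(-1 * v, -v)                  # multiplying by -1 gives the additive inverse
print("all six scalar-multiplication rules hold")
```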
Interpretation
Scalar multiplication may be viewed as an external binary operation or as an action of the field on the vector space. A geometric interpretation of scalar multiplication is that it stretches or contracts vectors by a constant factor. As a result, it produces a vector in the same or opposite direction of the original vector but of a different length.
As a special case, V may be taken to be K itself and scalar multiplication may then be taken to be simply the multiplication in the field.
When V is Kn, scalar multiplication is equivalent to multiplication of each component with the scalar, and may be
|
https://en.wikipedia.org/wiki/AHCC
|
AHCC is an alpha-glucan rich nutritional supplement produced from shiitake (Lentinula edodes). The product is a subject of research as a potential anti-cancer agent. AHCC is a popular alternative medicine in Japan.
AHCC is a registered trademark of and manufactured by Amino Up Co., Ltd. in Sapporo City, Hokkaido, Japan.
Development and Chemical composition
AHCC was developed by Amino Up Co., Ltd. and Dr. Toshihiko Okamoto (School of Pharmaceutical Sciences, University of Tokyo) in 1989.
Polysaccharides form a large part of the composition of AHCC. These include beta-glucan (β-glucan) and partially acylated α-glucan. Partially acylated α-glucan, produced by the patented long-term culturing process, is unique to AHCC. Approximately 20% of the makeup of AHCC is α-glucans.
Glucans are saccharides, of which some are known to have immune stimulating effects.
Potential Mechanisms of Action
The manufacturer of AHCC, Amino Up Co., Ltd., states that the culturing process utilized in its manufacture favors the release of small bioactive molecules that act as nontoxic agonists for toll-like receptors (TLRs), specifically TLR-4, initiating a systemic anti-inflammatory response. AHCC is believed to bind to TLR-2 and TLR-4 and to act as an immune modulator: immune cells such as CD4+ and CD8+ T cells and natural killer (NK) cells produce cytokines following either cytokine stimulation by dendritic cells or ligand binding to TLRs.
Use in Integrative Medicine
AHCC is widely used around the world for general health maintenance and for the treatment of various diseases.
It is often used as a Complementary and Alternative Medicine (CAM) for immune support, as reports in animal and clinical settings have indicated that AHCC is associated with an enhanced response to infection and increased survival. AHCC is in some cases also used by those undergoing conventional cancer therapy (e.g. chemotherapy) for its reported immunomodulatory functions.
In Japan, AHCC is the 2
|
https://en.wikipedia.org/wiki/Atomic%20packing%20factor
|
In crystallography, atomic packing factor (APF), packing efficiency, or packing fraction is the fraction of volume in a crystal structure that is occupied by constituent particles. It is a dimensionless quantity and always less than unity. In atomic systems, by convention, the APF is determined by assuming that atoms are rigid spheres. The radius of the spheres is taken to be the maximum value such that the atoms do not overlap. For one-component crystals (those that contain only one type of particle), the packing fraction is represented mathematically by
APF = (Nparticle × Vparticle) / Vunit cell,
where Nparticle is the number of particles in the unit cell, Vparticle is the volume of each particle, and Vunit cell is the volume occupied by the unit cell. It can be proven mathematically that for one-component structures, the most dense arrangement of atoms has an APF of about 0.74 (see Kepler conjecture), obtained by the close-packed structures. For multiple-component structures (such as with interstitial alloys), the APF can exceed 0.74.
The atomic packing factor of a unit cell is relevant to the study of materials science, where it explains many properties of materials. For example, metals with a high atomic packing factor will have a higher "workability" (malleability or ductility), similar to how a road is smoother when the stones are closer together, allowing metal atoms to slide past one another more easily.
Single component crystal structures
Common sphere packings taken on by atomic systems are listed below with their corresponding packing fraction.
Hexagonal close-packed (HCP): 0.74
Face-centered cubic (FCC): 0.74 (also called cubic close-packed, CCP)
Body-centered cubic (BCC): 0.68
Simple cubic: 0.52
Diamond cubic: 0.34
The majority of metals take on either the HCP, FCC, or BCC structure.
Simple cubic
For a simple cubic packing, the number of atoms per unit cell is one. The side of the unit cell is of length 2r, where r is the radius of the atom.
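The tabulated packing fractions all follow from APF = N·(4/3)πr³ / a³ once the cell edge a is expressed in atomic radii; a sketch (the a/r ratios below are the standard contact conditions for each lattice):

```python
from math import pi, sqrt

def apf(n_atoms, a_over_r):
    # APF = n_atoms * (4/3) * pi * r**3 / a**3, with a = a_over_r * r (r cancels)
    return n_atoms * (4 / 3) * pi / a_over_r ** 3

print(round(apf(1, 2), 2))            # simple cubic:  a = 2r          -> 0.52
print(round(apf(4, 2 * sqrt(2)), 2))  # FCC:           a = 2*sqrt(2)r  -> 0.74
print(round(apf(2, 4 / sqrt(3)), 2))  # BCC:           a = 4r/sqrt(3)  -> 0.68
print(round(apf(8, 8 / sqrt(3)), 2))  # diamond cubic: a = 8r/sqrt(3)  -> 0.34
```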
Face-centered cubic
For a face-centered
|
https://en.wikipedia.org/wiki/Advanced%20sleep%20phase%20disorder
|
Advanced Sleep Phase Disorder (ASPD), also known as the advanced sleep-phase type (ASPT) of circadian rhythm sleep disorder, is a condition that is characterized by a recurrent pattern of early evening (e.g. 7-9 PM) sleepiness and very early morning awakening (e.g. 2-4 AM). This sleep phase advancement can interfere with daily social and work schedules, and results in shortened sleep duration and excessive daytime sleepiness. The timing of sleep and melatonin levels are regulated by the body's central circadian clock, which is located in the suprachiasmatic nucleus in the hypothalamus.
Symptoms
Individuals with ASPD report being unable to stay awake until conventional bedtime, falling asleep early in the evening, and being unable to stay asleep until their desired waking time, experiencing early morning insomnia. When someone has advanced sleep phase disorder their melatonin levels and core body temperature cycle hours earlier than an average person. These symptoms must be present and stable for a substantial period of time to be correctly diagnosed.
Diagnosis
Individuals expressing the above symptoms may be diagnosed with ASPD using a variety of methods and tests. Sleep specialists measure the patient's sleep onset and offset, dim light melatonin onset, and evaluate Horne-Ostberg morningness-eveningness questionnaire results. Sleep specialists may also conduct a polysomnography test to rule out other sleep disorders like narcolepsy. Age and family history of the patient is also taken into consideration.
Treatment
Once diagnosed, ASPD may be treated with bright light therapy in the evenings, or behaviorally with chronotherapy, in order to delay sleep onset and offset. The use of pharmacological approaches to treatment are less successful due to the risks of administering sleep-promoting agents early in the morning. Additional methods of treatment, like timed melatonin administration or hypnotics have been proposed, but determining their safety and efficacy will
|
https://en.wikipedia.org/wiki/Abstract%20rewriting%20system
|
In mathematical logic and theoretical computer science, an abstract rewriting system (also (abstract) reduction system or abstract rewrite system; abbreviated ARS) is a formalism that captures the quintessential notion and properties of rewriting systems. In its simplest form, an ARS is simply a set (of "objects") together with a binary relation, traditionally denoted with →; this definition can be further refined if we index (label) subsets of the binary relation. Despite its simplicity, an ARS is sufficient to describe important properties of rewriting systems like normal forms, termination, and various notions of confluence.
Historically, there have been several formalizations of rewriting in an abstract setting, each with its idiosyncrasies. This is due in part to the fact that some notions are equivalent, see below in this article. The formalization that is most commonly encountered in monographs and textbooks, and which is generally followed here, is due to Gérard Huet (1980).
Definition
An abstract reduction system (ARS) is the most general (unidimensional) notion about specifying a set of objects and rules that can be applied to transform them. More recently, authors use the term abstract rewriting system as well. (The preference for the word "reduction" here instead of "rewriting" constitutes a departure from the uniform use of "rewriting" in the names of systems that are particularizations of ARS. Because the word "reduction" does not appear in the names of more specialized systems, in older texts reduction system is a synonym for ARS.)
An ARS is a set A, whose elements are usually called objects, together with a binary relation on A, traditionally denoted by →, and called the reduction relation, rewrite relation or just reduction. This (entrenched) terminology using "reduction" is a little misleading, because the relation is not necessarily reducing some measure of the objects.
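A toy ARS makes these notions concrete (the example is ours, not the article's): take the natural numbers as objects, with n → n − 2 for n ≥ 2 and n → n − 3 for n ≥ 3. Objects with no successor are normal forms, and finding two distinct normal forms of the same object shows the system is not confluent.

```python
def successors(n):
    # the one-step reduction relation of the toy ARS
    out = []
    if n >= 3:
        out.append(n - 3)
    if n >= 2:
        out.append(n - 2)
    return out

def normal_forms(n):
    # explore every reduct of n; reducts with no successor are its normal forms
    seen, stack, nf = set(), [n], set()
    while stack:
        x = stack.pop()
        if x in seen:
            continue
        seen.add(x)
        nxt = successors(x)
        if not nxt:
            nf.add(x)
        stack.extend(nxt)
    return nf

print(normal_forms(7))  # {0, 1}: two distinct normal forms, so not confluent
```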
In some contexts it may be beneficial to distinguish between some subsets
|
https://en.wikipedia.org/wiki/Synchronous%20flowering
|
Flowering synchrony is the amount of overlap between flowering periods of plants in their mating season compared to what would be expected to occur randomly under given environmental conditions. A population which is flowering synchronously has more plants flowering (producing pollen or receiving pollen) at the same time than would be expected to occur randomly. A population which is flowering asynchronously has fewer plants flowering at the same time than would be expected randomly. Flowering synchrony can describe synchrony of flowering periods within a year, across years, and across species in a community. There are fitness benefits and disadvantages to synchronized flowering, and it is a widespread phenomenon across pollination syndromes.
History
Synchronous flowering has been observed in nature for centuries. Sources from the ninth and tenth centuries noted the interannual synchrony of bamboo species. Early scholarly work focused on interannual variation in the form of mast seeding in tree species such as pines and oaks. An early proposed explanation for masting was the resource management (or weather tracking) hypothesis. This suggested that trees produced large amounts of seeds in response to favorable resource availability and weather conditions. Subsequent research has shown that while weather and resource availability may act as proximate mechanisms for interannually synchronized flowering, the ultimate driver is adaptive evolution for increased mating opportunities.
Early studies of synchronous flowering were biased towards tree species, which typically exhibit higher within-year synchrony than herbaceous species. The field has since expanded to include more herbaceous species. Researchers have also begun to investigate biotic drivers of synchrony, such as pollinating mutualists and herbivorous antagonists.
Community and global patterns of flowering synchrony are emerging across species. Such broad patterns are prone to disturbance by anthropogenic ch
|
https://en.wikipedia.org/wiki/Piriformis%20muscle
|
The piriformis muscle () is a flat, pyramidally-shaped muscle in the gluteal region of the lower limbs. It is one of the six muscles in the lateral rotator group.
The piriformis muscle has its origin upon the front surface of the sacrum, and inserts onto the greater trochanter of the femur. Depending upon the given position of the leg, it acts either as external (lateral) rotator of the thigh or as abductor of the thigh. It is innervated by the piriformis nerve.
Structure
The piriformis is a flat muscle, and is pyramidal in shape.
Origin
The piriformis muscle originates from the anterior (front) surface of the sacrum by three fleshy digitations attached to the second, third, and fourth sacral vertebrae.
It also arises from the superior margin of the greater sciatic notch, the gluteal surface of the ilium (near the posterior inferior iliac spine), the sacroiliac joint capsule, and (sometimes) the sacrotuberous ligament (more specifically, the superior part of the pelvic surface of this ligament).
Insertion
The muscle inserts onto the greater trochanter of the femur (its tendon often unites with the tendons of the superior gemellus, inferior gemellus, and obturator internus muscles prior to insertion).
Innervation
The piriformis muscle is innervated by the piriformis nerve.
Relations
The posterior aspect of the muscle lies against the sacrum. The anterior surface of the muscle is related to the rectum (especially on the left side of the body), and the sacral plexus.
The muscle lies almost parallel with the posterior margin of the gluteus medius. It is situated partly within the pelvis against its posterior wall, and partly at the back of the hip joint.
It exits the pelvis through the greater sciatic foramen superior to the sacrospinous ligament.
Variation
In around 80% of the population, the sciatic nerve travels below the piriformis muscle. In 17% of people, the piriformis muscle is pierced by parts or all of the sciatic nerve. Several variations occur
|
https://en.wikipedia.org/wiki/Chemokinesis
|
Chemokinesis is chemically prompted kinesis, a motile response of unicellular prokaryotic or eukaryotic organisms to chemicals that cause the cell to make some kind of change in its migratory/swimming behaviour. Changes involve an increase or decrease of speed, alterations of the amplitude or frequency of motile character, or of the direction of migration. However, in contrast to chemotaxis, chemokinesis in general has a random, non-vectorial component.
Due to this random character, techniques dedicated to evaluating chemokinesis differ in part from the methods used in chemotaxis research. One of the most valuable ways to measure chemokinesis is computer-assisted (see, e.g., ImageJ) checkerboard analysis, which provides data about the migration of identical cells; in Protozoa (e.g., Tetrahymena), techniques based on the measurement of opalescence have also been developed.
|
https://en.wikipedia.org/wiki/Phosphosilicate%20glass
|
Phosphosilicate glass, commonly referred to by the acronym PSG, is a silicate glass commonly used in semiconductor device fabrication for intermetal layers, i.e., insulating layers deposited between successively higher metal or conducting layers, due to its effect of gettering alkali ions. Another common type of phosphosilicate glass is borophosphosilicate glass (BPSG).
Soda-lime phosphosilicate glasses also form the basis for bioactive glasses (e.g. Bioglass), a family of materials which chemically convert to mineralised bone (hydroxy-carbonate-apatite) in physiological fluid.
Bismuth doped phosphosilicate glasses are being explored for use as the active gain medium in fiber lasers for fiber-optic communication.
See also
Wafer (electronics)
|
https://en.wikipedia.org/wiki/Index%20of%20protein-related%20articles
|
Proteins are a class of biomolecules composed of amino acid chains.
Biochemistry
Antifreeze protein, class of polypeptides produced by certain fish, vertebrates, plants, fungi and bacteria
Conjugated protein, protein that functions in interaction with other chemical groups attached by covalent bonds
Denatured protein, protein which has lost its functional conformation
Matrix protein, structural protein linking the viral envelope with the virus core
Protein A, bacterial surface protein that binds antibodies
Protein A/G, recombinant protein that binds antibodies
Protein C, anticoagulant
Protein G, bacterial surface protein that binds antibodies
Protein L, bacterial surface protein that binds antibodies
Protein S, plasma glycoprotein
Protein Z, glycoprotein
Protein catabolism, the breakdown of proteins into amino acids and simple derivative compounds
Protein complex, group of two or more associated proteins
Protein electrophoresis, method of analysing a mixture of proteins by means of gel electrophoresis
Protein folding, process by which a protein assumes its characteristic functional shape or tertiary structure
Protein isoform, version of a protein with some small differences
Protein kinase, enzyme that modifies other proteins by chemically adding phosphate groups to them
Protein ligands, atoms, molecules, and ions which can bind to specific sites on proteins
Protein microarray, piece of glass on which different molecules of protein have been affixed at separate locations in an ordered manner
Protein phosphatase, enzyme that removes phosphate groups that have been attached to amino acid residues of proteins
Protein purification, series of processes intended to isolate a single type of protein from a complex mixture
Protein sequencing, method of determining the amino acid sequence of a protein
Protein splicing, intramolecular reaction of a particular protein in which an internal protein segment is removed from a precursor protein
Protein structure, unique three-dimensional shape of amino
|
https://en.wikipedia.org/wiki/Global%20Storage%20Architecture
|
GSA (Global Storage Architecture) is a distributed file system created by IBM to replace the Andrew File System and the DCE Distributed File System.
External links
GSA Presentation by Stanley Wood
Distributed file systems
Network file systems
Internet Protocol based network software
|
https://en.wikipedia.org/wiki/Degenerate%20energy%20levels
|
In quantum mechanics, an energy level is degenerate if it corresponds to two or more different measurable states of a quantum system. Conversely, two or more different states of a quantum mechanical system are said to be degenerate if they give the same value of energy upon measurement. The number of different states corresponding to a particular energy level is known as the degree of degeneracy (or simply the degeneracy) of the level. It is represented mathematically by the Hamiltonian for the system having more than one linearly independent eigenstate with the same energy eigenvalue. When this is the case, energy alone is not enough to characterize what state the system is in, and other quantum numbers are needed to characterize the exact state when distinction is desired. In classical mechanics, this can be understood in terms of different possible trajectories corresponding to the same energy.
Degeneracy plays a fundamental role in quantum statistical mechanics. For an N-particle system in three dimensions, a single energy level may correspond to several different wave functions or energy states. These degenerate states at the same level all have an equal probability of being filled. The number of such states gives the degeneracy of a particular energy level.
Mathematics
The possible states of a quantum mechanical system may be treated mathematically as abstract vectors in a separable, complex Hilbert space, while the observables may be represented by linear Hermitian operators acting upon them. By selecting a suitable basis, the components of these vectors and the matrix elements of the operators in that basis may be determined.
If A is an N×N matrix, X a non-zero vector, and λ a scalar such that AX = λX, then the scalar λ is said to be an eigenvalue of A and the vector X is said to be the eigenvector corresponding to λ. Together with the zero vector, the set of all eigenvectors corresponding to a given eigenvalue λ forms a subspace of C^N, which is called the eigenspace of λ.
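To make the counting of degenerate states concrete, here is a minimal sketch (assuming NumPy is available; the 3×3 diagonal "Hamiltonian" is an invented example, not from the source) that groups numerically equal eigenvalues to read off each level's degeneracy:

```python
import numpy as np

# A Hermitian matrix whose eigenvalue 1.0 is doubly degenerate.
H = np.diag([1.0, 1.0, 2.0])

# eigh returns the eigenvalues of a Hermitian matrix in ascending order.
evals, evecs = np.linalg.eigh(H)

# Group numerically equal eigenvalues to find each level's degeneracy.
levels, degeneracy = np.unique(np.round(evals, 10), return_counts=True)
print(dict(zip(levels.tolist(), degeneracy.tolist())))  # {1.0: 2, 2.0: 1}
```

The rounding step is needed in practice because floating-point eigenvalues of a degenerate level agree only up to numerical precision.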
|
https://en.wikipedia.org/wiki/Tajug
|
Tajug is a pyramidal or square-pyramid (i.e. an equilateral square base rising to a peak) roof form usually used for sacred buildings in Southeast Asia, including Indonesia, such as mosques or cupola graveyards. It is considered to derive from Indian and Chinese architecture and has a history reaching back to the pre-Islamic era, although there is also an element of influence from Indian mosques. The term tajug is also used to refer to mosques or surau (Islamic assembly buildings) in some regions of Indonesia.
See also
Indonesian mosques
|
https://en.wikipedia.org/wiki/Parent%E2%80%93offspring%20conflict
|
Parent–offspring conflict (POC) is an expression coined in 1974 by Robert Trivers. It is used to describe the evolutionary conflict arising from differences in optimal parental investment (PI) in an offspring from the standpoint of the parent and the offspring. PI is any investment by the parent in an individual offspring that decreases the parent's ability to invest in other offspring, while the selected offspring's chance of surviving increases.
POC occurs in sexually reproducing species and is based on a genetic conflict: Parents are equally related to each of their offspring and are therefore expected to equalize their investment among them. Offspring are only half or less related to their siblings (and fully related to themselves), so they try to get more PI than the parents intended to provide even at their siblings' disadvantage.
However, POC is limited by the close genetic relationship between parent and offspring: If an offspring obtains additional PI at the expense of its siblings, it decreases the number of its surviving siblings. Therefore, any gene in an offspring that leads to additional PI decreases (to some extent) the number of surviving copies of itself that may be located in siblings. Thus, if the costs in siblings are too high, such a gene might be selected against despite the benefit to the offspring.
The problem of specifying how an individual is expected to weigh a relative against itself was examined by W. D. Hamilton in 1964 in the context of kin selection. Hamilton's rule says that altruistic behavior will be positively selected if the benefit to the recipient multiplied by the genetic relatedness of the recipient to the performer is greater than the cost to the performer of a social act. Conversely, selfish behavior can only be favoured when Hamilton's inequality is not satisfied. This leads to the prediction that, other things being equal, POC will be stronger among half siblings (e.g., unrelated males father a female's successive
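Hamilton's inequality described above can be sketched in a few lines (an illustrative toy, not from the source; the relatedness values 1/2 and 1/4 are the standard coefficients for full and half siblings):

```python
def altruism_favoured(r, benefit, cost):
    """Hamilton's rule: an altruistic act is favoured when r * B > C,
    where r is the actor's relatedness to the recipient, B the
    recipient's benefit, and C the actor's cost."""
    return r * benefit > cost

# The same act can be favoured toward a full sibling (r = 1/2) but not a
# half sibling (r = 1/4), consistent with the prediction that POC is
# stronger among half siblings.
print(altruism_favoured(0.5, 3.0, 1.0))   # True
print(altruism_favoured(0.25, 3.0, 1.0))  # False
```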
|