source | text |
|---|---|
https://en.wikipedia.org/wiki/Bateman-Mukai%20method | In genetics, the Bateman–Mukai method, sometimes referred to as the Bateman–Mukai technique, is a traditional method used for describing the mutation rates for genes through the observation of physical traits (phenotype) of a living organism. The method involves the maintenance of many mutation accumulation lineages of the organism studied, and it is therefore labor intensive.
Origin
The foundational papers from which this method gets its name were published by geneticists A. J. Bateman in 1959 and T. Mukai in 1964. Bateman used an early form of the method to understand how radiation-induced mutations affect the survival of chromosomes. Mukai's experimental design largely followed Bateman's, but rather than inducing mutations with any external factor, his study aimed to describe the spontaneous, naturally occurring deleterious mutation rate of the common fruit fly.
Procedure
The method requires the establishment of many mutation accumulation lineages using within-line breeding of diploid organisms. These lines are maintained in an environment favorable to the accumulation of deleterious mutations, so that they are not purged by natural selection: excess food and other resources are kept available to eliminate competition, and the parents of the next generation are chosen at random without any regard to fitness. In this way, mutation accumulation experiments attempt to describe the true mutation rates that would be observed in the absence of natural selection.
Asexually reproducing organisms can simply have a single parent selected as the parent for the next generation of each line. In sexually reproducing organisms, measures must be taken so that researchers can be sure that mutations are inherited by future generations of the mutation accumulation lines. A balancer chromosome can be used towards this end. In the Mukai experiment, male flies homozygous for the wild type chromosome 2 were always |
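The excerpt above is truncated before it reaches the estimators themselves, but the classic Bateman–Mukai moment estimates can be sketched from the method's logic: if U deleterious mutations arise per genome per generation, each reducing fitness by roughly s, the mean fitness of the lines declines by about U·s per generation while the among-line variance grows by about U·s². The helper below (an illustrative sketch; the function name and the numeric inputs are made up, not from the source) inverts those two relations.

```python
def bateman_mukai(delta_mean, delta_var):
    """Bateman-Mukai moment estimates from mutation accumulation data.

    delta_mean: per-generation decline in mean fitness across lines
    delta_var:  per-generation increase in among-line fitness variance

    With delta_mean ~ U*s and delta_var ~ U*s**2, solving gives a lower
    bound on the mutation rate U and an upper bound on the mean effect s.
    """
    U_min = delta_mean ** 2 / delta_var   # minimum genomic mutation rate
    s_max = delta_var / delta_mean        # maximum average mutational effect
    return U_min, s_max

# Illustrative (made-up) per-generation changes:
U, s = bateman_mukai(delta_mean=0.002, delta_var=0.0001)
```

Because the method only observes the products U·s and U·s², the estimates are bounds, not point values: real mutations vary in effect size, which biases U downward and s upward.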
https://en.wikipedia.org/wiki/Index%20%28economics%29 | In statistics, economics, and finance, an index is a statistical measure of change in a representative group of individual data points. These data may be derived from any number of sources, including company performance, prices, productivity, and employment. Economic indices track economic health from different perspectives. Examples include the consumer price index, which measures changes in retail prices paid by consumers, and the cost-of-living index (COLI), which measures the relative cost of living over time.
Influential global financial indices such as the Global Dow and the NASDAQ Composite track the performance of selected large and powerful companies in order to evaluate and predict economic trends.
The Dow Jones Industrial Average and the S&P 500 primarily track U.S. markets, though some legacy international companies are included. The consumer price index tracks the variation in prices for different consumer goods and services over time in a constant geographical location and is integral to calculations used to adjust salaries, bond interest rates, and tax thresholds for inflation.
The GDP deflator measures the level of prices of all new, domestically produced, final goods and services in an economy; it is computed as the ratio of nominal GDP to real GDP. Market performance indices include the labour market index/job index and proprietary stock market index investment instruments offered by brokerage houses.
Some indices display market variations. For example, the Economist provides a Big Mac Index that expresses the adjusted cost of a globally ubiquitous Big Mac as a percentage over or under the cost of a Big Mac in the U.S. in USD. Such indices can be used to help forecast currency values.
Index numbers
An index number is an economic data figure reflecting price or quantity compared with a standard or base value. The base usually equals 100 and the index number is usually expressed as 100 times the ratio to the base value. For example, if a commodity costs twice as much in 1970 |
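The index-number arithmetic described in the row above (base scaled to 100, index expressed as 100 times the ratio to the base value) can be sketched as follows; the prices used are illustrative, not from the source:

```python
def index_number(value, base_value):
    """Express a value relative to a base value, with the base scaled to 100."""
    return 100.0 * value / base_value

# If the base year is 1970 and a commodity costs twice as much in a later
# year, its index number is 200 (prices here are made up):
prices = {1970: 1.25, 1985: 2.50}
index_1985 = index_number(prices[1985], prices[1970])  # -> 200.0
```

The base year itself always has index 100, so index numbers are unit-free and can compare series measured in different currencies or quantities.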
https://en.wikipedia.org/wiki/Pubmatic | PubMatic, Inc. develops and implements online advertising software and strategies for the digital publishing and advertising industry. PubMatic's sell-side, real-time programmatic ad transaction advertising software puts publishers of websites, videos, and mobile apps into contact with ad buyers by using automated systems, while allowing users to opt out of having their personal information collected in internet searches. PubMatic has a number of offices in countries around the world.
History
PubMatic was founded in 2006 by brothers Rajeev Goel and Amar Goel, Anand Das and Mukul Kumar. PubMatic software was developed in Pune, India.
In 2011 the company hired Steve Pantelick as CFO, and in 2012 PubMatic raised $45 million from investors.
In 2014 PubMatic acquired mobile ad server Mocean Mobile, formerly known as Mojiva, for $15.5 million.
In 2015, PubMatic opened an office in Latin America.
By 2016, the firm was operating by storing most of its data on OpenStack private cloud servers.
In January 2020, PubMatic launched an Identity Hub integrating identity partner IDs, including IAB DigiTrust, The Trade Desk Unified ID (UID 2.0), ID5, and LiveIntent.
In February 2020, PubMatic released the OpenWrap SDK to enhance header bidding options for mobile publishers.
In November 2020, PubMatic filed for an IPO on Nasdaq. The company launched its IPO on 9 December 2020. Its clients in 2020 included Verizon, News Corp, Electronic Arts, and Zynga, with Verizon comprising about a quarter of PubMatic's revenue during the previous year.
Activities
PubMatic, for a fee, participates in online auctions to help advertisers buy and publishers sell media and advertising spots between various advertising companies. The company also produces quarterly reports about advertising prices. |
https://en.wikipedia.org/wiki/Bordetella%20ansorpii | Bordetella ansorpii is a Gram-negative, oxidase-negative bacterium from the genus Bordetella which has been isolated from the purulent exudate of an epidermal cyst of an immunocompromised patient. A 16S rRNA gene analysis has confirmed B. ansorpii belongs to this genus. |
https://en.wikipedia.org/wiki/Electrochromatography | Electrochromatography is a chemical separation technique in analytical chemistry, biochemistry and molecular biology used to resolve and separate mostly large biomolecules such as proteins. It is a combination of size exclusion chromatography (gel filtration chromatography) and gel electrophoresis. These separation mechanisms operate essentially in superposition along the length of a gel filtration column to which an axial electric field gradient has been added. The molecules are separated by size due to the gel filtration mechanism and by electrophoretic mobility due to the gel electrophoresis mechanism. Additionally there are secondary chromatographic solute retention mechanisms.
Capillary electrochromatography
Capillary electrochromatography (CEC) is an electrochromatography technique in which the liquid mobile phase is driven through a capillary containing the chromatographic stationary phase by electroosmosis. It is a combination of high-performance liquid chromatography and capillary electrophoresis. The capillary is packed with HPLC stationary phase and a high voltage is applied; separation is achieved by electrophoretic migration of the analyte and its differential partitioning in the stationary phase.
See also
Chromatography
Protein electrophoresis
Electrofocusing
Two-dimensional gel electrophoresis
Temperature gradient gel electrophoresis |
https://en.wikipedia.org/wiki/Gun.Smoke | Gun.Smoke is a vertically scrolling run and gun video game designed by Yoshiki Okamoto and released in arcades in 1985. Gun.Smoke centers on a character named Billy Bob, a bounty hunter going after the criminals of the Wild West.
Gameplay
Gun.Smoke is a run and gun video game in which the screen automatically scrolls upward. Players use three buttons to shoot left, right, and center. The player can also change the way Billy shoots through button combinations. The player dies by getting shot, being struck by enemies, or being caught between an obstacle and the bottom of the screen. The player can collect various items, including a horse for extra protection, boots for increased movement speed, bullets for faster shots, a yashichi for an extra life, and a rifle for longer shot range. Other items, such as stars, bottles, bags, and dragonflies, add points to the player's score.
Two versions of Gun.Smoke were released in North America by Romstar.
Ports
Gun.Smoke was ported to these systems:
The MSX
The PlayStation and Sega Saturn as a part of Capcom Generation 4
The PlayStation 2, PlayStation Portable and Xbox as a part of Capcom Classics Collection
The PlayStation 3 and Xbox 360 as a part of Capcom Arcade Cabinet
The Nintendo Switch, PlayStation 4, Xbox One, and Microsoft Windows as part of Capcom Arcade 2nd Stadium, referred to as Gan Sumoku.
Windows 98 and Windows XP as a part of Capcom Arcade Hits Volume 3
The Amstrad CPC as Desperado – Gun.Smoke; this platform received a sequel called Desperado 2
The ZX Spectrum
NES version
The game was later ported to the Nintendo Entertainment System (NES) and Family Computer Disk System (FDS) in 1988. The game has a new storyline: in 1849, a gang known as the Wingates attacks the town of Hicksville, kills the sheriff, and causes trouble every day until Billy, the main character, comes to town to free it from the gang. The NES version also has different music.
Soundtrack
The soundtrack for the arcade version was composed by Ayako Mori. On August 25, 19 |
https://en.wikipedia.org/wiki/Bromodeoxyuridine | Bromodeoxyuridine (5-bromo-2'-deoxyuridine, BrdU, BUdR, BrdUrd, broxuridine) is a synthetic nucleoside analogue with a chemical structure similar to thymidine. BrdU is commonly used to study cell proliferation in living tissues and has been studied as a radiosensitizer and diagnostic tool in people with cancer.
During the S phase of the cell cycle (when DNA replication occurs), BrdU can be incorporated in place of thymidine in newly synthesized DNA molecules of dividing cells. Cells that have recently performed DNA replication or DNA repair can be detected with antibodies specific for BrdU using techniques such as immunohistochemistry or immunofluorescence. BrdU-labelled cells in humans can be detected up to two years after BrdU infusion.
Because BrdU can replace thymidine during DNA replication, it can cause mutations, and its use is therefore potentially a health hazard. However, because it is neither radioactive nor myelotoxic at labeling concentrations, it is widely preferred for in vivo studies of cancer cell proliferation. However, at radiosensitizing concentrations, BrdU becomes myelosuppressive, thus limiting its use for radiosensitizing.
BrdU differs from thymidine in that BrdU substitutes a bromine atom for thymidine's CH3 group. The Br substitution can be used in X-ray diffraction experiments in crystals containing either DNA or RNA. The Br atom acts as an anomalous scatterer and its larger size will affect the crystal's X-ray diffraction enough to detect isomorphous differences as well.
Bromodeoxyuridine releases gene silencing caused by DNA methylation.
BrdU can also be used to identify microorganisms that respond to specific carbon substrates in aquatic and soil environments. A carbon substrate added to the incubations of environmental samples will cause the growth of microorganisms that can utilize that substrate. These microorganisms will then incorporate BrdU into their DNA as they grow. Community DNA can then be isolated and BrdU-labeled DN |
https://en.wikipedia.org/wiki/Projection%20plane | A projection plane, or plane of projection, is a type of view in which graphical projections from an object intersect. Projection planes are used often in descriptive geometry and graphical representation. A picture plane in perspective drawing is a type of projection plane.
With perspective drawing, the lines of sight, or projection lines, between an object and a picture plane return to a vanishing point and are not parallel. With parallel projection the lines of sight from the object to the projection plane are parallel.
See also
Image plane
Picture plane |
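The distinction drawn in the row above between perspective projection (lines of sight converging at a vanishing point) and parallel projection (lines of sight parallel) can be sketched with two small functions; the plane placements and point coordinates are illustrative assumptions, not from the source:

```python
def perspective_project(point, d):
    """Project a 3-D point onto the picture plane z = d, with the eye at
    the origin: lines of sight converge, so coordinates divide by depth."""
    x, y, z = point
    return (d * x / z, d * y / z)

def parallel_project(point):
    """Orthographic projection onto the plane z = 0: lines of sight are
    parallel to the z-axis, so depth is simply discarded."""
    x, y, z = point
    return (x, y)

# A point twice as far away appears half as large under perspective,
# but unchanged under parallel projection:
p_near = (2.0, 4.0, 4.0)
p_far = (2.0, 4.0, 8.0)
```

The division by z is what produces foreshortening and vanishing points; dropping it yields the parallel case.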
https://en.wikipedia.org/wiki/Physics%20Galaxy | Physics Galaxy is an interactive online physics course and e-learning platform for students preparing for JEE Main, JEE Advanced and NEET.
History
Physics Galaxy was founded as an online learning portal by Ashish Arora. It was founded as an initiative to coach students for free, especially those in rural areas who cannot afford expensive coaching facilities. Later, in 2011, a YouTube channel of the same name was founded.
The website presently hosts more than 8,000 lectures, available on the web free of cost, and more than 30,000 video lectures are watched per day on the website and its YouTube channel. Subtitles are available in 67 languages via Google Translate, including English, Hindi, Chinese, French, Marathi, Bangla, Urdu and other regional and international languages. Besides subtitles, synchronized voice accents of all theory videos are also available for students in the USA, Europe and other countries. |
https://en.wikipedia.org/wiki/Circle%20packing%20in%20a%20circle | Circle packing in a circle is a two-dimensional packing problem with the objective of packing unit circles into the smallest possible larger circle.
Table of solutions, 1 ≤ n ≤ 20
If more than one equivalent solution exists, all are shown.
Special cases
Only 26 optimal packings are thought to be rigid (with no circles able to "rattle"). Numbers in bold are prime:
Proven for n = 1, **2**, **3**, 4, **5**, 6, **7**, 10, **11**, 12, **13**, **19**
Conjectured for n = 14, 15, 16, **17**, 18, 22, **23**, 27, 30, **31**, 33, **37**, **61**, 91
Of these, the solutions for n = 2, 3, 4, 7, 19, and 37 achieve a packing density greater than that of any smaller number of circles > 1. (Higher density records all have rattles.)
See also
Disk covering problem
Square packing in a circle |
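The density claim above can be checked numerically for the smallest proven-optimal packings, whose enclosing radii have simple closed forms (a sketch; only a few well-known values of n are included here):

```python
import math

# Enclosing radius R (in units of the unit-circle radius) for a few
# proven-optimal packings of n unit circles.
OPTIMAL_R = {
    1: 1.0,
    2: 2.0,
    3: 1 + 2 / math.sqrt(3),
    4: 1 + math.sqrt(2),
    7: 3.0,
}

def packing_density(n):
    """Fraction of the large circle's area covered by the n unit circles:
    n * pi * 1**2 / (pi * R**2) = n / R**2."""
    return n / OPTIMAL_R[n] ** 2

densities = {n: round(packing_density(n), 4) for n in sorted(OPTIMAL_R)}
```

The densities rise from 0.5 at n = 2 through roughly 0.646 (n = 3) and 0.686 (n = 4) to 7/9 ≈ 0.778 at n = 7, consistent with each of these n setting a new density record.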
https://en.wikipedia.org/wiki/Invitation%20system | An invitation system is a method of encouraging people to join an organization, such as a club or a website. In regular society, it refers to any system whereby new members are chosen; they cannot simply apply. In relation to websites and other technology-related organisations, the term refers to a more specific situation whereby invitations are sent, but there is never any approval needed from other members. Popular alternatives to this specific version are open registration and closed registration. Open registration is where any user can freely join. Closed registration involves an existing member recommending a new member and approval is sought amongst existing members. The basis of the invitation system is that a member can grant approval to a new user without having to consult any other members.
Existing members may receive a set number of invitations (sometimes in the form of tokens) to allow others to join the service. Those invited to a website are typically sent either a specialized URL or a single-use pass code.
Applications
Invitation systems for websites are usually temporary. They are typically used for services in private beta testing, in order to control the number of users on the service. In other cases, they can be used due to limited availability of server resources. There are a growing number of sites which use invitation systems on a permanent basis to create exclusivity and to control quality of user-generated content. Rarely, they may be used on a permanent basis in order to aggregate social network statistics (all users will ultimately have a traceable connection to all others). They are sometimes used to avoid abusive types or spammers, by relying on mutual trust between all members.
Variations
Sometimes, a tiered invitation system may be in place, wherein those higher up the hierarchy will have the power to grant more invitations, whereas low-ranking members may receive few or even no invitation rights.
Examples
Some prominent se |
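The defining property described above, that a member can admit a new user without consulting other members, by handing out a limited number of single-use pass codes, can be sketched as follows (class and method names are illustrative, not from the source):

```python
import secrets

class InvitationSystem:
    """Minimal sketch of a token-based invitation system: each member
    receives a fixed allowance of single-use invitation codes."""

    def __init__(self, invites_per_member=3):
        self.invites_per_member = invites_per_member
        self.remaining = {}        # member -> invitations left
        self.unused_tokens = set()

    def add_member(self, member):
        self.remaining[member] = self.invites_per_member

    def issue_invite(self, member):
        if self.remaining.get(member, 0) <= 0:
            raise ValueError("no invitations left")
        self.remaining[member] -= 1
        token = secrets.token_urlsafe(16)   # single-use pass code
        self.unused_tokens.add(token)
        return token

    def redeem(self, token, new_member):
        if token not in self.unused_tokens:
            return False                    # invalid or already used
        self.unused_tokens.remove(token)
        self.add_member(new_member)         # no approval from other members
        return True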
https://en.wikipedia.org/wiki/Reflexogenous%20zone | Reflexogenous (reflexogenic) zone (or the receptive field of a reflex) is the area of the body stimulation of which causes a definite unconditioned reflex. For example, stimulation of the mucosa of the nasopharynx elicits a sneezing reflex, and stimulation of the tracheae and bronchi elicits a coughing reflex. The receptive fields of various reflexes may overlap, and in consequence a stimulus applied to a certain part of the skin can elicit one reflex or another depending on its strength and the state of the central nervous system. |
https://en.wikipedia.org/wiki/Pentation | In mathematics, pentation (or hyper-5) is the next hyperoperation (infinite sequence of arithmetic operations) after tetration and before hexation. It is defined as iterated (repeated) tetration (assuming right-associativity), just as tetration is iterated right-associative exponentiation. It is a binary operation defined with two numbers a and b, where a is tetrated to itself b − 1 times. For instance, using hyperoperation notation for pentation and tetration, 2[5]3 means tetrating 2 to itself 2 times, or 2[4](2[4]2). This can then be reduced to 2[4]4 = 65536.
Etymology
The word "pentation" was coined by Reuben Goodstein in 1947 from the roots penta- (five) and iteration. It is part of his general naming scheme for hyperoperations.
Notation
There is little consensus on the notation for pentation; as such, there are many different ways to write the operation. However, some are more used than others, and some have clear advantages or disadvantages compared to others.
Pentation can be written as a hyperoperation as a[5]b. In this format, a[3]b may be interpreted as the result of repeatedly applying the function x → a·x, for b repetitions, starting from the number 1. Analogously, a[4]b, tetration, represents the value obtained by repeatedly applying the function x → a^x, for b repetitions, starting from the number 1, and the pentation a[5]b represents the value obtained by repeatedly applying the function x → a[4]x, for b repetitions, starting from the number 1. This will be the notation used in the rest of the article.
In Knuth's up-arrow notation, a[5]b is represented as a↑↑↑b or a↑³b. In this notation, a↑b represents the exponentiation function and a↑↑b represents tetration. The operation can be easily adapted for hexation by adding another arrow.
In Conway chained arrow notation, a[5]b = a → b → 3.
Another proposed notation is , though this is not extensible to higher hyperoperations.
Examples
The values of the pentation function may also be obtained from the values in the fourth row of the table of values of a variant of the Ackermann function: if A(n, m) is defined by the Ackermann recu |
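The "repeatedly apply, starting from 1" definition in the row above translates directly into code. The sketch below uses Python integers (which are arbitrary-precision, though pentation overwhelms any machine for all but the smallest arguments):

```python
def tetration(a, b):
    """a[4]b: apply x -> a**x b times, starting from 1 (right-associative)."""
    result = 1
    for _ in range(b):
        result = a ** result
    return result

def pentation(a, b):
    """a[5]b: apply x -> a[4]x b times, starting from 1."""
    result = 1
    for _ in range(b):
        result = tetration(a, result)
    return result

# 2[5]3 = 2[4](2[4]2) = 2[4]4 = 2**2**2**2 = 65536
```

Even pentation(2, 4) = 2[4]65536 is a power tower of 65,536 twos, far beyond anything computable in practice, which is why examples stick to tiny arguments.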
https://en.wikipedia.org/wiki/Club%20of%20Rome | The Club of Rome is a nonprofit, informal organization of intellectuals and business leaders whose goal is a critical discussion of pressing global issues. The Club of Rome was founded in 1968 at Accademia dei Lincei in Rome, Italy. It consists of one hundred full members selected from current and former heads of state and government, UN administrators, high-level politicians and government officials, diplomats, scientists, economists, and business leaders from around the globe. It stimulated considerable public attention in 1972 with the first report to the Club of Rome, The Limits to Growth. Since 1 July 2008, the organization has been based in Winterthur, Switzerland.
History
Origins
In 1965, an Italian industrialist named Aurelio Peccei gave a speech about the dramatic scientific and technological changes happening in the world. The speech was noticed by Alexander King, a British scientist who had advised the British government and who was then serving as Director-General for Scientific Affairs at the OECD. King arranged a meeting with Peccei. The pair shared a lack of confidence that the problems faced by the world could be solved by development and technological progress.
In April 1968, Peccei and King convened a small international group of people from the fields of academia, civil society, diplomacy, and industry at Villa Farnesina in Rome. The background paper that set the tone of the meeting was entitled "A tentative framework for initiating system wide planning of world scope", by Austrian OECD consultant Erich Jantsch. However, the meeting was described as a "monumental flop", with discussions becoming bogged down in technical and semantic debates.
After the meeting, Peccei, King, Jantsch and Hugo Thiemann decided to form the Club of Rome, named for the city of their meeting.
First steps
Central to the formation of the club was Peccei's concept of the problematic. It was his opinion that viewing the problems of humankind—environmental dete |
https://en.wikipedia.org/wiki/Bitmain | Bitmain Technologies Ltd., is a privately owned company headquartered in Beijing, China, that designs application-specific integrated circuit (ASIC) chips for bitcoin mining.
History
It was founded by Micree Zhan and Jihan Wu in 2013. Prior to founding Bitmain, Zhan was running DivaIP, a startup that allowed users to stream television to a computer screen via a set-top box, and Wu was a financial analyst and private equity fund manager.
By 2018 it had become the world's largest designer of application-specific integrated circuit (ASIC) chips for bitcoin mining. The company also operates BTC.com and Antpool, historically two of the largest mining pools for bitcoin. In an effort to boost Bitcoin Cash (BCH) prices, Antpool "burned" 12% of the BCH it mined by sending it to irrecoverable addresses. Bitmain was reportedly profitable in early 2018, with a net profit of $742.7 million in the first half of 2018, and negative operating cash flow. TechCrunch reported that unsold inventory ballooned to one billion dollars in the second quarter of 2018. Bitmain's first product was the Antminer S1, an ASIC bitcoin miner producing 180 gigahashes per second (GH/s) while using 80–200 watts of power. As of 2018, Bitmain had 11 mining farms operating in China. Bitmain was involved in the 2018 Bitcoin Cash split, siding with Bitcoin Cash ABC alongside Roger Ver. In December 2018 the company laid off about half of its 3,000 staff. The company has since closed its offices in Israel and the Netherlands, while significantly downsizing its Texas mining operation. In February 2019, it was reported that Bitmain had lost "about $500 million" in the third quarter of 2018. Bitmain issued a statement saying "the rumors are not true and we will make announcements in due course."
In June 2021, Bitmain suspended spot sales of machines globally, aiming to support local prices following Beijing's crackdown.
Bitmain's attempts at initial public offering
In June 2018, Wu told Bloomberg that Bitmain was conside |
https://en.wikipedia.org/wiki/Greater%20trochanter | The greater trochanter of the femur is a large, irregular, quadrilateral eminence and a part of the skeletal system.
It is directed laterally and medially and slightly posteriorly. In the adult it is about 2–4 cm lower than the femoral head. Because the pelvic outlet in the female is larger than in the male, there is a greater distance between the greater trochanters in the female.
It has two surfaces and four borders. It is a traction epiphysis.
Surfaces
The lateral surface, quadrilateral in form, is broad, rough, convex, and marked by a diagonal impression, which extends from the postero-superior to the antero-inferior angle, and serves for the insertion of the tendon of the gluteus medius.
Above the impression is a triangular surface, sometimes rough for part of the tendon of the same muscle, sometimes smooth for the interposition of a bursa between the tendon and the bone. Below and behind the diagonal impression is a smooth triangular surface, over which the tendon of the gluteus maximus lies, a bursa being interposed.
The medial surface, of much less extent than the lateral, presents at its base a deep depression, the trochanteric fossa (digital fossa), for the insertion of the tendon of the obturator externus, and above and in front of this an impression for the insertion of the obturator internus and superior and inferior gemellus muscles.
Borders
The superior border is free; it is thick and irregular, and marked near the center by an impression for the insertion of the piriformis.
The inferior border corresponds to the line of junction of the base of the trochanter with the lateral surface of the body; it is marked by a rough, prominent, slightly curved ridge, which gives origin to the upper part of the vastus lateralis.
The anterior border is prominent and somewhat irregular; it affords insertion at its lateral part to the gluteus minimus.
The posterior border is very prominent and appears as a free, rounded edge, which bounds the back part of the |
https://en.wikipedia.org/wiki/Stuart%20Hall%20Building | The Stuart Hall Building is located at 2121 Central Street in the Crossroads Arts District neighborhood of Kansas City, Missouri. The former commercial building is known as the Freight House Lofts or Stuart Hall Lofts.
History
The seven-story building was constructed in 1910–1911 as a manufacturing facility for the National Biscuit Company, and the massive brick ovens still remain in the building. The building was later used by the Stuart Hall company, serving not only as a warehouse but also housing the company's operations. The company made various paper items, including envelopes, spiral notebooks and notebook paper.
The Stuart Hall building was converted into 127 residential lofts, following a $24 million renovation project that was completed in 2004. |
https://en.wikipedia.org/wiki/Ice%20pick | The ice pick is a pointed metal tool used from the 1800s to the 1900s to break, pick or chip at ice. The design consists of a sharp metal spike attached to a wooden handle and has remained relatively unchanged since its creation; the only notable difference is the material used for the handle, which is usually wood but can also be plastic or rubber. These materials can improve safety and allow the user to better grip the pick during use.
History
During the 1800s, ice blocks were gathered from frozen water sources and distributed to nearby homes. Ice picks were used to easily cut the blocks into smaller pieces for use. In many cases these smaller blocks were used in iceboxes. Iceboxes are similar in use to refrigerators, with the major difference being that iceboxes could only stay cold for a limited time. They needed to be restocked with ice regularly to continue proper functioning. The ice pick slowly began to lose popularity in the early to mid-1900s due to the creation of the modern refrigerator. Many refrigerators came with a built-in ice maker which allowed for easy access to small ice chunks at any time and eliminated the need for the ice pick. Today, the ice pick has become essentially obsolete.
Characteristics
The effectiveness of an ice pick depends on the weight of the pick and the force applied by the user. Most ice picks are pointed with a slight bend at the tip; the bent tip shatters the ice on striking and allows larger chunks to break off than a straight tip does. To keep the handle from slipping, most ice picks feature a knob at the end. Additionally, most picks have a wooden handle made of spruce, as it is sturdy; the wooden handle also helps prevent the user's hand from freezing. The steel bar of an ice pick is affixed to the wooden handle.
Use in bartending
One way that the ice pick is still used today is through bartending. Bar |
https://en.wikipedia.org/wiki/OmNovia%20Technologies | omNovia Technologies is a software company, founded by Shahin (Shawn) Shadfar in 2004, that provides a web conferencing platform for real-time, rich-media online meetings, webinars, webcasts and eLearning sessions with two to 5,000 interactive participants. The company's headquarters is located in Houston, Texas. According to the company, the name omNovia derives from "omni" and "innovation".
Products
omNovia Technologies provides an interactive web conferencing and webinar platform for online collaborative meetings, online trainings, remote learning sessions and live webcasting. The web-based platform hosts online meetings with up to 5,000 participants in a virtual meeting room. The interactive features of the platform include integrated full 1080p high-definition video and audio, a whiteboard, desktop sharing, polling, a movie and YouTube player, a Q&A Manager, a Cobrowser, a world map and Twitter integration. The company also provides webcasting and remote support technologies. The platform additionally supports multilingual web conferencing, allowing companies to host web conferences in a variety of languages simultaneously.
On April 7, 2009, omNovia was selected as the official webcast technology for the 2009 Digital Energy Conference organized by the Society of Petroleum Engineers.
In November 2011, omNovia was selected by the US State Department for its internal webinars.
About.com recognized omNovia as a "Great Alternative to Citrix GotoMeeting".
Awards
In 2008, omNovia Technologies took 19th place in the FastTech 50 awards program. In 2009 the company was nominated for the same award. |
https://en.wikipedia.org/wiki/Wireless%20Multimedia%20Extensions | Wireless Multimedia Extensions (WME), also known as Wi-Fi Multimedia (WMM), is a Wi-Fi Alliance interoperability certification, based on the IEEE 802.11e standard. It provides basic Quality of service (QoS) features to IEEE 802.11 networks. WMM prioritizes traffic according to four Access Categories (AC): voice (AC_VO), video (AC_VI), best effort (AC_BE), and background (AC_BK). However, it does not provide guaranteed throughput. It is suitable for well-defined applications that require QoS, such as Voice over IP (VoIP) on Wi-Fi phones (VoWLAN).
WMM replaces the Wi-Fi DCF distributed coordination function for CSMA/CA wireless frame transmission with the Enhanced Distributed Coordination Function (EDCF). According to version 1.1 of the WMM specification by the Wi-Fi Alliance, EDCF defines the Access Category labels AC_VO, AC_VI, AC_BE, and AC_BK for the Enhanced Distributed Channel Access (EDCA) parameters that a WMM-enabled station uses to control how long it sets its Transmission Opportunity (TXOP), according to the information transmitted by the access point to the station. It is implemented for wireless QoS between RF media.
Power Save Certification
The Wi-Fi Alliance has added Power Save Certification to the WMM specification. Power Save uses mechanisms from 802.11e and legacy 802.11 to save power (for battery-powered equipment) and fine-tune power consumption. The certification indicates that the certified product is targeted at power-critical applications such as mobile/smart phones and other portable devices that require battery power or recharging.
The underlying concept of WMM PowerSave is that the station (STA) triggers the release of buffered data from the access point (AP) by sending an uplink data frame. Upon receipt of such a data (trigger) frame the AP releases previously buffered data stored in each of its queues. Queues may be configured to be trigger enabled, (i.e. a receipt of a data frame corresponding |
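The four Access Categories described above are conventionally populated by mapping 802.1D user priorities (0–7) onto them; the table below follows the commonly documented WMM mapping (a sketch; the function name is illustrative):

```python
# Standard WMM mapping from 802.1D user priorities (0-7) to the four
# Access Categories; higher-priority categories receive statistically
# more favourable channel-access (EDCA) parameters, not guarantees.
UP_TO_AC = {
    1: "AC_BK", 2: "AC_BK",   # background
    0: "AC_BE", 3: "AC_BE",   # best effort
    4: "AC_VI", 5: "AC_VI",   # video
    6: "AC_VO", 7: "AC_VO",   # voice
}

def access_category(user_priority):
    """Return the WMM Access Category for an 802.1D user priority."""
    return UP_TO_AC[user_priority]
```

Note that the default priority 0 maps to best effort, above the background categories used by priorities 1 and 2, which is why the mapping is not simply monotonic in the priority number.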
https://en.wikipedia.org/wiki/Coherent%20file%20distribution%20protocol | Coherent File Distribution Protocol (CFDP) is an IETF-documented experimental protocol intended for high-speed one-to-many file transfers. Class 1 is assured delivery, class 2 is blind unassured delivery. |
https://en.wikipedia.org/wiki/Crunch%20%28video%20games%29 | In the video game industry, crunch (or crunch culture) is compulsory overtime during the development of a game. Crunch is common in the industry and can lead to work weeks of 65–80 hours for extended periods of time, often uncompensated beyond the normal working hours. It is often used as a way to cut the costs of game development, a labour-intensive endeavour. However, it leads to negative health impacts for game developers and a decrease in the quality of their work, which drives developers out of the industry temporarily or permanently. Critics of crunch note how it has become normalized within the game-development industry, to deleterious effects for all involved. A lack of unionization on the part of game developers has often been suggested as the reason crunch exists. Organizations such as Game Workers Unite aim to fight against crunch by forcing studios to honour game developers' labor rights.
Description
Crunch time vs. crunch culture
"Crunch time" is the point at which the team is thought to be failing to achieve milestones needed to launch a game on schedule. The complexity of work flow, reliance on third-party deliverables, and the intangibles of artistic and aesthetic demands in video-game creation create difficulty in predicting milestones. The use of crunch time is also seen to be exploitative of the younger male-dominated workforce in video games, who have not yet established families and who are eager to advance within the industry by working long hours. In some cases, the drive for crunch may come from the developers themselves: individual developers may choose to work extra hours, without any mandate, to ensure their product meets delivery milestones and is of high quality, which can pressure other developers to also commit to extra hours or to avoid taking time off so as not to appear to be slacking.
Because crunch time tends to come from a combination of corporate practices as well as peer influence, the term "crunch culture" is often used to discuss |
https://en.wikipedia.org/wiki/Telephone%20number%20mapping | Telephone number mapping is a system of unifying the international telephone number system of the public switched telephone network with the Internet addressing and identification name spaces. Internationally, telephone numbers are systematically organized by the E.164 standard, while the Internet uses the Domain Name System (DNS) for linking domain names to IP addresses and other resource information. Telephone number mapping systems provide facilities to determine applicable Internet communications servers responsible for servicing a given telephone number using DNS queries.
The most prominent facility for telephone number mapping is the E.164 number to URI mapping (ENUM) standard. It uses special DNS record types to translate a telephone number into a Uniform Resource Identifier (URI) or IP address that can be used in Internet communications.
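The ENUM translation described above is a purely mechanical digit reversal under the e164.arpa zone. A minimal sketch (the helper name is ours; the example uses a reserved UK drama-style number):

```python
def e164_to_enum_domain(number: str) -> str:
    """Translate an E.164 telephone number into the domain name that an
    ENUM client queries via DNS: strip non-digits, reverse the digits,
    dot-separate them, and append the e164.arpa suffix."""
    digits = "".join(ch for ch in number if ch.isdigit())
    return ".".join(reversed(digits)) + ".e164.arpa"
```

A resolver would then look up NAPTR records at the resulting name, e.g. `e164_to_enum_domain("+44 20 7946 0148")` yields `8.4.1.0.6.4.9.7.0.2.4.4.e164.arpa`.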
Rationale
Being able to dial telephone numbers the way customers have come to expect is considered crucial for the convergence of classic telephone service (PSTN) and Internet telephony (Voice over IP, VoIP), and for the development of new IP multimedia services. The problem of a single universal personal identifier for multiple communication services can be solved with different approaches. One simple approach is the Electronic Number Mapping System (ENUM), developed by the IETF, using existing E.164 telephone numbers, protocols and infrastructure to indirectly access different services available under a single personal identifier. ENUM also permits connecting the IP world to the telephone system in a seamless manner.
System details
For ENUM subscribers to be able to activate and use the ENUM service, they need to obtain three elements from a Registrar:
A personal Uniform Resource Identifier (URI) to be used on the IP part of the network, as explained below.
One E.164 regular personal telephone number associated with the personal URI, to be used on the PSTN part of the network.
Authority to write their call forwarding/ |
https://en.wikipedia.org/wiki/Great%20Salinity%20Anomaly | The Great Salinity Anomaly (GSA) originally referred to an event in the late 1960s to early 1970s where a large influx of freshwater from the Arctic Ocean led to a salinity anomaly in the northern North Atlantic Ocean, which affected the Atlantic meridional overturning circulation. Since then, the term "Great Salinity Anomaly" has been applied to successive occurrences of the same phenomenon, including the Great Salinity Anomaly of the 1980s and the Great Salinity Anomaly of the 1990s. The Great Salinity Anomalies were advective events, propagating to different sea basins and areas of the North Atlantic; the anomalies of the 1970s, 1980s, and 1990s each unfolded on a decadal scale.
Salinity anomaly occurrences
The Great Salinity Anomalies of the 1970s and 1980s were well-documented decadal-scale events, where minima in salinity (and temperature) were observed successively in different basins around the northern North Atlantic Ocean. The fact that the anomaly was observed in different basins one after another indicates that this was an advective event, accounted for by the movement of a fresh (and cold) anomaly along main ocean currents. For the 1970s GSA, the propagation was traceable around the Atlantic sub-polar gyre from its origins northeast of Iceland in the mid- to late 1960s until its return to the Greenland Sea in 1981–82. The 1980s GSA began with the anomaly being advected by the West Greenland Current in 1982 and ended up back in north Icelandic waters in 1989–90.
How salinity is measured
Salinity is a measure of how ‘salty’ water is, or the amount of dissolved matter within seawater. This is measured by passing seawater through a very fine filter to remove particulate matter. Historically, this was done using a glass fibre filter with a nominal pore size of 0.45 μm. More recently, though, smaller and smaller pores have been used.
Salinity is difficult to measure directly as dissolved matter in seawater is a complicated mixture of virtually |
https://en.wikipedia.org/wiki/Bioenergetics | Bioenergetics is a field in biochemistry and cell biology that concerns energy flow through living systems. This is an active area of biological research that includes the study of the transformation of energy in living organisms and the study of thousands of different cellular processes such as cellular respiration and the many other metabolic and enzymatic processes that lead to production and utilization of energy in forms such as adenosine triphosphate (ATP) molecules. That is, the goal of bioenergetics is to describe how living organisms acquire and transform energy in order to perform biological work. The study of metabolic pathways is thus essential to bioenergetics.
Overview
Bioenergetics is the part of biochemistry concerned with the energy involved in making and breaking of chemical bonds in the molecules found in biological organisms. It can also be defined as the study of energy relationships and energy transformations and transductions in living organisms. The ability to harness energy from a variety of metabolic pathways is a property of all living organisms. Growth, development, anabolism and catabolism are some of the central processes in the study of biological organisms, because the role of energy is fundamental to such biological processes. Life is dependent on energy transformations; living organisms survive because of exchange of energy between living tissues/ cells and the outside environment. Some organisms, such as autotrophs, can acquire energy from sunlight (through photosynthesis) without needing to consume nutrients and break them down. Other organisms, like heterotrophs, must intake nutrients from food to be able to sustain energy by breaking down chemical bonds in nutrients during metabolic processes such as glycolysis and the citric acid cycle. Importantly, as a direct consequence of the First Law of Thermodynamics, autotrophs and heterotrophs participate in a universal metabolic network—by eating autotrophs (plants), heterotrophs ha |
https://en.wikipedia.org/wiki/Commvault | Commvault is an American publicly traded data protection and data management software company headquartered in Tinton Falls, New Jersey. Commvault enterprise software can be used for data backup and recovery, cloud and infrastructure management, retention and compliance.
History
Commvault was originally formed in 1988 as a development group in Bell Labs focused on data management, backup, and recovery; it was later designated a business unit of AT&T Network Systems. After becoming a part of Lucent Technologies, the unit was sold in 1996 and became a corporation, with Scotty R. Neal as CEO.
In March 1998, Bob Hammer joined Commvault as chairman, president and CEO, and Al Bunte joined as vice president and COO. In 2000, the company began releasing products aimed at managing network storage. In March 2006, Commvault filed for an initial public offering, and officially went public later that year as CVLT on NASDAQ. At the end of 2013, the company moved from its space in Oceanport, New Jersey, to its new $146 million headquarters at the former Fort Monmouth in Tinton Falls, New Jersey.
On February 5, 2019, Sanjay Mirchandani replaced the retiring Hammer as president and CEO, and Nick Adamo was announced as chairman of the board. Mirchandani joined Commvault from Puppet, an Oregon-based IT automation company, where he served as CEO.
Software
Commvault software is an enterprise-level data platform that contains modules to back up, restore, archive, replicate, and search data. It is built from the ground up on a single platform and unified code base. It has four product lines: Complete Backup and Recovery, HyperScale integrated appliances, Orchestrate disaster recovery, and Activate analytics. The software is available across cloud and on-premises environments.
Data is protected by installing agent software on the physical or virtual hosts, which use operating system or application native APIs to protect data in a consistent state. Production data is processed by the |
https://en.wikipedia.org/wiki/Source%E2%80%93sink%20dynamics | Source–sink dynamics is a theoretical model used by ecologists to describe how variation in habitat quality may affect the population growth or decline of organisms.
Since quality is likely to vary among patches of habitat, it is important to consider how a low quality patch might affect a population. In this model, organisms occupy two patches of habitat. One patch, the source, is a high quality habitat that on average allows the population to increase. The second patch, the sink, is a very low quality habitat that, on its own, would not be able to support a population. However, if the excess of individuals produced in the source frequently moves to the sink, the sink population can persist indefinitely. Organisms are generally assumed to be able to distinguish between high and low quality habitat, and to prefer high quality habitat. However, ecological trap theory describes the reasons why organisms may actually prefer sink patches over source patches. Finally, the source–sink model implies that some habitat patches may be more important to the long-term survival of the population, and considering the presence of source–sink dynamics will help inform conservation decisions.
Theory development
Although the seeds of a source–sink model had been planted earlier, Pulliam is often recognized as the first to present a fully developed source–sink model. He defined source and sink patches in terms of their demographic parameters, or BIDE rates (birth, immigration, death, and emigration rates). In the source patch, birth rates were greater than death rates, causing the population to grow. The excess individuals were expected to leave the patch, so that emigration rates were greater than immigration rates. In other words, sources were a net exporter of individuals. In contrast, in a sink patch, death rates were greater than birth rates, resulting in a population decline toward extinction unless enough individuals emigrated from the source patch. Immigration ra |
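The BIDE-style dynamics described above can be illustrated with a minimal two-patch simulation. All rates and capacities below are invented for illustration; the point is only that the sink persists with immigration and goes extinct without it:

```python
def simulate(steps, source_growth=1.2, source_cap=100.0,
             sink_decay=0.8, migrate=True):
    """Two-patch source-sink sketch: the source grows to a carrying
    capacity and exports its surplus (emigration > immigration); the
    sink shrinks each step (deaths > births) and persists only via
    immigration from the source."""
    source, sink = 10.0, 10.0
    for _ in range(steps):
        source *= source_growth
        surplus = max(0.0, source - source_cap)   # emigrants from the source
        source = min(source, source_cap)
        sink = sink * sink_decay + (surplus if migrate else 0.0)
    return source, sink
```

With migration the sink population converges to a positive equilibrium (surplus / (1 - decay)); with `migrate=False` it decays toward extinction, matching Pulliam's description.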
https://en.wikipedia.org/wiki/Logic%20level | In digital circuits, a logic level is one of a finite number of states that a digital signal can inhabit. Logic levels are usually represented by the voltage difference between the signal and ground, although other standards exist. The range of voltage levels that represent each state depends on the logic family being used.
A logic-level shifter can be used to allow compatibility between different circuits.
2-level logic
In binary logic the two levels are logical high and logical low, which generally correspond to binary numbers 1 and 0 respectively or truth values true and false respectively. Signals with one of these two levels can be used in Boolean algebra for digital circuit design or analysis.
Active state
The use of either the higher or the lower voltage level to represent either logic state is arbitrary. The two options are active high (positive logic) and active low (negative logic). Active-high and active-low states can be mixed at will: for example, a read only memory integrated circuit may have a chip-select signal that is active-low, but the data and address bits are conventionally active-high. Occasionally a logic design is simplified by inverting the choice of active level (see De Morgan's laws).
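The active-high/active-low distinction can be illustrated with a small sketch. The register layout and names here are hypothetical, chosen only to mirror the chip-select example above:

```python
# Hypothetical status register: bit 0 is an active-low chip select (CS#),
# bit 1 is an active-high data-ready flag.
CS_N_MASK = 0b01
READY_MASK = 0b10

def chip_selected(port: int) -> bool:
    """Active-low: the signal is asserted when the bit reads 0."""
    return (port & CS_N_MASK) == 0

def data_ready(port: int) -> bool:
    """Active-high: the signal is asserted when the bit reads 1."""
    return (port & READY_MASK) != 0
```

The same physical voltage level thus means "asserted" for one signal and "deasserted" for the other; only the naming convention tells the reader which is which.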
The name of an active-low signal is historically written with a bar above it to distinguish it from an active-high signal. For example, the name Q̄, read Q bar or Q not, represents an active-low signal. The conventions commonly used are:
a bar above (Q̄)
a leading slash (/Q)
a lower-case n prefix or suffix (nQ or Q_n)
a trailing # (Q#), or
an _B or _L suffix (Q_B or Q_L).
Many control signals in electronics are active-low signals (usually reset lines, chip-select lines and so on). Logic families such as TTL can sink more current than they can source, so fanout and noise immunity increase. It also allows for wired-OR logic if the logic gates are open-collector/open-drain with a pull-up resistor. Examples of this are the I²C bus and the Controller Ar |
https://en.wikipedia.org/wiki/Fuchs%20relation | In mathematics, the Fuchs relation is a relation between the starting exponents of formal series solutions of certain linear differential equations, so called Fuchsian equations. It is named after Lazarus Immanuel Fuchs.
Definition Fuchsian equation
A linear differential equation in which every singular point, including the point at infinity, is a regular singularity is called a Fuchsian equation or an equation of Fuchsian type. For Fuchsian equations a formal fundamental system exists at any point, due to the Fuchsian theory.
Coefficients of a Fuchsian equation
Let a_1, ..., a_r be the regular singularities in the finite part of the complex plane of the linear differential equation
Lf = f^(n) + q_1(z) f^(n-1) + ... + q_{n-1}(z) f' + q_n(z) f = 0
with meromorphic functions q_i. For linear differential equations the singularities are exactly the singular points of the coefficients. Lf = 0 is a Fuchsian equation if and only if the coefficients are rational functions of the form
q_i(z) = Q_i(z) / ψ(z)^i,
with the polynomial ψ(z) := (z - a_1) ··· (z - a_r) and certain polynomials Q_i for i ∈ {1, ..., n}, such that deg(Q_i) ≤ i(r - 1). This means the coefficient q_i has poles of order at most i, for i ∈ {1, ..., n}.
Fuchs relation
Let Lf = 0 be a Fuchsian equation of order n with the singularities a_1, ..., a_r and the point at infinity. Let α_{i,1}, ..., α_{i,n} be the roots of the indicial polynomial relative to a_i, for i ∈ {1, ..., r}. Let β_1, ..., β_n be the roots of the indicial polynomial relative to ∞, which is given by the indicial polynomial of Lf transformed by z → 1/z at z = 0. Then the so called Fuchs relation holds:
Σ_{i=1}^{r} Σ_{j=1}^{n} α_{i,j} + Σ_{j=1}^{n} β_j = n(n-1)(r-1)/2.
The Fuchs relation can be rewritten as an infinite sum. Let P_ξ denote the indicial polynomial relative to ξ ∈ C ∪ {∞} of the Fuchsian equation Lf = 0. Define I: C ∪ {∞} → Z as
I(ξ) := tr(P_ξ) - n(n-1)/2 for ξ ∈ C, and I(∞) := tr(P_∞) + n(n-1)/2,
where tr gives the trace of a polynomial p, i.e., tr(p) denotes the sum of the polynomial's roots counted with multiplicity.
This means that I(ξ) = 0 for any ordinary point ξ, due to the fact that the indicial polynomial relative to any ordinary point is P_ξ(α) = α(α-1)···(α-n+1). The transformation z → 1/z, that is used to obtain the indicial equation relative to ∞, motivates the changed sign in the definition of I for ∞. The rewritten Fuchs relation is:
Σ_{ξ ∈ C ∪ {∞}} I(ξ) = 0. |
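As a sanity check (a standard example, not taken from the text above), the Gauss hypergeometric equation is Fuchsian of order n = 2 with r = 2 finite singular points, 0 and 1, plus the point at infinity; the Fuchs relation then forces the six exponents to sum to n(n-1)(r-1)/2 = 1:

```latex
% Hypergeometric equation: z(1-z)\,w'' + \bigl[c-(a+b+1)z\bigr]w' - ab\,w = 0
\begin{aligned}
\text{exponents at } z = 0      &:\quad 0,\; 1-c \\
\text{exponents at } z = 1      &:\quad 0,\; c-a-b \\
\text{exponents at } z = \infty &:\quad a,\; b
\end{aligned}
\qquad\Longrightarrow\qquad
0 + (1-c) + 0 + (c-a-b) + a + b = 1 .
```

The parameters a, b, c cancel exactly, as the relation predicts.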
https://en.wikipedia.org/wiki/Pain%20out%20of%20proportion | Pain out of proportion or pain out of proportion to physical examination is a medical sign where apparent pain in the individual does not correspond to other signs. It is found in a number of conditions, including:
Necrotizing fasciitis
Compartment syndrome
Mesenteric ischemia
Mueller-Weiss disease
The phrase is also used in reference to the medical diagnosis of malingering (ICD-10 Z76.5), as in "pain out of proportion to symptoms". |
https://en.wikipedia.org/wiki/Boole%27s%20expansion%20theorem | Boole's expansion theorem, often referred to as the Shannon expansion or decomposition, is the identity F = x · F_x + x′ · F_{x′}, where F is any Boolean function, x is a variable, x′ is the complement of x, and F_x and F_{x′} are F with the argument x set equal to 1 and to 0 respectively.
The terms F_x and F_{x′} are sometimes called the positive and negative Shannon cofactors, respectively, of F with respect to x. These are functions, computed by the restrict operator, restrict(F, x, 1) and restrict(F, x, 0) (see valuation (logic) and partial application).
It has been called the "fundamental theorem of Boolean algebra". Besides its theoretical importance, it paved the way for binary decision diagrams (BDDs), satisfiability solvers, and many other techniques relevant to computer engineering and formal verification of digital circuits.
In such engineering contexts (especially in BDDs), the expansion is interpreted as an if-then-else, with the variable x being the condition and the cofactors being the branches (F_x when x is true and F_{x′} when x is false).
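The expansion can be checked mechanically. The sketch below (function names are ours) computes both cofactors of an example function with a restrict-style helper and verifies the identity exhaustively over all inputs:

```python
from itertools import product

def cofactor(f, i, value):
    """Restrict operator: fix argument i of f to the given constant 0 or 1."""
    return lambda *args: f(*args[:i], value, *args[i:])

def shannon_ok(f, n, i=0):
    """Verify f = x_i * f|x_i=1  +  x_i' * f|x_i=0  over all 2^n inputs."""
    f1, f0 = cofactor(f, i, 1), cofactor(f, i, 0)
    for bits in product((0, 1), repeat=n):
        x = bits[i]
        rest = bits[:i] + bits[i + 1:]
        expanded = (x & f1(*rest)) | ((1 - x) & f0(*rest))
        if expanded != f(*bits):
            return False
    return True

# Example: a 3-input majority function.
maj = lambda a, b, c: (a & b) | (a & c) | (b & c)
```

Running `shannon_ok(maj, 3, i)` for any choice of the expanded variable i confirms the identity, which is exactly the if-then-else reading used in BDDs.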
Statement of the theorem
A more explicit way of stating the theorem is:
f(X_1, X_2, ..., X_n) = X_1 · f(1, X_2, ..., X_n) + X_1′ · f(0, X_2, ..., X_n).
Variations and implications
XOR-Form The statement also holds when the disjunction "+" is replaced by the XOR operator: F = x · F_x ⊕ x′ · F_{x′}
Dual form There is a dual form of the Shannon expansion (which does not have a related XOR form): F = (x + F_{x′}) · (x′ + F_x)
Repeated application for each argument leads to the Sum of Products (SoP) canonical form of the Boolean function F. For example, for F(x, y) that would be
F = x · y · F(1, 1) + x · y′ · F(1, 0) + x′ · y · F(0, 1) + x′ · y′ · F(0, 0)
Likewise, application of the dual form leads to the Product of Sums (PoS) canonical form (using the distributivity law of "+" over "·"):
F = (x + y + F(0, 0)) · (x + y′ + F(0, 1)) · (x′ + y + F(1, 0)) · (x′ + y′ + F(1, 1))
Properties of cofactors
Linear properties of cofactors:
For a Boolean function F which is made up of two Boolean functions G and H the following are true:
If F = G′ then F_x = G′_x
If F = G · H then F_x = G_x · H_x
If F = G + H then F_x = G_x + H_x
If F = G ⊕ H then F_x = G_x ⊕ H_x
Characteristics of unate functions:
If F is a unate function and...
If F is positive unate then F = x · F_x + F_{x′}
If F is negative unate then F = x′ · F_{x′} + F_x
Operations with cofactors
Boolean difference:
The Boolean difference or Boolean derivative of the f |
https://en.wikipedia.org/wiki/Nephelometer | A nephelometer or aerosol photometer is an instrument for measuring the concentration of suspended particulates in a liquid or gas colloid. A nephelometer measures suspended particulates by employing a light beam (source beam) and a light detector set to one side (often 90°) of the source beam. Particle density is then a function of the light reflected into the detector from the particles. To some extent, how much light reflects for a given density of particles is dependent upon properties of the particles such as their shape, color, and reflectivity. Nephelometers are calibrated to a known particulate, then use environmental factors (k-factors) to compensate for lighter or darker colored dusts accordingly. The k-factor is determined by the user by running the nephelometer next to an air sampling pump and comparing the results. There are a wide variety of research-grade nephelometers on the market, as well as open source varieties.
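The k-factor correction described above amounts to a simple scale factor derived from a side-by-side comparison with a gravimetric reference. A sketch (the numbers and names are invented for illustration):

```python
def k_factor(reference_mg_m3: float, photometer_mg_m3: float) -> float:
    """Ratio of the gravimetric (pump-and-filter) reference concentration
    to the nephelometer's reading for the same aerosol."""
    return reference_mg_m3 / photometer_mg_m3

def corrected(reading_mg_m3: float, k: float) -> float:
    """Apply the site- and dust-specific k-factor to subsequent readings."""
    return reading_mg_m3 * k
```

For example, if a dust that scatters light weakly reads 1.6 mg/m³ against a 2.0 mg/m³ gravimetric reference, k = 1.25, and a later reading of 0.8 mg/m³ is corrected to 1.0 mg/m³.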
Nephelometer uses
The main uses of nephelometers relate to air quality measurement for pollution monitoring, climate monitoring, and visibility. Airborne particles are commonly either biological contaminants, particulate contaminants, gaseous contaminants, or dust.
The accompanying chart shows the types and sizes of various particulate contaminants. This information helps understand the character of particulate pollution inside a building or in the ambient air, as well as the cleanliness level in a controlled environment.
Biological contaminants include mold, fungus, bacteria, viruses, animal dander, dust mites, pollen, human skin cells, cockroach parts, or anything alive or living at one time. They are of particular concern to indoor air quality specialists because they cause health problems. Levels of biological contamination depend on the humidity and temperature that support the livelihood of micro-organisms. The presence of pets, plants, rodents, and insects will raise the level of biological contamination.
Sheath air
Sheath |
https://en.wikipedia.org/wiki/Multiple-emitter%20transistor | A multiple-emitter transistor is a specialized bipolar transistor mostly used at the inputs of integrated circuit TTL NAND logic gates. Input signals are applied to the emitters. The voltage presented to the following stage is pulled low if any one or more of the base–emitter junctions is forward biased, allowing logical operations to be performed using a single transistor. Multiple-emitter transistors replace the diodes of diode–transistor logic (DTL) to make transistor–transistor logic (TTL), and thereby allow reduction of switching time and power dissipation.
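The input behaviour described above — the node is pulled low if any emitter input is low — is what lets the following inverting stage produce a NAND. A toy logic-level model (not a circuit simulation; names are ours):

```python
def multi_emitter_node(*inputs: int) -> int:
    """Logic level at the multiple-emitter transistor's output node:
    pulled low (0) if any base-emitter junction is forward biased,
    i.e. if any input is low; high (1) only when all inputs are high."""
    return 0 if any(v == 0 for v in inputs) else 1

def ttl_nand(*inputs: int) -> int:
    """The inverting output stage of the gate turns the node's AND
    behaviour into NAND."""
    return 1 - multi_emitter_node(*inputs)
```

The node thus computes the AND of the inputs at a voltage level, and a single inversion yields the gate's NAND truth table.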
Logic gate use of multiple-emitter transistors was patented in 1961 in the UK and in the US in 1962. |
https://en.wikipedia.org/wiki/Situational%20application | In computing, a situational application is "good enough" software created for a narrow group of users with a unique set of needs. The application typically (but not always) has a short life span, and is often created within the group where it is used, sometimes by the users themselves. As the requirements of a small team using the application change, the situational application often also continues to evolve to accommodate these changes. Although situational applications are specifically designed to embrace change, significant changes in requirements may lead to an abandonment of the situational application altogether – in some cases it is just easier to develop a new one than to evolve the one in use.
Characteristics
Situational applications are developed fast, easy to use, uncomplicated, and serve a unique set of requirements. They have a narrow focus on a specific business problem, and they are written in a way where if the business problem changes rapidly, so can the situational application.
This contrasts with more common enterprise applications, which are designed to address a large set of business problems, require meticulous planning, and impose a sometimes-slow and often-meticulous change process.
Origination
Clay Shirky in his essay entitled "Situated Software" described a type of software that "...is designed for use by a specific social group, rather than for a generic set of "users"." IBM later morphed the term into "situational applications".
Evolution
The successful large-scale implementation of a situational application environment in an organization requires a strategy, mindset, methodology and support structure quite different from traditional application development. This is now evolving as more companies learn how to best leverage the ideas behind situational applications. In addition, the advent of cloud-based application development and deployment platforms makes the implementation of a comprehensive situational application environment mu |
https://en.wikipedia.org/wiki/Glossary%20of%20field%20theory | Field theory is the branch of mathematics in which fields are studied. This is a glossary of some terms of the subject. (See field theory (physics) for the unrelated field theories in physics.)
Definition of a field
A field is a commutative ring in which 1 ≠ 0 and every nonzero element has a multiplicative inverse. In a field we can thus perform the operations addition, subtraction, multiplication, and division.
The non-zero elements of a field F form an abelian group under multiplication; this group is typically denoted by F×;
The ring of polynomials in the variable x with coefficients in F is denoted by F[x].
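The definition can be checked concretely for the integers modulo n (a standard fact, not stated in the glossary above): Z_n is a field exactly when n is prime, because only then does every nonzero element have a multiplicative inverse. A brute-force sketch:

```python
def is_field_Zn(n: int) -> bool:
    """Check the field axiom that fails for composite n: 1 != 0 and every
    nonzero element of Z_n has a multiplicative inverse mod n."""
    if n < 2:
        return False  # Z_1 has 1 == 0
    for a in range(1, n):
        if not any((a * b) % n == 1 for b in range(1, n)):
            return False  # a has no inverse, so Z_n is not a field
    return True
```

For example, 2 has no inverse modulo 6 (2·b mod 6 is always even), so Z_6 fails the test while Z_7 passes.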
Basic definitions
Characteristic The characteristic of the field F is the smallest positive integer n such that n·1 = 0; here n·1 stands for the sum of n summands 1 + 1 + 1 + ... + 1. If no such n exists, we say the characteristic is zero. Every non-zero characteristic is a prime number. For example, the rational numbers, the real numbers and the p-adic numbers have characteristic 0, while the finite field Zp with p being prime has characteristic p.
Subfield A subfield of a field F is a subset of F which is closed under the field operations + and * of F and which, with these operations, itself forms a field.
Prime field The prime field of the field F is the unique smallest subfield of F.
Extension field If F is a subfield of E then E is an extension field of F. We then also say that E/F is a field extension.
Degree of an extension Given an extension E/F, the field E can be considered as a vector space over the field F, and the dimension of this vector space is the degree of the extension, denoted by [E : F].
Finite extension A finite extension is a field extension whose degree is finite.
Algebraic extension If an element α of an extension field E over F is the root of a non-zero polynomial in F[x], then α is algebraic over F. If every element of E is algebraic over F, then E/F is an algebraic extension.
Generating set Given a field extension E/F and a subset S of E, we write F |
https://en.wikipedia.org/wiki/Milman%27s%20reverse%20Brunn%E2%80%93Minkowski%20inequality | In mathematics, particularly, in asymptotic convex geometry, Milman's reverse Brunn–Minkowski inequality is a result due to Vitali Milman that provides a reverse inequality to the famous Brunn–Minkowski inequality for convex bodies in n-dimensional Euclidean space Rn. Namely, it bounds the volume of the Minkowski sum of two bodies from above in terms of the volumes of the bodies.
Introduction
Let K and L be convex bodies in Rn. The Brunn–Minkowski inequality states that
vol(K + L)^{1/n} ≥ vol(K)^{1/n} + vol(L)^{1/n},
where vol denotes n-dimensional Lebesgue measure and the + on the left-hand side denotes Minkowski addition.
In general, no reverse bound is possible, since one can find convex bodies K and L of unit volume so that the volume of their Minkowski sum is arbitrarily large. Milman's theorem states that one can replace one of the bodies by its image under a properly chosen volume-preserving linear map so that the left-hand side of the Brunn–Minkowski inequality is bounded by a constant multiple of the right-hand side.
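To see why no reverse bound is possible without the linear maps (a standard example, not taken from the text above), consider thin boxes in R² of unit volume:

```latex
\begin{gathered}
K_\varepsilon = [0,\varepsilon^{-1}] \times [0,\varepsilon], \qquad
L_\varepsilon = [0,\varepsilon] \times [0,\varepsilon^{-1}], \qquad
\operatorname{vol}(K_\varepsilon) = \operatorname{vol}(L_\varepsilon) = 1, \\[2pt]
K_\varepsilon + L_\varepsilon = \bigl[0,\ \varepsilon + \varepsilon^{-1}\bigr]^{2}, \qquad
\operatorname{vol}(K_\varepsilon + L_\varepsilon) = (\varepsilon + \varepsilon^{-1})^{2}
\longrightarrow \infty \quad (\varepsilon \to 0).
\end{gathered}
```

Here a single volume-preserving rotation by 90° maps L_ε onto K_ε, after which the Minkowski sum has volume 4 — exactly the kind of repair by a volume-preserving linear map that Milman's theorem provides in general.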
The result is one of the main structural theorems in the local theory of Banach spaces.
Statement of the inequality
There is a constant C, independent of n, such that for any two centrally symmetric convex bodies K and L in Rn, there are volume-preserving linear maps φ and ψ from Rn to itself such that for any real numbers s, t > 0
vol(s·φ(K) + t·ψ(L))^{1/n} ≤ C (s·vol(K)^{1/n} + t·vol(L)^{1/n}).
One of the maps may be chosen to be the identity.
Notes |
https://en.wikipedia.org/wiki/Food%20intolerance | Food intolerance is a detrimental reaction, often delayed, to a food, beverage, food additive, or compound found in foods that produces symptoms in one or more body organs and systems, but generally refers to reactions other than food allergy. Food hypersensitivity is used to refer broadly to both food intolerances and food allergies.
Food allergies are immune reactions, typically an IgE reaction caused by the release of histamine, but also encompassing non-IgE immune responses. This mechanism means allergies typically produce an immediate reaction (a few minutes to a few hours) to foods.
Food intolerances can be classified according to their mechanism. Intolerance can result from the absence of specific chemicals or enzymes needed to digest a food substance, as in hereditary fructose intolerance. It may be a result of an abnormality in the body's ability to absorb nutrients, as occurs in fructose malabsorption. Food intolerance reactions can occur to naturally occurring chemicals in foods, as in salicylate sensitivity. Drugs sourced from plants, such as aspirin, can also cause these kinds of reactions.
Definitions
Food hypersensitivity is used to refer broadly to both food intolerances and food allergies. There are a variety of earlier terms which are no longer in use such as "pseudo-allergy".
Food intolerance reactions can include pharmacologic, metabolic, and gastro-intestinal responses to foods or food compounds. Food intolerance does not include either psychological responses or foodborne illness.
A non-allergic food hypersensitivity is an abnormal physiological response. It can be difficult to determine the poorly tolerated substance as reactions can be delayed, dose-dependent, and a particular reaction-causing compound may be found in many foods.
Metabolic food reactions are due to inborn or acquired errors of metabolism of nutrients, such as in lactase deficiency, phenylketonuria and favism.
Pharmacological reactions are generally due to low-molecular-we |
https://en.wikipedia.org/wiki/Centre%20for%20Astrophysics%20of%20the%20University%20of%20Porto | The Centro de Astrofísica da Universidade do Porto (Centre for Astrophysics of the University of Porto - CAUP), created in May 1989 by the Universidade do Porto, is the largest astronomy research institute in Portugal, with more than 60 people. Since 2000 it has been evaluated as Excellent by international panels organized under the auspices of the national science foundation (Fundação para a Ciência e Tecnologia - FCT). It is a private, non-profit research institute, recognized as being of public utility by the Portuguese Government.
Its objectives include the promotion and support of astronomy through
research
education at graduate and undergraduate levels
activities for primary and secondary schools
science outreach and the popularisation of astronomy
CAUP is responsible for the scientific management of the planetarium of Porto.
Research teams
Origin and Evolution of Stars and Planets - Star Formation and Early Evolution; Planetary Systems (Exoplanets); Stellar Populations and Stellar Evolution
Galaxies and Observational Cosmology - Physical properties of massive galaxies; Galaxy cluster astrophysics; Structure formation paradigms; Dynamical dark energy; Varying fundamental constants
Directors
Maria Teresa V. T. Lago (1989–2005)
Mário João P. F. G. Monteiro (2006–2012)
Pedro Pina Avelino (2013–2014)
João José F. G. A. Lima (2015–present) |
https://en.wikipedia.org/wiki/Universal%202nd%20Factor | Universal 2nd Factor (U2F) is an open standard that strengthens and simplifies two-factor authentication (2FA) using specialized Universal Serial Bus (USB) or near-field communication (NFC) devices based on similar security technology found in smart cards. It is succeeded by the FIDO2 Project, which includes the W3C Web Authentication (WebAuthn) standard and the FIDO Alliance's Client to Authenticator Protocol 2 (CTAP2).
While initially developed by Google and Yubico, with contribution from NXP Semiconductors, the standard is now hosted by the FIDO Alliance.
Advantages and disadvantages
While time-based one-time passwords (e.g. 6-digit codes generated on Google Authenticator) were a significant improvement over SMS-based security codes, a number of security vulnerabilities could still be exploited, which U2F sought to address.
In terms of disadvantages, one significant difference and potential drawback to be considered regarding hardware-based U2F solutions is that unlike with TOTP shared secret methods, there is no possibility of "backing up" recovery codes or shared secrets. If a hardware duplicate or alternative hardware key is not kept and the original U2F hardware key is lost, no recovery of the key is possible (because the private key exists only in hardware). Therefore, for services that do not provide any alternative account recovery method, the use of U2F should be carefully considered.
Design
The USB devices communicate with the host computer using the human interface device (HID) protocol, essentially mimicking a keyboard. This avoids the need for the user to install special hardware driver software in the host computer, and permits application software (such as a browser) to directly access the security features of the device without user effort other than possessing and inserting the device. Once communication is established, the application exercises a challenge–response authentication with the device using public-key c |
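The challenge–response flow can be sketched in Python. A real U2F token generates a per-service ECDSA P-256 key pair at registration and signs the challenge with the private key; in this toy model, an HMAC key shared with the verifier stands in for that signature, purely to show the message shape. All names, app IDs, and the `ToyU2FDevice` class are illustrative assumptions, not the actual protocol encoding:

```python
import hashlib
import hmac
import os

class ToyU2FDevice:
    """Models the shape of the U2F registration/authentication flow.
    A real token signs the challenge with a per-service ECDSA P-256
    private key; here an HMAC key shared with the verifier stands in
    for that signature. Purely illustrative."""

    def __init__(self):
        self._keys = {}                      # app_id -> per-service secret

    def register(self, app_id):
        """Real U2F returns a public key plus an opaque key handle."""
        self._keys[app_id] = os.urandom(32)
        return self._keys[app_id]

    def authenticate(self, app_id, challenge):
        """Real devices also require a physical user-presence test."""
        key = self._keys[app_id]
        return hmac.new(key, challenge + app_id.encode(), hashlib.sha256).digest()

# Relying-party side of the flow (app_id and challenge are illustrative).
device = ToyU2FDevice()
registered = device.register("https://example.com")
challenge = os.urandom(32)
response = device.authenticate("https://example.com", challenge)
expected = hmac.new(registered, challenge + b"https://example.com",
                    hashlib.sha256).digest()
ok = hmac.compare_digest(response, expected)   # token proved possession
```

Because each response is bound to both the fresh challenge and the application ID, a response captured for one site cannot be replayed against another, which is the property the real signature scheme provides.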
https://en.wikipedia.org/wiki/CryptGenRandom | CryptGenRandom is a deprecated cryptographically secure pseudorandom number generator function that is included in Microsoft CryptoAPI. In Win32 programs, Microsoft recommends its use anywhere random number generation is needed. A 2007 paper from Hebrew University suggested security problems in the Windows 2000 implementation of CryptGenRandom (assuming the attacker has control of the machine). Microsoft later acknowledged that the same problems exist in Windows XP, but not in Vista. Microsoft released a fix for the bug with Windows XP Service Pack 3 in mid-2008.
Background
The Win32 API includes comprehensive support for cryptographic security, including native TLS support (via the SCHANNEL API) and code signing. These capabilities are built on native Windows libraries for cryptographic operations, such as RSA and AES key generation. These libraries in turn rely on a cryptographically secure pseudorandom number generator (CSPRNG). CryptGenRandom is the standard CSPRNG for the Win32 programming environment.
Method of operation
Microsoft-provided cryptography providers share the same implementation of CryptGenRandom, currently based on an internal function called RtlGenRandom. Only a general outline of the algorithm had been published:
[RtlGenRandom] generates as specified in FIPS 186-2 appendix 3.1 with SHA-1 as the G function. And with entropy from:
The current process ID (GetCurrentProcessID).
The current thread ID (GetCurrentThreadID).
The tick count since boot time (GetTickCount).
The current time (GetLocalTime).
Various high-precision performance counters (QueryPerformanceCounter).
An MD4 hash of the user's environment block, which includes username, computer name, and search path. [...]
High-precision internal CPU counters, such as RDTSC, RDMSR, RDPMC
[omitted: long lists of low-level system information fields and performance counters]
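The outline above can be illustrated with a toy sketch: gather weak, machine-specific values into a SHA-1 entropy pool, then iterate SHA-1 as the G function in the FIPS 186-2 style. This mimics only the shape of the design, not the Windows implementation; the function names are illustrative, and the code must not be used for real cryptography:

```python
import hashlib
import os
import time

def toy_entropy_pool():
    """Mix weak, machine-specific values akin to the sources listed
    above (process id, wall-clock time, performance counter) into a
    SHA-1 pool. Toy illustration only; not a secure seed by itself."""
    pool = hashlib.sha1()
    pool.update(str(os.getpid()).encode())
    pool.update(str(time.time_ns()).encode())
    pool.update(str(time.perf_counter_ns()).encode())
    return pool.digest()

def toy_prng(seed, nbytes):
    """FIPS 186-2-style construction: repeatedly apply a one-way G
    function (here SHA-1) to an internal state, emitting one digest
    per round and advancing the state each time."""
    state, out = seed, b""
    while len(out) < nbytes:
        block = hashlib.sha1(state).digest()
        out += block
        # Advance the state so past outputs cannot be recomputed
        # from a later state compromise alone.
        state = hashlib.sha1(state + block).digest()
    return out[:nbytes]

rand = toy_prng(toy_entropy_pool(), 32)
```

The 2007 Hebrew University analysis targeted exactly this kind of state-advance logic: if the state update is invertible or the state is refreshed too rarely, an attacker who reads the state once can recover past and future outputs.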
Security
The security of a cryptosystem's CSPRNG is significant because it is the origin for dynamic key material. Keys |
https://en.wikipedia.org/wiki/Aerospace%20architecture | Aerospace architecture is broadly defined to encompass architectural design of non-habitable and habitable structures and living and working environments in aerospace-related facilities, habitats, and vehicles. These environments include, but are not limited to: science platform aircraft and aircraft-deployable systems; space vehicles, space stations, habitats and lunar and planetary surface construction bases; and Earth-based control, experiment, launch, logistics, payload, simulation and test facilities. Earth analogs to space applications may include Antarctic, desert, high altitude, underground, undersea environments and closed ecological systems.
The American Institute of Aeronautics and Astronautics (AIAA) Design Engineering Technical Committee (DETC) meets several times a year to discuss policy, education, standards, and practice issues pertaining to aerospace architecture.
The role of appearance in aerospace architecture
"The role of design creates and develops concepts and specifications that seek to simultaneously and synergistically optimize function, production, value and appearance." In connection with, and with respect to, human presence and interactions, appearance is a component of human factors and includes considerations of human characteristics, needs and interests.
Appearance in this context refers to all visual aspects – the statics and dynamics of form(s), color(s), patterns, and textures in respect to all products, systems, services, and experiences. Appearance/esthetics affects humans both psychologically and physiologically and can affect and improve human efficiency, attitude, and well-being.
In reference to non-habitable design, the influence of appearance is minimal if not non-existent. However, as the aerospace industry continues to grow rapidly, and as missions to put humans on Mars and back on the Moon are announced, the role that appearance/esthetics plays in maintaining crew well-being and health on multi-month or multi-year missions b |
https://en.wikipedia.org/wiki/SPIKE%20algorithm | The SPIKE algorithm is a hybrid parallel solver for banded linear systems developed by Eric Polizzi and Ahmed Sameh
Overview
The SPIKE algorithm deals with a linear system AX = F, where A is a banded n × n matrix of bandwidth much less than n, and F is an n × s matrix containing s right-hand sides. It is divided into a preprocessing stage and a postprocessing stage.
Preprocessing stage
In the preprocessing stage, the linear system AX = F is partitioned into a block tridiagonal form, with diagonal blocks A_1, ..., A_p, subdiagonal blocks B_2, ..., B_p and superdiagonal blocks C_1, ..., C_{p-1}.
Assume, for the time being, that the diagonal blocks A_j (j = 1, ..., p with p ≥ 2) are nonsingular. Define a block diagonal matrix
D = diag(A_1, ..., A_p),
then D is also nonsingular. Left-multiplying D^{-1} to both sides of the system gives
SX = G, where S = D^{-1}A and G = D^{-1}F,
which is to be solved in the postprocessing stage. Left-multiplication by D^{-1} is equivalent to solving systems of the form A_j V_j = C_j, A_j W_j = B_j and A_j G_j = F_j (omitting B_1 and W_1 for j = 1, and C_p and V_p for j = p), which can be carried out in parallel.
Due to the banded nature of A, only a few leftmost columns of each V_j and a few rightmost columns of each W_j can be nonzero. These columns are called the spikes.
Postprocessing stage
Without loss of generality, assume that each spike contains exactly m columns (m is much less than n), padding the spike with columns of zeroes if necessary. Partition the spikes in all V_j and W_j into
V_j = [V_j^(t); V_j'; V_j^(b)] and W_j = [W_j^(t); W_j'; W_j^(b)],
where V_j^(t), V_j^(b), W_j^(t) and W_j^(b) are of dimensions m × m. Partition similarly all X_j and G_j into
X_j = [X_j^(t); X_j'; X_j^(b)] and G_j = [G_j^(t); G_j'; G_j^(b)].
Notice that the system produced by the preprocessing stage can be reduced to a block pentadiagonal system of much smaller size (recall that m is much less than n), involving only the blocks X_j^(t) and X_j^(b),
which we call the reduced system and denote by S̃X̃ = G̃.
Once all X_j^(t) and X_j^(b) are found, all X_j' can be recovered with perfect parallelism via X_j' = G_j' − W_j' X_{j-1}^(b) − V_j' X_{j+1}^(t) (omitting the W term for j = 1 and the V term for j = p).
SPIKE as a polyalgorithmic banded linear system solver
Despite being logically divided into two stages, computationally, the SPIKE algorithm comprises three stages:
factorizing the diagonal blocks,
computing the spikes,
solving the reduced system.
Each of these stages can be accomplished in several ways, allowing a multitude of variants. Two notable variants are the recursive SPIKE algorith |
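The two-partition case of the stages above can be sketched in NumPy. The function name and the diagonally dominant toy system are illustrative assumptions, and the block solves that a real SPIKE implementation distributes across processors are done serially here:

```python
import numpy as np

def spike_solve(A, f, k, m):
    """Minimal 2-partition SPIKE sketch: solve A x = f for a banded A
    (bandwidth m) split into two diagonal blocks at index k."""
    A1, A2 = A[:k, :k], A[k:, k:]
    C1, B2 = A[:k, k:], A[k:, :k]       # off-diagonal coupling blocks
    # Preprocessing: independent block solves (parallel in real SPIKE).
    V1 = np.linalg.solve(A1, C1)        # spike: only leftmost m cols nonzero
    W2 = np.linalg.solve(A2, B2)        # spike: only rightmost m cols nonzero
    g1 = np.linalg.solve(A1, f[:k])
    g2 = np.linalg.solve(A2, f[k:])
    # Reduced system couples the bottom m entries of x1 with the top m of x2.
    R = np.block([[np.eye(m),   V1[-m:, :m]],
                  [W2[:m, -m:], np.eye(m)]])
    z = np.linalg.solve(R, np.concatenate([g1[-m:], g2[:m]]))
    x1_b, x2_t = z[:m], z[m:]
    # Recovery: perfectly parallel back-substitution.
    x1 = g1 - V1[:, :m] @ x2_t
    x2 = g2 - W2[:, -m:] @ x1_b
    return np.concatenate([x1, x2])

# Toy diagonally dominant tridiagonal system (so the blocks are nonsingular).
rng = np.random.default_rng(0)
n = 8
A = (np.diag(rng.uniform(4, 5, n))
     + np.diag(rng.uniform(-1, 1, n - 1), 1)
     + np.diag(rng.uniform(-1, 1, n - 1), -1))
f = rng.uniform(-1, 1, n)
x = spike_solve(A, f, k=4, m=1)
err = float(np.max(np.abs(A @ x - f)))  # residual, near machine precision
```

With p partitions the same idea yields the block pentadiagonal reduced system of the article; here p = 2 collapses it to a single 2m × 2m coupling system.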
https://en.wikipedia.org/wiki/KaiB | KaiB is a gene located in the highly-conserved kaiABC gene cluster of various cyanobacterial species. Along with KaiA and KaiC, KaiB plays a central role in operation of the cyanobacterial circadian clock. Discovery of the Kai genes marked the first-ever identification of a circadian oscillator in a prokaryotic species. Moreover, characterization of the cyanobacterial clock demonstrated the existence of transcription-independent, post-translational mechanisms of rhythm generation, challenging the universality of the transcription-translation feedback loop model of circadian rhythmicity.
Discovery
Prokaryotic circadian rhythms
Circadian rhythms – endogenous, entrainable oscillations in biological processes with periods that roughly correspond to the 24-hour day – were once believed to be an exclusive property of eukaryotic lifeforms. Prokaryotes were thought to lack the cellular complexity to maintain persistent, temperature-compensated timekeeping. In addition, the widely supported "circadian-infradian rule" stipulated that cellular functions could be coupled to a circadian oscillator only in cells dividing no faster than once in a 24-hour period. Prokaryotes, which often undergo cellular division multiple times in a single day, failed to meet this condition.
Over time, mounting evidence began to challenge this assertion and supported the existence of a bacterial circadian rhythm. For example, discrete temporal separation of photosynthesis and nitrogen fixation observed in cyanobacteria suggested the existence of some mechanism of circadian control. Finally, in 1986 Tan-Chi Huang and colleagues discovered and characterized robust, 24-hour rhythms of nitrogen fixation in Synechococcus cyanobacteria, demonstrating circadian rhythmicity in a prokaryotic species. Following these discoveries, chronobiologists set out to identify the molecular mechanisms governing operation of the cyanobacterial clock.
Discovery of the cyanobacterial clock
Takao Kondo, Carl Johns |
https://en.wikipedia.org/wiki/Laboratory%20automation | Laboratory automation is a multi-disciplinary strategy to research, develop, optimize and capitalize on technologies in the laboratory that enable new and improved processes. Laboratory automation professionals are academic, commercial and government researchers, scientists and engineers who conduct research and develop new technologies to increase productivity, elevate experimental data quality, reduce lab process cycle times, or enable experimentation that otherwise would be impossible.
The most widely known application of laboratory automation technology is laboratory robotics. More generally, the field of laboratory automation comprises many different automated laboratory instruments, devices (the most common being autosamplers), software algorithms, and methodologies used to enable, expedite and increase the efficiency and effectiveness of scientific research in laboratories.
The application of technology in today's laboratories is required to achieve timely progress and remain competitive. Laboratories devoted to activities such as high-throughput screening, combinatorial chemistry, automated clinical and analytical testing, diagnostics, large-scale biorepositories, and many others, would not exist without advancements in laboratory automation. Some universities offer entire programs that focus on lab technologies. For example, Indiana University-Purdue University at Indianapolis offers a graduate program devoted to Laboratory Informatics. Also, the Keck Graduate Institute in California offers a graduate degree with an emphasis on development of assays, instrumentation and data analysis tools required for clinical diagnostics, high-throughput screening, genotyping, microarray technologies, proteomics, imaging and other applications.
History
At least since 1875 there have been reports of automated devices for scientific investigation. These first devices were mostly built by scientists themselves in order to solve problems in the laboratory. After the s |
https://en.wikipedia.org/wiki/Sergei%20Bernstein | Sergei Natanovich Bernstein (Сергей Натанович Бернштейн, sometimes romanized as Bernshtein; 5 March 1880 – 26 October 1968) was a Ukrainian and Russian mathematician of Jewish origin known for contributions to partial differential equations, differential geometry, probability theory, and approximation theory.
Bernstein was born into a Jewish family living in Odessa. After high school, Bernstein went to Paris to study mathematics. He returned to Russia in 1905 and taught at Kharkiv University from 1908 to 1933. He was made an ordinary professor in 1920. Bernstein later worked at the Mathematical Institute of the USSR Academy of Sciences in Leningrad, and also taught at the University and Polytechnic Institute. From January 1939, Bernstein also worked at Moscow University. He and his wife were evacuated to Borovoe, Kazakhstan in 1941. From 1943 he worked at the Mathematical Institute in Moscow, and edited Chebyshev's complete works. In 1947 he was dismissed from the University and became Head of the Department of Constructive Function Theory at the Steklov Institute. He died in Moscow in 1968.
Work
Partial differential equations
In his doctoral dissertation, submitted in 1904 to the Sorbonne, Bernstein solved Hilbert's nineteenth problem on the analytic solution of elliptic differential equations. His later work was devoted to Dirichlet's boundary problem for non-linear equations of elliptic type, where, in particular, he introduced a priori estimates.
Probability theory
In 1917, Bernstein suggested the first axiomatic foundation of probability theory, based on the underlying algebraic structure. It was later superseded by the measure-theoretic approach of Kolmogorov.
In the 1920s, he introduced a method for proving limit theorems for sums of dependent random variables.
Approximation theory
Through his application of Bernstein polynomials, he laid the foundations of constructive function theory, a field studying the connection between smoothness properties of a function and its approximation |
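Bernstein's polynomial proof of the Weierstrass approximation theorem is directly computable; a minimal sketch, with illustrative function and variable names:

```python
from math import comb  # Python 3.8+

def bernstein_approx(f, n, x):
    """Evaluate the degree-n Bernstein polynomial of f at x in [0, 1]:
    B_n(f)(x) = sum_{k=0}^{n} f(k/n) * C(n, k) * x^k * (1 - x)^(n - k)."""
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

# B_n(f) converges to f uniformly on [0, 1] as n grows. For f(t) = t^2
# the identity B_n(f)(x) = x^2 + x(1 - x)/n makes the error exact.
f = lambda t: t * t
err = abs(bernstein_approx(f, 200, 0.3) - f(0.3))  # = 0.3 * 0.7 / 200
```

The construction is "constructive" in exactly the sense the article describes: the approximating polynomial is written down explicitly from sampled values f(k/n), with an error bound tied to the smoothness of f.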
https://en.wikipedia.org/wiki/Patricia%20Campbell | Patricia F. Campbell is an American mathematician and mathematics educator. She is a professor in the Department of Teaching and Learning, Policy and Leadership at the University of Maryland, College Park. Her work has concerned the improvement of mathematics education in minority and lower-income secondary schools, and the effectiveness of mathematics coaching in mathematics education.
Campbell is a graduate of the College of St. Francis. After earning a master's degree in mathematics at Michigan State University, she completed a Ph.D. in mathematics education at the Florida State University. She was co-chair of the American Educational Research Association Special Interest Group on Research in Math Education for 2007–2009.
In 2011 she was given the Twenty-First Annual Louise Hay Award for Contributions to Mathematics Education. The Association for Women in Mathematics awarded it to her "for her contributions to the teaching and learning of mathematics in urban settings and for working in schools that serve predominantly minority populations from low-income backgrounds". |
https://en.wikipedia.org/wiki/Teratology | Teratology is the study of abnormalities of physiological development in organisms during their life span. It is a sub-discipline in medical genetics which focuses on the classification of congenital abnormalities in dysmorphology caused by teratogens. Teratogens are substances that may cause non-heritable birth defects via a toxic effect on an embryo or fetus. Defects include malformations, disruptions, deformations, and dysplasia that may cause stunted growth, delayed mental development, or other congenital disorders that lack structural malformations. The related term developmental toxicity includes all manifestations of abnormal development that are caused by environmental insult. The extent to which teratogens will impact an embryo is dependent on several factors, such as how long the embryo has been exposed, the stage of development the embryo was in when exposed, the genetic makeup of the embryo, and the transfer rate of the teratogen.
Etymology
The term was borrowed in 1842 from the French tératologie, where it was formed in 1830 from the Greek τέρας (word stem τερατ-), meaning "sign sent by the gods, portent, marvel, monster", and -λογία (-ology), used to designate a discourse, treatise, science, theory, or study of some topic.
Old literature referred to abnormalities of all kinds under the Latin term Lusus naturae (lit. "freak of nature"). As early as the 17th century, Teratology referred to a discourse on prodigies and marvels of anything so extraordinary as to seem abnormal. In the 19th century, it acquired a meaning more closely related to biological deformities, mostly in the field of botany. Currently, its most instrumental meaning is that of the medical study of teratogenesis, congenital malformations or individuals with significant malformations. Historically, people have used many pejorative terms to describe/label cases of significant physical malformations. In the 1960s, David W. Smith of the University of Washington Medical School (one of the researchers who became know |
https://en.wikipedia.org/wiki/Eulerian%20coherent%20structure | In applied mathematics, objective Eulerian coherent structures (OECSs) are the surfaces or curves that exert the strongest instantaneous influence on nearby trajectories in a dynamical system over short time-scales; they are the short-time limit of Lagrangian coherent structures (LCSs). Such influence can be of different types, but OECSs invariably create a short-term coherent trajectory pattern for which they serve as a theoretical centerpiece. While LCSs are intrinsically tied to a specific finite time interval, OECSs can be computed at any time instant, regardless of the multiple and generally unknown time scales of the system.
In observations of tracer patterns in nature, one readily identifies short-term variability in material structures, such as emerging and dissolving coherent features. However, it is often the underlying structure creating these features that is of interest. While individual tracer trajectories forming coherent patterns are generally sensitive to changes in their initial conditions and the system parameters, OECSs are robust and reveal the instantaneous, time-varying skeleton of complex dynamical systems. Although OECSs are defined for general dynamical systems, their role in creating coherent patterns is perhaps most readily observable in fluid flows. OECSs are therefore suitable for a number of applications, ranging from flow control to environmental assessment, such as now-casting or short-term forecasting of pattern evolution, where quick operational decisions need to be made. Examples include floating debris, oil spills, surface drifters, and control of unsteady flow separation. |
https://en.wikipedia.org/wiki/Tony%20Koester | J. Anthony Koester, more commonly known as Tony Koester, is a well-known member of the United States model railroading community. Along with his friend Allen McClelland and his Virginian & Ohio, Koester popularized the idea of proto-freelancing with his HO scale model railroad, the Allegheny Midland. At Purdue University in the early 1960s, he studied electrical engineering, communication, and art. While at Purdue, he was also a member and president of the Purdue Railroad Club. In 1966, with Glenn Pizer he co-founded the Nickel Plate Road Historical & Technical Society to preserve the memory of his favorite railroad.
In 1969, Koester and his wife and children relocated from Indiana to northeastern New Jersey to take a position with Carstens Publications as editor of Railroad Model Craftsman. In 1973, the company relocated to Newton in northwestern New Jersey, and the Koesters built a new home that housed his last two model railroads. He had previously developed a close friendship with Jim Boyd, who joined Carstens in 1971 and in 1975 became the editor of Railfan & Railroad. It was Koester's exposure to the V&O and eastern mountain coal railroading in the Appalachians that led him to develop the concept of the Allegheny Midland. Blending elements of Nickel Plate and some C&O equipment and operation with Chesapeake & Ohio structures and scenery, the Allegheny Midland became the Nickel Plate's plausible West Virginia coal-hauler. Regular updates in the pages of Railroad Model Craftsman made the Allegheny Midland known to modelers across America and beyond.
Koester left Carstens in 1981 and took a job with Bell Laboratories, editing their publications for two decades. In November 1985, he also began writing a monthly column called "Trains of Thought" in the pages of Model Railroader, published by Kalmbach. In 1995, he became the founding editor of the annual Model Railroad Planning. After 20 years editing telecommunication journals and the corporate science magazine a |
https://en.wikipedia.org/wiki/TIM-600 | TIM-600 was an important computer system in the TIM series of microcomputers from the Mihajlo Pupin Institute, Belgrade, developed from 1987 to 1988 (see ref. Lit. #1, #2 and #6). It was based on the Intel 80386 microprocessor and 80387 coprocessor. It had a word length of 32 bits, a basic clock rate of 20 MHz, and ran the Unix V.3 operating system. The TIM-600 computer system was presented at the Munich International Computer Exhibition in September 1988.
System specifications
TIM-600 architecture was based on three system buses (32, 16 and 8 bits wide, respectively). The CPU performed 5,000,000 simple operations per second. Primary RAM had a maximum capacity of 8 × 2 MB. Up to eight TIM terminals or other equipment units could be connected via RS-232C, and a Centronics-type interface was used for line printers. Two hard disks could also be connected, as well as magnetic cassette and diskette drives.
Software
The TIM-600 supported the programming languages C++, Fortran, COBOL, BASIC and Pascal. Database management was performed by Informix and Oracle software.
Applications
The TIM-600 computer system was used for business data processing in many offices in Serbia, for example in public, health, and scientific organizations; for process automation in industrial production; in road traffic control; in some banks; for military and government services, etc.
See also
Mihajlo Pupin Institute
Personal computer |
https://en.wikipedia.org/wiki/Yurii%20Egorov | Yurii (or Yuri) Vladimirovich Egorov (Юрий Владимирович Егоров, born 14 July 1938 in Moscow, died October 2018 in Toulouse) was a Russian-Soviet mathematician who specialized in differential equations.
Biography
In 1960 he completed his undergraduate studies at the Mechanics and Mathematics Faculty of Moscow State University (MSU). In 1963 he received his Ph.D. from MSU with the thesis "Некоторые задачи теории оптимального управления в бесконечномерных пространствах" ("Some Problems of Optimal Control Theory in Infinite-Dimensional Spaces"). In 1970 he received his Russian doctorate of sciences (Doktor Nauk) from MSU with the thesis "О локальных свойствах псевдодифференциальных операторов главного типа" ("Local Properties of Pseudodifferential Operators of Principal Type"). He was employed at MSU from 1961 to 1992, and he was a full professor in the Department of Differential Equations of the Mechanics and Mathematics Faculty there from 1973 to 1992. From 1992 he was a professor of mathematics at Paul Sabatier University (Toulouse III).
Egorov's research deals with differential equations and applications in mathematical physics, spectral theory, and optimal control theory. In 1970 he was an Invited Speaker of the ICM in Nice.
Awards
1981 — Lomonosov Memorial Prize (established in 1944) — for his series of publications on "Субэллиптические операторы и их применения к исследованию краевых задач" (Subelliptic operators and their applications to the study of boundary value problems)
1988 — USSR State Prize (with several co-authors) — for their series of publications (1958–1985) on "Исследования краевых задач для дифференциальных операторов и их приложения в математической физике" (Research on boundary value problems and their applications in mathematical physics)
1998 — Petrovsky Award (jointly with V. A. Kondratiev) for their series of publications on "Исследование спектра эллиптических операторов" (The study of the spectra of elliptic operators)
Selected p |
https://en.wikipedia.org/wiki/Photoactivated%20adenylyl%20cyclase | Photoactivated adenylyl cyclase (PAC) is a protein consisting of an adenylyl cyclase enzyme domain directly linked to a BLUF (blue light receptor using FAD) type light sensor domain. When illuminated with blue light, the enzyme domain becomes active and converts ATP to cAMP, an important second messenger in many cells. In the unicellular flagellate Euglena gracilis, PACα and PACβ (euPACs) serve as a photoreceptor complex that senses light for photophobic responses and phototaxis. Small but potent PACs were identified in the genome of the bacteria Beggiatoa (bPAC) and Oscillatoria acuminata (OaPAC). While natural bPAC has some enzymatic activity in the absence of light, variants with no dark activity have been engineered (PACmn).
Use of PACs as optogenetic tools
As PACs consist of a light sensor and an enzyme in a single protein, they can be expressed in other species and cell types to manipulate cAMP levels with light. When bPAC is expressed in mouse sperm, blue light illumination speeds up the swimming of transgenic sperm cells and aids fertilization. When expressed in neurons, illumination changes the branching pattern of growing axons. PAC has been used in mice to clarify the function of neurons in the hypothalamus, which use cAMP signaling to control mating behavior. Expression of PAC together with K+-specific cyclic-nucleotide-gated ion channels (CNGs) has been used to hyperpolarize neurons at very low light levels, which prevents them from firing action potentials.
Other light-activated cyclases
Photoactivated guanylyl cyclases have been discovered in the aquatic fungi Blastocladiella emersonii and Catenaria anguillulae. Unlike PACs, these light-activated cyclases use retinal as their light sensor and are therefore rhodopsin guanylyl cyclases (RhGC). When expressed in Xenopus oocytes or mammalian neurons, RhGCs generate cGMP in response to green light. Therefore, they are considered useful optogenetic tools to investigate cGMP signaling. |
https://en.wikipedia.org/wiki/Floorplan%20%28microelectronics%29 | In electronic design automation, a floorplan of an integrated circuit is a schematic representation of tentative placement of its major functional blocks.
In the modern electronic design process, floorplans are created during the floorplanning design stage, an early stage in the hierarchical approach to integrated circuit design.
Depending on the design methodology being followed, the actual definition of a floorplan may differ.
Floorplanning
Floorplanning takes account of some of the geometrical constraints in a design. Here are some examples:
bonding pads for off-chip connections (often using wire bonding) are normally located at the circumference of the chip;
line drivers often have to be located as close to bonding pads as possible;
the chip is therefore in some cases given a minimum area in order to fit the required number of pads;
areas are clustered in order to limit data paths, frequently yielding defined structures such as cache RAM, multiplier, barrel shifter, line driver and arithmetic logic unit;
purchased intellectual property blocks (IP-blocks), such as a processor core, come in predefined area blocks;
some IP-blocks come with legal limitations such as permitting no routing of signals directly above the block.
Mathematical models and optimization problems
In some approaches the floorplan may be a partition of the whole chip area into axis aligned rectangles to be occupied by IC blocks. This partition is subject to various constraints and requirements of optimization: block area, aspect ratios, estimated total measure of interconnects, etc.
Finding good floorplans has been a research area in combinatorial optimization. Most of the problems related to finding optimal floorplans are NP-hard, i.e., require vast computational resources. Therefore, the most common approach is to use various optimization heuristics for finding good solutions.
Another approach is to restrict design methodology to certain classes of floorplans, such as sliceable floor |
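One reason the restricted class of sliceable floorplans is attractive is that candidates are cheap to evaluate: a slicing tree of horizontal and vertical cuts determines the chip's bounding box bottom-up. A minimal sketch, where the tuple encoding of the tree is an assumption of this example:

```python
# Leaf: a block given as (width, height).
# Internal node: (cut, left, right) with cut 'H' (stack) or 'V' (side by side).
def bbox(node):
    """Return the (width, height) bounding box of a slicing-tree floorplan."""
    if isinstance(node[0], (int, float)):   # leaf block
        return node
    cut, left, right = node
    wl, hl = bbox(left)
    wr, hr = bbox(right)
    if cut == 'H':                          # horizontal cut: stack vertically
        return max(wl, wr), hl + hr
    return wl + wr, max(hl, hr)             # vertical cut: place side by side

# Three blocks: a 2x3 stacked on a 4x1, then placed beside a 3x5 block.
tree = ('V', ('H', (2, 3), (4, 1)), (3, 5))
w, h = bbox(tree)
```

An optimization heuristic (e.g., simulated annealing over tree mutations) can use such an evaluator as its inner loop, which is why slicing floorplans trade some optimality for tractability.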
https://en.wikipedia.org/wiki/Palmar%20metacarpal%20veins | The palmar metacarpal veins (or volar metacarpal veins) drain the metacarpal region of the palm, eventually draining into the deep veins of the arm. |
https://en.wikipedia.org/wiki/DSniff | dSniff is a set of password sniffing and network traffic analysis tools written by security researcher and startup founder Dug Song to parse different application protocols and extract relevant information. dsniff, filesnarf, mailsnarf, msgsnarf, urlsnarf, and webspy passively monitor a network for interesting data (passwords, e-mail, files, etc.). arpspoof, dnsspoof, and macof facilitate the interception of network traffic normally unavailable to an attacker (e.g., due to layer-2 switching). sshmitm and webmitm implement active man-in-the-middle attacks against redirected SSH and HTTPS sessions by exploiting weak bindings in ad-hoc PKI.
Overview
The applications sniff usernames and passwords, web pages being visited, contents of an email, etc. As the name implies, dsniff is a network sniffer, but it can also be used to disrupt the normal behavior of switched networks and cause network traffic from other hosts on the same network segment to be visible, not just traffic involving the host dsniff is running on.
It handles FTP, Telnet, SMTP, HTTP, POP, poppass, NNTP, IMAP, SNMP, LDAP, Rlogin, RIP, OSPF, PPTP MS-CHAP, NFS, VRRP, YP/NIS, SOCKS, X11, CVS, IRC, AIM, ICQ, Napster, PostgreSQL, Meeting Maker, Citrix ICA, Symantec pc Anywhere, NAI Sniffer, Microsoft SMB, Oracle SQL*Net, Sybase and Microsoft SQL protocols.
The name "dsniff" refers both to the package as well as an included tool. The "dsniff" tool decodes passwords sent in cleartext across a switched or unswitched Ethernet network. Its man page explains that Dug Song wrote dsniff with "honest intentions - to audit my own network, and to demonstrate the insecurity of cleartext network protocols." He then requests, "Please do not abuse this software."
These are the files that are configured in the dsniff folder /etc/dsniff/:
/etc/dsniff/dnsspoof.hosts Sample hosts file.
If no host file is specified, replies will be forged for all address queries on the LAN with an answer of the local machine’s IP address.
/etc/d |
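The dnsspoof.hosts file follows a hosts(5)-style layout, one forged answer per line, and hostnames may contain shell-style wildcards; the addresses and names below are purely illustrative:

```
# forged IP address   hostname pattern
192.168.0.10          www.example.com
192.168.0.10          *.example.net
```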
https://en.wikipedia.org/wiki/Fear%20conditioning | Pavlovian fear conditioning is a behavioral paradigm in which organisms learn to predict aversive events. It is a form of learning in which an aversive stimulus (e.g. an electrical shock) is associated with a particular neutral context (e.g., a room) or neutral stimulus (e.g., a tone), resulting in the expression of fear responses to the originally neutral stimulus or context. This can be done by pairing the neutral stimulus with an aversive stimulus (e.g., an electric shock, loud noise, or unpleasant odor). Eventually, the neutral stimulus alone can elicit the state of fear. In the vocabulary of classical conditioning, the neutral stimulus or context is the "conditional stimulus" (CS), the aversive stimulus is the "unconditional stimulus" (US), and the fear is the "conditional response" (CR).
Fear conditioning has been studied in numerous species, from snails to humans. In humans, conditioned fear is often measured with verbal report and galvanic skin response. In other animals, conditioned fear is often measured with freezing (a period of watchful immobility) or fear potentiated startle (the augmentation of the startle reflex by a fearful stimulus). Changes in heart rate, breathing, and muscle responses via electromyography can also be used to measure conditioned fear. A number of theorists have argued that conditioned fear coincides substantially with the mechanisms, both functional and neural, of clinical anxiety disorders. Research into the acquisition, consolidation and extinction of conditioned fear promises to inform new drug based and psychotherapeutic treatments for an array of pathological conditions such as dissociation, phobias and post-traumatic stress disorder.
Scientists have discovered that there is a set of brain connections that determine how fear memories are stored and recalled. While studying rats' ability to recall fear memories, researchers found a newly identified brain circuit is involved. Initially, the pre-limbic prefrontal cortex (PL) |
https://en.wikipedia.org/wiki/Arnold%20tongue | In mathematics, particularly in dynamical systems, Arnold tongues (named after Vladimir Arnold) are a pictorial phenomenon that occur when visualizing how the rotation number of a dynamical system, or other related invariant property thereof, changes according to two or more of its parameters. The regions of constant rotation number have been observed, for some dynamical systems, to form geometric shapes that resemble tongues, in which case they are called Arnold tongues.
Arnold tongues are observed in a large variety of natural phenomena that involve oscillating quantities, such as the concentration of enzymes and substrates in biological processes and cardiac electric waves. Sometimes the frequency of oscillation depends on, or is constrained by (i.e., phase-locked or mode-locked to, in some contexts), some other quantity, and it is often of interest to study this relation. For instance, the onset of a tumor triggers in the surrounding area a series of oscillations of substances (mainly proteins) that interact with each other; simulations show that these interactions cause Arnold tongues to appear, that is, the frequencies of some oscillations constrain the others, and this can be used to control tumor growth.
Other examples where Arnold tongues can be found include the inharmonicity of musical instruments, orbital resonance and tidal locking of orbiting moons, mode-locking in fiber optics and phase-locked loops and other electronic oscillators, as well as in cardiac rhythms, heart arrhythmias and cell cycle.
One of the simplest physical models that exhibits mode-locking consists of two rotating disks connected by a weak spring. One disk is allowed to spin freely, and the other is driven by a motor. Mode locking occurs when the freely-spinning disk turns at a frequency that is a rational multiple of that of the driven rotator.
The simplest mathematical model that exhibits mode-locking is the circle map, which attempts to capture the motion of the spinning disks at discrete time interv |
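The mode-locking behaviour of the circle map can be sketched numerically. The following is an illustrative estimate of the rotation number (the parameter names `omega` and `k` follow the standard circle map; the iteration count is an arbitrary choice, not taken from any particular reference):

```python
import math

def circle_map_rotation_number(omega, k, n_iter=100_000):
    """Estimate the rotation number of the standard circle map
        theta_{n+1} = theta_n + omega - (k / (2*pi)) * sin(2*pi*theta_n)
    by averaging the winding of the (unwrapped) lift over many iterations."""
    theta = 0.0
    for _ in range(n_iter):
        theta += omega - (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)
    return theta / n_iter

# With zero coupling the map is a pure rotation: rotation number == omega.
print(circle_map_rotation_number(0.25, 0.0))  # 0.25
# With coupling, the rotation number locks onto rational values over whole
# intervals of omega (the Arnold tongues); near omega = 0.5 at k = 1 the
# estimate sits close to 1/2.
print(circle_map_rotation_number(0.49, 1.0))
```

Sweeping `omega` and `k` over a grid and colouring by the resulting rotation number is the usual way the tongue-shaped regions are visualized.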
https://en.wikipedia.org/wiki/Photoelastic%20modulator | A photoelastic modulator (PEM) is an optical device used to modulate the polarization of a light source. The photoelastic effect is used to change the birefringence of the optical element in the photoelastic modulator.
The PEM was invented by J. Badoz in the 1960s and originally called a "birefringence modulator". It was initially developed for physical measurements including optical rotatory dispersion and Faraday rotation, polarimetry of astronomical objects, strain-induced birefringence, and ellipsometry. Later developers of the photoelastic modulator include J. C. Kemp, S. N. Jasperson and S. E. Schnatterly.
Description
The basic design of a photoelastic modulator consists of a piezoelectric transducer and a half wave resonant bar; the bar being a transparent material (now most commonly fused silica). The transducer is tuned to the natural frequency of the bar. This resonance modulation results in highly sensitive polarization measurements. The fundamental vibration of the optic is along its longest dimension.
Basic principles
The principle of operation of photoelastic modulators is based on the photoelastic effect, in which a mechanically stressed sample exhibits birefringence proportional to the resulting strain. Photoelastic modulators are resonant devices in which the precise oscillation frequency is determined by the properties of the optical element/transducer assembly. The transducer is tuned to the resonance frequency of the optical element along its long dimension, determined by its length and the speed of sound in the material. A current is then sent through the transducer to vibrate the optical element through stretching and compression, which changes the birefringence of the transparent material. Because of this resonant character, the birefringence of the optical element can be modulated to large amplitudes, but for the same reason the operation of a PEM is limited to a single frequency, and most commercial devices manufactured today opera
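As an illustrative (not device-accurate) sketch, the standard result for a PEM oriented at 45 degrees between crossed polarizers gives a normalized transmitted intensity I(t) = (1 - cos δ(t))/2, with sinusoidal retardation δ(t) = A sin(2πft). The resonance frequency and peak retardation below are assumed example values, not specifications of any real device:

```python
import math

def pem_retardation(t, a0, f):
    """Time-dependent retardation of a PEM driven at its resonance
    frequency f: delta(t) = a0 * sin(2*pi*f*t), peak retardation a0 (rad)."""
    return a0 * math.sin(2 * math.pi * f * t)

def crossed_polarizer_intensity(t, a0, f):
    """Normalized intensity for a PEM at 45 degrees between crossed
    polarizers: I(t) = (1 - cos(delta(t))) / 2."""
    return (1.0 - math.cos(pem_retardation(t, a0, f))) / 2.0

f = 50e3          # assumed resonance frequency, ~50 kHz
a0 = math.pi / 2  # assumed quarter-wave peak retardation

# Intensity vanishes whenever the retardation passes through zero...
print(crossed_polarizer_intensity(0.0, a0, f))           # 0.0
# ...and peaks when delta(t) reaches a0 = pi/2: (1 - cos(pi/2))/2 = 0.5.
print(crossed_polarizer_intensity(1 / (4 * f), a0, f))
```

Expanding cos(A sin ωt) in Bessel functions shows why lock-in detection at even harmonics of the drive frequency is the usual measurement strategy with these devices.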
https://en.wikipedia.org/wiki/National%20Council%20of%20Teachers%20of%20Mathematics | Founded in 1920, the National Council of Teachers of Mathematics (NCTM) is a professional organization for schoolteachers of mathematics in the United States. One of its goals is to improve the standards of mathematics in education. NCTM holds annual national and regional conferences for teachers and publishes five journals.
Journals
NCTM publishes five official journals. All are available in print and online versions.
Teaching Children Mathematics supports improvement of pre-K–6 mathematics education by serving as a resource for teachers so as to provide more and better mathematics for all students. It is a forum for the exchange of mathematics ideas, activities, and pedagogical strategies, and for sharing and interpreting research.
Mathematics Teaching in the Middle School supports the improvement of grade 5–9 mathematics education by serving as a resource for practicing and prospective teachers, as well as supervisors and teacher educators. It is a forum for the exchange of mathematics ideas, activities, and pedagogical strategies, and for sharing and interpreting research.
Mathematics Teacher is devoted to improving mathematics instruction for grades 8–14 and supporting teacher education programs. It provides a forum for sharing activities and pedagogical strategies, deepening understanding of mathematical ideas, and linking mathematical education research to practice.
Mathematics Teacher Educator, published jointly with the Association of Mathematics Teacher Educators, contributes to building a professional knowledge base for mathematics teacher educators that stems from, develops, and strengthens practitioner knowledge. The journal provides a means for practitioner knowledge related to the preparation and support of teachers of mathematics to be not only public, shared, and stored, but also verified and improved over time (Hiebert, Gallimore, and Stigler 2002).
NCTM does not conduct research in mathematics education, but it does publish the Journal for Rese |
https://en.wikipedia.org/wiki/List%20of%20computing%20mascots | This is a list of computing mascots. A mascot is any person, animal, or object thought to bring luck, or anything used to represent a group with a common public identity. In case of computing mascots, they either represent software, hardware, or any project or collective entity behind them.
See also
List of video game mascots |
https://en.wikipedia.org/wiki/Sawney | Sawney (sometimes Sandie/y, or Sanders, or Sannock) was an English nickname for a Scotsman, now obsolete, and playing much the same linguistic role that "Jock" does now. The name is a Lowland Scots diminutive of the favourite Scottish first name Alexander (also Alasdair in Scottish Gaelic form, anglicised into Alistair) from the last two syllables. The English commonly abbreviate the first two syllables into "Alec".
From the days after the accession of James VI to the English throne as James I, down to the time of George III and the Bute administration, Scotsmen were exceedingly unpopular in England, and it was customary to designate a Scotsman a "Sawney". Dr. Samuel Johnson, the great Scotophobe and son of a Scottish bookseller at Lichfield, thought it prudent to disguise his origin, and overdid his prudence by maligning his father's countrymen. This vulgar epithet, however, was dying out fast by the 1880s, and was obsolete by the 20th century.
Sawney was a common figure of fun in English cartoons. A particularly stereotypical example, Sawney in the Bog House, shows a stereotypical Scottish Highlander using a communal bench toilet by sticking one of his legs down each of the holes. This was originally published in London in June 1745, just over a month before Charles Edward Stuart landed in Scotland to begin the Jacobite rising of 1745. In this version Sawney's excreta emerge from below his kilt and flow across the bench. The idea was revived in a different and slightly more decorous version of 1779, which is attributed to the young James Gillray. An inscription reads:
'Tis a bra' bonny seat, o' my saul, Sawney cries,
I never beheld sic before with me Eyes,
Such a place in aw' Scotland I never could meet,
For the High and the Low ease themselves in the Street.
It has also been suggested that the Galloway cannibal Sawney Bean may have been a fabrication to emphasise the alleged savagery of the Scots.
Sometimes also used in the term "Sawney Ha'peth" |
https://en.wikipedia.org/wiki/Functional%20Ensemble%20of%20Temperament | Functional Ensemble of Temperament (FET) is a neurochemical model suggesting specific functional roles of main neurotransmitter systems in the regulation of behaviour.
Earlier theories
Medications can adjust the release of brain neurotransmitters in cases of depression, anxiety disorder, schizophrenia and other mental disorders, because an imbalance within neurotransmitter systems can emerge as consistent behavioural characteristics that compromise people's lives. All people have a weaker form of such an imbalance in at least one of these neurotransmitter systems, and this makes each of us distinct from one another. The impact of this weak imbalance in neurochemistry can be seen in the consistent features of behaviour in healthy people (temperament). In this sense, temperament (as neurochemically based individual differences) and mental illness represent varying degrees along the same continuum of neurotransmitter imbalance in the neurophysiological systems of behavioural regulation.
In fact, multiple temperament traits (such as Impulsivity, sensation seeking, neuroticism, endurance, plasticity, sociability or extraversion) have been linked to brain neurotransmitters and hormone systems.
By the end of the 20th century, it became clear that the human brain operates with more than a dozen neurotransmitters and a large number of neuropeptides and hormones. The relationships between these different chemical systems are complex as some of them suppress and some of them induce each other's release during neuronal exchanges. This complexity of relationships devalues the old approach of assigning "inhibitory vs. excitatory" roles to neurotransmitters: the same neurotransmitters can be either inhibitory or excitatory depending on what system they interact with. It became clear that an impressive diversity of neurotransmitters and their receptors is necessary to meet a wide range of behavioural situations, but the links between temperament traits and specific neurotransmitters are still a |
https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization%20algorithm | In statistics, an expectation–maximization (EM) algorithm is an iterative method to find (local) maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter-estimates are then used to determine the distribution of the latent variables in the next E step.
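The E/M alternation can be illustrated with a minimal sketch for a two-component 1D Gaussian mixture. The initialization strategy and iteration count here are arbitrary choices for demonstration, not part of any reference implementation:

```python
import math
import random

def gauss_pdf(x, mu, var):
    """Density of a normal distribution with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_gmm_1d(data, n_iter=200):
    """Illustrative EM for a two-component 1D Gaussian mixture.
    Returns (pi1, mu1, mu2, var1, var2); pi1 is the weight of component 1."""
    mu1, mu2 = min(data), max(data)                       # crude initialization
    var1 = var2 = (max(data) - min(data)) ** 2 / 4 + 1e-6
    pi1 = 0.5
    for _ in range(n_iter):
        # E step: responsibility of component 1 for each data point,
        # i.e. the expected latent assignment under current parameters.
        resp = []
        for x in data:
            p1 = pi1 * gauss_pdf(x, mu1, var1)
            p2 = (1 - pi1) * gauss_pdf(x, mu2, var2)
            resp.append(p1 / (p1 + p2))
        # M step: parameters maximizing the expected log-likelihood.
        n1 = sum(resp)
        n2 = len(data) - n1
        mu1 = sum(r * x for r, x in zip(resp, data)) / n1
        mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / n2
        var1 = sum(r * (x - mu1) ** 2 for r, x in zip(resp, data)) / n1 + 1e-6
        var2 = sum((1 - r) * (x - mu2) ** 2 for r, x in zip(resp, data)) / n2 + 1e-6
        pi1 = n1 / len(data)
    return pi1, mu1, mu2, var1, var2

random.seed(0)
data = ([random.gauss(0, 1) for _ in range(300)]
        + [random.gauss(5, 1) for _ in range(300)])
pi1, mu1, mu2, var1, var2 = em_gmm_1d(data)
print(round(mu1, 1), round(mu2, 1))  # component means recovered near 0 and 5
```

Each iteration provably does not decrease the observed-data likelihood, which is the basis of the convergence analyses discussed below.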
History
The EM algorithm was explained and given its name in a classic 1977 paper by Arthur Dempster, Nan Laird, and Donald Rubin. They pointed out that the method had been "proposed many times in special circumstances" by earlier authors. One of the earliest is the gene-counting method for estimating allele frequencies by Cedric Smith. Another was proposed by H. O. Hartley in 1958, and by Hartley and Hocking in 1977, from which many of the ideas in the Dempster–Laird–Rubin paper originated. A further treatment was given by S. K. Ng, Thriyambakam Krishnan and G. J. McLachlan. Hartley's ideas can be broadened to any grouped discrete distribution. A very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers, following his collaboration with Per Martin-Löf and Anders Martin-Löf. The Dempster–Laird–Rubin paper in 1977 generalized the method and sketched a convergence analysis for a wider class of problems, establishing the EM method as an important tool of statistical analysis. See also Meng and van Dyk (1997).
The convergence analysis of the Dempster–Laird–Rubin algorithm was flawed and a correct convergence analysis was published by C. F. Jeff Wu in 1983.
Wu's proof established the EM method's convergence also outside of |
https://en.wikipedia.org/wiki/Tissue%20engineering | Tissue engineering is a biomedical engineering discipline that uses a combination of cells, engineering, materials methods, and suitable biochemical and physicochemical factors to restore, maintain, improve, or replace different types of biological tissues. Tissue engineering often involves the use of cells placed on tissue scaffolds in the formation of new viable tissue for a medical purpose but is not limited to applications involving cells and tissue scaffolds. While it was once categorized as a sub-field of biomaterials, it has grown in scope and importance and can be considered a field in its own right.
While most definitions of tissue engineering cover a broad range of applications, in practice the term is closely associated with applications that repair or replace portions of or whole tissues (i.e. bone, cartilage, blood vessels, bladder, skin, muscle etc.). Often, the tissues involved require certain mechanical and structural properties for proper functioning. The term has also been applied to efforts to perform specific biochemical functions using cells within an artificially-created support system (e.g. an artificial pancreas, or a bioartificial liver). The term regenerative medicine is often used synonymously with tissue engineering, although those involved in regenerative medicine place more emphasis on the use of stem cells or progenitor cells to produce tissues.
Overview
A commonly applied definition of tissue engineering, as stated by Langer and Vacanti, is "an interdisciplinary field that applies the principles of engineering and life sciences toward the development of biological substitutes that restore, maintain, or improve [Biological tissue] function or a whole organ". In addition, Langer and Vacanti also state that there are three main types of tissue engineering: cells, tissue-inducing substances, and a cells + matrix approach (often referred to as a scaffold). Tissue engineering has also been defined as "understanding the principles of tissue |
https://en.wikipedia.org/wiki/YAWL | YAWL (Yet Another Workflow Language) is a workflow language based on workflow patterns. It is supported by a software system that includes an execution engine, a graphical editor and a worklist handler. It is available as open-source software under the LGPL license.
Production-level implementations of YAWL include deployment by first:utility and first:telecom in the UK to automate front-end service processes, and by the Australian Film Television and Radio School to coordinate film shooting processes. It has also been used for teaching in more than 20 universities.
Features
Comprehensive support for the workflow patterns.
Support for advanced resource allocation policies, including four-eyes principle and chained execution.
Support for dynamic adaptation of workflow models through the notion of worklets.
Sophisticated workflow model validation features (e.g. deadlock detection at design-time).
XML-based model for data definition and manipulation based on XML Schema, XPath and XQuery.
XML-based interfaces for monitoring and controlling workflow instances and for accessing execution logs.
XML-based plug-in interfaces for connecting third-party web services with the system, including third-party worklist/task handlers.
Automated form generation from XML schema.
History
The language and its supporting system were originally developed by researchers at Eindhoven University of Technology and Queensland University of Technology. Subsequently, several organizations such as InterContinental Hotels Group, first:telecom and ATOS Worldline have contributed to the initiative.
The original drivers behind YAWL were to define a workflow language that would support all (or most) of the workflow patterns and that would have a formal semantics. Observing that Petri nets came close to supporting most of the workflow patterns, the designers of YAWL decided to take Petri nets as a starting point and to extend this formalism with three main constructs, namely or-join, cancellat |
https://en.wikipedia.org/wiki/Sodium%20nitrate | Sodium nitrate is the chemical compound with the formula NaNO3. This alkali metal nitrate salt is also known as Chile saltpeter (large deposits of which were historically mined in Chile) to distinguish it from ordinary saltpeter, potassium nitrate. The mineral form is also known as nitratine, nitratite or soda niter.
Sodium nitrate is a white deliquescent solid very soluble in water. It is a readily available source of the nitrate anion (NO3−), which is useful in several reactions carried out on industrial scales for the production of fertilizers, pyrotechnics, smoke bombs and other explosives, glass and pottery enamels, food preservatives (esp. meats), and solid rocket propellant. It has been mined extensively for these purposes.
History
The first shipment of saltpeter to Europe arrived in England from Peru in 1820 or 1825, right after that country's independence from Spain, but did not find any buyers and was dumped at sea in order to avoid customs toll. With time, however, the mining of South American saltpeter became a profitable business (in 1859, England alone consumed 47,000 metric tons). Chile fought the War of the Pacific (1879–1884) against the allies Peru and Bolivia and took over their richest deposits of saltpeter. In 1919, Ralph Walter Graystone Wyckoff determined its crystal structure using X-ray crystallography.
Occurrence
The largest accumulations of naturally occurring sodium nitrate are found in Chile and Peru, where nitrate salts are bound within mineral deposits called caliche ore. Nitrates accumulate on land through marine-fog precipitation and sea-spray oxidation/desiccation followed by gravitational settling of airborne NaNO3, KNO3, NaCl, Na2SO4, and I, in the hot-dry desert atmosphere. El Niño/La Niña extreme aridity/torrential rain cycles favor nitrates accumulation through both aridity and water solution/remobilization/transportation onto slopes and into basins; capillary solution movement forms layers of nitrates; pure nitrate forms rare v |
https://en.wikipedia.org/wiki/Compatibility%20%28mechanics%29 | In continuum mechanics, a compatible deformation (or strain) tensor field in a body is that unique tensor field that is obtained when the body is subjected to a continuous, single-valued, displacement field. Compatibility is the study of the conditions under which such a displacement field can be guaranteed. Compatibility conditions are particular cases of integrability conditions and were first derived for linear elasticity by Barré de Saint-Venant in 1864 and proved rigorously by Beltrami in 1886.
In the continuum description of a solid body we imagine the body to be composed of a set of infinitesimal volumes or material points. Each volume is assumed to be connected to its neighbors without any gaps or overlaps. Certain mathematical conditions have to be satisfied to ensure that gaps/overlaps do not develop when a continuum body is deformed. A body that deforms without developing any gaps/overlaps is called a compatible body. Compatibility conditions are mathematical conditions that determine whether a particular deformation will leave a body in a compatible state.
In the context of infinitesimal strain theory, these conditions are equivalent to stating that the displacements in a body can be obtained by integrating the strains. Such an integration is possible if Saint-Venant's tensor (or incompatibility tensor) R := ∇ × (∇ × ε)^T vanishes in a simply-connected body, where ε is the infinitesimal strain tensor.
For finite deformations the compatibility conditions take the form ∇ × F = 0, where F is the deformation gradient.
Compatibility conditions for infinitesimal strains
The compatibility conditions in linear elasticity are obtained by observing that there are six strain-displacement relations that are functions of only three unknown displacements. This suggests that the three displacements may be removed from the system of equations without loss of information. The resulting expressions in terms of only the strains provide constraints on the possible forms of a strain |
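In two dimensions the constraints reduce to the single equation ∂²ε_xx/∂y² + ∂²ε_yy/∂x² = 2 ∂²ε_xy/∂x∂y, which can be verified symbolically for any strain field derived from a displacement field. An illustrative sketch using SymPy (the displacement field below is chosen arbitrarily, purely for demonstration):

```python
import sympy as sp

x, y = sp.symbols('x y')

# A smooth, single-valued 2D displacement field, chosen arbitrarily:
# u is the x-displacement, v the y-displacement.
u = x**2 * y + sp.sin(y)
v = x * y**2

# Infinitesimal strains derived from the displacements.
eps_xx = sp.diff(u, x)
eps_yy = sp.diff(v, y)
eps_xy = (sp.diff(u, y) + sp.diff(v, x)) / 2

# 2D Saint-Venant compatibility:
#   d2(eps_xx)/dy2 + d2(eps_yy)/dx2 - 2 d2(eps_xy)/dxdy = 0
residual = (sp.diff(eps_xx, y, 2) + sp.diff(eps_yy, x, 2)
            - 2 * sp.diff(eps_xy, x, y))
print(sp.simplify(residual))  # 0: strains derived from displacements are compatible
```

A strain field written down directly, without an underlying displacement field, will generally give a nonzero residual, which is exactly the obstruction the compatibility conditions detect.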
https://en.wikipedia.org/wiki/Active%20placebo | An active placebo is a placebo that produces noticeable side effects that may convince the person being treated that they are receiving a legitimate treatment, rather than an ineffective placebo.
Nomenclature
According to a 1965 paper, the term "concealed placebo" (German: Kaschiertes Placebo) was suggested in a 1959 paper published in German.
Example
An example of an active placebo is the 1964 work of Shader and colleagues who used a combination of low-dose phenobarbital plus atropine to mimic the sedation and dry mouth produced by phenothiazines.
Morphine and gabapentin are painkillers with the common side effects of sleepiness and dizziness. In a 2005 study assessing the effects of these painkillers on neuropathic pain, lorazepam was chosen as an active placebo because it is not a painkiller but it does cause sleepiness and can cause dizziness.
Testing from the late 1950s onwards on narcotic analgesics like morphine also has used dicyclomine as an active placebo, and on some occasions it was reported to cause the Straub mouse tail reaction, as do most narcotics. Clonidine is now finding more use as an active placebo for narcotics. |
https://en.wikipedia.org/wiki/Type%20VII%20secretion%20system | Type VII secretion systems are bacterial secretion systems first observed in the phyla Actinomycetota and Bacillota. Bacteria use such systems to transport, or secrete, proteins into the environment. The bacterial genus Mycobacterium uses type VII secretion systems (T7SS) to secrete proteins across the cell envelope. The first T7SS discovered was the ESX-1 system.
T7SS has been studied as a virulence factor associated with the ESX-1 system in Mycobacterium tuberculosis. These secretion systems are often found in gram-positive bacteria, and are necessary in Mycobacterium because of its impermeable membrane. The RD1 locus, the genomic region encoding type VII secretion, can create a lytic effect through ESX-1 transport.
Structure
Cryogenic electron microscopy was used to determine that a complex of two identical subunits made from four proteins forms the structure of the type VII secretion system in Mycobacterium smegmatis. T7SS forms a six-sided complex with nearly 165 membrane attachments, reflecting the complexity of the secretion system. The 2.32-MDa complex of the type VII secretion system is found embedded in the inner membrane.
The T7SS structure in Mycobacteria is 28.5 nm in width and 20 nm in height. This secretion system is composed of the following components: inner EccB5, outer EccB5, EccC5, inner EccD5, outer EccD5, EccE5 and MycP5. These components make up the 2.32-MDa complex, which is connected to the inner membrane by 165 transmembrane helices. The complex is composed of a trimer of dimers; each dimer is made up of one copy of MycP5, EccB5, EccC5, EccE5, and two copies of EccD5. The MycP5 structure stabilizes the complex: without MycP5, the EccB5 copies cannot form the stable triangular scaffold. In the membrane, the EccD5 copies create barrels that are hypothetically filled with lipids. EccC is the only component of the T7SS that is present in all species that contain a type VII secretion system.
Mechanism
The core |
https://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Storer%E2%80%93Szymanski | Lempel–Ziv–Storer–Szymanski (LZSS) is a lossless data compression algorithm, a derivative of LZ77, that was created in 1982 by James A. Storer and Thomas Szymanski. LZSS was described in the article "Data compression via textual substitution", published in the Journal of the ACM (1982, pp. 928–951).
LZSS is a dictionary coding technique. It attempts to replace a string of symbols with a reference to a dictionary location of the same string.
The main difference between LZ77 and LZSS is that in LZ77 the dictionary reference could actually be longer than the string it was replacing. In LZSS, such references are omitted if the length is less than the "break even" point. Furthermore, LZSS uses one-bit flags to indicate whether the next chunk of data is a literal (byte) or a reference to an offset/length pair.
Example
Here is the beginning of Dr. Seuss's Green Eggs and Ham, with character numbers at the beginning of lines for convenience. Green Eggs and Ham is a good example to illustrate LZSS compression because the book itself only contains 50 unique words, despite having a word count of 170. Thus, words are repeated, though not in succession.
0: I am Sam
9:
10: Sam I am
19:
20: That Sam-I-am!
35: That Sam-I-am!
50: I do not like
64: that Sam-I-am!
79:
80: Do you like green eggs and ham?
112:
113: I do not like them, Sam-I-am.
143: I do not like green eggs and ham.
This text takes 177 bytes in uncompressed form. Assuming a break even point of 2 bytes (and thus 2 byte pointer/offset pairs), and one byte newlines, this text compressed with LZSS becomes 95 bytes long:
0: I am Sam
9:
10: (5,3) (0,4)
16:
17: That(4,4)-I-am!(19,15)
32: I do not like
46: t(21,14)
50: Do you(58,5) green eggs and ham?
79: (49,14) them,(24,9).(112,15)(92,18).
Note: this does not include the 12 bytes of flags indicating whether the next chunk of text is a pointer or a literal. Adding it, the text becomes 107 bytes long, which is still shorter than the original 177 bytes.
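A minimal, greedy sketch of the scheme follows. Token layout is simplified to Python tuples rather than packed flag bits, and the window and match-length limits are arbitrary choices, so the byte counts differ from the worked example above:

```python
def lzss_compress(text, window=255, min_match=3, max_match=255):
    """Greedy LZSS sketch: emit ('lit', char) for literals and
    ('ref', offset, length) for back-references at least min_match long.
    References shorter than min_match (the "break even" point) are
    emitted as literals instead."""
    out, i = [], 0
    while i < len(text):
        best_len, best_off = 0, 0
        start = max(0, i - window)
        # Search the sliding window for the longest earlier match.
        # Matches may overlap the current position, as in LZ77.
        for j in range(start, i):
            length = 0
            while (length < max_match and i + length < len(text)
                   and text[j + length] == text[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= min_match:
            out.append(('ref', best_off, best_len))
            i += best_len
        else:
            out.append(('lit', text[i]))
            i += 1
    return out

def lzss_decompress(tokens):
    buf = []
    for tok in tokens:
        if tok[0] == 'lit':
            buf.append(tok[1])
        else:
            _, off, length = tok
            for _ in range(length):   # byte-by-byte copy handles overlaps
                buf.append(buf[-off])
    return ''.join(buf)

sample = "I do not like them, Sam-I-am.\nI do not like green eggs and ham.\n"
tokens = lzss_compress(sample)
assert lzss_decompress(tokens) == sample
```

A real encoder would then pack each group of eight tokens behind a one-byte flag field, exactly the overhead the note above accounts for.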
Implementat |
https://en.wikipedia.org/wiki/Received%20noise%20power | In telecommunications, received noise power is a measure of noise in a receiver. For example, the received noise power might be:
The calculated or measured noise power, within the bandwidth being used, at the receive end of a circuit, channel, link, or system.
The absolute power of the noise measured or calculated at a receive point. The related bandwidth and the noise weighting must also be specified.
The value of noise power, from all sources, measured at the line terminals of a telephone set's receiver. Either flat weighting or some other specific amplitude-frequency characteristic or noise weighting characteristic must be associated with the measurement.
See also
Telecommunication theory
Noise (electronics)
https://en.wikipedia.org/wiki/Synthetic%20morphology | Synthetic morphology is a sub-discipline of the broader field of synthetic biology.
In standard synthetic biology, artificial gene networks are introduced into cells, inputs (e.g. chemicals, light) are applied to those networks, and the networks perform logical operations on them and output the result of the operation as the activity of an enzyme or as the amount of green fluorescent protein. Using this approach, synthetic biologists have demonstrated the ability of their gene networks to perform Boolean computation, to hold a memory, and to generate pulses and oscillation.
Synthetic morphology extends this idea by adding output modules that alter the shape or social behaviour of cells in response to the state of the artificial gene network. For example, instead of just making a fluorescent protein, a gene network may switch on an adhesion molecule so that cells stick to each other, or activate a motility system so that cells move. It has been argued that the formation of properly-shaped tissues by mammalian cells involves mainly a set of about ten basic cellular events (cell proliferation, cell death, cell adhesion, differential adhesion, cell de-adhesion, cell fusion, cell locomotion, chemotaxis, haptotaxis, cell wedging). Broadly similar lists exist for tissues of plants, fungi etc. In principle, therefore, a fairly small set of output modules might allow biotechnologists to 'program' cells to produce artificially-designed arrangements, shapes and eventually 'tissues'.
The term synthetic morphology was introduced to the peer reviewed scientific literature in 2008 and is now becoming more widely used both in peer-reviewed literature and texts. |
https://en.wikipedia.org/wiki/Human%20%CE%B2-globin%20locus | The human β-globin locus is composed of five genes located on a short region of chromosome 11, responsible for the creation of the beta parts (roughly half) of the oxygen transport protein haemoglobin. This locus contains not only the beta globin gene but also the delta, gamma-A, gamma-G, and epsilon globins. Expression of all of these genes is controlled by a single locus control region (LCR), and the genes are differentially expressed throughout development.
The order of the genes in the beta-globin cluster is: 5' - epsilon – gamma-G – gamma-A – delta – beta - 3'.
The arrangement of the genes directly reflects the temporal differentiation of their expression during development, with the early-embryonic stage version of the gene located closest to the LCR. If the genes are rearranged, the gene products are expressed at improper stages of development.
Expression of these genes is regulated in embryonic erythropoiesis by many transcription factors, including KLF1, which is associated with the upregulation of adult hemoglobin in adult definitive erythrocytes, and KLF2, which is vital to the expression of embryonic hemoglobin.
HBB complex
Many CRMs have been mapped within the cluster of genes encoding β-like globins expressed in embryonic (HBE1), fetal (HBG1 and HBG2), and adult (HBB and HBD) erythroid cells. All are marked by DNase I hypersensitive sites and footprints, and many are bound by GATA1 in peripheral blood derived erythroblasts (PBDEs). A DNA segment located between the HBG1 and HBD genes is one of the DNA segments bound by BCL11A and several other proteins to negatively regulate HBG1 and HBG2. It is sensitive to DNase I but is not conserved across mammals. An enhancer located 3′ of the HBG1 gene is bound by several proteins in PBDEs and K562 cells and is sensitive to DNase I, but shows almost no signal for mammalian constraint. |
https://en.wikipedia.org/wiki/POTEB | POTE ankyrin domain family, member B is a protein that in humans is encoded by the POTEB gene (Prostate, Ovary, Testes Expressed ankyrin domain family member B). It is most likely involved in mediating protein-protein interactions via its five ankyrin domains. POTEB most probably aids in intracellular signaling, but is not likely to be a secreted or nuclear protein. POTEB's function is likely regulated via 17 potential phosphorylation sites. There is currently no evidence to suggest that POTEB has nuclear localization signals.
Gene
POTEB is located at 15q11.2 on chromosome 15 in humans and is transcribed from the reverse DNA strand. POTEB is also known as POTEB3 and POTE15. The POTEB gene is 47,547 base pairs in length and is composed of 11 exons.
mRNA
The POTEB gene can be transcribed to create four potential mRNAs. However, only one of these mRNAs, possessing all 11 exons, is capable of being translated to the POTEB protein. The three other transcripts do not encode proteins.
Protein
The POTEB protein is composed of 544 amino acids and, according to bioinformatic analyses, has a molecular weight of 61.7 kDa. It has an isoelectric point of 5.68. Its most common amino acids are leucine and glutamic acid, which account for 11% and 10.3% of the protein respectively. However, this is normal for human proteins. POTEB is most likely a cytoplasmic protein that is phosphorylated at 17 serines, threonines, and tyrosines located throughout the length of the protein, but concentrated at the C-terminus of the protein. Its secondary structure is mainly five helical ankyrin repeat domains, which contain the TALHL motif. There is also one myristoylation site on the protein, close to the N-terminus.
Expression
POTEB is expressed at high levels in the human prostate, ovary, and testes. However, there is also evidence to show that it is expressed at low levels in embryonic stem cells, the nasopharyngeal region, and in breast tissue. In embryonic stem cells, differentiati |
https://en.wikipedia.org/wiki/GeoSMS | GeoSMS is a specification for geotagging SMS messages.
It works by embedding locations in the message text, where the locations are formatted as 'geo' URIs as defined in RFC 5870.
It was developed in 2010 by Matthew Kwan, a PhD Candidate at the RMIT School of Mathematical and Geospatial Sciences and should not be confused with the Open GeoSMS standard.
Examples
A simple geotagged SMS might look like:
I'm at the pub geo:-37.801631,144.980294;u=10
which would contain the message I'm at the pub and a location with latitude 37.801631 degrees south, longitude 144.980294 degrees east, and an uncertainty of (+ or -) 10 metres.
Messages using GeoSMS can also contain multiple locations, for example:
I'll be at the pub geo:-37.801631,144.980294;u=10 until midnight, then heading to a gig geo:-37.864225,144.97294
which contains two locations.
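A receiving application has to locate and decode these embedded URIs in the free text. The Python sketch below is a simplified illustration: the regular expression covers only the latitude, longitude, and optional ";u=" uncertainty parameter shown in the examples, not the full RFC 5870 grammar.

```python
import re

# Simplified pattern for 'geo' URIs embedded in GeoSMS text:
# latitude, longitude, and an optional ";u=" uncertainty parameter.
GEO_URI = re.compile(
    r"geo:(-?\d+(?:\.\d+)?),(-?\d+(?:\.\d+)?)(?:;u=(\d+(?:\.\d+)?))?"
)

def extract_locations(message):
    """Return (lat, lon, uncertainty) tuples for every geo URI in a message."""
    results = []
    for lat, lon, u in GEO_URI.findall(message):
        results.append((float(lat), float(lon), float(u) if u else None))
    return results

msg = ("I'll be at the pub geo:-37.801631,144.980294;u=10 "
       "until midnight, then heading to a gig geo:-37.864225,144.97294")
print(extract_locations(msg))
# → [(-37.801631, 144.980294, 10.0), (-37.864225, 144.97294, None)]
```

A real implementation should use a complete RFC 5870 parser, since 'geo' URIs permit further parameters (such as a coordinate reference system label) that this sketch ignores.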
Applications
GeoSMS is used by the free Android application I Am Here (available through Google Play) to send and receive geotagged SMS messages. It displays received messages using either a compass or a map view. The GeoSMS specification is also used to allow ships and cruising vessels to send position updates from an SMS-capable satellite phone, such as one of the recent models marketed by Iridium Communications or Globalstar.
Open GeoSMS
The Open Geospatial Consortium also has an approved Open GeoSMS standard, published in 2011, which has been broadly implemented in Asia. The OGC Open GeoSMS standard was originally developed in Taiwan by ITRI in 2008 and submitted to the OGC in 2009.
See also
Geotagging
Geo URI scheme |
https://en.wikipedia.org/wiki/Height%20function | A height function is a function that quantifies the complexity of mathematical objects. In Diophantine geometry, height functions quantify the size of solutions to Diophantine equations and are typically functions from a set of points on algebraic varieties (or a set of algebraic varieties) to the real numbers.
For instance, the classical or naive height over the rational numbers is typically defined to be the maximum of the absolute values of the numerators and denominators of the coordinates when written in lowest terms, often taken on a logarithmic scale.
Significance
Height functions allow mathematicians to count objects, such as rational points, that are otherwise infinite in quantity. For instance, the set of rational numbers of naive height (the maximum of the numerator and denominator when expressed in lowest terms) below any given constant is finite despite the set of rational numbers being infinite. In this sense, height functions can be used to prove asymptotic results such as Baker's theorem in transcendental number theory, which was proved by Alan Baker.
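The finiteness of the set of rationals below a given height bound is easy to see computationally; a minimal sketch, taking the naive height of p/q in lowest terms to be max(|p|, |q|):

```python
from math import gcd

def naive_height(p, q):
    """Naive height of the rational p/q: the maximum of the absolute
    values of numerator and denominator after reducing to lowest terms."""
    g = gcd(p, q)
    return max(abs(p) // g, abs(q) // g)

def rationals_of_height_at_most(bound):
    """All rationals p/q in lowest terms (q > 0) with naive height <= bound.
    The ranges already enforce the height bound once gcd(p, q) == 1."""
    return {(p, q)
            for p in range(-bound, bound + 1)
            for q in range(1, bound + 1)
            if gcd(p, q) == 1}

print(naive_height(6, 8))                    # 6/8 reduces to 3/4 → 4
print(len(rationals_of_height_at_most(5)))   # → 39: finite, though Q is infinite
```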
In other cases, height functions can distinguish some objects based on their complexity. For instance, the subspace theorem, proved by Wolfgang Schmidt, demonstrates that points of small height (i.e. small complexity) in projective space lie in a finite number of hyperplanes and generalizes Siegel's theorem on integral points and solution of the S-unit equation.
Height functions were crucial to the proofs of the Mordell–Weil theorem and Faltings's theorem, by André Weil and Gerd Faltings respectively. Several outstanding unsolved problems about the heights of rational points on algebraic varieties, such as the Manin conjecture and Vojta's conjecture, have far-reaching implications for problems in Diophantine approximation, Diophantine equations, arithmetic geometry, and mathematical logic.
History
An early form of height function was proposed by Giambattista Benedetti (c. 1563), who argued that the consonance of a musical interval could be measured by the product of its numerator |
https://en.wikipedia.org/wiki/Global%20Ocean%20Ship-based%20Hydrographic%20Investigations%20Program | GO-SHIP (The Global Ocean Ship-based Hydrographic Investigations Program) is a multidisciplinary project to monitor ocean and climate changes. So far, the program has involved twelve countries and 116 completed or planned cruises. The participating countries are the United States, United Kingdom, Japan, Canada, Germany, Spain, Australia, Norway, France, South Africa, Ireland and Sweden. Most of the cruises have been carried out by the United States, United Kingdom, Japan, Canada, Germany and Spain.
Background
Between 1872 and 1876, the Challenger expedition initiated the modern marine survey and marked the foundation of oceanography. Since then, investigations have continued and many great discoveries have been made. At the end of the 19th century, the United States built USS Albatross (1882) for ocean surveys. In 1893, the Norwegian scientist Fridtjof Nansen froze his ship Fram into the Arctic ice for three years to collect long-term oceanographic, meteorological and astronomical observations. One of the first acoustic measurements of the ocean floor was made in 1919. From 1925 to 1927, the "Meteor" expedition used echo sounders to take 70,000 ocean depth measurements and to explore the Mid-Atlantic Ridge. In 1953, Maurice Ewing and Bruce Heezen discovered the global ridge system extending along the Mid-Atlantic Ridge. In 1960, Harry Hammond Hess developed the seafloor spreading theory on the basis of ocean exploration. The Deep Sea Drilling Project started in 1968.
In recent years, oceanographic investigation has revealed that the ocean environment is changing, with ocean acidification, rising water temperatures, shifts in the carbon cycle, and sea level rise. Oceanographers are trying to find solutions to these changes through ocean exploration. However, it is hard to understand the whole system from within a single discipline, because the ocean environment is balanced by both its physical and its chemical conditions, which together are essential for the diversity of marine biology. For example, if the temperature at the ocean surface rises, it w
https://en.wikipedia.org/wiki/Psychomotor%20retardation | Psychomotor retardation involves a slowing down of thought and a reduction of physical movements in an individual. It can cause a visible slowing of physical and emotional reactions, including speech and affect.
Psychomotor retardation is most commonly seen in people with major depression and in the depressed phase of bipolar disorder; it is also associated with the adverse effects of certain drugs, such as benzodiazepines. Particularly in an inpatient setting, psychomotor retardation may require increased nursing care to ensure adequate food and fluid intake and sufficient personal care. Informed consent for treatment is more difficult to achieve in the presence of this condition.
Causes
Psychiatric disorders: anxiety disorders, bipolar disorder, eating disorders, schizophrenia, severe depression, etc.
Psychiatric medicines (if taken as prescribed or improperly, overdosed, or mixed with alcohol)
Parkinson's disease
Genetic disorders: Qazi–Markouizos syndrome, Say–Meyer syndrome, Tranebjaerg–Svejgaard syndrome, Wiedemann–Steiner syndrome, Wilson's disease, etc.
Examples
Examples of psychomotor retardation include the following:
Unaccountable difficulty in carrying out what are usually considered "automatic" or "mundane" self care tasks for healthy people (i.e., without depressive illness) such as taking a shower, dressing, grooming, cooking, brushing teeth, and exercising.
Physical difficulty performing activities that normally require little thought or effort, such as walking up stairs, getting out of bed, preparing meals, and clearing dishes from the table, household chores, and returning phone calls.
Tasks requiring mobility suddenly (or gradually) may inexplicably seem "impossible". Activities such as shopping, getting groceries, taking care of daily needs, and meeting the demands of employment or school are commonly affected.
Activities usually requiring little mental effort can become challenging. Balancing a checkbook, making a shopping list, a |
https://en.wikipedia.org/wiki/Vermin | Vermin (colloquially varmint(s) or varmit(s)) are pests or nuisance animals that spread diseases or destroy crops, livestock, and property. Since the term is defined in relation to human activities, which species are included vary by region and enterprise.
The term derives from the Latin vermis (worm), and was originally used for the worm-like larvae of certain insects, many of which infest foodstuffs. The terms varmint and vermint have been found in sources from c. 1530–1540s.<ref>"Vermint" cited in England in 1539, Oxford English Dictionary, 2nd ed.</ref>
Definition
The term "vermin" is used to refer to a wide scope of organisms, including rodents (such as rats), cockroaches, termites, bed bugs, ferrets, stoats, sables.
Historically, in the 16th and 17th century, the expression also became used as a derogatory term associated with groups of persons typically plagued by vermin, namely beggars and vagabonds, and more generally the poor.
Disease-carrying rodents and insects are the usual case, but the term is also applied to larger animals—especially small predators—typically because they consume resources which humans consider theirs, such as livestock and crops. Birds which eat cereal crops and fruit are an example. The American crow (Corvus brachyrhynchos), is widely hated by farmers because of crop depredation. Pigeons, which have been widely introduced in urban environments, are also sometimes considered vermin. Some varieties of snakes and arachnids may also be referred to as vermin. "Vermin" is also used by some people as a term of abuse, either individually or collectively.
Varmint
Varmint or varmit is an American-English colloquialism, a corruption of "vermin" particularly common to the American East and South-east within the nearby bordering states of the vast Appalachia region. The term describes species which raid farms from without, as opposed to vermin (such as rats) that infest from within, thus referring mainly to predators such as feral dogs,
https://en.wikipedia.org/wiki/Extended%20natural%20numbers | In mathematics, the extended natural numbers is the set containing the values 0, 1, 2, … and ∞ (infinity). That is, it is the result of adding a maximum element ∞ to the natural numbers. Addition and multiplication work as normal for finite values, and are extended by the rules x + ∞ = ∞ + x = ∞, 0 · ∞ = ∞ · 0 = 0, and x · ∞ = ∞ · x = ∞ for x ≠ 0.
With addition and multiplication, the set is a semiring but not a ring, as ∞ lacks an additive inverse. It can be denoted by ℕ̄, ℕ∞, or ℕ ∪ {∞}. It is a subset of the extended real number line, which extends the real numbers by adding +∞ and −∞.
Applications
In graph theory, the extended natural numbers are used to define distances in graphs, with being the distance between two unconnected vertices. They can be used to show the extension of some results, such as the max-flow min-cut theorem, to infinite graphs.
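In code, this convention typically appears as a distance function that returns an infinite value for unreachable vertices. The following Python breadth-first-search sketch (over an invented adjacency-list graph) uses floating-point infinity in the role of the extended natural number ∞:

```python
from collections import deque
from math import inf

def distance(graph, start, goal):
    """Breadth-first search over an adjacency list; returns inf when the
    goal is unreachable, playing the role of the extended natural ∞."""
    if start == goal:
        return 0
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        for nxt in graph.get(node, ()):
            if nxt == goal:
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return inf

g = {"a": ["b"], "b": ["c"], "c": [], "d": []}  # "d" is isolated
print(distance(g, "a", "c"))  # → 2
print(distance(g, "a", "d"))  # → inf
```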
In topology, the topos of right actions on the extended natural numbers is a category PRO of projection algebras.
In constructive mathematics, the extended natural numbers are a one-point compactification of the natural numbers, yielding the set of non-increasing binary sequences, i.e. sequences (x₀, x₁, …) with each xᵢ ∈ {0, 1} such that xᵢ₊₁ ≤ xᵢ. The sequence with exactly n ones represents n, while the all-ones sequence represents ∞. It is a retract of the set of all binary sequences, and the claim that every extended natural number is either finite or equal to ∞ implies the limited principle of omniscience.
Notes |
https://en.wikipedia.org/wiki/Kinetic%20energy%20penetrator | A kinetic energy penetrator (KEP), also known as long-rod penetrator (LRP), is a type of ammunition designed to penetrate vehicle armour using a flechette-like, high-sectional density projectile. Like a bullet or kinetic energy weapon, this type of ammunition does not contain explosive payloads and uses purely kinetic energy to penetrate the target. Modern KEP munitions are typically of the armour-piercing fin-stabilized discarding sabot (APFSDS) type.
History
Early cannons fired kinetic energy ammunition, initially consisting of heavy balls of worked stone and later of dense metals. From the beginning, combining high muzzle energy with projectile weight and hardness have been the foremost factors in the design of such weapons. Similarly, the foremost purpose of such weapons has generally been to defeat protective shells of armored vehicles or other defensive structures, whether it is stone walls, sailship timbers, or modern tank armour. Kinetic energy ammunition, in its various forms, has consistently been the choice for those weapons due to the highly focused terminal ballistics.
The development of the modern KE penetrator combines two aspects of artillery design, high muzzle velocity and concentrated force. High muzzle velocity is achieved by using a projectile with a low mass and large base area in the gun barrel. Firing a small-diameter projectile wrapped in a lightweight outer shell, called a sabot, raises the muzzle velocity. Once the shell clears the barrel, the sabot is no longer needed and falls off in pieces. This leaves the projectile traveling at high velocity with a smaller cross-sectional area and reduced aerodynamic drag during the flight to the target (see external ballistics and terminal ballistics). Germany developed modern sabots under the name "treibspiegel" ("thrust mirror") to give extra altitude to its anti-aircraft guns during the Second World War. Before this, primitive wooden sabots had been used for centuries in the form of a wooden pl |
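The arithmetic behind "concentrated force" can be illustrated with a short calculation; every figure below is invented purely for illustration and does not describe any real munition:

```python
import math

def kinetic_energy(mass_kg, velocity_ms):
    """Kinetic energy E = 1/2 m v^2, in joules."""
    return 0.5 * mass_kg * velocity_ms ** 2

def sectional_density(mass_kg, diameter_m):
    """Mass per unit frontal area (kg/m^2); a long thin rod concentrates
    the same mass and energy on a much smaller area than full-bore shot."""
    frontal_area = math.pi * (diameter_m / 2) ** 2
    return mass_kg / frontal_area

# Hypothetical 4 kg projectile fired at 1700 m/s:
print(kinetic_energy(4.0, 1700.0))   # → 5780000.0 joules

# The same mass as a 25 mm rod versus a 120 mm full-bore shot:
rod = sectional_density(4.0, 0.025)
shot = sectional_density(4.0, 0.120)
print(round(rod / shot, 2))          # → 23.04, i.e. ~23x the sectional density
```

The ratio scales with the square of the diameters, which is why narrowing the penetrator (and discarding the sabot after launch) pays off so strongly at the target.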
https://en.wikipedia.org/wiki/Mole%20%28animal%29 | Moles are small mammals adapted to a subterranean lifestyle. They have cylindrical bodies, velvety fur, very small, inconspicuous eyes and ears, reduced hindlimbs, and short, powerful forelimbs with large paws adapted for digging.
The word "mole" refers to any species in the family Talpidae, from the Latin word for mole, talpa. Moles are found in most parts of North America, Europe and Asia.
Moles may be viewed as pests to gardeners, but they provide positive contributions to soil, gardens, and ecosystems, including soil aeration, feeding on slugs and small creatures that eat plant roots, and providing prey for other wildlife. They eat earthworms and other small invertebrates in the soil.
Terminology
In Middle English, moles were known as moldwarps. The expression "don't make a mountain out of a molehill" (which means "exaggerating problems") was first recorded in Tudor times. By the era of Early Modern English, the mole was also known in English as mouldywarp or mouldiwarp, a word having cognates in other Germanic languages such as German (Maulwurf), and Danish, Norwegian, Swedish and Icelandic (muldvarp, moldvarp, mullvad, moldvarpa), where muld/mull/mold refers to soil and varp/vad/varpa refers to throwing, hence "one who throws soil" or "dirt-tosser".
Male moles are called "boars"; females are called "sows".
Characteristics
Underground breathing
Moles have been found to tolerate higher levels of carbon dioxide than other mammals, because their blood cells have a special form of hemoglobin that has a higher affinity to oxygen than other forms. In addition, moles use oxygen more effectively by reusing the exhaled air, and can survive in low-oxygen environments such as burrows.
Extra thumbs
Moles have polydactyl forepaws: each has an extra thumb (also known as a prepollex) next to the regular thumb. While the mole's other digits have multiple joints, the prepollex has a single, sickle-shaped bone that develops later and differently from the other fingers d |
https://en.wikipedia.org/wiki/Mikulov%20Castle | Mikulov Castle (German: Nikolsburg) is a castle in the town of Mikulov in the South Moravian Region of the Czech Republic. The castle stands on the site of a historic Slavonic settlement, where the original stone castle was erected at the end of the 13th century. In 1249, Henry I, lord of Liechtenstein and Petronell (d. 1265), was given the lordship of Mikulov (Nikolsburg) as free property by Ottokar II of Bohemia, whom he supported politically. Nikolsburg Castle was sold by the House of Liechtenstein in 1560 and then became the main seat of the princes of Dietrichstein.
The present castle is the result of a reconstruction in 1719–1730 under this family. The daughter of the 9th Prince married General Count Alexander von Mensdorff-Pouilly, Minister of Foreign Affairs, who was created Fürst (prince) of Dietrichstein and Nikolsburg in 1869.
The end of World War II meant complete disaster for the castle, which was destroyed by a fire whose origins are unclear. The Mensdorff-Dietrichstein family was expropriated by the new communist government.
During the war, the anthropological collection from the Moravské zemské muzeum had been moved to Mikulov Castle for safekeeping purposes. Many of the most important discoveries from Předmostí u Přerova, Dolní Věstonice and the Mladeč caves were destroyed by the fire.
After an extensive reconstruction in the 1950s, the castle became the seat of the Regional Museum in Mikulov, housing art and historical collections, including artifacts relating to the history of local wine production. The Renaissance wine barrel, dating from 1643 and one of the largest wine barrels in Central Europe, is on display.
https://en.wikipedia.org/wiki/Address%20munging | Address munging is the practice of disguising an e-mail address to prevent it from being automatically collected by unsolicited bulk e-mail providers.
Address munging is intended to disguise an e-mail address in a way that prevents computer software from seeing the real address, or even any address at all, but still allows a human reader to reconstruct the original and contact the author: an email address such as, "no-one@example.com", becomes "no-one at example dot com", for instance.
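A munging transformation of this kind, and the equally mechanical reversal available to a harvester, can be sketched in a few lines of Python (the "at"/"dot" wording mirrors the example above; as discussed below, any scheme this systematic is trivially reversible):

```python
def munge(address):
    """Replace '@' and '.' with spelled-out words before posting publicly."""
    user, _, domain = address.partition("@")
    return f"{user} at {domain.replace('.', ' dot ')}"

def demunge(munged):
    """The inverse transformation a harvester could just as easily apply."""
    return munged.replace(" dot ", ".").replace(" at ", "@", 1)

m = munge("no-one@example.com")
print(m)            # → no-one at example dot com
print(demunge(m))   # → no-one@example.com
```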
Any e-mail address posted in public is likely to be automatically collected by computer software used by bulk emailers (a process known as e-mail address scavenging). Addresses posted on webpages, Usenet or chat rooms are particularly vulnerable to this. Private e-mail sent between individuals is highly unlikely to be collected, but e-mail sent to a mailing list that is archived and made available via the web, or passed onto a Usenet news server and made public, may eventually be scanned and collected.
Disadvantages
Disguising addresses makes it more difficult for people to send e-mail to each other. Many see it as an attempt to fix a symptom rather than solving the real problem of e-mail spam, at the expense of causing problems for innocent users. In addition, there are e-mail address harvesters who have found ways to read the munged email addresses.
The use of address munging on Usenet is contrary to the recommendations of RFC 1036 governing the format of Usenet posts, which requires a valid e-mail address be supplied in the From: field of the post. In practice, few people follow this recommendation strictly.
Disguising e-mail addresses in a systematic manner (for example, user[at]domain[dot]com) offers little protection.
Any impediment reduces a correspondent's willingness to take the extra trouble to e-mail the user. In contrast, well-maintained e-mail filtering on the user's end does not drive away potential correspondents. No spam filter is 100% immune to false positives, however
https://en.wikipedia.org/wiki/Single%20customer%20view | A single customer view is an aggregated, consistent and holistic representation of the data held by an organisation about its customers that can be viewed in one place, such as a single page. The advantage to an organisation of attaining this unified view comes from the ability it gives to analyse past behaviour in order to better target and personalise future customer interactions. A single customer view is also considered especially relevant where organisations engage with customers through multichannel marketing, since customers expect those interactions to reflect a consistent understanding of their history and preferences. However, some commentators have challenged the idea that a single view of customers across an entire organisation is either natural or meaningful, proposing that the priority should instead be consistency between the multiple views that arise in different contexts.
Where representations of a customer are held in more than one data set, achieving a single customer view can be difficult: firstly because customer identity must be traceable between the records held in those systems, and secondly because anomalies or discrepancies in the customer data must be cleansed to ensure data quality. As such, the acquisition by an organisation of a single customer view is one potential outcome of successful master data management. Since 31 December 2010, maintaining a single customer view, and being able to submit it within 72 hours, has been mandatory for financial institutions in the United Kingdom due to new rules introduced by the Financial Services Compensation Scheme.
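Both difficulties can be sketched in a toy Python example — tracing identity across systems via a shared key, then reconciling the records. The field names, the lower-cased e-mail key, and the "later source wins" conflict rule are all invented for this illustration:

```python
def single_customer_view(*sources):
    """Merge per-system customer records into one view per customer,
    keyed on a shared identifier (here a lower-cased e-mail address);
    later sources overwrite earlier ones on conflicting fields."""
    merged = {}
    for records in sources:
        for rec in records:
            key = rec["email"].strip().lower()      # identity resolution
            merged.setdefault(key, {}).update(rec)  # reconciliation
    return merged

# Two systems holding fragments of the same customer:
crm = [{"email": "Ann@example.com", "name": "Ann", "phone": "555-1234"}]
web = [{"email": "ann@example.com", "last_login": "2024-01-02"}]
view = single_customer_view(crm, web)
print(view["ann@example.com"])
# → {'email': 'ann@example.com', 'name': 'Ann', 'phone': '555-1234', 'last_login': '2024-01-02'}
```

Real master data management must handle far messier identity resolution (typos, changed addresses, multiple identifiers), which is precisely why the single customer view is hard to achieve.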
See also
Data warehouse |
https://en.wikipedia.org/wiki/Balanced%20polygamma%20function | In mathematics, the generalized polygamma function or balanced negapolygamma function is a function introduced by Olivier Espinosa Aldunate and Victor Hugo Moll.
It generalizes the polygamma function to negative and fractional order, but remains equal to it for integer positive orders.
Definition
The generalized polygamma function is defined as follows:
or alternatively,
where is the polygamma function and , is the Hurwitz zeta function.
The function is balanced, in that it satisfies the conditions
.
Relations
Several special functions can be expressed in terms of the generalized polygamma function.
where are the Bernoulli polynomials
where is the -function and is the Glaisher constant.
Special values
The balanced polygamma function can be expressed in a closed form at certain points (where is the Glaisher constant and is the Catalan constant): |
https://en.wikipedia.org/wiki/Computer%20architecture | In computer engineering, computer architecture is a description of the structure of a computer system made from component parts. It can sometimes be a high-level description that ignores details of the implementation. At a more detailed level, the description may include the instruction set architecture design, microarchitecture design, logic design, and implementation.
History
The first documented computer architecture was in the correspondence between Charles Babbage and Ada Lovelace, describing the analytical engine. While building the computer Z1 in 1936, Konrad Zuse described in two patent applications for his future projects that machine instructions could be stored in the same storage used for data, i.e., the stored-program concept. Two other early and important examples are:
John von Neumann's 1945 paper, First Draft of a Report on the EDVAC, which described an organization of logical elements; and
Alan Turing's more detailed Proposed Electronic Calculator for the Automatic Computing Engine, also 1945 and which cited John von Neumann's paper.
The term "architecture" in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks, Jr., members of the Machine Organization department in IBM's main research center in 1959. Johnson had the opportunity to write a proprietary research communication about the Stretch, an IBM-developed supercomputer for Los Alamos National Laboratory (at the time known as Los Alamos Scientific Laboratory). To describe the level of detail for discussing the luxuriously embellished computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements were at the level of "system architecture", a term that seemed more useful than "machine organization".
Subsequently, Brooks, a Stretch designer, opened Chapter 2 of a book called Planning a Computer System: Project Stretch by stating, "Computer architecture, like other architecture, is the art of determining the |
https://en.wikipedia.org/wiki/Hypoallergenic%20dog%20breed | A hypoallergenic dog breed is a dog breed (or crossbreed) that is purportedly more compatible with allergic people than are other breeds. However, prominent allergen researchers have determined that there is no basis to the claims that certain breeds are hypoallergenic and, while allergen levels vary among individual dogs, the breed is not a significant factor.
Scientific findings
Though some studies suggest the possible existence of hypoallergenic dog breeds, there is too much variability to conclude that such a breed exists. According to researchers, claims about the existence of hypoallergenic dog breeds may have been fueled by unsubstantiated articles on the internet. The significant allergens are proteins found in the dog's saliva and dander.
Some studies have suggested that the production of the allergen, and therefore human allergenic reaction, varies by breed, yet more recent scientific findings indicate that there are no significant differences between breeds in the generation of these allergens. One study found hypoallergenic breeds to have significantly more allergen in their coats than non-hypoallergenic breeds and no differences in the allergen levels in the air or on the floor.
Breeds that shed less are more likely to be hypoallergenic, since the dog's dander and saliva stick to the hair and are not released into the environment. However, protein expression levels play a major role and amount of shedding alone does not determine degree of allergic reaction. "Even if you get a hairless dog, it's still going to produce the allergen," states Dr. Wanda Phipatanakul, chair of the Indoor Allergen Committee for the American Academy of Allergy, Asthma, and Immunology.
If a person is allergic, they may be best able to tolerate a specific dog, possibly of one of the hypoallergenic breeds. Dr. Thomas A. Platts-Mills, head of the Asthma and Allergic Disease Center at the University of Virginia, explained that there are cases in which a specific dog (not br |
https://en.wikipedia.org/wiki/Spock%20%28testing%20framework%29 |
Spock is a Java testing framework capable of handling the complete life cycle of a computer program. It was initially created in 2008 by Peter Niederwieser, a software engineer with Gradleware. A second Spock committer is Luke Daley (also with Gradleware), the creator of the popular Geb functional testing framework.
See also
JUnit, unit testing framework for the Java programming language
Mockito, mocking extensions to JUnit
TestNG, test framework for Java |
https://en.wikipedia.org/wiki/Injective%20tensor%20product | In mathematics, the injective tensor product of two topological vector spaces (TVSs) was introduced by Alexander Grothendieck and was used by him to define nuclear spaces. An injective tensor product is in general not necessarily complete, so its completion is called the completed injective tensor product. Injective tensor products have applications outside of nuclear spaces. In particular, as described below, up to TVS-isomorphism, many TVSs that are defined for real or complex valued functions, for instance, the Schwartz space or the space of continuously differentiable functions, can be immediately extended to functions valued in a Hausdorff locally convex TVS without any need to extend definitions (such as "differentiable at a point") from real/complex-valued functions to -valued functions.
Preliminaries and notation
Throughout let and be topological vector spaces and be a linear map.
is a topological homomorphism or homomorphism, if it is linear, continuous, and is an open map, where has the subspace topology induced by
If is a subspace of then both the quotient map and the canonical injection are homomorphisms. In particular, any linear map can be canonically decomposed as follows: where defines a bijection.
The set of continuous linear maps (resp. continuous bilinear maps ) will be denoted by (resp. ) where if is the scalar field then we may instead write (resp. ).
The set of separately continuous bilinear maps (that is, continuous in each variable when the other variable is fixed) will be denoted by where if is the scalar field then we may instead write
We will denote the continuous dual space of by or and the algebraic dual space (which is the vector space of all linear functionals on whether continuous or not) by
To increase the clarity of the exposition, we use the common convention of writing elements of with a prime following the symbol (for example, denotes an element of and not, say, a derivative and the variables and need not be related in |
https://en.wikipedia.org/wiki/List%20of%20language%20bindings%20for%20Qt%205 | — Columns detailing the features covered by the binding are missing. —
See also
List of language bindings for Qt 4
List of language bindings for GTK+
List of language bindings for wxWidgets
List of Qt language bindings from the qt-project.org wiki |
https://en.wikipedia.org/wiki/Center%20for%20Biofilm%20Engineering | The Center for Biofilm Engineering (CBE) is an interdisciplinary research, education, and technology transfer institution located on the central campus of Montana State University in Bozeman, Montana. The center was founded in April 1990 as the Center for Interfacial Microbial Process Engineering with a grant from the Engineering Research Centers (ERC) program of the National Science Foundation (NSF). The CBE integrates faculty from multiple university departments to lead multidisciplinary research teams—including graduate and undergraduate students—to advance fundamental biofilm knowledge, develop beneficial uses for microbial biofilms, and find solutions to industrially relevant biofilm problems. The center tackles biofilm issues including chronic wounds, bioremediation, and microbial corrosion through cross-disciplinary research and education among engineers, microbiologists and industry.
History
The center originated as the Institute for Chemical and Biological Process Analysis (IPA) in 1983. In 1990, the center became a national ERC as the Center for Interfacial Microbial Process Engineering based on a $7.2 million grant from the NSF. In 1993 the center assumed its current name—Center for Biofilm Engineering. The original grants expired in 2001 and the center became self-sufficient.
Institute for Chemical and Biological Process Analysis (1979–1990)
In 1979 W.G. (Bill) Characklis came to Montana State University from Rice University as a professor in civil (environmental) and chemical engineering. He assembled a multidisciplinary team of engineers, microbiologists and chemists to study the processes and effects of microbial growth at interfaces. He established a cross-disciplinary environmental biotechnology institute to address the needs of industry in the areas of biofouling, microbial corrosion and biofilm technology. The Institute for Chemical and Biological Process Analysis (IPA) was chartered by the Montana Board of Regents in 1983 within the Montana Stat
https://en.wikipedia.org/wiki/Evolution%3A%20The%20Game%20of%20Intelligent%20Life | Evolution: The Game of Intelligent Life is a life simulation and real-time strategy computer game that allows players to experience, guide, and control evolution from an isometric view on either historical earth or on randomly generated worlds while racing against computer opponents to reach the top of the evolution chain, and gradually evolving the player's animals to reach the "grand goal of intelligent life". It was published by Interplay Entertainment and Discovery Channel Multimedia in 1997.
Gameplay
Players select different ages to play through, including the Labyrinthodontia, or the first amphibians through to the evolution of the Age of Mammals. Each species has points that players can spend on adapting or evolving their creature populations, which are represented by animated icons of that creature. The more points a player spends on a field for a species, the quicker it evolves, becomes better at feeding (and growing in number faster), or better at fighting off predators. When a player evolves a creature, one can pick a population to be upgraded to the evolution one chooses. Each creature has a different set of evolution paths, and some can evolve into six or more different creatures. The world grows as the game advances, with land masses drifting and terrain shifting. As the player evolves the selected creatures, the player slowly advances in the complexity of the animals, eventually reaching intelligent life. Usually, the first player to do this is the victor (standard rules).
The game starts from basal tetrapods, the very first land dwellers: amphibians. At the start of the game, each player begins with one population of a species of prehistoric amphibian—e.g. Ichthyostega, Tulerpeton, or Acanthostega. Gradually, if the player monitors the species' progress and moves them to more appropriate habitats and climate zones, the selected species will feed, breed, and prosper. From that secure population the player can then evolve more advanced and adaptable creatu
https://en.wikipedia.org/wiki/Couepia%20polyandra | Couepia polyandra, also known as olosapo, zapote amarillo, baboon cap, and monkey cap, is a flowering tree in the family Chrysobalanaceae.
Distribution
Couepia polyandra is native to southern Mexico south to Panama and has been introduced to Florida. It grows wild in damp thickets, riverine forests, and low woodland up to in elevation.
Description
It is an evergreen shrub or small tree with a spreading crown that grows to in height. The leaves are dark green and are elliptic to ovate in shape and measure in length and in width. They are round to cuneate at the base and acuminate at the apex. The acumen measures 2–10 millimeters in length. They are glabrous above when mature and have a caducous pubescence when young. The underside is strongly arachnoid. The midrib is prominent above and is pubescent when young; primary veins are in pairs of 8–15 and are prominent on both surfaces. The stipules of the leaves measure 2–4 millimeters in length and are linear, membranous, and caducous. The petioles measure 4–7 millimeters in length and are terete with 2 inconspicuous medial glands. The inflorescences are terminal and axillary panicles. The rachis and branches have a short, light brown pubescence and the bracts and bracteoles measure 1–3.5 millimeters in length and are ovate and caducous. The receptacle is subcylindrical and measures around 4 millimeters in length and has a short, appressed pubescence on the exterior and is glabrous within except for the deflexed hairs at the throat. The calyx lobes are rounded and the petals number 5 and are white and glabrous but have ciliate margins. It has 11–21 stamens, which are inserted in an arc of 180–240 degrees with a few staminodes opposite. The ovary is villous and pubescent for half its length. The bark is brown in color and mostly smooth. The fruit is edible and is yellow to orange-yellow in color when ripe and is green when unripe. It is ovoid in shape and measures in length and in width. It contains one large seed |
https://en.wikipedia.org/wiki/VIA%20PadLock | VIA PadLock is a central processing unit (CPU) instruction set extension to the x86 microprocessor instruction set architecture (ISA) found on processors produced by VIA Technologies and Zhaoxin. Introduced in 2003 with the VIA Centaur CPUs, the additional instructions provide hardware-accelerated random number generation (RNG), Advanced Encryption Standard (AES), SHA-1, SHA256, and Montgomery modular multiplication.
Instructions
The PadLock instruction set can be divided into four subsets:
Random number generation (RNG)
XSTORE: Store Available Random Bytes (aka XSTORERNG)
REP XSTORE: Store ECX Random Bytes
Advanced cryptography engine (ACE) - for AES crypto; two versions
REP XCRYPTECB: Electronic code book
REP XCRYPTCBC: Cipher Block Chaining
REP XCRYPTCTR: Counter Mode (ACE2)
REP XCRYPTCFB: Cipher Feedback Mode
REP XCRYPTOFB: Output Feedback Mode
SHA hash engine (PHE)
REP XSHA1: Hash Function SHA-1
REP XSHA256: Hash Function SHA-256
Montgomery multiplier (PMM)
REP MONTMUL
The PadLock capability is indicated via the CPUID instruction with EAX = 0xC0000000. If the resulting EAX >= 0xC0000001, the CPU supports Centaur extended features. An additional request with EAX = 0xC0000001 then returns the PadLock support flags in EDX. The PadLock capability can be toggled on or off with MSR 0x1107.
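The detection step above can be sketched in code. The following is a minimal decoder for the EDX feature word, assuming the bit assignments used by the Linux kernel's Centaur-defined feature word (not stated in this article); on real hardware the EDX value would come from executing CPUID with EAX = 0xC0000001, but here a hypothetical value is decoded instead.

```python
# Sketch: decoding the PadLock feature flags returned in EDX by CPUID
# leaf 0xC0000001. The bit positions below follow the Centaur feature
# word as defined in the Linux kernel (arch/x86) and are an assumption
# here, not taken from this article.
PADLOCK_BITS = {
    2: "RNG present (XSTORE)",
    3: "RNG enabled",
    6: "ACE present (XCRYPT)",
    7: "ACE enabled",
    8: "ACE2 present",
    9: "ACE2 enabled",
    10: "PHE present (XSHA1/XSHA256)",
    11: "PHE enabled",
    12: "PMM present (MONTMUL)",
    13: "PMM enabled",
}

def decode_padlock(edx):
    """Return the list of PadLock features indicated by an EDX value."""
    return [name for bit, name in sorted(PADLOCK_BITS.items())
            if edx & (1 << bit)]

# Hypothetical EDX with RNG, ACE, and PHE both present and enabled.
print(decode_padlock(0x0CCC))
```

A feature is usable only when both its "present" and "enabled" bits are set, since firmware may disable an available unit via MSR 0x1107.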
VIA PadLock as found on some Zhaoxin CPUs adds SM3 hashing and the SM4 block cipher.
CPUs with PadLock
All VIA Nano CPUs support SHA, AES, and RNG.
All VIA Eden CPUs since 2003 (C3 Nehemiah) support AES and RNG. All those released since 2006 support AES, RNG, SHA, and PMM.
All VIA C7 CPUs support AES, RNG, SHA, and PMM.
Supporting software
The Linux kernel has supported PadLock AES since 2.6.11 and PadLock SHA since 2.6.19; both are handled as "hardware crypto devices".
OpenBSD and FreeBSD support PadLock.
OpenSSL supports PadLock AES and SHA since 2004 (0.9.7f/0.9.8a).
GNU assembler supports PadLock since 2004.
See also
AES instruction set
Block cipher mode of operati |
https://en.wikipedia.org/wiki/Harmonic%20spectrum | A harmonic spectrum is a spectrum containing only frequency components whose frequencies are whole number multiples of the fundamental frequency; such frequencies are known as harmonics. "The individual partials are not heard separately but are blended together by the ear into a single tone."
In other words, if f is the fundamental frequency, then a harmonic spectrum has the form f, 2f, 3f, 4f, …
A standard result of Fourier analysis is that a function has a harmonic spectrum if and only if it is periodic.
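The Fourier result above can be illustrated with a short, self-contained sketch (the sample count, bin choices, and threshold are illustrative): a signal built purely from harmonics of one fundamental is periodic, and its discrete Fourier transform is nonzero only at whole-number multiples of the fundamental bin.

```python
import math

# Illustrative sketch: a signal composed of a fundamental (DFT bin k)
# plus its 2nd and 3rd harmonics. Such a signal is periodic with period
# N/k samples, so its spectrum should be nonzero only at multiples of k.
N = 64   # number of samples (illustrative)
k = 4    # fundamental bin -> period of N/k = 16 samples

signal = [math.cos(2 * math.pi * k * n / N)
          + 0.5 * math.cos(2 * math.pi * 2 * k * n / N)
          + 0.25 * math.cos(2 * math.pi * 3 * k * n / N)
          for n in range(N)]

def dft_magnitudes(x):
    """Naive O(N^2) discrete Fourier transform, magnitude per bin."""
    n_pts = len(x)
    return [abs(sum(x[t] * complex(math.cos(2 * math.pi * f * t / n_pts),
                                   -math.sin(2 * math.pi * f * t / n_pts))
                    for t in range(n_pts)))
            for f in range(n_pts)]

mags = dft_magnitudes(signal)
peaks = [f for f, m in enumerate(mags) if m > 1e-6]
# Every nonzero bin is a whole-number multiple of the fundamental bin k
# (bins above N/2 are the mirrored negative frequencies).
print(peaks)
```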
See also
Fourier series
Harmonic series (music)
Periodic function
Scale of harmonics
Undertone series |
https://en.wikipedia.org/wiki/Optiquest | Optiquest is a line of budget LCD (and formerly CRT) displays by ViewSonic.
They differ from the main ViewSonic product line in several ways, ranging from parts availability to warranty length.
https://en.wikipedia.org/wiki/Normal%20distributions%20transform | The normal distributions transform (NDT) is a point cloud registration algorithm introduced by Peter Biber and Wolfgang Straßer in 2003, while working at University of Tübingen.
The algorithm registers two point clouds by first associating a piecewise normal distribution with the first point cloud, which gives the probability of sampling a point belonging to the cloud at a given spatial coordinate, and then finding the transform that maps the second point cloud onto the first by maximising the likelihood of the second cloud under that distribution, as a function of the transform parameters.
Originally introduced for 2D point cloud map matching in simultaneous localization and mapping (SLAM) and relative position tracking, the algorithm was extended to 3D point clouds and has wide applications in computer vision and robotics. NDT is very fast and accurate, making it suitable for application to large scale data, but it is also sensitive to initialisation, requiring a sufficiently accurate initial guess, and for this reason it is typically used in a coarse-to-fine alignment strategy.
Formulation
The NDT function associated to a point cloud is constructed by partitioning the space into regular cells. For each cell, it is possible to define the mean μ and covariance Σ of the points of the cloud that fall within the cell. The probability density of sampling a point x at a given spatial location within the cell is then given by the normal distribution
p(x) ∝ exp(−½ (x − μ)ᵀ Σ⁻¹ (x − μ)).
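The cell construction can be sketched in a few lines of code; the cell size, the point set, and the helper names are illustrative assumptions, and real NDT implementations additionally regularise the covariance and operate on full scan data.

```python
import math

# Sketch of the NDT construction: partition 2D points into regular
# cells and fit a mean and 2x2 covariance per occupied cell.
CELL = 1.0  # cell edge length (illustrative)

def build_ndt(points):
    """Map cell index -> (mean, covariance) for the points in that cell."""
    cells = {}
    for x, y in points:
        idx = (math.floor(x / CELL), math.floor(y / CELL))
        cells.setdefault(idx, []).append((x, y))
    ndt = {}
    for idx, pts in cells.items():
        n = len(pts)
        mx = sum(p[0] for p in pts) / n
        my = sum(p[1] for p in pts) / n
        # Maximum-likelihood 2x2 covariance (divide by n)
        cxx = sum((p[0] - mx) ** 2 for p in pts) / n
        cyy = sum((p[1] - my) ** 2 for p in pts) / n
        cxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n
        ndt[idx] = ((mx, my), ((cxx, cxy), (cxy, cyy)))
    return ndt

# Two small clusters, each landing in its own unit cell.
points = [(0.2, 0.3), (0.4, 0.1), (0.3, 0.5), (1.6, 1.7), (1.8, 1.5)]
ndt = build_ndt(points)
print(sorted(ndt))
```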
Two point clouds can be mapped by a Euclidean transformation, with rotation matrix R and translation vector t, that maps from the second cloud to the first, parametrised by the rotation angles and translation components.
The algorithm registers the two point clouds by optimising the parameters p of the transformation that maps the second cloud to the first, with respect to a loss function based on the NDT of the first point cloud, solving the following problem
min over p of L(p) = −Σᵢ log p(R xᵢ + t), with xᵢ the points of the second cloud,
where the loss function represents the negated likelihood, obtaine
https://en.wikipedia.org/wiki/Retinalophototroph | A retinalophototroph is one of the two types of phototrophs, named for the retinal-binding proteins (microbial rhodopsins) it utilizes for cell signaling and for converting light into energy. Like all photoautotrophs, retinalophototrophs absorb photons to initiate their cellular processes. Unlike chlorophyll-based phototrophs, however, retinalophototrophs do not use chlorophyll or an electron transport chain to power their chemical reactions. This means retinalophototrophs are incapable of traditional carbon fixation, a fundamental photosynthetic process that transforms inorganic carbon (carbon contained in molecular compounds such as carbon dioxide) into organic compounds. For this reason, experts consider them less efficient than their chlorophyll-using counterparts, chlorophototrophs.
Energy conversion
Retinalophototrophs achieve adequate energy conversion via a proton-motive force. In retinalophototrophs, proton-motive force is generated from rhodopsin-like proteins, primarily bacteriorhodopsin and proteorhodopsin, acting as proton pumps along a cellular membrane.
To capture the photons needed to activate a proton pump, retinalophototrophs employ organic pigments known as carotenoids, namely beta-carotenoids. Beta-carotenoids present in retinalophototrophs are unusual candidates for energy conversion, but they possess the high Vitamin-A activity necessary for retinaldehyde, or retinal, formation. Retinal, a chromophore molecule configured from Vitamin A, is formed when bonds between carotenoids are disrupted in a process called cleavage. Due to its acute light sensitivity, retinal is ideal for activating the proton-motive force, and it imparts a unique purple coloration to retinalophototrophs. Once retinal absorbs enough light, it isomerizes, thereby forcing a conformational (i.e., structural) change in the covalent bonds of the rhodopsin-like proteins. Upon activation, these proteins act as a gateway, allowing passage of ions to create an electrochemical gradient b
https://en.wikipedia.org/wiki/Analytic%20proof | In mathematics, an analytic proof is a proof of a theorem in analysis that only makes use of methods from analysis, and which does not predominantly make use of algebraic or geometrical methods. The term was first used by Bernard Bolzano, who first provided a non-analytic proof of his intermediate value theorem and then, several years later, provided a proof of the theorem that was free from intuitions concerning lines crossing each other at a point, and so he felt happy calling it analytic (Bolzano 1817).
Bolzano's philosophical work encouraged a more abstract reading of when a demonstration could be regarded as analytic, where a proof is analytic if it does not go beyond its subject matter (Sebastik 2007). In proof theory, an analytic proof has come to mean a proof whose structure is simple in a special way, due to conditions on the kind of inferences that ensure none of them go beyond what is contained in the assumptions and what is demonstrated.
Structural proof theory
In proof theory, the notion of analytic proof provides the fundamental concept that brings out the similarities between a number of essentially distinct proof calculi, so defining the subfield of structural proof theory. There is no uncontroversial general definition of analytic proof, but for several proof calculi there is an accepted notion. For example:
In Gerhard Gentzen's natural deduction calculus the analytic proofs are those in normal form; that is, no formula occurrence is both the principal premise of an elimination rule and the conclusion of an introduction rule;
In Gentzen's sequent calculus the analytic proofs are those that do not use the cut rule.
However, it is possible to extend the inference rules of both calculi so that there are proofs that satisfy the condition but are not analytic. For example, a particularly tricky example of this is the analytic cut rule, used widely in the tableau method, which is a special case of the cut rule where the cut formula is a subform |