**Mantle (API)**
Mantle was a low-overhead rendering API targeted at 3D video games. AMD originally developed Mantle in cooperation with DICE, starting in 2013. Mantle was designed as an alternative to Direct3D and OpenGL, primarily for use on personal computers, although Mantle supports the GPUs present in the PlayStation 4 and in the Xbox One. In 2015, Mantle's public development was suspended and in 2019 completely discontinued, as DirectX 12 and the Mantle-derived Vulkan rose in popularity.
Overview:
The draw call improvements of Mantle help alleviate cases where the CPU is the bottleneck. The design goals of Mantle are to allow games and applications to utilize CPUs and GPUs more efficiently, to eliminate CPU bottlenecks by reducing API validation overhead and allowing more effective scaling across multiple CPU cores, to provide faster draw routines, and to allow greater control over the graphics pipeline by eliminating certain aspects of hardware abstraction inherent to the prevailing graphics APIs of the time, OpenGL and Direct3D.
CPU-bound scenarios:
With a basic implementation, Mantle was designed to improve performance in scenarios where the CPU is the limiting factor: low-overhead validation and processing of API commands; explicit command buffer control; close-to-linear performance scaling from reordering command buffers onto multiple CPU cores; reduced runtime shader compilation overhead; multithreaded parallel CPU rendering support for at least 8 cores. AMD claimed that Mantle can generate up to 9 times more draw calls per second than comparable APIs by reducing CPU overhead.
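The multi-core scaling idea above can be sketched conceptually: each worker thread records its share of draw commands into its own command buffer, and the buffers are then submitted in a single ordered pass. This is only an illustration of the threading model; the class and function names below are invented for the sketch and are not Mantle's actual entry points.

```python
from concurrent.futures import ThreadPoolExecutor

class CommandBuffer:
    """Hypothetical stand-in for an API command buffer."""
    def __init__(self):
        self.commands = []
    def record_draw(self, object_id):
        # A real driver would encode a GPU draw command here;
        # we just store a token to show the structure.
        self.commands.append(("draw", object_id))

def record_range(objects):
    # Each worker records into its own buffer, with no shared state.
    buf = CommandBuffer()
    for obj in objects:
        buf.record_draw(obj)
    return buf

def render_frame(objects, workers=4):
    # Split the scene across workers, record in parallel...
    chunks = [objects[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        buffers = list(pool.map(record_range, chunks))
    # ...then submit the finished buffers serially, in a fixed order.
    return [cmd for buf in buffers for cmd in buf.commands]

frame = render_frame(list(range(10_000)))
```

The point of the pattern is that the expensive part (recording/validation) scales across cores, while submission stays a cheap, ordered, single-threaded step.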
GPU-bound scenarios:
Mantle was also designed to improve situations where high resolutions and "maximum detail" settings are used, although to a somewhat lesser degree, as these settings tax GPU resources in a way that is more difficult to improve at the API level. While Mantle provides some built-in features to improve GPU-bound performance, gains in these cases depend largely on how well Mantle features and optimizations are utilized by the game engine. Some of those features include: reduction of command buffer submissions; explicit control of resource compression, expansion and synchronization; an asynchronous DMA queue for data uploads independent of the graphics engine; an asynchronous compute queue for overlapping compute and graphics workloads; data format optimizations via flexible buffer/image access; advanced anti-aliasing features for MSAA/EQAA optimizations; native multi-GPU support.
Benchmarks:
Performance superior to Direct3D 11: improved performance in Battlefield 4, and up to 319% faster in the Star Swarm demo in a single-GPU configuration in extremely CPU-limited situations.
Other claims:
Porting from Mantle to Direct3D 12 is easier than porting from Direct3D 11 to Direct3D 12. At GDC 2014, Oxide Games employee Dan Baker stated that Mantle would address fundamental development challenges that could not be addressed by a retrofit of an existing API, since it is hard to optimize for the graphics device driver.
At the AMD Developer Summit (APU) in November 2013, Johan Andersson, technical director of the Frostbite engine at DICE, praised Mantle for making development easier and enabling developers to innovate.
Mantle targets 100K draw calls; monolithic pipeline; pipeline saving and loading; hybrid resource model; generalized resources; control over resource preparation; dynamic flow control without CPU intervention; direct GPU control; reduced runtime shader compilation overhead; better control over the hardware.
"All hardware capabilities are exposed through the API." Reduction of command buffer submissions; data format optimizations via flexible buffer/image access; explicit control of resource compression, expansion, and synchronization; asynchronous DMA queue for data uploads independent of the graphics engine; asynchronous compute queue for overlapping compute and graphics workloads; new rendering techniques.
Support:
The Mantle API was only available as part of AMD Catalyst prior to 19.5.1, which was available for Microsoft Windows. AMD promised to support the Mantle API only for graphics cards and APUs based on its Graphics Core Next microarchitecture, not for older products based on the TeraScale microarchitecture. As of July 2014, the implementation of the Mantle API was available for the following hardware: certain Radeon HD 7000 Series GPUs; certain Radeon HD 8000 Series GPUs; certain AMD Radeon Rx 200 Series GPUs ("R7" and "R9"); all Steamroller-based "Kaveri" APUs (AMD A10-7000 Series and AMD A8-7000 Series); all Jaguar-based "Kabini" and "Temash" APUs (AMD E1-2000 Series, E2-3000 Series, A4-1200 Series, A4-1350, A4-5000 Series, A6-1450, A6-5200, Sempron 2650, Sempron 3850, Athlon 5150, Athlon 5350, etc.);
all Puma-based "Beema" and "Mullins" APUs (E1 Micro-6200T, A4 Micro-6400T, A10 Micro-6700T, E1-6010, E2-6110, A4-6210, A6-6310, etc.). Mantle was originally planned to be released on platforms other than Windows, including Linux, but this never happened. While the API was officially discontinued, Clément Guérin started a Mantle-to-Vulkan translation layer called GRVK in mid-2020. This allows the API, and ultimately the games, to live on even without Mantle-supporting graphics drivers.
Game engines:
At GDC 2014, Crytek announced that it would support Mantle in CryEngine.
During a GPU 14 Tech Days presentation, an announcement was made that Frostbite 3 would include a Mantle backend.
The Nitrous game engine from Oxide Games supports Mantle alongside DirectX 12; a Mantle benchmark is still available in the free Star Swarm stress test.
Thief is based on a modified Unreal Engine 3 that supported Mantle.
LORE, the engine of Civilization: Beyond Earth, supported Mantle.
Asura, the engine used by Sniper Elite III, supported Mantle.
Video games:
Battlefield 4; Battlefield Hardline; Thief; Plants vs. Zombies: Garden Warfare; Civilization: Beyond Earth; Dragon Age: Inquisition; Sniper Elite III.
Originally planned: Star Citizen. 15 Frostbite games after Battlefield 4 were planned to support Mantle, potentially including Need for Speed Rivals, Mass Effect: Andromeda, Mirror's Edge Catalyst, The Sims 4 and Star Wars Battlefront (2015).
There were rumours at the time about other games, including Call of Duty: Advanced Warfare, Dying Light, Grand Theft Auto V and Rise of the Tomb Raider, potentially supporting Mantle, but these reports were never confirmed.
Similar technologies:
A set of recent OpenGL 4.4 features, coupled with bindless texturing as an extension, can also substantially reduce driver overhead. This approach, termed "AZDO" (Approaching Zero Driver Overhead) by the Khronos Group, has been shown to achieve substantial performance improvements, approaching those stated for Mantle. Nvidia has extended OpenGL with a number of features that further reduce driver overhead. After details about DirectX 12 were made public, AMD stated that it fully intended to support DirectX 12, while claiming that Mantle "will [still] do some things faster." AMD also claimed that, due to similarities in the design philosophy of the two APIs, porting games from Mantle to DirectX 12 would be relatively straightforward, and easier than porting from DirectX 11 to 12. Ultimately, AMD discontinued Mantle as a game API due to the similar aims of DirectX 12 and glNext (later renamed Vulkan). AMD donated the Mantle API to the Khronos Group, which developed it into the Vulkan API.
Comments:
Much of the work that drivers used to do on an application’s behalf is now the responsibility of the game engine. ... It also means that this work, which must still be done, is done by someone with considerably more information. Because the engine knows exactly what it will do and how it will do it, it is able to make design decisions that drivers could not.
Recording and FPS overlay software:
PC gamers and professionals traditionally used programs such as Fraps and Bandicam to record gameplay, measure game FPS and display an FPS overlay, but because Mantle was new, most traditional recording software did not work with new titles using the API.
In partnership with AMD, Raptr, a PC gaming community and maker of game recording software, overhauled its client and re-branded it as the AMD Gaming Evolved client, in conjunction with AMD's Gaming Evolved initiative in the PC gaming space. Players who install and use the client while in-game can earn points to spend on digital items such as games or computer hardware, chat with friends, keep their game library optimized, check for graphics card driver updates, stream their games to Twitch, and record their own gameplay with a built-in GVR. This feature, similar to Nvidia's ShadowPlay in its GeForce Experience software, lets users define a custom buffer length for retroactive game recording at the push of a button, so no moment gets missed and users typically do not need expensive hard drive setups to record. In late 2014, AMD updated the client to support the recording and streaming of titles using Mantle. As of that update, the Gaming Evolved software was the only software to officially support the recording and streaming of Mantle-enabled games.
Besides Raptr, D3DGear was the only other commercial game recording software that supported Mantle API-based games.
**Feed conversion ratio**
In animal husbandry, feed conversion ratio (FCR) or feed conversion rate is a ratio or rate measuring the efficiency with which the bodies of livestock convert animal feed into the desired output. For dairy cows, for example, the output is milk, whereas in animals raised for meat (such as beef cattle, pigs, chickens, and fish) the output is the flesh, that is, the body mass gained by the animal, represented either in the final mass of the animal or the mass of the dressed output. FCR is the mass of the input divided by the output (thus mass of feed per mass of milk or meat). In some sectors, feed efficiency, which is the output divided by the input (i.e. the inverse of FCR), is used. These concepts are also closely related to efficiency of conversion of ingested foods (ECI).
Background:
Feed conversion ratio (FCR) is the ratio of inputs to outputs; it is the inverse of "feed efficiency" (FE), which is the ratio of outputs to inputs. FCR is widely used in hog and poultry production, while FE is used more commonly with cattle. Being a ratio, the FCR is dimensionless; that is, it is not affected by the units of measurement used to determine it. FCR is a function of the animal's genetics and age, the quality and ingredients of the feed, the conditions in which the animal is kept, and the storage and use of the feed by the farmworkers. As a rule of thumb, the daily FCR is low for young animals (when relative growth is large) and increases for older animals (when relative growth tends to level out). However, FCR is a poor basis for selecting animals to improve genetics, as that results in larger animals that cost more to feed; instead, residual feed intake (RFI), which is independent of size, is used. RFI uses for output the difference between actual intake and predicted intake based on an animal's body weight, weight gain, and composition. The outputs portion may be calculated based on weight gained, on the whole animal at sale, or on the dressed product; with milk it may be normalized for fat and protein content. As for the inputs portion, although FCR is commonly calculated using feed dry mass, it is sometimes calculated on an as-fed wet mass basis (or, in the case of grains and oilseeds, sometimes on a wet mass basis at standard moisture content), with feed moisture resulting in higher ratios.
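The two definitions above are a single division; a minimal sketch (the broiler numbers in the example are hypothetical, chosen only to match the typical magnitudes quoted later in this article):

```python
def fcr(feed_mass, output_mass):
    """Feed conversion ratio: mass of feed per unit mass of output (lower is better)."""
    return feed_mass / output_mass

def feed_efficiency(feed_mass, output_mass):
    """Feed efficiency: output per unit of feed -- simply the inverse of FCR."""
    return output_mass / feed_mass

# Hypothetical broiler: 3.2 kg of feed eaten for 2.0 kg of live-weight gain.
ratio = fcr(3.2, 2.0)            # 1.6
fe = feed_efficiency(3.2, 2.0)   # 0.625
```

Because both masses must be in the same unit, the result is dimensionless, which is why FCRs can be compared across farms using different measurement systems.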
Conversion ratios for livestock:
Animals that have a low FCR are considered efficient users of feed. However, comparisons of FCR among different species may be of little significance unless the feeds involved are of similar quality and suitability.
Beef cattle:
As of 2013 in the US, an FCR calculated on live-weight gain of 4.5–7.5 was in the normal range, with an FCR above 6 being typical. Divided by an average carcass yield of 62.2%, the typical carcass-weight FCR is above 10. As of 2013, beef FCRs had not changed much in the prior 30 years, in contrast especially to poultry, which had improved feed efficiency by about 250% over the prior 50 years.
Dairy cattle:
The dairy industry traditionally did not use FCR, but in response to increasing concentration in the dairy industry and other livestock operations, the EPA updated its regulations in 2003 to control manure and other waste releases produced by livestock operators. In response, the USDA began issuing guidance to dairy farmers about how to control inputs to better minimize manure output and its harmful contents, as well as to optimize milk output. In the US, the price of milk is based on its protein and fat content, so the FCR is often calculated to take that into account. Using an FCR calculated just on the weight of protein and fat, as of 2011 an FCR of 13 was poor, and an FCR of 8 was very good. Another method for dealing with pricing based on protein and fat is using energy-corrected milk (ECM), which normalizes to assumed amounts of fat and protein in the final milk product; the formula is (0.327 x milk mass) + (12.95 x fat mass) + (7.2 x protein mass). In the dairy industry, feed efficiency (ECM/intake) is often used instead of FCR (intake/ECM); an FE less than 1.3 is considered problematic. FE based simply on the weight of milk is also used; an FE between 1.30 and 1.70 is normal.
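The ECM formula and the FE threshold can be applied directly; the cow's daily figures below (30 kg milk at 4% fat and 3.1% protein, 22 kg dry-matter intake) are illustrative assumptions, not data from the article:

```python
def energy_corrected_milk(milk_kg, fat_kg, protein_kg):
    # ECM formula from the text: weights milk mass by its fat and protein content.
    return 0.327 * milk_kg + 12.95 * fat_kg + 7.2 * protein_kg

# Hypothetical day for one cow: 30 kg of milk containing 1.2 kg fat (4%)
# and 0.93 kg protein (3.1%), eaten against 22 kg of dry-matter intake.
ecm = energy_corrected_milk(30.0, 1.2, 0.93)   # 32.046 kg ECM
fe = ecm / 22.0                                # about 1.46 (ECM per kg of intake)
problematic = fe < 1.3                         # below 1.3 would be flagged
```

Here FE comes out around 1.46, comfortably above the 1.3 level the text flags as problematic.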
Pigs:
Pigs have been kept to produce meat for 5,000 to 9,000 years. As of 2011, pigs used commercially in the UK and Europe had an FCR, calculated using weight gain, of about 1 as piglets, rising to about 3 at the time of slaughter. As of 2012 in Australia, using dressed weight for the output, an FCR of 4.5 was fair, 4.0 was considered "good", and 3.8 "very good". Up to a weight of about 220 pounds, pigs have an FCR of about 3.5; after that, their FCR worsens gradually. For instance, in the US as of 2012, commercial pigs had an FCR, calculated using weight gain, of 3.46 while they weighed between 240 and 250 pounds, 3.65 between 250 and 260 pounds, 3.87 between 260 and 270 pounds, and 4.09 between 270 and 280 pounds. Because FCR calculated on the basis of weight gained gets worse after pigs mature, as it takes more and more feed to drive growth, countries that have a culture of slaughtering pigs at very high weights, such as Japan and Korea, have poor FCRs.
Sheep:
Some data for sheep illustrate variations in FCR. An FCR (kg feed dry matter intake per kg live mass gain) for lambs is often in the range of about 4 to 5 on high-concentrate rations, 5 to 6 on some forages of good quality, and more than 6 on feeds of lesser quality. On a diet of straw, which has a low metabolizable energy concentration, the FCR of lambs may be as high as 40. Other things being equal, FCR tends to be higher for older lambs (e.g. 8 months) than younger lambs (e.g. 4 months).
Poultry:
As of 2011 in the US, broiler chickens had an FCR of 1.6 based on body-weight gain, and matured in 39 days. At around the same time, the FCR for broilers in Brazil was 1.8. The global average in 2013 was around 2.0 for weight gain (live weight) and 2.8 for slaughtered meat (carcass weight). For hens used in egg production in the US, as of 2011 the FCR was about 2, with each hen laying about 330 eggs per year. When slaughtered, the world-average layer flock as of 2013 yields a carcass FCR of 4.2, still much better than the average backyard chicken flock (FCR 9.2 for eggs, 14.6 for carcass). From the early 1960s to 2011, US broiler growth rates doubled and their FCRs halved, mostly due to improvements in genetics and the rapid dissemination of the improved chickens. The improvement in genetics for growing meat created challenges for farmers who breed the chickens raised by the broiler industry, as the genetics that cause fast growth decreased reproductive abilities.
Carnivorous fish:
The FIFO ratio (or Fish In – Fish Out ratio) is a conversion ratio applied to aquaculture, where the first number is the mass of harvested fish used to feed farmed fish, and the second number is the mass of the resulting farmed fish. FIFO is a way of expressing, as a ratio, the contribution of harvested wild fish used in aquafeed compared with the amount of edible farmed fish. Fishmeal and fish oil inclusion rates in aquafeeds have shown a continual decline over time as aquaculture grows and more feed is produced, but with a finite annual supply of fishmeal and fish oil. Calculations have shown that the overall fed-aquaculture FIFO declined from 0.63 in 2000 to 0.33 in 2010, and 0.22 in 2015. In 2015, therefore, approximately 4.55 kg of farmed fish was produced for every 1 kg of wild fish harvested and used in feed. The fish used in fishmeal and fish oil production are not used for human consumption, but with their use as fishmeal and fish oil in aquafeed they contribute to global food production.
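The 4.55 kg figure above is just the reciprocal of the 2015 FIFO of 0.22, which a quick check confirms:

```python
def fifo_ratio(wild_fish_in_feed_kg, farmed_fish_out_kg):
    """Fish In - Fish Out: kg of wild fish used in feed per kg of farmed fish produced."""
    return wild_fish_in_feed_kg / farmed_fish_out_kg

# Sector-wide 2015 figure from the text: FIFO = 0.22, so each kg of wild
# fish in feed corresponds to roughly 4.55 kg of farmed fish output.
fifo_2015 = 0.22
farmed_per_wild = 1 / fifo_2015
```

A FIFO below 1 means the sector produces more farmed fish than the wild fish it consumes as feed, which is the sense in which fed aquaculture is a net producer.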
As of 2015, farm-raised Atlantic salmon had a commodified feed supply with four main suppliers, and an FCR of around 1. The FCR of tilapia is about 1.5, and as of 2013 farmed catfish had an FCR of about 1.
Herbivorous and omnivorous fish:
For herbivorous and omnivorous fish such as Chinese carp and tilapia, plant-based feed yields a much higher FCR than for carnivorous fish kept on a partially fish-based diet, despite lower overall resource use. The edible (fillet) FCR of tilapia is around 4.6, and that of Chinese carp around 4.9.
Rabbits:
In India, rabbits raised for meat had an FCR of 2.5 to 3.0 on a high-grain diet and 3.5 to 4.0 on a natural-forage diet without animal-feed grain.
Global averages by species and production systems:
In a global study published in the journal Global Food Security, the FAO estimated various feed conversion ratios, taking into account the diversity of feed material consumed by livestock. At the global level, ruminants require 133 kg of dry matter per kg of protein while monogastrics require 30 kg. However, when considering human-edible feed only, ruminants require 5.9 kg of feed to produce 1 kg of animal protein, while monogastrics require 15.8 kg. When looking at meat only, ruminants consume an average of 2.8 kg of human-edible feed per kg of meat produced, while monogastrics need 3.2 kg. Finally, when accounting for the protein content of the feed, ruminants need an average of 0.6 kg of edible plant protein to produce 1 kg of animal protein, while monogastrics need 2 kg. This means that ruminants make a positive net contribution to the supply of edible protein for humans at the global level.
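The FAO figures above can be organized to make the final claim explicit: a species is a net protein contributor when it needs less than 1 kg of human-edible plant protein per kg of animal protein produced. The dictionary structure below is my own framing of the numbers quoted in the text:

```python
# FAO global averages from the text (kg of feed per kg of animal protein,
# except the last key, which is kg of edible plant protein per kg of animal protein).
kg_feed_per_kg_protein = {
    "ruminants":    {"all_dry_matter": 133.0, "human_edible": 5.9,  "edible_plant_protein": 0.6},
    "monogastrics": {"all_dry_matter": 30.0,  "human_edible": 15.8, "edible_plant_protein": 2.0},
}

def net_protein_contributor(species):
    # Needing < 1 kg of human-edible plant protein per kg of animal protein
    # means the species adds to, rather than subtracts from, the human protein supply.
    return kg_feed_per_kg_protein[species]["edible_plant_protein"] < 1.0
```

On total dry matter, ruminants look far less efficient (133 vs 30), but on the human-edible measures the ranking reverses, which is the study's central point.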
Feed conversion ratios of meat alternatives:
Many alternatives to conventional animal meat sources have been proposed for higher efficiency, including insects, meat analogues, and cultured meats.
Insects:
Although there are few studies of the feed conversion ratios of edible insects, the house cricket (Acheta domesticus) has been shown to have an FCR of 0.9–1.1 depending on diet composition. A more recent work gives an FCR of 1.9–2.4. Reasons contributing to such a low FCR include the whole body being used for food, the lack of internal temperature control (insects are poikilothermic), and high fecundity and rate of maturation.
Meat analogue:
If one treats tofu as a meat, the FCR reaches as low as 0.29. The FCRs for less watery forms of meat analogues are unknown.
Cultured meat:
Although cultured meat potentially requires much less land, its FCR is closer to poultry's, at around 4 (2–8). It also has a high need for energy inputs.
**Oxoguanine glycosylase**
8-Oxoguanine glycosylase, also known as OGG1, is a DNA glycosylase enzyme that, in humans, is encoded by the OGG1 gene. It is involved in base excision repair. It is found in bacterial, archaeal and eukaryotic species.
Function:
OGG1 is the primary enzyme responsible for the excision of 8-oxoguanine (8-oxoG), a mutagenic base byproduct that occurs as a result of exposure to reactive oxygen species (ROS). OGG1 is a bifunctional glycosylase, as it is able to both cleave the glycosidic bond of the mutagenic lesion and cause a strand break in the DNA backbone. Alternative splicing of the C-terminal region of this gene classifies splice variants into two major groups, type 1 and type 2, depending on the last exon of the sequence. Type 1 alternative splice variants end with exon 7 and type 2 end with exon 8. One set of spliced forms are designated 1a, 1b, and 2a to 2e. All variants have the N-terminal region in common. Many alternative splice variants for this gene have been described, but the full-length nature of every variant has not been determined. In eukaryotes, the N-terminus of this gene contains a mitochondrial targeting signal, essential for mitochondrial localization. However, OGG1-1a also has a nuclear localization signal at its C-terminal end that suppresses mitochondrial targeting and causes OGG1-1a to localize to the nucleus. The main form of OGG1 that localizes to the mitochondria is OGG1-2a. A conserved N-terminal domain contributes residues to the 8-oxoguanine binding pocket. This domain is organised into a single copy of a TBP-like fold. Despite the presumed importance of this enzyme, mice lacking Ogg1 have been generated and found to have a normal lifespan, although Ogg1 knockout mice have a higher probability of developing cancer, whereas MTH1 gene disruption concomitantly suppresses lung cancer development in Ogg1-/- mice. Mice lacking Ogg1 have been shown to be prone to increased body weight and obesity, as well as high-fat-diet-induced insulin resistance.
There is some controversy as to whether deletion of Ogg1 actually leads to increased 8-oxo-2'-deoxyguanosine (8-oxo-dG) levels: a high-performance liquid chromatography with electrochemical detection (HPLC-ECD) assay suggests the deletion can lead to an up to 6-fold higher level of 8-oxo-dG in nuclear DNA and a 20-fold higher level in mitochondrial DNA, whereas a DNA-fapy glycosylase assay indicates no change in 8-oxo-dG levels. Increased oxidant stress temporarily inactivates OGG1, which recruits transcription factors such as NFkB and thereby activates expression of inflammatory genes.
OGG1 deficiency and increased 8-oxo-dG in mice:
Mice without a functional OGG1 gene have about a 5-fold increased level of 8-oxo-dG in their livers compared to mice with wild-type OGG1. Mice defective in OGG1 also have an increased risk for cancer. Kunisada et al. irradiated mice without a functional OGG1 gene (OGG1 knock-out mice) and wild-type mice three times a week for 40 weeks with UVB light at a relatively low dose (not enough to cause skin redness). Both types of mice had high levels of 8-oxo-dG in their epidermal cells three hours after irradiation. After 24 hours, over half of the initial amount of 8-oxo-dG was absent from the epidermal cells of the wild-type mice, but 8-oxo-dG remained elevated in the epidermal cells of the OGG1 knock-out mice. The irradiated OGG1 knock-out mice went on to develop more than twice the incidence of skin tumors compared to irradiated wild-type mice, and the rate of malignancy within the tumors was higher in the OGG1 knock-out mice (73%) than in the wild-type mice (50%).
As reviewed by Valavanidis et al., increased levels of 8-oxo-dG in a tissue can serve as a biomarker of oxidative stress. They also noted that increased levels of 8-oxo-dG are frequently found during carcinogenesis.
In the figure showing examples of mouse colonic epithelium, the colonic epithelium from a mouse on a normal diet was found to have a low level of 8-oxo-dG in its colonic crypts (panel A). However, a mouse likely undergoing colonic tumorigenesis (due to deoxycholate added to its diet) was found to have a high level of 8-oxo-dG in its colonic epithelium (panel B). Deoxycholate increases intracellular production of reactive oxygen, resulting in increased oxidative stress, and this can lead to tumorigenesis and carcinogenesis.
Epigenetic control:
In a breast cancer study, the methylation level of the OGG1 promoter was found to be negatively correlated with expression level of OGG1 messenger RNA. This means that hypermethylation was associated with low expression of OGG1 and hypomethylation was correlated with over-expression of OGG1. Thus, OGG1 expression is under epigenetic control. Breast cancers with methylation levels of the OGG1 promoter that were more than two standard deviations either above or below the normal were each associated with reduced patient survival.
In cancers:
OGG1 is the primary enzyme responsible for the excision of 8-oxo-dG. Even when OGG1 expression is normal, the presence of 8-oxo-dG is mutagenic, since OGG1 is not 100% effective. Yasui et al. examined the fate of 8-oxo-dG when this oxidized derivative of deoxyguanosine was inserted into a specific gene in 800 cells in culture. After replication of the cells, 8-oxo-dG was restored to G in 86% of the clones, probably reflecting accurate OGG1 base excision repair or translesion synthesis without mutation. G:C to T:A transversions occurred in 5.9% of the clones, single-base deletions in 2.1%, and G:C to C:G transversions in 1.2%. Together these mutations were the most common, totalling 9.2 percentage points out of the 14% of clones mutated at the site of the 8-oxo-dG insertion. Among the other mutations in the 800 clones analyzed, there were also 3 larger deletions, of sizes 6, 33 and 135 base pairs. Thus 8-oxo-dG can directly cause mutations, some of which may contribute to carcinogenesis.
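The clone percentages above can be tallied to show how much of the mutation burden the three listed point events account for; a quick arithmetic check:

```python
# Outcome fractions from the Yasui et al. clone data quoted above (percent of clones).
restored_pct = 86.0                       # restored to G: repair or error-free bypass
mutated_pct = 100.0 - restored_pct        # 14% of clones carried some mutation
small_events_pct = 5.9 + 2.1 + 1.2        # transversions plus single-base deletions = 9.2
other_events_pct = mutated_pct - small_events_pct  # remaining ~4.8%, e.g. the larger deletions
```

So roughly two-thirds of the mutated clones carry one of the three common point events, with the rest spread over rarer outcomes such as the 6-, 33- and 135-bp deletions.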
If OGG1 expression is reduced in cells, increased mutagenesis, and therefore increased carcinogenesis, would be expected. The table below lists some cancers associated with reduced expression of OGG1.
OGG1 or OGG activity in blood, and cancer:
OGG1 methylation levels in blood cells were measured in a prospective study of 582 US military veterans, median age 72, followed for 13 years. High OGG1 methylation at a particular promoter region was associated with increased risk for any cancer, and in particular for risk of prostate cancer. Enzymatic activity excising 8-oxoguanine from DNA (OGG activity) was reduced in peripheral blood mononuclear cells (PBMCs), and in paired lung tissue, from patients with non-small cell lung cancer. OGG activity was also reduced in PBMCs of patients with head and neck squamous cell carcinoma (HNSCC).
Interactions:
Oxoguanine glycosylase has been shown to interact with XRCC1 and PKC alpha.
Pathology:
OGG1 may be associated with cancer risk in BRCA1 and BRCA2 mutation carriers.
**Monster Hunter Stories**
Monster Hunter Stories is a role-playing video game developed by Capcom and Marvelous and published by Nintendo for the Nintendo 3DS. It is a spinoff title in the Monster Hunter series and features a drastically different gameplay focus. Unlike previous titles in the franchise, Monster Hunter Stories lets players take on the role of a Rider instead of a Hunter and take part in a traditional turn-based battle system. Major changes and additions featured in this title include hatching eggs and befriending monsters, battling alongside them, executing special kinship techniques, and customizing monsters' abilities and appearance. The game was released in Japan on October 8, 2016, and in North America, Europe and Australia in September 2017. Later, a high-definition mobile version of the game was released on December 4, 2017 in Japan and September 25, 2018 worldwide. The Nintendo 3DS version includes support for Amiibo figures, with a first set launching alongside the game and a second set launching two months later. A similarly named anime series is a loose adaptation of this game, and a sequel for the Nintendo Switch and Microsoft Windows was released in 2021.
Gameplay:
Monster Hunter Stories features a completely new gameplay structure compared to the previous titles in the series. The player assumes the role of a Rider who befriends monsters by stealing eggs and hatching them. The player then has the ability to name their companion monsters, or "Monsties", ride them around in the overworld, and have them join the player in battles. The player will be able to explore different environments, encounter monsters in the field, battle them, collect items and steal eggs from monsters' nests.
Unlike most games in the franchise, Monster Hunter Stories features a more traditional turn-based battle system. During the player's turn, both the rider and their monsters will get to attack the enemy. Attacks for both the player and the enemy come in three types: Power, Speed, and Technical. Each category is stronger against one in particular in a rock-paper-scissors fashion: Power will win against Technical, Technical will win against Speed, and Speed will win against Power. When an enemy monster is targeting someone and that character attempts to attack them, a Head-to-Head will occur which pits their two attack types against each other, with the dominant attack type prevailing in the exchange. If the player and their monster both use the same attack type while the enemy is targeting someone and have the type advantage, they will unleash a Double Attack and prevent the enemy from retaliating altogether. Winning battles will award the player with experience points and item drops. Beyond the main story, the player can engage in sidequests, called subquests, which provide items and experience points upon completion.
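The attack-type triangle described above is a textbook rock-paper-scissors relation, and the Head-to-Head and Double Attack rules fall out of one lookup table. A minimal sketch (function names are my own; they are not from the game's code):

```python
# Attack-type triangle as described: Power beats Technical,
# Technical beats Speed, Speed beats Power.
BEATS = {"Power": "Technical", "Technical": "Speed", "Speed": "Power"}

def head_to_head(attacker, defender):
    """Resolve a Head-to-Head between two attack-type choices."""
    if BEATS[attacker] == defender:
        return "win"
    if BEATS[defender] == attacker:
        return "lose"
    return "draw"   # same type on both sides

def is_double_attack(rider, monstie, enemy):
    # Rider and Monstie must pick the same type, and that type
    # must have the advantage over the enemy's choice.
    return rider == monstie and BEATS[rider] == enemy
```

For example, if the enemy chooses Technical, a Rider and Monstie who both choose Power win the exchange outright and prevent any retaliation.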
The player only has a choice of four weapons from the core series to use in battle, those being the Great Sword, the Sword & Shield, the Hammer, and the Hunting Horn. The player will have access to different skills depending on the weapon and equipment they use, and the player will also be able to use items in battle. Companion monsters can be customized with the Rite of Channeling feature, in which the player can transfer Bond Genes into a monster's slots in order to unlock and awaken their stats and abilities. This allows for further customization and adjustment to the player's play style. The game features multiplayer battles via local or Internet connection. The Nintendo 3DS version is also compatible with Amiibo figures, with the figures from the Monster Hunter Stories line of Amiibo unlocking original and special monsters, among other bonuses.
Plot:
The game starts with the main character (whose gender can be selected by the player) and their friends, Cheval and Lilia, looking for a monster egg. They find a nest with one, and it hatches into a Rathalos that bonds with the Protagonist immediately. They nickname the creature Ratha. However, upon returning to Hakum Village, it is attacked by a rampaging Nargacuga controlled by a sickness called the Black Blight. Ratha tries to defend the village but is seemingly killed. Cheval's mother dies in the attack.
A year later, the village has recovered, but the threat of the Black Blight still remains. The main character takes the Rite of Kinship and is gifted by the village chief with a piece of Kinship Ore, a crystal with the power to tame wild monsters almost instantaneously. They then gain the title of Rider, which is unique to their village. They also meet a strange-looking wild felyne who names himself Navirou and demands to accompany them. Traveling outside the village to tame monsters, they soon come across the Black Blight, which they are able to purify by using their Kinship Ore to resonate with a larger, natural formation of the mineral.
Lilia decides to join the Royal Scriveners, a group studying the Black Blight, while Cheval tames a Rathian and becomes fixated on slaying blighted monsters to get his revenge. Meanwhile, the Protagonist continues to purify the Blight. Eventually, they encounter Doctor Manelger, a mad scientist who is attempting to create artificial Kinship Ore and bend monsters to his will. The main character reunites with Ratha, who is revealed to have survived the attack, and flies to Manelger's laboratory, rescuing many kidnapped felyne slaves, as well as the Numbers, felyne test subjects of which Navirou was formerly a member.
To stop the Black Blight, the Protagonist decides to look for the egg of a legendary white dragon, Versa Pietru, that was once tamed by the very first Rider, who is called Redan. They find the egg, but it is stolen by Manelger. Manelger gives the egg to Cheval, who is wielding the artificial ore, and the creature hatches and immediately grows to an enormous size. However, after Cheval loses control of the creature, it is infected by blight, turning into Makili Pietru. Cheval finally realizes his foolishness, while the Protagonist rushes to defeat Makili Pietru before it can infect the entire world.
However, Ratha's Blight, contracted through a scar left by the Nargacuga, suddenly manifests, and the player must defeat Ratha first. After Ratha comes to his senses, they face off against Makili Pietru.
After a long battle, the player and Ratha use Sky-High-Dive to try to finish off Makili Pietru. During this, Ratha is engulfed in a bright white light and empowered by Navirou to defeat Makili Pietru, revealing that Ratha was the White Dragon all along.
Development:
Series executive producer Ryozo Tsujimoto stated that the idea for a role-playing video game in the Monster Hunter universe was a response to players' interest in the world's setting and the lives of the monsters, and that these concepts had been considered since 2010. The reasoning behind the switch from the Hunter role to a Rider one was to give the spotlight to the monsters, along with the idea of petting them and seeing things from their point of view. Tsujimoto stated that there are currently no plans to bring the game to the Nintendo Switch.
Release:
Monster Hunter Stories was announced in Japan in April 2015 at the "Monster Hunter Fest '15 Finals" event by Capcom with a planned release in 2016. A demo of the game was released digitally in Japan on September 20, 2016, via the Nintendo eShop. The demo features two game modes: "Quest Mode", which allows the player to play part of the story, and "Tournament Mode", which lets the player partake in Rider battles. The game was subsequently released for Nintendo 3DS in Japan on October 8, 2016 and internationally the following year.
The game was subsequently ported to iOS and Android devices with improved graphics, an updated user interface, and other features. The mobile version launched in Japan in late 2017 and reached Western markets by September 2018. Similar to previous Monster Hunter games, the player can obtain armor, items, and clothing themed to other franchises through collaborations. Stories has items based on Puzzle & Dragons X, Chibi Maruko-chan, Shōnen Jump, and The Legend of Zelda, the last of which is exclusive to the Nintendo 3DS version.
Reception:
On its first week of launch, according to 4Gamer.net and Media Create, Monster Hunter Stories sold 140,603 copies, making it the top-selling title of the week of October 3 through October 9, 2016.
Accolades:
The game was nominated for "Best Handheld Game" at The Game Awards 2017, and for "Best 3DS Game" and "Best RPG" at IGN's Best of 2017 Awards. It was also nominated for "Handheld Game of the Year" at the 21st Annual D.I.C.E. Awards.
Sequel:
A sequel for Nintendo Switch and Microsoft Windows, Monster Hunter Stories 2: Wings of Ruin, was announced during a Nintendo Direct Mini: Partner Showcase broadcast on September 17, 2020. This game features a new cast of characters and an original story. The game was released on July 9, 2021.
**Voiced retroflex nasal**
Voiced retroflex nasal:
The voiced retroflex nasal is a type of consonantal sound, used in some spoken languages. The symbol in the International Phonetic Alphabet that represents this sound is ⟨ɳ⟩, and the equivalent X-SAMPA symbol is n`.
Like all the retroflex consonants, the IPA symbol is formed by adding a rightward-pointing hook extending from the bottom of an en (the letter used for the corresponding alveolar consonant). It is similar to ⟨ɲ⟩, the letter for the palatal nasal, which has a leftward-pointing hook extending from the bottom of the left stem, and to ⟨ŋ⟩, the letter for the velar nasal, which has a leftward-pointing hook extending from the bottom of the right stem.
Features:
Features of the voiced retroflex nasal: Its manner of articulation is occlusive, which means it is produced by obstructing airflow in the vocal tract. Because the consonant is also nasal, the blocked airflow is redirected through the nose.
Its place of articulation is retroflex, which prototypically means it is articulated subapical (with the tip of the tongue curled up), but more generally, it means that it is postalveolar without being palatalized. That is, besides the prototypical subapical articulation, the tongue contact can be apical (pointed) or laminal (flat).
Its phonation is voiced, which means the vocal cords vibrate during the articulation.
It is a nasal consonant, which means air is allowed to escape through the nose, either exclusively (nasal stops) or in addition to through the mouth.
It is a central consonant, which means it is produced by directing the airstream along the center of the tongue, rather than to the sides.
The airstream mechanism is pulmonic, which means it is articulated by pushing air solely with the intercostal muscles and diaphragm, as in most sounds.
Retroflex nasal flap:
Features:
Features of the retroflex nasal tap or flap: Its manner of articulation is tap or flap, which means it is produced with a single contraction of the muscles so that one articulator (usually the tongue) is thrown against another.
Its place of articulation is retroflex, which prototypically means it is articulated subapical (with the tip of the tongue curled up), but more generally, it means that it is postalveolar without being palatalized. That is, besides the prototypical subapical articulation, the tongue contact can be apical (pointed) or laminal (flat).
Its phonation is voiced, which means the vocal cords vibrate during the articulation.
It is a nasal consonant, which means air is allowed to escape through the nose, either exclusively (nasal stops) or in addition to through the mouth.
It is a central consonant, which means it is produced by directing the airstream along the center of the tongue, rather than to the sides.
The airstream mechanism is pulmonic, which means it is articulated by pushing air solely with the intercostal muscles and diaphragm, as in most sounds.
Occurrence:
**James Angus (scientist)**
James Angus (scientist):
James Alexander Angus (born 15 February 1949 in Sydney) is an Australian pharmacologist, who has served as the Lieutenant-Governor of Victoria since 12 November 2021. He held the Chair in Pharmacology at the University of Melbourne from 1993 to 2003, and was the Dean of the Faculty of Medicine, Dentistry and Health Sciences at the University of Melbourne from 2003 to 2013.
He was elected a Fellow of the Australian Academy of Science (FAA) in 1996, awarded the Centenary Medal in 2001, and appointed an Officer of the Order of Australia in 2010. Angus was made an Honorary Fellow of the Australian Academy of Health and Medical Sciences in 2015. After Linda Dessau finished her term in June 2023, he stood in as acting Governor until Margaret Gardner assumed office on 9 August 2023.
**Delay insensitive circuit**
Delay insensitive circuit:
A delay-insensitive circuit is a type of asynchronous circuit which performs a digital logic operation, often within a computing processor chip. Instead of using clock signals or other global control signals, the sequencing of computation in a delay-insensitive circuit is determined by the data flow.
Data flows from one circuit element to another using "handshakes", or sequences of voltage transitions to indicate readiness to receive data, or readiness to offer data. Typically, inputs of a circuit module will indicate their readiness to receive, which will be "acknowledged" by the connected output by sending data (encoded in such a way that the receiver can detect the validity directly), and once that data has been safely received, the receiver will explicitly acknowledge it, allowing the sender to remove the data, thus completing the handshake, and allowing another datum to be transmitted.
In a delay-insensitive circuit, there is therefore no need to provide a clock signal to determine a starting time for a computation. Instead, the arrival of data to the input of a sub-circuit triggers the computation to start. Consequently, the next computation can be initiated immediately when the result of the first computation is completed.
The main advantage of such circuits is their ability to optimize processing of activities that can take arbitrary periods of time depending on the data or requested function. An example of a process with a variable time for completion would be mathematical division or recovery of data where such data might be in a cache.
The delay-insensitive (DI) class is the most robust of all asynchronous circuit delay models. It makes no assumptions on the delay of wires or gates. In this model all transitions on gates or wires must be acknowledged before transitioning again; this condition stops unseen transitions from occurring. In DI circuits any transition on an input to a gate must be seen on the output of the gate before a subsequent transition on that input is allowed to happen. This forces some input states or sequences to become illegal. For example, OR gates must never enter the state where both inputs are one, as the entry to and exit from this state will not be seen on the output of the gate. Although this model is very robust, no practical circuits are possible due to the lack of expressible conditionals in DI circuits. Instead, the quasi-delay-insensitive model is the smallest compromise model capable of generating useful computing circuits. For this reason circuits are often incorrectly referred to as delay-insensitive when they are in fact quasi-delay-insensitive.
**Circumpolar!**
Circumpolar!:
Circumpolar! is a novel by Richard A. Lupoff published in 1984.
Plot summary:
Circumpolar! is a novel depicting an alternative history with alternate geography, involving a Great Air Race that pits the von Richthofens against Charles Lindbergh and Howard Hughes.
Reception:
Dave Langford reviewed Circumpolar! for White Dwarf #70, and stated that "It goes on a bit long and it grossly libels the Red Baron, but it's amusing." In his Science Fact and Science Fiction: An Encyclopedia, Brian M. Stableford cited the book as one of the relatively rare examples of science fiction dealing with the Flat Earth concept.
Reviews:
Review by Faren Miller (1984) in Locus, #280 May 1984
Review by Joe Sanders (1985) in Fantasy Review, July 1985
Review by L. J. Hurst (1985) in Vector 128
Review by David Pringle (1985) in Interzone, #14 Winter 1985/86
Review by Tom Easton (1986) in Analog Science Fiction/Science Fact, March 1986
Review by Don D'Ammassa (1986) in Science Fiction Chronicle, #80 May 1986
**Global Digital Exemplar**
Global Digital Exemplar:
The Global Digital Exemplar (GDE) programme is an NHS England initiative to achieve digital transformation in selected exemplar organisations and to create a knowledge-sharing ecosystem that spreads learning from these exemplars. The programme is to enable "digitally advanced" NHS trusts to share knowledge with other NHS trusts, specifically knowledge gained during the implementation of IT systems, and especially experience from introducing electronic health record (EHR) systems. The GDE project is expected to last two to three and a half years, with the most digitally advanced trusts on the shorter time scale. Four rounds of exemplars have been announced so far: two waves of acute trust GDEs, and one wave each of ambulance trusts and mental health trusts. In addition, eighteen acute trust "fast followers" have been partnered with the acute trusts. The programme involves the investment of £395 million. Each GDE will receive "up to £10 million" to spend on digital projects. The funding must be matched locally, but not necessarily in cash.
Programme elements:
Each Global Digital Exemplar (GDE) received £10 million and their matched Fast Followers (FFs) received £5 million (£5 million for GDEs and £3 million for FFs in mental health); both were required to secure matched funding internally. The Healthcare Information and Management Systems Society (HIMSS) Electronic Medical Record Adoption Model (EMRAM) was chosen as a guide for programme outputs, with GDEs expected to reach HIMSS Level 7 and FFs, HIMSS Level 5. The partnerships between GDEs and FFs constitute a formal mechanism to support knowledge transfer. The programme also introduced the idea of "Blueprints", documents describing how to implement digital technologies in healthcare.
Exemplars:
Acute exemplars
The first twelve exemplars were announced in 2016; a second wave added another four in 2017. Although NHS England refers to this grouping of exemplars as "acute", a number of the hospitals operated by trusts within this group are specialised hospitals, for example Alder Hey Children's Hospital and Western Eye Hospital.
North
Alder Hey Children's Hospital NHS Foundation Trust
City Hospitals Sunderland NHS Foundation Trust
Newcastle upon Tyne Hospitals NHS Foundation Trust
Royal Liverpool and Broadgreen University Hospitals NHS Trust
Salford Royal NHS Foundation Trust
Wirral University Teaching Hospital NHS Foundation Trust
Midlands and East
Cambridge University Hospitals NHS Foundation Trust
University Hospitals Birmingham NHS Foundation Trust
Luton and Dunstable University Hospital NHS Foundation Trust
West Suffolk NHS Foundation Trust
London
Royal Free London NHS Foundation Trust
Imperial College Healthcare NHS Trust and Chelsea and Westminster Hospital NHS Foundation Trust (as a joint Exemplar)
South
Oxford University Hospitals NHS Foundation Trust
Taunton and Somerset NHS Foundation Trust
University Hospitals Bristol NHS Foundation Trust
University Hospital Southampton NHS Foundation Trust
Fast followers
There are eighteen acute "fast follower" trusts, each of which has been partnered with an acute GDE.
Ambulance exemplars
As of July 2018, there are three ambulance trust exemplars:
South Central Ambulance Service NHS Foundation Trust
West Midlands Ambulance Service NHS Foundation Trust
North East Ambulance Service NHS Foundation Trust
Mental health exemplars
There are currently seven mental health trust GDEs:
Berkshire Healthcare NHS Foundation Trust
Birmingham and Solihull Mental Health NHS Foundation Trust
Mersey Care NHS Foundation Trust
Northumberland, Tyne and Wear NHS Foundation Trust
Oxford Health NHS Foundation Trust
South London and Maudsley NHS Foundation Trust
Worcestershire Health and Care NHS Trust
**Time pressure gauge**
Time pressure gauge:
A time pressure gauge is an instrument that digitally displays pressure data divided into appropriate time intervals. While a conventional pressure gauge indicates only an amount of pressure, a time pressure gauge accounts for varying consumption rates and capacity to show the time remaining.
Applications:
Welders using oxygen and acetylene can plan more efficiently if they know the energy duration, since consumption varies with cutting technique. A nurse concerned that a patient may run out of oxygen can monitor the workload more efficiently by knowing how much time is remaining rather than how much pressure is left.
Scuba divers could determine the length of time they could remain submerged. A pilot could manage supplemental oxygen flow rates of an aircraft to determine possible altitudes for maximizing fuel efficiency. Ultimately any activity that uses pressurized contents is applicable.
Safety:
Using a time pressure gauge is also very valuable in dangerous scenarios. Imagine a firefighter inside a burning building deciding whether to return for oxygen or press on to the next room. They could see both the time currently remaining and the time that would remain if they altered their breathing pattern.
A pilot could determine oxygen time remaining for descent in the event of a decompression. This information is highly important considering the multiple contingencies that arise in daily air travel (i.e. – consumption rates of oxygen per minute include multiple variables such as number of passengers and individual consumption rates).
Reducing carbon footprint:
The use of a time pressure gauge provides for better planning with any instrument that emits carbon gas at varying consumption rates determined by pressurized contents. Efficiency is maximized by understanding energy requirements in terms of time. A simple example is a gas grill: turning the gas control to low or shutting off one burner produces a tangible result, an increase in time remaining, which directly indicates an energy saving. Most notably, the time pressure gauge could reduce carbon emissions in air travel through increased fuel efficiency, while also reducing fuel cost. The airline industry has recently come under pressure to reduce carbon emissions globally, and instruments such as the time pressure gauge could support this effort.
Current software technology:
Time pressure gauge technology is relatively new and not yet in widespread use, though the software technology it relies on is more established. A comprehensive oxygen planning program developed by Aeronautical Data Systems Inc. for the airline industry exists and is in use with over 20 corporate flight departments.
**Copy constructor (C++)**
Copy constructor (C++):
In the C++ programming language, a copy constructor is a special constructor for creating a new object as a copy of an existing object. Copy constructors are the standard way of copying objects in C++, as opposed to cloning, and have C++-specific nuances.
The first argument of such a constructor is a reference to an object of the same type as is being constructed (const or non-const), which might be followed by parameters of any type (all having default values).
Normally the compiler automatically creates a copy constructor for each class (known as an implicit copy constructor) but for special cases the programmer creates the copy constructor, known as a user-defined copy constructor. In such cases, the compiler does not create one. Hence, there is always one copy constructor that is either defined by the user or by the system.
A user-defined copy constructor is generally needed when an object owns pointers or non-shareable references, such as to a file, in which case a destructor and an assignment operator should also be written (see Rule of three).
Definition:
Copying of objects is achieved by the use of a copy constructor and an assignment operator. A copy constructor has as its first parameter a (possibly const or volatile) reference to its own class type. It can have more arguments, but the rest must have default values associated with them. Valid copy constructors for a class X therefore include X(const X&), X(X&), X(volatile X&), X(const volatile X&), and any of these followed by additional defaulted parameters. The const X& form should be used unless there is a good reason to use one of the others: unlike the X& form, it can also copy temporaries and const objects. The X& form is used when it is necessary to modify the copied-from object; this is very rare, but it can be seen in the standard library's (now removed) std::auto_ptr. In every case a reference must be provided: a constructor of the form X(X copy_from_me) is invalid, because calling it would require copying the argument, which would itself call the copy constructor, resulting in an infinitely recursive call.
The following cases may result in a call to a copy constructor:
when an object is returned by value;
when an object is passed (to a function) by value as an argument;
when an object is thrown;
when an object is caught by value;
when an object is placed in a brace-enclosed initializer list.
These cases are collectively called copy-initialization and are equivalent to T x = a;. It is, however, not guaranteed that a copy constructor will be called in these cases, because the C++ standard allows the compiler to optimize the copy away in certain cases, one example being the return value optimization (sometimes referred to as RVO).
Operation:
An object can be assigned a value using one of two techniques: explicit assignment in an expression, or initialization. An object can be initialized in any one of the following ways: through a declaration, through function arguments, or through a function return value. The copy constructor is used only for initializations; it does not apply to assignments, where the assignment operator is used instead.
The implicit copy constructor of a class calls base copy constructors and copies its members by means appropriate to their type. If it is a class type, the copy constructor is called. If it is a scalar type, the built-in assignment operator is used. Finally, if it is an array, each element is copied in the manner appropriate to its type. By using a user-defined copy constructor the programmer can define the behavior to be performed when an object is copied.
Examples:
These examples illustrate how copy constructors work and why they are sometimes required.
Implicit copy constructor
Consider a simple class representing a person, with an integer age member and no user-defined copy constructor. Suppose three objects are created: timmy with age 10, a second person with age 15, and timmy_clone, copy-initialized from timmy. Printing the three ages gives 10 15 10; after timmy's age is changed to 23, it gives 23 15 10. As expected, timmy has been copied to the new object, timmy_clone, and while timmy's age was changed, timmy_clone's age remained the same. This is because they are totally different objects.
The compiler has generated a copy constructor for us; written out, it would simply copy each member in turn. So when do we really need a user-defined copy constructor? The next section will explore that question.
User-defined copy constructor
Consider a very simple dynamic array class that holds a pointer to heap-allocated data and deletes that data in its destructor. Since we did not specify a copy constructor, the compiler generated one for us. The problem with the generated constructor is that it performs a shallow copy of the data pointer: it copies only the address of the original data member, so both objects share a pointer to the same chunk of memory, which is not what we want. When a copy goes out of scope, its destructor is called (objects on the stack are destroyed automatically when their scope ends) and deletes the shared data array; because the original shares the same pointer, its data is deleted as well. Any subsequent access to the original's data then reads and writes invalid memory, which typically produces a segmentation fault.
If we write our own copy constructor that performs a deep copy, this problem goes away: we create a new int array and copy the contents into it. Each object's destructor then deletes only its own data, not the other's, and accessing the original after a copy has been destroyed no longer produces a segmentation fault.
Instead of doing a deep copy right away, there are some optimization strategies that can be used. These allow you to safely share the same data between several objects, thus saving space. The copy-on-write strategy makes a copy of the data only when it is written to. Reference counting keeps the count of how many objects are referencing the data, and will delete it only when this count reaches zero (e.g. boost::shared_ptr).
Bitwise copy constructor:
There is no such thing as "bitwise copy constructor" in C++. However, the default generated copy constructor copies by invoking copy constructors on members, and for a raw pointer member this will copy the raw pointer (i.e. not a deep copy).
Logical copy constructor:
A logical copy constructor makes a true copy of the structure as well as its dynamic structures. Logical copy constructors come into the picture mainly when there are pointers or complex objects within the object being copied.
Explicit copy constructor:
An explicit copy constructor is one that is declared with the explicit keyword, for example explicit X(const X& other);. It is used to prevent copying of objects at function calls or with the copy-initialization syntax.
**Morita therapy**
Morita therapy:
Morita therapy is a therapy developed by Shoma Morita. The goal of Morita therapy is to have the patient accept life as it is, and it places an emphasis on letting nature take its course. Morita therapy views feeling emotions as part of the laws of nature. It was originally developed to address shinkeishitsu, an outdated term used in Japan to describe patients who have various types of anxiety. Morita therapy was designed not to rid the patient of shinkeishitsu completely but to lessen its damaging effects. It has been described as cognate to Albert Ellis's rational-emotive therapy, and it also has commonalities with existential and cognitive behavioral therapy.
Background:
Shoma Morita (1874–1938) was a psychiatrist, researcher, philosopher, and academic department chair at Jikei University School of Medicine in Tokyo. Morita's training in Zen influenced his teachings, though Morita therapy is not a Zen practice.
Underlying philosophy:
Morita therapy focuses on cultivating awareness and decentralizing the self. Aspects of mindfulness are contained in knowing what is controllable and what is not, and seeing what is so without attachment to expectations. Feelings are acknowledged even when one does not act on them. The individual can focus on the full scope of the present moment and determine what needs to be done. Morita therapy seeks to have patients learn to accept fluctuations of thoughts and feelings and ground their behavior in reality. Cure is not defined by the alleviation of discomfort (which the philosophy of this approach opposes), but by taking action in one's life so as not to be ruled by one's emotional state.
Morita's four stages:
Morita therapy is a four-stage process involving:
Absolute bed rest
Occupational therapy (light)
Occupational therapy (heavy)
Complex activities
The first stage, seclusion and rest, lasts from four to seven days. The patient is ordered to stay on absolute bed rest, even to take meals, rising only to use the restroom. When the patient expresses boredom and wishes to rise and be productive, they may move to the second stage.
During the second stage, patients are introduced to light, monotonous work conducted in silence. This stage takes three to seven days. Patients may wash their face in the morning and evening, read aloud from the Kojiki, and write in a journal. Patients are also required to go outside, with the goal of re-connecting with nature. No strenuous physical work is allowed, such as climbing stairs or sweeping.
In the third stage, patients are allowed to engage in moderate physical work, but not social interaction. This stage lasts from three to seven days. For people with physical injuries, it is the phase where they move from passive treatment given by others (such as chiropractic, massage, and pain medicine) to treating themselves through physical therapy. This third stage can become a part of daily life for some patients. The patient is encouraged to spend time creating art, such as by writing, painting, or wood carving. The purpose of this stage is to instill confidence, empowerment, and patience through work.
The fourth stage is the stage where patients are reintroduced into society. It can last from one to two weeks. The patient integrates meditation and physical activity, and may return to the previous stages and their teacher to find coping skills that allow further recovery.
Methods (Western):
Shoma Morita's work was first published in Japan in 1928. Morita Therapy Methods (MTM) adapted the therapy to modern Western culture. For example, the original Morita treatment process has the patient spend their first week of treatment isolated in a room without any outside stimulation; this has been modified in MTM. The shinkeishitsu concept has also been broadened to consider not just anxiety but life situations in which modern Westerners may find themselves, involving stress, pain, and the aftermath of trauma. MTM is also designed to help patients deal with shyness. As with Morita therapy proper, MTM is roughly divided into four basic areas of treatment.
Research:
A Cochrane review conducted in 2015 assessed the effectiveness of Morita-based therapy for anxiety disorders in adults. Notably, the review defined Morita therapy as any care practice described as Morita therapy by the carers and involving at least two of the four phases described above (Wu et al., 2015, p. 7). It also did not include a single case of classic Morita therapy: all studies were conducted in hospitals in the People's Republic of China between about 1994 and 2007, with nearly all participants also receiving pharmacological therapy alongside the Morita-based therapy, or receiving outpatient or heavily modified inpatient versions, the first phase especially being heavily modified (Wu et al., 2015, pp. 25-32). With that established, the review states that the available evidence is of very low quality and that it is not possible to draw a conclusion from the included studies.
**Visual design elements and principles**
Visual design elements and principles:
Visual design elements and principles describe fundamental ideas about the practice of visual design.
Elements of the design of the art:
Design elements are the basic units of any visual design which form its structure and convey visual messages. Painter and design theorist Maitland E. Graves (1902-1978), who attempted to gestate the fundamental principles of aesthetic order in visual design, in his book The Art of Color and Design (1941) defined the elements of design as line, direction, shape, size, texture, value, and color, concluding that "these elements are the materials from which all designs are built."
Color
Color is the result of light reflecting back from an object to our eyes. The color that our eyes perceive is determined by the pigment of the object itself. Color theory and the color wheel are often referred to when studying color combinations in visual design. Color is often deemed to be an important element of design as it is a universal language which presents the countless possibilities of visual communication. Hue, saturation, and brightness are the three characteristics that describe color.
Hue can simply be referred to as "color" as in red, yellow, or green.
Saturation gives a color brightness or dullness, which impacts the vibrance of the color.
Value: tints and shades of a color are created by adding black to a color for a shade and white for a tint. Creating a tint or shade of a color reduces the saturation.
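The tint/shade behaviour described above can be checked numerically; the sketch below (illustrative, not from the source) uses Python's standard colorsys module, and measures "saturation" both as HSV saturation and as chroma (max minus min channel), since a shade keeps its HSV saturation but loses chroma:

```python
import colorsys

def mix(c1, c2, t):
    """Linearly mix two RGB colors (components in 0..1) by fraction t."""
    return tuple((1 - t) * a + t * b for a, b in zip(c1, c2))

red = (1.0, 0.0, 0.0)
tint = mix(red, (1.0, 1.0, 1.0), 0.5)   # add 50% white -> tint
shade = mix(red, (0.0, 0.0, 0.0), 0.5)  # add 50% black -> shade

for name, rgb in [("red", red), ("tint", tint), ("shade", shade)]:
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    chroma = max(rgb) - min(rgb)  # colorfulness: max minus min channel
    print(f"{name:5s} HSV=({h:.2f}, {s:.2f}, {v:.2f}) chroma={chroma:.2f}")
```

The tint halves the HSV saturation of pure red (1.0 to 0.5), the shade halves the value, and both halve the chroma.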
Color theory in visual design Color theory studies color mixing and color combinations. It is one of the first things that marked a progressive design approach. In visual design, designers refer to color theory as a body of practical guidance to achieving certain visual impacts with specific color combinations. Theoretical color knowledge is implemented in designs in order to achieve a successful color design.
Color harmony: Color harmony, often referred to as a "measure of aesthetics", studies which color combinations are harmonious and pleasing to the eye, and which color combinations are not. Color harmony is a main concern for designers given that colors always exist in the presence of other colors in form or space. When a designer harmonizes colors, the relationships among a set of colors are enhanced to increase the way they complement one another. Colors are harmonized to achieve a balanced, unified, and aesthetically pleasing effect for the viewer. Color harmony is achieved in a variety of ways, some of which consist of combining a set of colors that share the same hue, or a set of colors that share the same values for two of the three color characteristics (hue, saturation, brightness). Color harmony can also be achieved by simply combining colors that are considered compatible to one another as represented in the color wheel.
Color contrasts: Color contrasts are studied with a pair of colors, as opposed to color harmony, which studies a set of colors. In color contrasting, two colors with perceivable differences in aspects such as luminance or saturation are placed side by side to create contrast. Johannes Itten presented seven kinds of color contrast: contrast of light and dark, contrast of hue, contrast of temperature, contrast of saturation, simultaneous contrast, contrast of sizes, and contrast of complementaries. These seven kinds of color contrast have inspired past works involving color schemes in design.
Color schemes: Color schemes are defined as the set of colors chosen for a design. They are often made up of two or more colors that look appealing beside one another, and that create an aesthetic feeling when used together. Color schemes depend on color harmony as they point to which colors look pleasing beside one another. A satisfactory design product is often accompanied by a successful color scheme. Over time, color design tools with the function of generating color schemes were developed to facilitate color harmonizing for designers.
Use of color in visual design: Color is used to create harmony, balance, and visual comfort in a design; to evoke the desired mood and emotion in the viewer; and to create a theme in the design. Color holds meaning and can be symbolic; in certain cultures, different colors can have different meanings.
Color is used to put emphasis on desired elements and create visual hierarchy in a piece of art; color can create identity for a certain brand or design product; color allows viewers to have different interpretations of visual designs, since the same color can evoke different emotions or carry various meanings for different individuals and cultures; color strategies are used for organization and consistency in a design product; and in the architectural design of a retail environment, colors affect decision making, which motivates consumers to buy particular products.
Line: The line is an element of art defined by a point moving in space. Lines can be vertical, horizontal, diagonal, or curved. They can be any width or texture, and can be continuous, implied, or broken. There are also other types of lines beyond these; for example, a line can be both horizontal and zigzagged, or vertical and zigzagged. Different lines create different moods; it all depends on what mood the line is being used to create.
Point: A point is basically the beginning of "something" in "nothing". It forces the mind to think about its position and gives something to build upon in both imagination and space. A group of abstract points can provoke the human imagination to link them with familiar shapes or forms.
Shape: A shape is defined as a two-dimensional area that stands out from the space next to or around it due to a defined or implied boundary, or because of differences of value, color, or texture. Shapes are recognizable objects and forms and are usually composed of other elements of design. For example, a square that is drawn on a piece of paper is considered a shape: it is created with a series of lines which serve as a boundary that forms the square and separates it from the space around it that is not part of the square.
Types of shapes: Geometric shapes, or mechanical shapes, are shapes that can be drawn using a ruler or compass, such as squares, circles, triangles, ellipses, parallelograms, stars, and so on. Mechanical shapes, whether simple or complex, produce a feeling of control and order. Organic shapes are irregular shapes that are often complex and resemble shapes found in nature. Organic shapes can be drawn by hand, which is why they are sometimes subjective and exist only in the imagination of the artist. Curvilinear shapes are composed of curved lines and smooth edges; they give the shape a more natural feeling. In contrast, rectilinear shapes are composed of sharp edges and right angles, and give a sense of order to the composition; they look more human-made, structured, and artificial. Artists can choose to create a composition that revolves mainly around one of these styles of shape, or they can combine both.
Texture: Texture refers to the physical and visual qualities of a surface.
Uses of texture in design: Texture can be used to attract or repel interest to an element, depending on how pleasant the texture is perceived to be.
Texture can also be used to add complex detail into the composition of a design.
In theatrical design, the surface qualities of a costume sculpt the look and feel of a character, which influences the way the audience reacts to the character.
Types of texture: Tactile texture, also known as "actual texture", refers to the physical three-dimensional texture of an object. Tactile texture can be perceived by the sense of touch. A person can feel the tactile texture of a sculpture by running their hand over its surface and feeling its ridges and dents.
Painters use impasto to build peaks and create texture in their painting.
Texture can be created through collage. This is when artists assemble three dimensional objects and apply them onto a two-dimensional surface, like a piece of paper or canvas, to create one final composition.
Papier collé is another collaging technique in which artists glue paper to a surface to create different textures on its surface.
Assemblage is a technique that consists of assembling various three-dimensional objects into a sculpture, which can also reveal textures to the viewer. Visual texture, also referred to as "implied texture", is not detectable by our sense of touch, but by our sense of sight. Visual texture is the illusion of a real texture on a two-dimensional surface. Any texture perceived in an image or photograph is a visual texture; a photograph of rough tree bark is considered a visual texture. It creates the impression of a real texture on a two-dimensional surface which would remain smooth to the touch no matter how rough the represented texture is. In painting, different paints are used to achieve different types of textures. Paints such as oil, acrylic, and encaustic are thicker and more opaque, and are used to create three-dimensional impressions on the surface. Other paints, such as watercolor, tend to be used for visual textures, because they are thinner, have transparency, and do not leave much tactile texture on the surface.
Pattern: Many textures appear to repeat the same motif. When a motif is repeated over and over on a surface, it results in a pattern. Patterns are frequently used in fashion design or textile design, where motifs are repeated to create decorative patterns on fabric or other textile materials. Patterns are also used in architectural design, where decorative structural elements such as windows, columns, or pediments are incorporated into building design.
Space: In design, space concerns the area within which the design takes place. For a two-dimensional design, space concerns creating the illusion of a third dimension on a flat surface: Overlap is the effect where objects appear to be on top of each other. This illusion makes the top element look closer to the observer. There is no way to determine the depth of the space, only the order of closeness.
Shading adds gradation marks to make an object of a two-dimensional surface seem three-dimensional.
Highlight, Transitional Light, Core of the Shadow, Reflected Light, and Cast Shadow give an object a three-dimensional look.
Linear Perspective is the concept relating to how an object seems smaller the farther away it gets.
Atmospheric Perspective is based on how air acts as a filter to change the appearance of distant objects.
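The inverse relation between distance and apparent size that underlies linear perspective can be sketched with a simple pinhole-projection formula (the 50 mm focal length and the figure heights here are arbitrary illustrative choices, not from the source):

```python
def projected_height(real_height_m, distance_m, focal_m=0.05):
    """Pinhole-camera model: image height = focal length * real height / distance."""
    return focal_m * real_height_m / distance_m

# A 2 m tall figure, drawn at 5 m and again at 50 m:
near = projected_height(2.0, 5.0)    # image-plane height at 5 m
far = projected_height(2.0, 50.0)    # ten times farther away

print(near / far)  # ten times farther renders ten times smaller
```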
Form: In visual design, form is described as the way an artist arranges elements in the entirety of a composition. It may also be described as any three-dimensional object. Form can be measured from top to bottom (height), side to side (width), and back to front (depth). Form is also defined by light and dark, and can be defined by the presence of shadows on surfaces or faces of an object. There are two types of form: geometric (artificial) and natural (organic). Form may be created by combining two or more shapes, and may be enhanced by tone, texture, or color. It can be illustrated or constructed.
Principles of design:
Principles are applied to the elements of design to bring them together into one design. How one applies these principles determines how successful the design may be.
Unity/harmony: According to Alex White, author of The Elements of Graphic Design, achieving visual unity is a main goal of graphic design. When all elements are in agreement, a design is considered unified. No individual part is viewed as more important than the whole design. A good balance between unity and variety must be established to avoid a chaotic or a lifeless design.
Methods:
Perspective: sense of distance between elements.
Similarity: ability to seem repeatable with other elements.
Continuation: the sense of having a line or pattern extend.
Repetition: elements being copied or mimicked numerous times.
Rhythm: achieved when the recurring position, size, color, and use of a graphic element is broken by a focal-point interruption.
Altering the basic theme achieves unity and helps keep interest.
Balance: a state of equalized tension and equilibrium, which may not always be calm.
Types of balance in visual design:
Symmetry.
Asymmetrical balance produces an informal balance that is attention-attracting and dynamic.
Radial balance is arranged around a central element. The elements placed in a radial balance seem to 'radiate' out from a central point in a circular fashion.
Overall is a mosaic form of balance which normally arises from too many elements being put on a page. Due to the lack of hierarchy and contrast, this form of balance can look noisy but sometimes quiet.
Hierarchy/dominance/emphasis: A good design contains elements that lead the reader through each element in order of its significance. Type and images should be arranged from the most important to the least important. Dominance is created by contrasting size, positioning, color, style, or shape. The focal point should dominate the design with scale and contrast without sacrificing the unity of the whole.
Scale/proportion: Using the relative size of elements against each other can attract attention to a focal point. When elements are designed larger than life, scale is being used to show drama.
Similarity and contrast: Planning a consistent and similar design is an important aspect of a designer's work in making the focal point visible. Too much similarity is boring; but without similarity, important elements will not stand out, and an image without contrast is uneventful. The key is to find the balance between similarity and contrast.
Similar environment: There are several ways to develop a similar environment:
Build a unique internal organization structure.
Manipulate shapes of images and text to correlate together.
Express continuity from page to page in publications. Items to watch include headers, themes, borders, and spaces.
Develop a style manual and adhere to it.
Contrasts:
Space: Filled / Empty; Near / Far; 2-D / 3-D
Position: Left / Right; Isolated / Grouped; Centered / Off-Center; Top / Bottom
Form: Simple / Complex; Beauty / Ugly; Whole / Broken
Direction: Stability / Movement
Structure: Organized / Chaotic; Mechanical / Hand-Drawn
Size: Large / Small; Deep / Shallow; Fat / Thin
Color: Grey scale / Color; Black & White / Color; Light / Dark
Texture: Fine / Coarse; Smooth / Rough; Sharp / Dull
Density: Transparent / Opaque; Thick / Thin; Liquid / Solid
Gravity: Light / Heavy; Stable / Unstable
Movement is the path the viewer's eye takes through the artwork, often to focal areas. Such movement can be directed along lines, edges, shapes, and colors within the artwork, and more. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Critical Assessment of Genome Interpretation**
Critical Assessment of Genome Interpretation:
The Critical Assessment of Genome Interpretation (CAGI) is an annual bioinformatics competition focused on interpretation of genome variation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tactical recognition flash**
Tactical recognition flash:
Tactical recognition flash (TRF) is the British military term for a coloured patch worn on the right arm of combat clothing by members of the British Army, Royal Navy and Royal Air Force. A TRF serves to quickly identify the regiment or corps of the wearer, in the absence of a cap badge. It is similar to, but distinct from, the DZ Flashes worn by members of Airborne Forces.
TRFs should not be confused with formation signs or insignia, which are used to denote the formation (usually brigade or division or a higher headquarters) and are worn in addition to TRFs by a member of any regiment or corps serving in that formation.
Army:
Royal Armoured Corps; Army Air Corps; Infantry; Adjutant General's Corps; Colonial Forces; Historic
Cadet Forces:
Tactical Recognition Flashes are not to be worn by Cadet Force Adult Volunteers (CFAVs) or cadets of the Army Cadet Force and army section of the Combined Cadet Force irrespective of any affiliation to a Corps or Regiment. Cadets and CFAVs do wear county and contingent flashes of the Army Cadet Force and Combined Cadet Force respectively. Officers of the Air Training Corps and the RAF Section of the Combined Cadet Force wear the RAF tactical recognition flash, Adult Warrant Officers and Senior Non-Commissioned Officers wear the RAFAC Staff formation flash, and cadets wear RAFAC Cadet formation flash. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**HD 169142**
HD 169142:
HD 169142 is a single Herbig Ae/Be star. Its surface temperature is 7650±150 K. HD 169142 is depleted of heavy elements compared to the Sun, with a metallicity Fe/H index of −0.375±0.125, but is much younger at an age of 7.5±4.5 million years. The star is rotating slowly and has relatively low stellar activity for a Herbig Ae/Be star.
Planetary system:
The star is surrounded by a complex, rapidly evolving protoplanetary disk with two gaps. In the 1995-2005 period the disk's inner edge moved inward by 0.3 AU. The dust of the disk is rich in polycyclic aromatic hydrocarbons and carbon monoxide. The annular gap and inner cavity observed in this protoplanetary disk both suggested the presence of embedded planets. Several protoplanet candidates have been suggested in the literature starting from 2014. Nonetheless, a particular protoplanet candidate detected in 2015 and 2017 with the SPHERE instrument on the VLT appears to stand out, hereafter HD 169142 b. A paper from 2023 confirmed that the motion of this protoplanet candidate was consistent with Keplerian motion: the object shifted with a change of position angle of 10.2±2.8° between 2015 and 2019. The researchers point out three lines of evidence arguing in favour of this object being a protoplanet: The object is found in the annular gap separating the two bright rings of the disc, as predicted in theory. The protoplanet moved between 2015, 2017 and 2019 consistently with Keplerian motion of an object at a distance of about 37 astronomical units from its star.
A spiral-shaped signal consistent with the expected outer spiral wake triggered by a planet in the gap, based on simulations of the system. The researchers also found that the near-infrared colors of the object are consistent with starlight scattered by dust around the protoplanet. This dust could be a circumplanetary disk or a dusty envelope around the protoplanet. A study from June 2023, using archived ALMA data, found sulfur monoxide and silicon monosulfide in the disk at the position of planet b. The paper also found compact 12CO and 13CO emission at the position of the planet. Carbon monoxide and sulfur monoxide were detected in other disks in the past, and they are thought to be connected to protoplanets. Silicon monosulfide, on the other hand, had never before been detected in any other disk, and can only be detected if silicates are released from nearby dust grains in massive shock waves caused by gas travelling at high velocities. It is thought that planet b is driving an outflow causing these high velocities. Outflows from proto-jovian planets have been hypothesised since 1998. Outflows are known around isolated young proto-brown dwarfs, but HD 169142 b could be the first confirmed protoplanet around a star showing clear evidence of an outflow. Evidence for inflows or outflows suspected to be caused by planets exists for other disks, such as a signature in the CI gas of HD 163296. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
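As a rough consistency check (mine, not from the source), Kepler's third law gives the expected position-angle drift of a roughly face-on circular orbit at 37 AU; the stellar mass of ~1.85 solar masses is an assumed value for this Herbig Ae star:

```python
import math

# Kepler's third law in solar units: P[yr] = sqrt(a[AU]^3 / M[Msun]).
a_au = 37.0     # orbital distance reported for HD 169142 b
m_star = 1.85   # ASSUMED stellar mass in solar masses

period_yr = math.sqrt(a_au**3 / m_star)
pa_change_deg = (2019 - 2015) / period_yr * 360.0  # face-on circular orbit

print(f"orbital period ~{period_yr:.0f} yr, "
      f"expected PA shift 2015-2019 ~{pa_change_deg:.1f} deg")
```

The result, roughly 9 degrees over four years, is consistent with the observed 10.2±2.8° shift.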
**Corner reflector**
Corner reflector:
A corner reflector is a retroreflector consisting of three mutually perpendicular, intersecting flat surfaces, which reflects waves directly towards the source, but translated. The three intersecting surfaces often have square shapes. Radar corner reflectors made of metal are used to reflect radio waves from radar sets. Optical corner reflectors, called corner cubes or cube corners, made of three-sided glass prisms, are used in surveying and laser ranging.
Principle:
The incoming ray is reflected three times, once by each surface, which results in a reversal of direction. To see this, the three corresponding normal vectors of the corner's perpendicular sides can be considered to form a basis (a rectangular coordinate system) (x, y, z) in which to represent the direction of an arbitrary incoming ray, [a, b, c]. When the ray reflects from the first side, say x, the ray's x component, a, is reversed to −a while the y and z components are unchanged, resulting in a direction of [−a, b, c]. Similarly, when reflected from side y and finally from side z, the b and c components are reversed. Therefore, the ray direction goes from [a, b, c] to [−a, b, c] to [−a, −b, c] to [−a, −b, −c], and it leaves the corner reflector with all three components of direction exactly reversed. The distance travelled, relative to a plane normal to the direction of the rays, is also equal for any ray entering the reflector, regardless of the location where it first reflects.
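The component-flipping argument above can be sketched in a few lines of Python: each reflection off an axis-aligned face negates one component of the direction vector, so three reflections reverse the ray.

```python
def reflect(direction, normal_axis):
    """Mirror reflection off a plane whose normal is a coordinate axis (0=x, 1=y, 2=z):
    the component along the normal flips sign, the others are unchanged."""
    d = list(direction)
    d[normal_axis] = -d[normal_axis]
    return tuple(d)

ray = (0.3, -0.5, 0.8)          # arbitrary incoming direction [a, b, c]
for axis in (0, 1, 2):          # hit the x face, then the y face, then the z face
    ray = reflect(ray, axis)

print(ray)  # all three components reversed: the ray returns toward its source
```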
In radar:
Radar corner reflectors are designed to reflect the microwave radio waves emitted by radar sets back toward the radar antenna. This causes them to show a strong "return" on radar screens. A simple corner reflector consists of three conducting sheet metal or screen surfaces at 90° angles to each other, attached to one another at the edges, forming a "corner". These reflect radio waves coming from in front of them back parallel to the incoming beam. To create a corner reflector that will reflect radar waves coming from any direction, 8 corner reflectors are placed back-to-back in an octahedron (diamond) shape. The reflecting surfaces must be larger than several wavelengths of the radio waves to function. In maritime navigation they are placed on bridge abutments, buoys, ships and, especially, lifeboats, to ensure that these show up strongly on ship radar screens. Corner reflectors are placed on the vessel's masts at a height of at least 4.6 m (15 feet) above sea level (giving them an approximate minimum horizon distance of 8 kilometers or 4.5 nautical miles). Marine radar uses X-band microwaves with wavelengths of 2.5–3.75 cm (1–1.5 inches), so small reflectors less than 30 cm (12 inches) across are used. In aircraft navigation, corner reflectors are installed on rural runways, to make them show up on aircraft radar.
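The quoted ~8 km horizon for a reflector 4.6 m above sea level follows from the standard horizon approximation d ≈ √(2kRh). A minimal sketch; the 4/3 effective-radius factor is the usual allowance for radio-wave refraction, an assumption not stated in the source:

```python
import math

R_EARTH = 6_371_000.0  # mean Earth radius, m

def horizon_m(height_m, k=1.0):
    """Approximate distance to the horizon for an antenna at height_m.
    k=1 gives the geometric horizon; k=4/3 is the common effective-radius
    correction for radio propagation."""
    return math.sqrt(2 * k * R_EARTH * height_m)

h = 4.6                        # reflector height above sea level, m
geo = horizon_m(h)             # geometric horizon, m
radio = horizon_m(h, k=4/3)    # radio horizon, m
print(f"geometric {geo/1000:.1f} km, radio {radio/1000:.1f} km")
```

The two values bracket the article's approximate 8 km figure.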
In optics:
In optics, corner reflectors typically consist of three mirrors or reflective prism faces which return an incident light beam in the opposite direction. In surveying, retroreflector prisms are commonly used as targets for long-range electronic distance measurement using a total station.
Five arrays of optical corner reflectors have been placed on the Moon for use by Lunar Laser Ranging experiments observing a laser's time-of-flight to measure the Moon's orbit more precisely than was possible before. The three largest were placed by NASA as part of the Apollo program, and the Soviet Union built two smaller ones into the Lunokhod rovers.
Automobile and bicycle tail lights are molded with arrays of small corner reflectors, with different sections oriented for viewing from different angles. Reflective paint for visibility at night usually contains retroreflective spherical beads.
Thin plastic with microscopic corner reflector structures can be used as tape, on signs, or sewn or molded onto clothing.
Other examples:
Corner reflectors can also occur accidentally. Tower blocks with balconies are often accidental corner reflectors for sound and return a distinctive echo to an observer making a sharp noise, such as a hand clap, nearby. Similarly, in radar interpretation, an object that has multiple reflections from smooth surfaces produces a radar return of greater magnitude than might be expected from the physical size of the object. This effect was put to use on the ADM-20 Quail, a small missile which had the same radar cross section as a B-52. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Water distribution system**
Water distribution system:
A water distribution system is a part of water supply network with components that carry potable water from a centralized treatment plant or wells to consumers to satisfy residential, commercial, industrial and fire fighting requirements.
Definitions:
Water distribution network is the term for the portion of a water distribution system up to the service points of bulk water consumers or demand nodes where many consumers are lumped together. The World Health Organization (WHO) uses the term water transmission system for a network of pipes, generally in a tree-like structure, that is used to convey water from water treatment plants to service reservoirs, and uses the term water distribution system for a network of pipes that generally has a loop structure to supply water from the service reservoirs and balancing reservoirs to consumers.
Components:
A water distribution system consists of pipelines, storage facilities, pumps, and other accessories. Pipelines laid within the public right of way, called water mains, are used to transport water within a distribution system. Large-diameter water mains called primary feeders connect water treatment plants and service areas. Secondary feeders are connected between primary feeders and distributors. Distributors are water mains that are located near the water users, and also supply water to individual fire hydrants. A service line is a small-diameter pipe used to connect from a water main through a small tap to a water meter at the user's location. There is a service valve (also known as a curb stop) on the service line, located near the street curb, to shut off water to the user's location. Storage facilities, or distribution reservoirs, provide clean drinking water storage (after the required water treatment process) to ensure the system has enough water to serve fluctuating demands (service reservoirs), or to equalize the operating pressure (balancing reservoirs). They can also temporarily serve fire-fighting demands during a power outage. The following are types of distribution reservoirs: Underground storage reservoir or covered finished water reservoir: An underground storage facility or large ground-excavated reservoir that is fully covered. The walls and the bottom of these reservoirs may be lined with impermeable materials to prevent ground water intrusion.
Uncovered finished water reservoir: A large ground-excavated reservoir that has adequate measures or lining to prevent surface water runoff and ground water intrusion but does not have a top cover. This type of reservoir is less desirable as the water will not be further treated before distribution and is susceptible to contaminants such as bird waste, animal and human activities, algal bloom, and airborne deposition.
Surface reservoir (also known as ground storage tank and ground storage reservoir): A storage facility built on the ground with the wall lined with concrete, shotcrete, asphalt, or membrane. A surface reservoir is usually covered to prevent contamination. They are typically located in high elevation areas that have enough hydraulic head for distribution. When a surface reservoir at ground level cannot provide a sufficient hydraulic head to the distribution system, booster pumps will be required.
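A back-of-the-envelope illustration of why high-elevation reservoirs can feed a system by gravity (the 50 m head is an assumed example value, not from the source): the hydraulic head translates to static pressure as p = ρgh.

```python
RHO = 1000.0  # density of water, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def static_pressure_kpa(head_m):
    """Static pressure (kPa) at the bottom of a water column head_m tall."""
    return RHO * G * head_m / 1000.0

# A surface reservoir sitting 50 m above the service area:
p = static_pressure_kpa(50.0)
print(f"{p:.0f} kPa")  # roughly 490 kPa, about 71 psi, with no pumping
```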
Water tower (also known as elevated surface reservoir): An elevated water tank. A few common types are spheroid elevated storage tank, a steel spheroid tank on top of a small-diameter steel column; composite elevated storage tank, a steel tank on a large-diameter concrete column; and hydropillar elevated storage tanks, a steel tank on a large-diameter steel column. The space within the large column below the water tank can be used for other purposes such as multi-story office space and storage space. A main concern for using water towers in the water distribution system is the aesthetic of the area.
Standpipe: A water tank that is a combination of ground storage tank and water tower. It is slightly different from an elevated water tower in that the standpipe allows water storage from the ground level to the top of the tank. The bottom storage area is called supporting storage, and the upper part which would be at the similar height of an elevated water tower is called useful storage.
Sump: This is a contingency water storage facility that is not used to distribute water directly. It is typically built underground in a circular shape with a dome top above ground. The water from a sump will be pumped to a service reservoir when it is needed. Storage facilities are typically located at the center of the service locations. Being at a central location reduces the length of the water mains to the service locations, which reduces the friction loss when water is transported over a water main.
Topologies:
In general, a water distribution system can be classified as having a grid, ring, radial, or dead-end layout. A grid system follows the general layout of the road grid, with water mains and branches connected in rectangles. With this topology, water can be supplied from several directions, allowing good water circulation and redundancy if a section of the network has broken down; a drawback of this topology is the difficulty of sizing the system. A ring system has a water main for each road, with a sub-main branched off the main to provide circulation to customers. This topology has some of the advantages of a grid system, but it is easier to determine sizing. A radial system delivers water into multiple zones; at the center of each zone, water is delivered radially to the customers. A dead-end system has water mains along roads without a rectangular pattern. It is used for communities whose road networks are not regular. As there are no cross-connections between the mains, water can have less circulation, and therefore stagnation may be a problem.
Integrity of the systems:
The integrity of the systems is broken down into physical, hydraulic, and water quality integrity. Physical integrity concerns the ability of the barriers to prevent contamination from external sources getting into water distribution systems; the deterioration can be caused by physical or chemical factors. Hydraulic integrity is the ability to maintain adequate water pressure inside the pipes throughout distribution systems; it also includes the circulation and the length of time that the water travels within a distribution system, which affect the effectiveness of disinfectants. Water quality integrity is the control of degradation as the water travels through distribution systems. Impacts on water quality can be caused by physical or hydraulic integrity factors. Water quality degradation can also take place within the distribution systems themselves, for example through microorganism growth, nitrification, and internal corrosion of the pipes.
Network analysis and optimization: Analyses are done to assist in the design, operation, maintenance, and optimization of water distribution systems. There are two main types of analysis: hydraulic behavior, and water quality behavior as water flows through a distribution system. Optimizing the design of water distribution networks is a complex task; however, a large number of methods have already been proposed, mainly based on metaheuristics. Employing mathematical optimization techniques can lead to substantial construction savings in these kinds of infrastructure.
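As a hedged illustration of the hydraulic side of such analyses (the formula choice and all the numbers are mine, not from the source), the head loss along a single main can be estimated with the empirical Hazen-Williams equation in its SI form:

```python
def hazen_williams_headloss(q_m3s, length_m, diam_m, c=130.0):
    """Head loss (m) along a pipe, empirical Hazen-Williams formula (SI form):
    h_f = 10.67 * L * Q^1.852 / (C^1.852 * d^4.87).
    c is the roughness coefficient (~130-140 is typical for newer iron pipe)."""
    return 10.67 * length_m * q_m3s**1.852 / (c**1.852 * diam_m**4.87)

# Example: 500 m of 200 mm main carrying 30 L/s
hl = hazen_williams_headloss(0.030, 500.0, 0.200)
print(f"head loss ~{hl:.2f} m")  # a couple of metres of head lost to friction
```

This is the kind of per-pipe calculation that hydraulic solvers repeat across the whole network when balancing flows.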
Hazards:
Hazards in water distribution systems can be microbial, chemical, or physical. Most microorganisms within water distribution systems are harmless. However, when infectious microorganisms enter the systems, they form biofilms and create microbial hazards for the users. Biofilms are usually formed near the end of the distribution system where water circulation is low; this supports their growth and makes disinfection agents less effective. Common microbial hazards in distribution systems come from contamination by human faecal pathogens and parasites, which enter the systems through cross-connections, breaks, water main works, and open storage tanks. Chemical hazards are those of disinfection by-products, leaching of piping materials and fittings, and water treatment chemicals. Physical hazards include turbidity of water, odors, colors, scales (buildups of material inside the pipes from corrosion), and sediment resuspension. There are several bodies around the world that create standards to limit hazards in the distribution systems: NSF International in North America; the European Committee for Standardization, the British Standards Institution and the Umweltbundesamt in Europe; the Japanese Standards Association in Asia; Standards Australia in Australia; and the Brazilian National Standards Organization in Brazil.
Hazards:
Lead service lines Lead contamination in drinking water can come from leaching of lead that was used in old water mains, service lines, pipe joints, and plumbing fittings and fixtures. According to the WHO, the most significant contributor of lead in water in many countries is the lead service line.
Maintenance:
Internal corrosion control Water quality deteriorates due to corrosion of metal pipe surfaces and connections in distribution systems. Pipe corrosion shows in the water as color, taste, and odor, any of which may cause health concerns. Health issues relate to releases of trace metals such as lead, copper, or cadmium into the water. Lead exposure can cause delays in physical and mental development in children. Long-term exposure to copper may cause liver and kidney damage. High or long-term exposure to cadmium may cause damage to various organs. Corrosion of iron pipes causes rusty or red water. Corrosion of zinc and iron pipes can cause a metallic taste. Various techniques can be used to control internal corrosion, for example, pH adjustment, adjustment of carbonate and calcium to create a calcium carbonate coating on pipe surfaces, and application of a corrosion inhibitor. For example, phosphate products that form films over pipe surfaces are one type of corrosion inhibitor. These measures reduce the chance of trace metals leaching from the pipe materials into the water.
Maintenance:
Hydrant flushing Hydrant flushing is the scheduled release of water from fire hydrants or special flushing hydrants to purge iron and other mineral deposits from a water main. Another benefit of using fire hydrants for water main flushing is to test whether water is supplied to fire hydrants at adequate pressure for fire fighting. During hydrant flushing, consumers may notice rust color in their water as iron and mineral deposits are stirred up in the process.
Maintenance:
Water main renewals After water mains have been in service for a long time, their structural, water quality, and hydraulic performance deteriorates. Structural deterioration may be caused by many factors. Metal-based pipes develop internal and external corrosion, causing the pipe walls to thin or degrade; they can eventually leak or burst. Cement-based pipes are subject to deterioration of the cement matrix and reinforcing steel. All pipes are subject to joint failures. Water quality deterioration includes scaling, sedimentation, and biofilm formation. Scaling is the formation of hard deposits on the interior wall of pipes; when it is a by-product of pipe corrosion combined with calcium in the water, it is called tuberculation. Sedimentation is when solids settle within the pipes, usually in recesses between scaling build-ups. When there is a change in the velocity of water flow (such as sudden use of a fire hydrant), the settled solids are stirred up, causing the water to be discolored. Biofilms can develop in highly scaled and thus rough-surfaced pipes where bacteria are allowed to grow: the higher the roughness of the interior wall, the harder it is for disinfectant to kill the bacteria on the surface of the pipe wall. Hydraulic deterioration, which affects pressures and flows, can be a result of other deterioration that obstructs the water flow. When it is time for water main renewal, there are many considerations in choosing the method of renewal. This can be open-trench replacement or one of the pipeline rehabilitation methods, such as pipe bursting, sliplining, and pipe lining.
Maintenance:
Water main renewal methods When an in-situ rehabilitation method is used, one benefit is the lower cost, as there is no need to excavate along the entire water main pipeline; only small pits are excavated to access the existing water main. The unavailability of the water main during the rehabilitation, however, requires building a temporary water bypass system to serve as the water main in the affected area. A temporary water bypass system (known as temporary bypass piping) should be carefully designed to ensure an adequate water supply to the customers in the project area. Water is taken from a feed hydrant into a temporary pipe. When the pipe crosses a driveway or a road, a cover or a cold patch should be put in place to allow cars to cross the temporary pipe. Temporary service connections to homes can be made to the temporary pipe; among the many ways to make such a connection, a common one is to connect the temporary service connection to a garden hose. Temporary fire hydrants should also be added to the temporary pipe for fire protection. As water main work can disturb lead service lines, which can result in elevated lead levels in drinking water, it is recommended that when a water utility plans a water main renewal project, it work with property owners to replace lead service lines as part of the project. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hexagonal sampling**
Hexagonal sampling:
A multidimensional signal is a function of M independent variables, where M ≥ 2. Real-world signals, which are generally continuous-time signals, have to be discretized (sampled) so that digital systems can be used to process them. It is during this process of discretization that sampling comes into the picture. Although there are many ways of obtaining a discrete representation of a continuous-time signal, periodic sampling is by far the simplest scheme. Theoretically, sampling can be performed with respect to any set of points, but in practice it is carried out with respect to sets of points that have a certain algebraic structure, called lattices. Mathematically, the process of sampling an M-dimensional signal can be written as: w(t̂) = w(V·n̂), where t̂ is the continuous-domain M-dimensional (M-D) vector being sampled, n̂ is an M-dimensional integer vector corresponding to the indices of a sample, and V is an M × M sampling matrix.
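The relation w(t̂) = w(V·n̂) maps integer index vectors onto lattice points via the sampling matrix. A minimal sketch, using an illustrative hexagonal-form matrix with unit sampling periods (the matrix values are assumptions for demonstration):

```python
# Sampling on a lattice: sample locations are t = V @ n for integer vectors n.
# V below is a hexagonal sampling matrix of the general form [[T1, T1], [-T2, T2]]
# with illustrative periods T1 = T2 = 1.

def sample_point(V, n):
    """Map an integer index vector n to a continuous-domain location t = V n."""
    return [sum(V[i][j] * n[j] for j in range(len(n))) for i in range(len(V))]

V_hex = [[1.0, 1.0],
         [-1.0, 1.0]]

# Generate a small patch of lattice points.
points = [sample_point(V_hex, [n1, n2]) for n1 in range(-1, 2) for n2 in range(-1, 2)]
```

Varying V changes the lattice geometry: a diagonal V gives a rectangular lattice, while the form above shears the index grid into a hexagonal arrangement.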
Motivation:
Multidimensional sampling provides the opportunity to use digital methods to process signals. Some of the advantages of processing signals in the digital domain include flexibility via programmable DSP operations, signal storage without loss of fidelity, the opportunity for encryption in communication, and lower sensitivity to hardware tolerances. Thus, digital methods are simultaneously powerful and flexible, and in many applications they act as less expensive alternatives to their analog counterparts. Sometimes, the algorithms implemented using digital hardware are so complex that they have no analog counterparts. Multidimensional digital signal processing deals with processing signals represented as multidimensional arrays such as 2-D sequences or sampled images.[1] Processing these signals in the digital domain permits the use of digital hardware wherein signal processing operations are specified by algorithms. As real-world signals are continuous-time signals, multidimensional sampling plays a crucial role in discretizing them; the resulting discrete-time signals are in turn processed using digital hardware to extract information from the signal.
Preliminaries:
Region of Support The region outside of which the samples of the signal take zero values is known as the Region of support (ROS). From the definition, it is clear that the region of support of a signal is not unique.
Preliminaries:
Fourier transform The Fourier transform is a tool that allows us to simplify mathematical operations performed on the signal. The transform represents any signal as a weighted combination of sinusoids. The Fourier transform and the inverse Fourier transform of an M-dimensional signal can be defined as follows:

X_a(Ω̂) = ∫ x_a(t̂) e^(−j Ω̂ᵀ t̂) dt̂

x_a(t̂) = (1/(2π)^M) ∫ X_a(Ω̂) e^(j Ω̂ᵀ t̂) dΩ̂

where the integrals run over all of M-dimensional space and the hat symbol (^) indicates that the quantity is a vector. The Fourier transform of the sampled signal is a periodic extension of the continuous-time Fourier transform of the signal. This is mathematically represented as:

X(ω̂) = (1/|det(V)|) Σ_k X_a(Ω̂ − U k̂)

where ω̂ = Vᵀ Ω̂, and U = 2π V⁻ᵀ (the transpose of the inverse of V, scaled by 2π) is the periodicity matrix. Thus, sampling in the spatial domain results in periodicity in the Fourier domain.
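The periodicity matrix satisfies the reciprocal-lattice relation UᵀV = 2πI when U = 2πV⁻ᵀ. A small numerical check of this identity, using an illustrative 2×2 sampling matrix (the matrix values are assumptions):

```python
import math

# Check the lattice/periodicity relation U^T V = 2*pi*I for a 2x2 sampling
# matrix, assuming the standard definition U = 2*pi*V^{-T}.

def inv2(M):
    """Inverse of a 2x2 matrix."""
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

V = [[1.0, 1.0], [-2.0, 2.0]]                                       # hexagonal-form sampling matrix
U = [[2 * math.pi * x for x in row] for row in transpose(inv2(V))]  # U = 2*pi*V^{-T}
P = matmul(transpose(U), V)                                         # should equal 2*pi*I
```

The check confirms that the spectral replication lattice generated by U tiles frequency space consistently with the spatial lattice generated by V.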
Preliminaries:
Aliasing A band-limited signal can be periodically replicated in many ways. If the replication results in an overlap between replicated regions, the signal suffers from aliasing, and the continuous-time signal cannot be perfectly recovered from its samples. Thus, to ensure perfect recovery of the continuous signal, there must be zero overlap between the replicated regions in the transformed domain. As in the case of one-dimensional signals, aliasing can be prevented if the continuous-time signal is sampled at a sufficiently high rate.
Preliminaries:
Sampling density It is a measure of the number of samples per unit area, defined (in the two-dimensional case) as: SD = 1/|det(V)| = |det(U)|/(4π²). The minimum number of samples per unit area required to completely recover the continuous-time signal is termed the optimal sampling density. In applications where memory or processing time is limited, emphasis must be given to minimizing the number of samples required to represent the signal completely.
Existing approaches:
For a bandlimited waveform, there are infinitely many ways the signal can be sampled without producing aliases in the Fourier domain. But only two strategies are commonly used: rectangular sampling and hexagonal sampling.
Existing approaches:
Rectangular and hexagonal sampling In rectangular sampling, a two-dimensional signal, for example, is sampled according to the following V matrix:

V_rect = [ T1   0  ]
         [ 0    T2 ]

where T1 and T2 are the sampling periods along the horizontal and vertical directions respectively. In hexagonal sampling, the V matrix assumes the following general form:

V_hex = [ T1   T1 ]
        [ -T2  T2 ]

The difference in efficiency between the two schemes is highlighted using a band-limited signal with a circular region of support of radius R. The circle can be inscribed in a square of side 2R or in a regular hexagon of side 2R/√3. Consequently, the region of support is transformed into a square and a hexagon respectively.
Existing approaches:
If these regions are periodically replicated in the frequency domain such that there is zero overlap between any two regions, then by periodically replicating the square region of support, we effectively sample the continuous signal on a rectangular lattice. Similarly periodic replication of the hexagonal region of support maps to sampling the continuous signal on a hexagonal lattice.
Existing approaches:
From U, the periodicity matrix, we can calculate the optimal sampling density for both the rectangular and hexagonal schemes. It is found that, to completely recover a circularly band-limited signal, the hexagonal sampling scheme requires 13.4% fewer samples than the rectangular scheme. The reduction may appear to be of little significance for a two-dimensional signal, but as the dimensionality of the signal increases, the efficiency of the hexagonal scheme becomes far more evident; for instance, the reduction achieved for an eight-dimensional signal is 93.8%. To highlight the importance of this result [2], visualize an image as a collection of an infinite number of samples. The photoreceptors primarily responsible for vision (rods and cones) are present on the retina of all mammals, and these cells are not arranged in rows and columns. Because the photoreceptors of the human visual system lie on a hexagonal sampling lattice, and thus perform hexagonal sampling, our eyes are able to process images very efficiently.[3] In fact, it can be shown that hexagonal sampling is the optimal sampling scheme for a circularly band-limited signal.
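The 13.4% figure can be checked directly. Sampling density is proportional to the area of the replicated frequency cell (SD = |det(U)|/(4π²)), so comparing the square of side 2R with the regular hexagon of inradius R gives the saving:

```python
import math

# Worked check of the 13.4% saving: a circle of radius R fits in a square of
# side 2R (rectangular replication) or a regular hexagon with inradius R
# (hexagonal replication). Sampling density is proportional to the area of
# the replicated frequency cell, SD = |det U| / (4*pi^2).

R = 1.0
area_square = (2 * R) ** 2                        # = 4 R^2
side_hex = 2 * R / math.sqrt(3)                   # hexagon side for inradius R
area_hex = 3 * math.sqrt(3) / 2 * side_hex ** 2   # = 2*sqrt(3) R^2

sd_rect = area_square / (4 * math.pi ** 2)
sd_hex = area_hex / (4 * math.pi ** 2)
saving = 1 - sd_hex / sd_rect                     # fraction of samples saved
```

The ratio of hexagonal to rectangular density is √3/2 ≈ 0.866, i.e. a saving of about 13.4%.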
Applications:
Aliasing effects minimized by the use of optimal sampling grids Recent advances in CCD technology have made hexagonal sampling feasible for real-life applications. Historically, because of technology constraints, detector arrays were implemented only on two-dimensional rectangular sampling lattices with rectangular-shaped detectors. But the Super CCD detector introduced by Fuji has an octagonal-shaped pixel on a hexagonal grid. Theoretically, the performance of the detector was greatly increased by introducing an octagonal pixel: the number of pixels required to represent the sample was reduced, and there was significant improvement in the signal-to-noise ratio (SNR) compared with that of a rectangular pixel. The drawback of using hexagonal pixels is that the associated fill factor will be less than 82%. An alternative method is to interpolate hexagonal pixels in such a manner that we ultimately end up with a rectangular grid. The SPOT 5 satellite incorporates a similar technique, in which two identical linear CCDs transmit two quasi-identical images that are shifted by half a pixel. By interpolating the two images and processing them, the functioning of a detector with hexagonal pixels is mimicked.
Applications:
Hexagonal structure for intelligent vision One of the major challenges in computer graphics is to represent the real-world continuous signal as a discrete set of points on a physical screen. It has long been known that hexagonal sampling grids have several benefits compared to rectangular grids. Petersen and Middleton investigated sampling and reconstruction of wavenumber-limited M-dimensional functions and came to the conclusion that the optimal sampling lattice, in general, is not rectangular. Russell M. Mersereau developed the hexagonal discrete Fourier transform (DFT) and hexagonal finite-extent impulse response filters, and was able to show that for circularly band-limited signals, hexagonal sampling is more efficient than rectangular sampling. Cramblitt and Allebach developed methods for designing optimal hexagonal time-sequential sampling patterns and discussed their merits relative to those designed for a rectangular sampling grid. One of the unique features of a hexagonal sampling grid is that its Fourier transform is still hexagonal. There is also an inverse relationship between the distances between successive rows and columns (assuming the samples are located at the centre of the hexagon); this inverse relationship plays a large role in minimizing aliasing and maximizing the minimum sampling density. Quantization error is bound to be present when discretizing continuous real-world signals, and experiments have been performed to determine which detector configuration yields the least quantization error. Hexagonal spatial sampling was found to yield the least quantization error for a given sensor resolution.
Applications:
Consistent connectivity of hexagonal grids: In a hexagonal grid, only one neighborhood of 6 samples can be defined. In a square grid, however, one can define a neighborhood of either 4 or 8 samples [4] (depending on whether diagonal connectivity is permitted). Because hexagonal grids admit no such ambiguity, efficient algorithms can be designed. Consistent connectivity is also responsible for better angular resolution, which is why the hexagonal lattice is much better at representing curved objects than the rectangular lattice. Despite these advantages, hexagonal grids have not been used to their full potential in computer vision because of the lack of hardware to capture, process, and display hexagonal-based images. As highlighted earlier with the SPOT 5 satellite, one method being explored to overcome this hardware difficulty is to mimic hexagonal pixels using square pixels.
**N,N-Diisopropylaminoethanol**
N,N-Diisopropylaminoethanol:
N,N-Diisopropylaminoethanol (DIPA) is a precursor for the production of various chemicals and also an intermediate in the production of the nerve agents VX and NX. It is a colorless liquid, although aged samples can appear yellow.
Health effects:
Inhalation and skin contact are expected to be the primary routes of occupational exposure to this chemical. Based on single-exposure animal tests, it is considered to be slightly toxic if swallowed or inhaled, moderately toxic if absorbed through skin, and corrosive to the eyes and skin. Vapor may be irritating to the eyes and upper respiratory tract. Temporary and reversible visual disturbances characterized by mildly blurred vision, a blue-gray discoloration of sight ("blue haze"), or halo vision (the appearance of a halo when looking at light sources) may also occur.
**Purposeful omission**
Purposeful omission:
Purposeful omission is the leaving out of particular nonessential details that can be assumed by the reader (if used in literature), according to the context and the attitudes or gestures made by the characters in the story. It allows the reader to form their own abstract representation of the situation at hand. In the book Why We Fought: America's Wars in Film and History, author Peter Rollins mentions that war movies in the US have purposely omitted some facts so as to make them acceptable to the Pentagon. In their book Representing Lives: Women and Auto/biography, Alison Donnell and Pauline Polkey discuss the difficulty of judging the authenticity of accounts of violence against women when these accounts are made by women in positions of prestige and power, as such women are likely to omit some details for the sake of their own image. According to some authors, purposeful omissions are allowed in order to carry out the law in spirit and action. In the context of technology, the term is used to denote the avoidance of unwanted or unnecessary feedback.
**RingCube vDesk**
RingCube vDesk:
RingCube vDesk is a Desktop virtualization product from RingCube Technologies. vDesk is a client virtualization or virtual workspace platform which virtualizes the entire desktop at an operating system level. The platform can be deployed in four different modes: on a local PC, on an external USB device, streamed across a network, or in conjunction with existing VDI solutions.
**Chess opening theory table**
Chess opening theory table:
A chess opening theory table or ECO table (Encyclopaedia of Chess Openings) presents lines of moves, typically (but not always) from the starting position. Notated chess moves are presented in the table from left to right. Variations on a given line are given horizontally below the parent line.
Arrangement:
Chess opening theory tables are commonly published in opening books with annotations by experienced chess players. These tables are typically arranged in a compact manner to allow experienced players to see variations from a position quickly. Usually, the table indicates that either White or Black has equal, slightly better, or better chances at the end of the variation. Often, this information is distilled down to mere symbols ("Σ" for example) or the percentage of games (usually tournament games) where White won – no information is usually given on what the assessment is based on or how to proceed in the game.
Shortcomings:
Chess opening theory books that provide these tables are usually quite large and difficult for beginners to use. Because the table entries typically do not include the themes or goals involved in a given line, beginners will either try to memorize the tables or simply drown in the detail. The Wikibook Chess Opening Theory aims to bridge this gap by providing this type of information at the end of each line.
Notation:
Typically, each table has a heading indicating the moves required to reach the position for which the table provides an analysis. The example below is for the opening position, so no moves are shown in the heading. The first row provides the move numbers with subsequent rows representing different variations. Since the initial position is not always the opening position, these numbers will not always start at "1." White half-moves are shown above black half-moves. Ellipses (...) represent moves that, for the variation, are identical to the variation above. Bold type indicates that another variation is considered elsewhere – usually in another table. A hyphen (-) or en dash (–) indicates that the variation transposes to a variation elsewhere. Transpositions are common in chess – a given position can often be reached by different move orders – even move orders with more or fewer moves. The table may also provide percentage of games won by white for each variation, based on the results of the games considered in creating the table.
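The ellipsis convention above lends itself to simple tooling. The sketch below expands "..." cells by inheriting the move from the variation directly above; the table fragment (a 1.e4 table with a Petrov-style and a Sicilian deviation) is purely illustrative.

```python
# Expanding the ellipsis ("...") convention of an opening theory table:
# a "..." entry inherits the move from the same column of the variation above.
# The first (parent) row is assumed to be fully specified.

def expand(rows):
    """Replace '...' cells with the move from the same column of the row above."""
    out = []
    for r, row in enumerate(rows):
        expanded = []
        for c, cell in enumerate(row):
            expanded.append(out[r - 1][c] if cell == "..." else cell)
        out.append(expanded)
    return out

# Columns are half-moves: 1.White, 1...Black, 2.White, 2...Black.
table = [
    ["e4", "e5", "Nf3", "Nc6"],    # parent line
    ["...", "...", "...", "Nf6"],  # deviation on Black's 2nd move (Petrov)
    ["...", "c5", "Nf3", "d6"],    # deviation on Black's 1st move (Sicilian)
]
expanded = expand(table)
```

Because each row inherits from the already-expanded row above it, nested deviations resolve correctly in a single top-to-bottom pass.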
Development:
Chess openings are studied in great depth by serious players. "Novelties", or new, previously unexplored variations are often discovered and played by professional players. These new lines can refute lines that were previously thought to be sound. The games that represent this discovery process are represented in these ever-changing and expanding tables. With the advent of computer databases, even the most casual player can explore an opening line deeply, looking for novelties to spring on their opponents.
External sources of chess opening theory tables:
John Nunn (editor), Graham Burgess, John Emms, Joe Gallagher (1999), Nunn's Chess Openings. ISBN 1-85744-221-0.
Nick de Firmian, Walter Korn (1999), Modern Chess Openings: MCO-14. ISBN 0-8129-3084-3.
Aleksandar Matanović (editor), Encyclopaedia of Chess Openings, 5 volumes (Belgrade: Šahovski informator).
**Frequency averaging**
Frequency averaging:
In telecommunication, the term frequency averaging has the following meanings: The process by which the relative phases of precision clocks are compared for the purpose of defining a single time standard.
Frequency averaging:
A process in which network synchronization is achieved by use, at all nodes, of oscillators that adjust their frequencies to the average frequency of the digital bit streams received from connected nodes. In frequency averaging, all oscillators are assigned equal weight in determining the ultimate network frequency. In terms of musical note frequency, averaging the frequencies of low or high notes in a solo instrumental piece is a technique used to match different instruments so they may be played together. The equal-temperament note frequency formula is used: F = 440 × 2^(n/12), where n equals the number of positive or negative semitone steps away from the base note A4 (440 hertz) and F equals the frequency. The formula is used to calculate the frequency of each note in the piece; the values are then added together and divided by the number of notes, giving the average frequency of those notes. It is said that such techniques were used by classical composers, especially those who involved mathematics heavily in their music.
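The note-frequency calculation and averaging described above can be sketched as follows, using the equal-temperament relation F = 440 · 2^(n/12); the particular set of notes is an illustrative assumption:

```python
# Equal-temperament note frequency and averaging:
# F = 440 * 2**(n / 12), where n is the number of semitone steps from A4.

def note_freq(n):
    """Frequency in Hz of the note n semitone steps away from A4 (440 Hz)."""
    return 440.0 * 2.0 ** (n / 12.0)

# Average frequency of an illustrative set of notes: A4 (0), C5 (+3), E5 (+7).
steps = [0, 3, 7]
freqs = [note_freq(n) for n in steps]
avg = sum(freqs) / len(freqs)
```

For example, n = 12 gives exactly one octave above A4 (880 Hz), and n = −12 gives one octave below (220 Hz).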
**Microsoft Intune**
Microsoft Intune:
Microsoft Intune (formerly Windows Intune) is a Microsoft cloud-based unified endpoint management service for both corporate and BYOD devices. It extends some of the "on-premises" functionality of Microsoft Endpoint Configuration Manager to the Microsoft Azure cloud.
Distribution:
No on-premises infrastructure is required for clients to use Intune, and management is accomplished using a web-based portal. Distribution is through a subscription system in which a fixed monthly cost is incurred per user. It is also possible to use Intune in co-management with Microsoft Endpoint Configuration Manager.
It is included in Microsoft Enterprise Mobility + Security (EMS) suite and Microsoft Office 365 Enterprise E5, which were both succeeded by Microsoft 365 in July 2017. Microsoft 365 Business Premium licenses also include Intune and EMS.
Function:
Intune supports the Android, iOS, macOS, and Windows operating systems. Administration is done via a web browser. The administration console allows Intune to invoke remote tasks such as malware scans. Since version 2.0, installation of software packages in .exe, .msi, and .msp format is supported. Installation packages are stored encrypted and compressed on Microsoft Azure Storage, and software installation can begin upon login. Intune can record and administer volume, retail, and OEM licenses, as well as licenses administered by third parties. Upgrades to newer versions of the Intune software are also controlled. Information about inventory is recorded automatically. Managed computers can be grouped together when problems occur, and Intune can notify support staff as well as an external dealer via e-mail.
Intune plans:
Since March 2023, Microsoft Intune has been available in three versions: Intune Plan 1, Intune Plan 2, and the Intune Suite. Note that Plan 2 and the Suite do not include Plan 1. Microsoft Intune Plan 1 is included in Microsoft 365 Business Premium and Enterprise Mobility + Security E3/E5.
Reception:
Der Standard praised the application, saying "the cloud service Intune promises to be a simple PC management tool via web console. The interface provides a quick overview of the state of enterprise systems." The German PC World positively evaluated its usability, saying that it "kept the interface simple." Business Computing World criticized the program, saying "Although Windows Intune worked well in our tests and did everything expected of it, we didn't find it all that easy to get to grips with", blaming the unintuitive "deceptively simple" management interface. ITespresso rated it "good", but noted connection issues with the remote assistance feature and that changes to firewall settings could take upwards of a full day to push out to clients.
History:
Microsoft Intune was originally introduced as Windows Intune in April 2010. Microsoft announced plans to extend the service to other platforms and rename it to Microsoft Intune on 8 October 2014.
**Cirrhosis**
Cirrhosis:
Cirrhosis, also known as liver cirrhosis, hepatic cirrhosis, or end-stage liver disease, is impaired liver function caused by the formation of scar tissue, known as fibrosis, due to damage caused by liver disease. Damage to the liver leads to repair of liver tissue and subsequent formation of scar tissue. Over time, scar tissue can replace normal functioning tissue, leading to the impaired liver function of cirrhosis. The disease typically develops slowly over months or years. Early symptoms may include tiredness, weakness, loss of appetite, unexplained weight loss, nausea and vomiting, and discomfort in the right upper quadrant of the abdomen. As the disease worsens, symptoms may include itchiness, swelling in the lower legs, fluid build-up in the abdomen, jaundice, bruising easily, and the development of spider-like blood vessels in the skin. The fluid build-up in the abdomen may become spontaneously infected. More serious complications include hepatic encephalopathy, bleeding from dilated veins in the esophagus, stomach, or intestines, and liver cancer. Cirrhosis is most commonly caused by alcoholic liver disease, non-alcoholic steatohepatitis (NASH – the progressive form of non-alcoholic fatty liver disease), heroin abuse, chronic hepatitis B, and chronic hepatitis C. Heavy drinking over a number of years can cause alcoholic liver disease. Liver damage has also been attributed to heroin use over an extended period of time. NASH has a number of causes, including obesity, high blood pressure, abnormal levels of cholesterol, type 2 diabetes, and metabolic syndrome.
Less common causes of cirrhosis include autoimmune hepatitis, primary biliary cholangitis, primary sclerosing cholangitis (which disrupts bile duct function), genetic disorders such as Wilson's disease and hereditary hemochromatosis, and chronic heart failure with liver congestion. Diagnosis is based on blood tests, medical imaging, and liver biopsy. Hepatitis B vaccine can prevent hepatitis B and the development of cirrhosis, but no vaccination against hepatitis C is available. No specific treatment for cirrhosis is known, but many of the underlying causes may be treated by a number of medications that may slow or prevent worsening of the condition. Hepatitis B and C may be treatable with antiviral medications. Avoiding alcohol is recommended in all cases. Autoimmune hepatitis may be treated with steroid medications. Ursodiol may be useful if the disease is due to blockage of the bile duct. Other medications may be useful for complications such as abdominal or leg swelling, hepatic encephalopathy, and dilated esophageal veins. If cirrhosis leads to liver failure, a liver transplant may be an option. Cirrhosis affected about 2.8 million people and resulted in 1.3 million deaths in 2015. Of these deaths, alcohol caused 348,000, hepatitis C caused 326,000, and hepatitis B caused 371,000. In the United States, more men die of cirrhosis than women. The first known description of the condition is by Hippocrates in the fifth century BCE. The term "cirrhosis" was derived in 1819 from the Greek word "kirrhos", which describes the yellowish color of a diseased liver.
Signs and symptoms:
Cirrhosis can take quite a long time to develop, and symptoms may be slow to emerge. Some early symptoms include tiredness, weakness, loss of appetite, weight loss, and nausea. People may also feel discomfort in the right upper abdomen around the liver. As cirrhosis progresses, symptoms can include neurological changes such as cognitive impairment, confusion, memory loss, sleep disorders, and personality changes. Worsening cirrhosis can cause a build-up of fluid in different parts of the body such as the legs (edema) and abdomen (ascites). Other signs of advancing disease include itchy skin, bruising easily, dark urine, and yellowing of the skin.
Signs and symptoms:
Liver dysfunction These features are a direct consequence of liver cells not functioning: Spider angiomata or spider nevi happen when there is dilatation of vasculature beneath the skin surface. There is a central, red spot with reddish extensions that radiate outward. This creates a visual effect that resembles a spider. It occurs in about one-third of cases. The likely cause is an increase in estrogen. Cirrhosis causes a rise of estrogen due to increased conversion of androgens into estrogen.
Signs and symptoms:
Palmar erythema, a reddening of the palm below the thumb and little finger, is seen in about 23% of cirrhosis cases, and results from increased circulating estrogen levels.
Gynecomastia, or the increase of breast size in men, is caused by increased estradiol (a potent type of estrogen). This can occur in up to two-thirds of cases.
Hypogonadism signifies a decreased functionality of the gonads. This can result in impotence, infertility, loss of sexual drive, and testicular atrophy. A swollen scrotum may also be evident.
Liver size can be enlarged, normal, or shrunken in people with cirrhosis. As the disease progresses, the liver will typically shrink due to the result of scarring.
Jaundice is the yellowing of the skin. It can additionally cause yellowing of mucous membranes notably of the white of the eyes. This phenomenon is due to increased levels of bilirubin, which may also cause the urine to be dark-colored.
Signs and symptoms:
Portal hypertension Liver cirrhosis makes it hard for blood to flow in the portal venous system. This resistance creates a backup of blood and increases pressure, resulting in portal hypertension. Effects of portal hypertension include: ascites, a build-up of fluid in the peritoneal cavity of the abdomen; an enlarged spleen, in 35–50% of cases; and esophageal varices and gastric varices, which result from collateral circulation in the esophagus and stomach (a process called portacaval anastomosis). When the blood vessels in this circulation become enlarged, they are called varices and are more likely to rupture. Variceal rupture often leads to severe bleeding, which can be fatal.
Caput medusae are dilated paraumbilical collateral veins due to portal hypertension. Blood from the portal venous system may be forced through the paraumbilical veins and ultimately to the abdominal wall veins. The created pattern resembles the head of Medusa, hence the name.
Cruveilhier-Baumgarten bruit is a bruit heard in the epigastric region (on examination by stethoscope). It is due to extra connections forming between the portal system and the paraumbilical veins.
Other nonspecific signs Some signs that may be present include changes in the nails (such as Muehrcke's lines, Terry's nails, and nail clubbing). Additional changes may be seen in the hands (Dupuytren's contracture) as well as the skin/bones (hypertrophic osteoarthropathy).
Advanced disease As the disease progresses, complications may develop. In some people, these may be the first signs of the disease.
Bruising and bleeding can result from decreased production of clotting factors. Hepatic encephalopathy (HE) occurs when ammonia and related substances build up in the blood; these substances affect brain function when they are not cleared from the blood by the liver. Symptoms can include unresponsiveness, forgetfulness, trouble concentrating, changes in sleep habits, or psychosis. One classic physical examination finding is asterixis, the asynchronous flapping of outstretched, dorsiflexed hands. Fetor hepaticus is a musty breath odor resulting from increased dimethyl sulfide and is a feature of HE.
Sensitivity to medication can be caused by decreased metabolism of the active compounds; acute kidney injury (particularly hepatorenal syndrome); and cachexia associated with muscle wasting and weakness.
Causes:
Cirrhosis has many possible causes, and more than one cause may be present. History taking is of importance in trying to determine the most likely cause. Globally, 57% of cirrhosis is attributable to either hepatitis B (30%) or hepatitis C (27%). Alcohol use disorder is another major cause, accounting for about 20–40% of the cases.
Common causes Alcoholic liver disease (ALD, or alcoholic cirrhosis) develops in 10–20% of individuals who drink heavily for a decade or more. Alcohol seems to injure the liver by blocking the normal metabolism of protein, fats, and carbohydrates. This injury happens through the formation of acetaldehyde from alcohol. Acetaldehyde is reactive and leads to the accumulation of other reactive products in the liver. People with ALD may also have concurrent alcoholic hepatitis. Associated symptoms are fever, hepatomegaly, jaundice, and anorexia. AST and ALT blood levels are both elevated, but at less than 300 IU/liter, with an AST:ALT ratio > 2.0, a value rarely seen in other liver diseases. In the United States, 40% of cirrhosis-related deaths are due to alcohol.
In non-alcoholic fatty liver disease (NAFLD), fat builds up in the liver and eventually causes scar tissue. This type of disorder can be caused by obesity, diabetes, malnutrition, coronary artery disease, and steroids. Though similar in signs to alcoholic liver disease, no history of notable alcohol use is found. Blood tests and medical imaging are used to diagnose NAFLD and NASH, and sometimes a liver biopsy is needed.
Chronic hepatitis C, an infection with the hepatitis C virus, causes inflammation of the liver and a variable grade of damage to the organ. Over several decades, this inflammation and damage can lead to cirrhosis. Among people with chronic hepatitis C, 20–30% develop cirrhosis. Cirrhosis caused by hepatitis C and alcoholic liver disease are the most common reasons for liver transplant. Both hepatitis C– and hepatitis B–related cirrhosis have also been associated with heroin addiction.
Chronic hepatitis B causes liver inflammation and injury that over several decades can lead to cirrhosis. Hepatitis D is dependent on the presence of hepatitis B and accelerates cirrhosis in co-infection.
Less common causes In primary biliary cholangitis (previously known as primary biliary cirrhosis), the bile ducts become damaged by an autoimmune process. This leads to liver damage. Some people have no symptoms, while others present with fatigue, pruritus, or skin hyperpigmentation. The liver is typically enlarged, which is referred to as hepatomegaly. Rises in alkaline phosphatase, cholesterol, and bilirubin levels occur. Patients are usually positive for anti-mitochondrial antibodies.
Primary sclerosing cholangitis is a disorder of the bile ducts that presents with pruritus, steatorrhea, fat-soluble vitamin deficiencies, and metabolic bone disease. A strong association with inflammatory bowel disease is seen, especially ulcerative colitis.
Autoimmune hepatitis is caused by an attack of the liver by lymphocytes. This causes inflammation and eventually scarring as well as cirrhosis. Findings include elevations in serum globulins, especially gamma globulins.
Hereditary hemochromatosis usually presents with skin hyperpigmentation, diabetes mellitus, pseudogout, or cardiomyopathy, all of which are signs of iron overload. A family history of cirrhosis is common as well.
Wilson's disease is an autosomal recessive disorder characterized by low ceruloplasmin in the blood and increased copper in the liver. Copper in the urine is also elevated. People with Wilson's disease may also have Kayser-Fleischer rings in the cornea and altered mental status.
Indian childhood cirrhosis is a form of neonatal cholestasis characterized by deposition of copper in the liver; alpha-1 antitrypsin deficiency is an autosomal co-dominant disorder of low levels of the enzyme alpha-1 antitrypsin; cardiac cirrhosis is due to chronic right-sided heart failure, which leads to liver congestion; galactosemia; glycogen storage disease type IV; cystic fibrosis; and hepatotoxic drugs or toxins, such as acetaminophen (paracetamol), methotrexate, or amiodarone.
Pathophysiology:
The liver plays a vital role in the synthesis of proteins (for example, albumin, clotting factors, and complement), detoxification, and storage (for example, of vitamin A and glycogen). In addition, it participates in the metabolism of lipids and carbohydrates. Cirrhosis is often preceded by hepatitis and fatty liver (steatosis), independent of the cause. If the cause is removed at this stage, the changes are fully reversible. The pathological hallmark of cirrhosis is the development of scar tissue that replaces normal tissue. This scar tissue blocks the portal flow of blood through the organ, raising the blood pressure and disturbing normal function. Research has shown the pivotal role of the stellate cell, which normally stores vitamin A, in the development of cirrhosis. Damage to the liver tissue from inflammation leads to the activation of stellate cells, which increases fibrosis through the production of myofibroblasts, and obstructs hepatic blood flow. In addition, stellate cells secrete TGF-β1, which leads to a fibrotic response and proliferation of connective tissue. TGF-β1 has been implicated in the activation of hepatic stellate cells (HSCs), with the magnitude of fibrosis in proportion to the increase in TGF-β levels. ACTA2 is associated with the TGF-β pathway, which enhances the contractile properties of HSCs, leading to fibrosis. Furthermore, HSCs secrete TIMP1 and TIMP2, naturally occurring inhibitors of matrix metalloproteinases (MMPs), which prevent MMPs from breaking down the fibrotic material in the extracellular matrix. As this cascade of processes continues, fibrous tissue bands (septa) separate hepatocyte nodules, which eventually replace the entire liver architecture, leading to decreased blood flow throughout. The spleen becomes congested and enlarged, resulting in retention of platelets, which are needed for normal blood clotting. Portal hypertension is responsible for the most severe complications of cirrhosis.
Diagnosis:
The diagnosis of cirrhosis in an individual is based on multiple factors. Cirrhosis may be suspected from laboratory findings, physical exam, and the person's medical history. Imaging is generally obtained to evaluate the liver. A liver biopsy will confirm the diagnosis; however, it is generally not required.
Imaging Ultrasound is routinely used in the evaluation of cirrhosis. It may show a small and shrunken liver in advanced disease. On ultrasound, there is increased echogenicity with irregular-appearing areas. Other suggestive findings are an enlarged caudate lobe, widening of the fissures, and enlargement of the spleen. An enlarged spleen, which normally measures less than 11–12 cm (4.3–4.7 in) in adults, may suggest underlying portal hypertension. Ultrasound may also screen for hepatocellular carcinoma and portal hypertension, by assessing flow in the hepatic vein. An increased portal vein pulsatility may be seen; however, this may also be a sign of elevated right atrial pressure. Portal vein pulsatility is usually measured by a pulsatility index (PI); a value above a certain threshold indicates cirrhosis (see table below).
Other scans include CT of the abdomen and MRI. A CT scan is non-invasive and may be helpful in the diagnosis, although it tends to be more expensive than ultrasound. MRI provides excellent evaluation, but at high expense. Cirrhosis is also diagnosable through a variety of newer elastography techniques. When a liver becomes cirrhotic it will generally become stiffer, and determining the stiffness through imaging can determine the location and severity of disease. Techniques include transient elastography, acoustic radiation force impulse imaging, supersonic shear imaging, and magnetic resonance elastography. Transient elastography and magnetic resonance elastography can help identify the stage of fibrosis. Compared to a biopsy, elastography can sample a much larger area and is painless; it shows a reasonable correlation with the severity of cirrhosis. Other modalities incorporated into ultrasonography systems include 2-dimensional shear wave elastography and point shear wave elastography, which uses acoustic radiation force impulse imaging. Rarely, diseases of the bile ducts, such as primary sclerosing cholangitis, are causes of cirrhosis. Imaging of the bile ducts, such as ERCP or MRCP (MRI of the biliary tract and pancreas), may aid in the diagnosis.
Lab findings The best predictors of cirrhosis are ascites, platelet count < 160,000/mm3, spider angiomata, and a Bonacini cirrhosis discriminant score greater than 7 (as the sum of scores for platelet count, ALT/AST ratio and INR as per table).
These findings are typical in cirrhosis: Thrombocytopenia, typically multifactorial, is due to alcoholic marrow suppression, sepsis, lack of folate, platelet sequestering in the spleen, and decreased thrombopoietin. However, this rarely results in a platelet count < 50,000/mL.
Aminotransferases AST and ALT are moderately elevated, with AST > ALT. However, normal aminotransferase levels do not preclude cirrhosis.
Alkaline phosphatase – slightly elevated but less than 2–3 times the upper limit of normal.
Gamma-glutamyl transferase – correlates with AP levels. Typically much higher in chronic liver disease from alcohol.
Bilirubin levels are normal when compensated, but may elevate as cirrhosis progresses.
Albumin levels fall as the synthetic function of the liver declines with worsening cirrhosis since albumin is exclusively synthesized in the liver.
Prothrombin time increases, since the liver synthesizes clotting factors.
Globulins increase due to shunting of bacterial antigens away from the liver to lymphoid tissue.
Serum sodium levels fall (hyponatremia) due to the inability to excrete free water, resulting from high levels of ADH and aldosterone.
Leukopenia and neutropenia are due to splenomegaly with splenic margination.
Coagulation defects occur, as the liver produces most of the coagulation factors, thus coagulopathy correlates with worsening liver disease.
Glucagon is increased in cirrhosis.
Vasoactive intestinal peptide is increased as blood is shunted into the intestinal system because of portal hypertension.
Vasodilators (such as nitric oxide and carbon monoxide) are increased, reducing afterload with a compensatory increase in cardiac output and mixed venous oxygen saturation.
Renin is increased (as well as sodium retention in the kidneys) secondary to a fall in systemic vascular resistance. FibroTest is a biomarker for fibrosis that may be used instead of a biopsy. Other laboratory studies performed in newly diagnosed cirrhosis may include: serology for hepatitis viruses; autoantibodies (ANA, anti-smooth muscle, antimitochondrial, anti-LKM); ferritin and transferrin saturation (markers of iron overload, as in hemochromatosis); copper and ceruloplasmin (markers of copper overload, as in Wilson's disease); and immunoglobulin levels (IgG, IgM, IgA) – these immunoglobulins are nonspecific but may help in distinguishing various causes.
Cholesterol and glucose; alpha 1-antitrypsin. Markers of inflammation and immune cell activation are typically elevated in cirrhotic patients, especially in the decompensated disease stage: C-reactive protein (CRP), procalcitonin (PCT), presepsin, soluble CD14, soluble CD163, soluble CD206 (mannose receptor), and soluble TREM-1. A recent study identified 15 microbial biomarkers from the gut microbiota that could potentially be used to discriminate patients with liver cirrhosis from healthy individuals.
Pathology The gold standard for diagnosis of cirrhosis is a liver biopsy. This is usually carried out via a fine-needle approach, either through the skin (percutaneous) or through the internal jugular vein (transjugular). Endoscopic ultrasound-guided liver biopsy (EUS), using the percutaneous or transjugular route, has become a good alternative. EUS can target liver areas that are widely separated and can deliver bi-lobar biopsies. A biopsy is not necessary if the clinical, laboratory, and radiologic data suggest cirrhosis. Furthermore, a small but significant risk of complications is associated with liver biopsy, and cirrhosis itself predisposes to complications caused by liver biopsy. Once the biopsy is obtained, a pathologist will study the sample. Cirrhosis is defined by its features on microscopy: (1) the presence of regenerating nodules of hepatocytes and (2) the presence of fibrosis, or the deposition of connective tissue between these nodules. The pattern of fibrosis seen can depend on the underlying insult that led to cirrhosis. Fibrosis can also proliferate even if the underlying process that caused it has resolved or ceased. The fibrosis in cirrhosis can lead to destruction of other normal tissues in the liver, including the sinusoids, the space of Disse, and other vascular structures, which leads to altered resistance to blood flow in the liver and portal hypertension.
As cirrhosis can be caused by many different entities which injure the liver in different ways, cause-specific abnormalities may be seen. For example, in chronic hepatitis B, there is infiltration of the liver parenchyma with lymphocytes. In congestive hepatopathy there are erythrocytes and a greater amount of fibrosis in the tissue surrounding the hepatic veins. In primary biliary cholangitis, there is fibrosis around the bile duct, the presence of granulomas and pooling of bile. Lastly, in alcoholic cirrhosis, there is infiltration of the liver with neutrophils. Macroscopically, the liver is initially enlarged, but with the progression of the disease, it becomes smaller. Its surface is irregular, the consistency is firm, and if associated with steatosis the color is yellow. Depending on the size of the nodules, there are three macroscopic types: micronodular, macronodular, and mixed cirrhosis. In the micronodular form (Laennec's cirrhosis or portal cirrhosis), regenerating nodules are under 3 mm. In macronodular cirrhosis (post-necrotic cirrhosis), the nodules are larger than 3 mm. Mixed cirrhosis consists of nodules of different sizes.
Grading:
The severity of cirrhosis is commonly classified with the Child–Pugh score (also known as the Child–Pugh–Turcotte score). This system was devised in 1964 by Child and Turcotte, and modified in 1973 by Pugh and others. It was first established to determine who would benefit from elective surgery for portal decompression. This scoring system uses multiple lab values including bilirubin, albumin, and INR. The presence of ascites and the severity of encephalopathy are also included in the scoring. The classification system includes class A, B, or C. Class A has a favorable prognosis, while class C is at high risk of death.
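The mapping from the five sub-scores to a class can be sketched as follows. This is an illustrative helper, not a clinical tool; the function name and argument layout are assumptions, and it presumes the caller has already scored each parameter on the conventional 1–3 point scale (total 5–6 → class A, 7–9 → B, 10–15 → C).

```python
def child_pugh_class(bilirubin_pts, albumin_pts, inr_pts,
                     ascites_pts, encephalopathy_pts):
    """Sum the five 1-3 point Child-Pugh sub-scores and map the total
    to a class using the commonly cited cutoffs (A: 5-6, B: 7-9, C: 10-15)."""
    points = [bilirubin_pts, albumin_pts, inr_pts,
              ascites_pts, encephalopathy_pts]
    if any(p not in (1, 2, 3) for p in points):
        raise ValueError("each Child-Pugh parameter scores 1, 2, or 3 points")
    total = sum(points)
    if total <= 6:
        cls = "A"
    elif total <= 9:
        cls = "B"
    else:
        cls = "C"
    return total, cls
```

For example, a patient scoring 1 point on every parameter totals 5 points and falls in class A, matching the favorable-prognosis group described above.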
The Child-Pugh score is a validated predictor of mortality after major surgery. For example, after abdominal surgery, Child class A patients have a 10% mortality rate, Child class B patients a 30% mortality rate, and Child class C patients a 70–80% mortality rate. Elective surgery is usually reserved for Child class A patients; Child class B individuals are at increased risk and may require medical optimization, and elective surgery is generally not recommended for Child class C patients. In the past, the Child-Pugh classification was used to determine people who were candidates for a liver transplant, with Child-Pugh class B usually an indication for transplant evaluation. However, there were many issues when applying this score to liver transplant eligibility, and thus the MELD score was created.
The Model for End-Stage Liver Disease (MELD) score was later developed and approved in 2002. It was approved by the United Network for Organ Sharing (UNOS) as a way to determine the allocation of liver transplants to awaiting people in the United States. It is also used as a validated survival predictor in cirrhosis, alcoholic hepatitis, acute liver failure, and acute hepatitis. The variables included bilirubin, INR, creatinine, and dialysis frequency. In 2016, sodium was added to the variables, and the score is often referred to as MELD-Na. MELD-Plus is a further risk score to assess severity of chronic liver disease. It was developed in 2017 as a result of a collaboration between Massachusetts General Hospital and IBM. Nine variables were identified as effective predictors for 90-day mortality after a discharge from a cirrhosis-related hospital admission. The variables include all Model for End-Stage Liver Disease (MELD) components, as well as sodium, albumin, total cholesterol, white blood cell count, age, and length of stay. The hepatic venous pressure gradient (the difference in venous pressure between blood entering and leaving the liver) also determines the severity of cirrhosis, although it is hard to measure. A value of 16 mmHg or more means a greatly increased risk of death.
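The classic MELD computation can be sketched as below. This is an illustrative sketch, not clinical software: it uses the widely cited UNOS formula (3.78·ln bilirubin + 11.2·ln INR + 9.57·ln creatinine + 6.43, with lab values floored at 1.0 and creatinine capped at 4.0, or set to 4.0 with recent dialysis), and it omits the later MELD-Na sodium adjustment mentioned above.

```python
import math

def meld_score(bilirubin_mg_dl, inr, creatinine_mg_dl, on_dialysis=False):
    """Classic MELD (UNOS variant), rounded to the nearest integer.

    Values below 1.0 are floored at 1.0; creatinine is capped at 4.0
    and set to 4.0 if the patient had dialysis at least twice in the
    past week. Illustrative only.
    """
    if on_dialysis:
        creatinine_mg_dl = 4.0
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    crea = min(max(creatinine_mg_dl, 1.0), 4.0)
    score = (3.78 * math.log(bili)
             + 11.2 * math.log(inr)
             + 9.57 * math.log(crea)
             + 6.43)
    return round(score)
```

With all three lab values at 1.0 the logarithms vanish and the score reduces to the constant 6.43, i.e. a minimum MELD of 6.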
Prevention:
Key prevention strategies for cirrhosis are population-wide interventions to reduce alcohol intake (through pricing strategies, public health campaigns, and personal counseling), programs to reduce the transmission of viral hepatitis, and screening of relatives of people with hereditary liver diseases. Little is known about factors affecting cirrhosis risk and progression. However, many studies have provided increasing evidence for the protective effects of coffee consumption against the progression of liver disease. These effects are more noticeable in liver disease that is associated with alcohol use disorder. Coffee has antioxidant and antifibrotic effects. Caffeine may not be the important component; polyphenols may be more important. Drinking two or more cups of coffee a day is associated with improvements in the liver enzymes ALT, AST, and GGT. Even in those with liver disease, coffee consumption can lower fibrosis and cirrhosis.
Treatment:
Generally, liver damage from cirrhosis cannot be reversed, but treatment can stop or delay further progression and reduce complications. A healthy diet is encouraged, as cirrhosis may be an energy-consuming process. A recommended diet is high in protein and fiber, plus supplementation with branched-chain amino acids. Close follow-up is often necessary. Antibiotics are prescribed for infections, and various medications can help with itching. Laxatives, such as lactulose, decrease the risk of constipation. Carvedilol provides a survival benefit for people with cirrhosis and portal hypertension. Cirrhosis caused by alcohol use disorder is treated by abstaining from alcohol. Treatment for hepatitis-related cirrhosis involves medications used to treat the different types of hepatitis, such as interferon for viral hepatitis and corticosteroids for autoimmune hepatitis. Cirrhosis caused by Wilson's disease is treated by removing the copper which builds up in organs, using chelation therapy such as penicillamine. When the cause is iron overload, iron is removed using a chelation agent such as deferoxamine or by bloodletting. As of 2021, studies have investigated drugs to prevent cirrhosis caused by non-alcoholic fatty liver disease (NAFLD or NASH). The drug semaglutide was shown to provide greater NASH resolution versus placebo, although no improvement in fibrosis was observed. A combination of cilofexor/firsocostat was studied in people with bridging fibrosis and cirrhosis and was observed to improve NASH activity with a potential antifibrotic effect. Lanifibranor has also been shown to prevent worsening fibrosis.
Preventing further liver damage Regardless of the underlying cause of cirrhosis, consumption of alcohol and other potentially damaging substances is discouraged. There is no evidence that supports the avoidance or dose reduction of paracetamol in people with compensated cirrhosis; it is thus considered a safe analgesic for such individuals. People who are at risk of being exposed to hepatitis A and hepatitis B should consider vaccines against these diseases. Treating the cause of cirrhosis prevents further damage; for example, giving oral antivirals such as entecavir and tenofovir where cirrhosis is due to hepatitis B prevents progression of cirrhosis. Similarly, control of weight and diabetes prevents deterioration in cirrhosis due to non-alcoholic fatty liver disease. People with cirrhosis or liver damage are often advised to avoid drugs that could further harm the liver, including several antidepressants, certain antibiotics, and NSAIDs (like ibuprofen). These agents can be hepatotoxic as they are metabolized by the liver. If a medication that stresses the liver is still recommended by a doctor, the dosage can be adjusted to aim for minimal stress on the liver.
Lifestyle According to a 2018 systematic review based on studies that implemented 8- to 14-week-long exercise programs, there is currently insufficient scientific evidence regarding either the beneficial or harmful effects of physical exercise in people with cirrhosis on all-cause mortality, morbidity (including both serious and non-serious adverse events), health-related quality of life, exercise capacity, and anthropometric measures. These conclusions were based on low to very low quality research, which underscores the need for further, higher-quality research, especially on clinical outcomes.
Transplantation If complications cannot be controlled or when the liver ceases functioning, liver transplantation is necessary. Survival after liver transplantation improved during the 1990s, and the five-year survival rate is now around 80%. The survival rate depends largely on the severity of disease and other medical risk factors in the recipient. In the United States, the MELD score is used to prioritize patients for transplantation. Transplantation necessitates the use of immune suppressants (ciclosporin or tacrolimus).
Decompensated cirrhosis Manifestations of decompensation in cirrhosis include gastrointestinal bleeding, hepatic encephalopathy, jaundice or ascites. In patients with previously stable cirrhosis, decompensation may occur due to various causes, such as constipation, infection (of any source), increased alcohol intake, medication, bleeding from esophageal varices or dehydration. It may take the form of any of the complications of cirrhosis listed below.
People with decompensated cirrhosis generally require admission to a hospital, with close monitoring of the fluid balance, mental status, and emphasis on adequate nutrition and medical treatment – often with diuretics, antibiotics, laxatives or enemas, thiamine and occasionally steroids, acetylcysteine and pentoxifylline. Administration of saline is avoided, as it would add to the already high total body sodium content that typically occurs in cirrhosis. Life expectancy without liver transplant is low, at most three years.
Palliative care Palliative care is specialized medical care that focuses on providing patients with relief from the symptoms, pain, and stress of a serious illness, such as cirrhosis. The goal of palliative care is to improve quality of life for both the patient and the patient's family, and it is appropriate at any stage and for any type of cirrhosis. Especially in the later stages, people with cirrhosis experience significant symptoms such as abdominal swelling, itching, leg edema, and chronic abdominal pain which would be amenable to treatment through palliative care. Because the disease is not curable without a transplant, palliative care can also help with discussions regarding the person's wishes concerning health care power of attorney, do-not-resuscitate decisions and life support, and potentially hospice. Despite proven benefit, people with cirrhosis are rarely referred to palliative care.
Immunity Cirrhosis is known to cause immune dysfunction in numerous ways. It impedes the immune system from working normally.
Bleeding and blood clot risk Cirrhosis can increase the risk of bleeding. The liver produces various proteins in the coagulation cascade (coagulation factors II, V, VII, IX, X, and XI). When damaged, the liver is impaired in its production of these proteins, which ultimately increases bleeding risk as clotting factors are diminished. Clotting function is estimated by lab values, mainly platelet count, prothrombin time (PT), and international normalized ratio (INR).
The American Gastroenterological Association (AGA) provided recommendations in 2021 with regard to coagulopathy management of cirrhotic patients in certain scenarios.
The AGA does not recommend extensive pre-procedural testing, including repeated measurements of PT/INR or platelet count, before patients with stable cirrhosis undergo common gastrointestinal procedures. Nor do they suggest the routine use of blood products, such as platelets, for bleeding prevention. Cirrhosis is stable when there are no changes in baseline abnormalities of coagulation lab values.
For patients with stable cirrhosis and low platelet count undergoing common low-risk procedures, the AGA does not recommend the routine use of thrombopoietin receptor agonists for bleeding prevention.
In hospitalized patients who meet standard guidelines for clot prevention, the AGA suggests standard prevention.
The AGA does not recommend routine screening for portal vein thrombosis. If there is a portal vein thrombosis, the AGA suggests treatment by anticoagulation.
In the case of cirrhosis with atrial fibrillation, the AGA recommends using anticoagulation over no anticoagulation.
Complications:
Ascites Salt restriction is often necessary, as cirrhosis leads to accumulation of salt (sodium retention). Diuretics may be necessary to control ascites. Diuretic options for inpatient treatment include aldosterone antagonists (spironolactone) and loop diuretics. Aldosterone antagonists are preferred for people who can take oral medications and are not in need of an urgent volume reduction; loop diuretics can be added as additional therapy. Where salt restriction and the use of diuretics are ineffective, paracentesis may be the preferred option. This procedure requires the insertion of a plastic tube into the peritoneal cavity. Human serum albumin solution is usually given to prevent complications from the rapid volume reduction. In addition to being more rapid than diuretics, paracentesis of 4–5 liters is more successful than diuretic therapy.
Esophageal and gastric variceal bleeding For portal hypertension, nonselective beta blockers such as propranolol or nadolol are commonly used to lower blood pressure over the portal system. In severe complications from portal hypertension, transjugular intrahepatic portosystemic shunting (TIPS) is occasionally indicated to relieve pressure on the portal vein. As this shunting can worsen hepatic encephalopathy, it is reserved for those patients at low risk of encephalopathy. TIPS is generally regarded only as a bridge to liver transplantation or as a palliative measure. Balloon-occluded retrograde transvenous obliteration can be used to treat gastric variceal bleeding. Gastroscopy (endoscopic examination of the esophagus, stomach, and duodenum) is performed in cases of established cirrhosis. If esophageal varices are found, prophylactic local therapy may be applied such as sclerotherapy or banding, and beta blockers may be used.
Hepatic encephalopathy Hepatic encephalopathy is a potential complication of cirrhosis. It may lead to functional neurological impairment ranging from mild confusion to coma. The goal of treatment is to reduce ammonia levels, which can be achieved by administering lactulose or lactitol. Hydration and nutritional support are also provided, and protein intake is encouraged. The underlying cause may also need to be identified and treated; causes include alcohol use, excess protein, gastrointestinal bleeding, infection, constipation, and vomiting/diarrhea. Drugs like benzodiazepines, diuretics, or narcotics can also precipitate hepatic encephalopathy. A low-protein diet is recommended with gastrointestinal bleeding. Rifaximin is administered if the mental state does not improve in 48 hours; antibiotic treatment may need to be continued for at least three months. The grading or severity of hepatic encephalopathy is determined by mental status.
Hepatorenal syndrome Hepatorenal syndrome is a serious complication of end-stage cirrhosis when kidney damage is also involved.
Spontaneous bacterial peritonitis People with ascites due to cirrhosis are at risk of spontaneous bacterial peritonitis.
Portal hypertensive gastropathy Portal hypertensive gastropathy refers to changes in the mucosa of the stomach in people with portal hypertension, and is associated with cirrhosis severity.
Infection Cirrhosis can cause immune system dysfunction, leading to infection. Signs and symptoms of infection may be nonspecific and are more difficult to recognize (for example, worsening encephalopathy but no fever). Moreover, infections in cirrhosis are major triggers for other complications (ascites, variceal bleeding, hepatic encephalopathy, organ failures, death).
Hepatocellular carcinoma Hepatocellular carcinoma is the most common primary liver cancer, and the most common cause of death in people with cirrhosis. Screening using an MRI scan can detect this cancer and is often carried out; early detection has been shown to improve outcomes.
Epidemiology:
Each year, approximately one million deaths are due to complications of cirrhosis, making cirrhosis the 11th most common cause of death globally. Cirrhosis and chronic liver disease were the tenth leading cause of death for men and the twelfth for women in the United States in 2001, killing about 27,000 people each year. The cause of cirrhosis can vary; alcohol and non-alcoholic fatty liver disease are main causes in western and industrialized countries, whereas viral hepatitis is the predominant cause in low and middle-income countries. Cirrhosis is more common in men than in women. The cost of cirrhosis in terms of human suffering, hospital costs, and lost productivity is high.
Epidemiology:
Globally, age-standardized disability-adjusted life year (DALY) rates have decreased from 1990 to 2017, with the values going from 656.4 years per 100,000 people to 510.7 years per 100,000 people. In males DALY rates have decreased from 903.1 years per 100,000 population in 1990, to 719.3 years per 100,000 population in 2017; in females the DALY rates have decreased from 415.5 years per 100,000 population in 1990, to 307.6 years per 100,000 population in 2017. However, globally the total number of DALYs have increased by 10.9 million from 1990 to 2017, reaching the value of 41.4 million DALYs.
Etymology:
The word "cirrhosis" is a neologism derived from Greek: κίρρωσις; kirrhos κιρρός, meaning "yellowish, tawny" (the orange-yellow colour of the diseased liver) and the suffix -osis, i.e. "condition" in medical terminology. While the clinical entity was known before, René Laennec gave it this name in an 1819 paper.
**Girth (functional analysis)**
Girth (functional analysis):
In functional analysis, the girth of a Banach space is the infimum of lengths of centrally symmetric simple closed curves in the unit sphere of the space. Equivalently, it is twice the infimum of distances between opposite points of the sphere, as measured within the sphere.

Every finite-dimensional Banach space has a pair of opposite points on the unit sphere that achieves the minimum distance, and a centrally symmetric simple closed curve that achieves the minimum length. However, such a curve may not always exist in infinite-dimensional spaces.

The girth is always at least four, because the shortest path on the unit sphere between two opposite points cannot be shorter than the length-two line segment connecting them through the origin of the space. A Banach space for which it is exactly four is said to be flat. There exist flat Banach spaces of infinite dimension in which the girth is achieved by a minimum-length curve; an example is the space C[0,1] of continuous functions from the unit interval to the real numbers, with the sup norm. The unit sphere of such a space has the counterintuitive property that certain pairs of opposite points have the same distance within the sphere that they do in the whole space.

The girth is a continuous function on the Banach–Mazur compactum, a space whose points correspond to the normed vector spaces of a given dimension. The girth of the dual space of a normed vector space is always equal to the girth of the original space.
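The lower bound of four can be stated compactly; the following is my own restatement of the argument above in LaTeX notation, writing S_X for the unit sphere and dist_{S_X} for distance measured along the sphere:

```latex
% Girth as twice the infimal distance between opposite points:
%   g(X) = 2 \inf_{x \in S_X} \operatorname{dist}_{S_X}(x, -x).
% Any path in S_X from x to -x is at least as long as the
% straight segment through the origin, so
\operatorname{dist}_{S_X}(x,-x) \;\ge\; \|x-(-x)\| \;=\; 2\|x\| \;=\; 2,
% and therefore
g(X) \;=\; 2\inf_{x\in S_X}\operatorname{dist}_{S_X}(x,-x) \;\ge\; 4 .
```

Flat spaces are exactly those where this chain of inequalities is tight.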
**Square degree**
Square degree:
A square degree (deg²) is a non-SI unit measure of solid angle. Other denotations include sq. deg. and (°)². Just as degrees are used to measure parts of a circle, square degrees are used to measure parts of a sphere. Analogous to one degree being equal to π/180 radians, a square degree is equal to (π/180)² steradians (sr), or about 1/3283 sr or about 3.046×10⁻⁴ sr.
Square degree:
The whole sphere has a solid angle of 4π sr, which is approximately 41253 deg²:

4π sr × (180 deg / π rad)² = 129 600/π deg² ≈ 41 252.96 deg²
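The conversion above is a two-line computation; a minimal sketch:

```python
import math

# 1 deg = pi/180 rad, so 1 square degree = (pi/180)**2 steradians,
# and conversely there are (180/pi)**2 square degrees per steradian.
SQDEG_PER_SR = (180.0 / math.pi) ** 2

whole_sphere_sr = 4.0 * math.pi           # solid angle of the full sphere
whole_sphere_sqdeg = whole_sphere_sr * SQDEG_PER_SR  # = 129600/pi

print(round(whole_sphere_sqdeg, 2))       # 41252.96
```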
Examples:
The full moon covers only about 0.2 deg² of the sky when viewed from the surface of the Earth. The Moon is only half a degree across (i.e. a circular diameter of roughly 0.5°), so the Moon's disk covers a circular area of π(0.5°/2)², or about 0.2 square degrees. The Moon varies from 0.188 to 0.244 deg² depending on its distance from the Earth.
Examples:
Viewed from Earth, the Sun is roughly half a degree across (the same as the full moon) and covers only 0.2 deg² as well.
It would take about 210,100 full moons (or Suns) to cover the entire celestial sphere.
Conversely, an average full moon (or the Sun) covers a 2/210,100 fraction (0.00000952381), less than a thousandth of a percent, of the celestial hemisphere (the above-the-horizon sky).
Assuming the Earth to be a sphere with a surface area of 510 million km², the area of Northern Ireland (14130 km²) represents a solid angle of 1.14 deg², Connecticut (14357 km²) represents a solid angle of 1.16 deg², and Equatorial Guinea (28050 km²) represents a solid angle of 2 deg².
The largest constellation, Hydra, covers a solid angle of 1303 deg², whereas the smallest, Crux, covers only 68 deg².
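The examples above reduce to the same two formulas: a small disk of angular diameter d covers roughly π(d/2)² square degrees, and a patch of a sphere's surface subtends a solid angle proportional to its share of the total surface area. A quick check of the Moon and Northern Ireland figures:

```python
import math

SQDEG_PER_SR = (180.0 / math.pi) ** 2
WHOLE_SPHERE_SQDEG = 4.0 * math.pi * SQDEG_PER_SR  # ~41252.96 deg^2

# Disk of angular diameter 0.5 degrees (full moon / Sun):
moon_sqdeg = math.pi * (0.5 / 2) ** 2              # ~0.196 deg^2

# Surface patch: solid angle = (patch area / total area) * whole sphere.
EARTH_AREA_KM2 = 510e6
ni_sqdeg = 14130 / EARTH_AREA_KM2 * WHOLE_SPHERE_SQDEG  # Northern Ireland

print(round(moon_sqdeg, 3))  # 0.196
print(round(ni_sqdeg, 2))    # 1.14
```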
**Closed-form expression**
Closed-form expression:
In mathematics, an expression is in closed form if it is formed with constants, variables and a finite set of basic functions connected by arithmetic operations (+, −, ×, ÷, and integer powers) and function composition. Commonly, the allowed functions are nth root, exponential function, logarithm, and trigonometric functions. However, the set of basic functions depends on the context.
Closed-form expression:
The closed-form problem arises when new ways are introduced for specifying mathematical objects, such as limits, series and integrals: given an object specified with such tools, a natural problem is to find, if possible, a closed-form expression of this object, that is, an expression of this object in terms of previous ways of specifying it.
Example: roots of polynomials:
The quadratic formula

x = (−b ± √(b² − 4ac)) / (2a)

is a closed form of the solutions of the general quadratic equation ax² + bx + c = 0 (with a ≠ 0).
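Being a closed form, the quadratic formula can be evaluated directly in a finite number of operations; a minimal sketch (the example polynomial is my own illustration):

```python
import cmath

def quadratic_roots(a, b, c):
    """Closed-form roots of a*x**2 + b*x + c = 0 (a != 0),
    via the quadratic formula; cmath handles complex roots too."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

r1, r2 = quadratic_roots(1, -3, 2)  # x**2 - 3x + 2 = (x - 1)(x - 2)
print(r1, r2)  # roots 2 and 1
```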
Example: roots of polynomials:
More generally, in the context of polynomial equations, a closed form of a solution is a solution in radicals; that is, a closed-form expression in which the allowed basic functions are reduced to nth roots only. In fact, field theory can be used to show that if a solution of a polynomial equation has a closed form involving exponentials, logarithms or trigonometric functions, then it also has a closed form that does not involve these functions. There are expressions in radicals for all solutions of cubic equations (degree 3) and quartic equations (degree 4). However, they are rarely written explicitly because they are too complicated to be useful.
Example: roots of polynomials:
In higher degrees, the Abel–Ruffini theorem states that there are equations whose solutions cannot be expressed in radicals, and which thus have no closed forms. A standard example is the quintic equation x⁵ − x − 1 = 0.
Galois theory provides an algorithmic method for deciding whether a particular polynomial equation can be solved in radicals.
Symbolic integration:
Symbolic integration consists essentially of the search of closed forms for antiderivatives of functions that are specified by closed-form expressions. In this context, the basic functions used for defining closed forms are commonly logarithms, exponential function and polynomial roots. Functions that have a closed form for these basic functions are called elementary functions and include trigonometric functions, inverse trigonometric functions, hyperbolic functions, and inverse hyperbolic functions.
Symbolic integration:
The fundamental problem of symbolic integration is thus, given an elementary function specified by a closed-form expression, to decide whether its antiderivative is an elementary function, and, if it is, to find a closed-form expression for this antiderivative.
Symbolic integration:
For rational functions, that is, for fractions of two polynomial functions, antiderivatives are not always rational fractions, but are always elementary functions that may involve logarithms and polynomial roots. This is usually proved with partial fraction decomposition. The need for logarithms and polynomial roots is illustrated by the formula

∫ f(x)/g(x) dx = Σ_{α: g(α)=0} (f(α)/g′(α)) ln(x − α),

which is valid if f and g are coprime polynomials such that g is square-free and deg f < deg g.
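A minimal worked instance of this formula (my own illustration, not taken from the source): take f(x) = 1 and g(x) = x² − 1, whose roots are α = ±1, with g′(x) = 2x:

```latex
\int \frac{dx}{x^2-1}
  \;=\; \frac{1}{g'(1)}\,\ln(x-1) \;+\; \frac{1}{g'(-1)}\,\ln(x+1)
  \;=\; \tfrac{1}{2}\ln(x-1) \;-\; \tfrac{1}{2}\ln(x+1).
```

Here f and g are coprime, g is square-free, and deg f = 0 < 2 = deg g, so the hypotheses of the formula are satisfied.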
Alternative definitions:
Changing the definition of "well known" to include additional functions can change the set of equations with closed-form solutions. Many cumulative distribution functions cannot be expressed in closed form, unless one considers special functions such as the error function or gamma function to be well known. It is possible to solve the quintic equation if general hypergeometric functions are included, although the solution is far too complicated algebraically to be useful. For many practical computer applications, it is entirely reasonable to assume that the gamma function and other special functions are well known since numerical implementations are widely available.
Analytic expression:
An analytic expression (also known as expression in analytic form or analytic formula) is a mathematical expression constructed using well-known operations that lend themselves readily to calculation. Similar to closed-form expressions, the set of well-known functions allowed can vary according to context but always includes the basic arithmetic operations (addition, subtraction, multiplication, and division), exponentiation to a real exponent (which includes extraction of the nth root), logarithms, and trigonometric functions.
Analytic expression:
However, the class of expressions considered to be analytic expressions tends to be wider than that for closed-form expressions. In particular, special functions such as the Bessel functions and the gamma function are usually allowed, and often so are infinite series and continued fractions. On the other hand, limits in general, and integrals in particular, are typically excluded.If an analytic expression involves only the algebraic operations (addition, subtraction, multiplication, division, and exponentiation to a rational exponent) and rational constants then it is more specifically referred to as an algebraic expression.
Comparison of different classes of expressions:
Closed-form expressions are an important sub-class of analytic expressions, which contain a finite number of applications of well-known functions. Unlike the broader analytic expressions, closed-form expressions do not include infinite series or continued fractions; neither class includes integrals or limits. Indeed, by the Stone–Weierstrass theorem, any continuous function on the unit interval can be expressed as a limit of polynomials, so any class of functions containing the polynomials and closed under limits will necessarily include all continuous functions.
Comparison of different classes of expressions:
Similarly, an equation or system of equations is said to have a closed-form solution if, and only if, at least one solution can be expressed as a closed-form expression; and it is said to have an analytic solution if and only if at least one solution can be expressed as an analytic expression. There is a subtle distinction between a "closed-form function" and a "closed-form number" in the discussion of a "closed-form solution", discussed in (Chow 1999) and below. A closed-form or analytic solution is sometimes referred to as an explicit solution.
Dealing with non-closed-form expressions:
Transformation into closed-form expressions The expression

f(x) = Σ_{i=0}^{∞} x/2^i

is not in closed form because the summation entails an infinite number of elementary operations. However, by summing a geometric series this expression can be expressed in the closed form

f(x) = 2x.

Differential Galois theory The integral of a closed-form expression may or may not itself be expressible as a closed-form expression. This study is referred to as differential Galois theory, by analogy with algebraic Galois theory.
Dealing with non-closed-form expressions:
The basic theorem of differential Galois theory is due to Joseph Liouville in the 1830s and 1840s and hence referred to as Liouville's theorem.
A standard example of an elementary function whose antiderivative does not have a closed-form expression is e^(−x²), whose one antiderivative is (up to a multiplicative constant) the error function:

erf(x) = (2/√π) ∫₀ˣ e^(−t²) dt.

Mathematical modelling and computer simulation Equations or systems too complex for closed-form or analytic solutions can often be analysed by mathematical modelling and computer simulation.
Closed-form number:
Three subfields of the complex numbers C have been suggested as encoding the notion of a "closed-form number"; in increasing order of generality, these are the Liouvillian numbers (not to be confused with Liouville numbers in the sense of rational approximation), EL numbers and elementary numbers. The Liouvillian numbers, denoted L, form the smallest algebraically closed subfield of C closed under exponentiation and logarithm (formally, the intersection of all such subfields); that is, numbers which involve explicit exponentiation and logarithms, but allow explicit and implicit polynomials (roots of polynomials); this is defined in (Ritt 1948, p. 60). L was originally referred to as elementary numbers, but this term is now used more broadly to refer to numbers defined explicitly or implicitly in terms of algebraic operations, exponentials, and logarithms. A narrower definition, proposed in (Chow 1999, pp. 441–442), denoted E and referred to as EL numbers, is the smallest subfield of C closed under exponentiation and logarithm; this need not be algebraically closed, and corresponds to explicit algebraic, exponential, and logarithmic operations. "EL" stands both for "exponential–logarithmic" and as an abbreviation for "elementary".
Closed-form number:
Whether a number is a closed-form number is related to whether a number is transcendental. Formally, Liouvillian numbers and elementary numbers contain the algebraic numbers, and they include some but not all transcendental numbers. In contrast, EL numbers do not contain all algebraic numbers, but do include some transcendental numbers. Closed-form numbers can be studied via transcendental number theory, in which a major result is the Gelfond–Schneider theorem, and a major open question is Schanuel's conjecture.
Numerical computations:
For purposes of numeric computation, being in closed form is not in general necessary, as many limits and integrals can be efficiently computed numerically. Some equations have no closed-form solution, such as those that represent the three-body problem or the Hodgkin–Huxley model; the future states of these systems must therefore be computed numerically.
Conversion from numerical forms:
There is software that attempts to find closed-form expressions for numerical values, including RIES, identify in Maple and SymPy, Plouffe's Inverter, and the Inverse Symbolic Calculator.
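These tools search a space of candidate symbolic expressions for one whose numerical value matches a given decimal. A toy sketch of the idea (my own illustration, not the algorithm of any tool named above), restricted to rational candidates found via continued fractions:

```python
from fractions import Fraction

def identify_rational(x, max_den=1000, tol=1e-9):
    """Toy inverse-symbolic search: return a simple fraction p/q
    matching x to within tol, or None if no candidate is close enough."""
    guess = Fraction(x).limit_denominator(max_den)
    if abs(float(guess) - x) < tol:
        return guess
    return None

print(identify_rational(0.3333333333333333))  # Fraction(1, 3)
# identify_rational(0.7853981633974483) -- pi/4; no simple fraction
# with denominator <= 1000 matches that closely.
```

The real tools extend the candidate space with roots, logarithms, known constants such as π and e, and combinations thereof.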
**MIBTel**
MIBTel:
The MIBTel was a stock market index for the Borsa Italiana, the main stock exchange of Italy. It was replaced in 2009 by the FTSE Italia All-Share.
**Bromopride**
Bromopride:
Bromopride (INN) is a dopamine antagonist with prokinetic properties widely used as an antiemetic, closely related to metoclopramide. It is not available in the United States.
Bromopride appears to be safe and effective for use in pregnancy.
Indications:
Bromopride is indicated in the treatment of nausea and vomiting, including postoperative nausea and vomiting (PONV); gastroesophageal reflux disease (GERD/GORD); and as preparation for endoscopy and radiographic studies of the gastrointestinal tract. The manufacturer also claims it is valuable in, among other indications, hiccups and gastrointestinal adverse effects of radiation therapy.
Adverse effects:
Bromopride is generally well tolerated; the most common adverse effects of its use are somnolence and fatigue. Bromopride may rarely cause extrapyramidal symptoms and, as with metoclopramide, may increase prolactin levels.
Chemistry:
Bromopride is a substituted benzamide, closely related to metoclopramide. It is identical to metoclopramide except for the presence of a bromine atom where metoclopramide has a chlorine substituent.
Availability:
Bromopride is not available in the United States or the United Kingdom. It is marketed in Brazil by Sanofi-Synthélabo under the trade name Digesan, by LIBBS under the name Plamet, and as a generic drug.
**Fieldstone**
Fieldstone:
Fieldstone is a naturally occurring type of stone, which lies at or near the surface of the Earth. Fieldstone is a nuisance for farmers seeking to expand their land under cultivation, but at some point it began to be used as a construction material. Strictly speaking, it is stone collected from the surface of fields where it occurs naturally. Collections of fieldstones which have been removed from arable land or pasture to allow for more effective agriculture are called clearance cairns.
Fieldstone:
In practice, fieldstone is any architectural stone used in its natural shape and can be applied to stones recovered from the topsoil or subsoil. Although fieldstone is generally used to describe such material when used for exterior walls, it has come to include its use in other ways including garden features and interiors. It is sometimes cut or split for use in architecture.
Glacial deposition:
Fieldstone is common in soils throughout temperate latitudes due to glacial deposition. The fieldstones left by glaciation are known as glacial erratics. In Canada and the northern United States, the advance of the Laurentide Ice Sheet pulverized bedrock, and its retreat deposited several dozen meters of unsorted till in previously glaciated areas as far south as New England and the Upper Midwest.
Glacial deposition:
Although a coarse layer of glacial ablation would settle on top of the deeper lodgment till, it was these more deeply set stones that would prove a persistent challenge for settled human agriculture, because they would be frost-churned into surface soils during harsh winters. Large collections of fieldstone can be found at the margins of the last glacial period, also known as the Wisconsin Glaciation; these margins are known as terminal moraines, and large deposits are found at the end of these glacial advances. In New England in the United States, fieldstone buildings and walls abound.
Fieldstones and human settlement:
Settled agriculture requires relatively fine and uniform soils for intensive use, and large rocks pose additional risks for agricultural machinery, which they can damage if not removed. Because the stones are widely disseminated, removing fieldstone is a widespread and costly activity in early agricultural settlement. To prepare fields for cultivation, farmers need to remove these stones, which requires significant manual labor. Until the 19th century, fieldstone was removed exclusively by hand, often with whole families participating in this task. Depending on the harshness of winters, this task needed to be repeated whenever frost levels churned new stones into soil surfaces. Thus, land with many fieldstones was and is considered marginal and is assessed for tax purposes well below land that is considered stone-free.
Fieldstones and human settlement:
In mechanized agriculture, fieldstone is usually removed by a tractor attachment called a rock picker. A chain-driven wheel rotates a graded scoop, picking surface rocks from the soil, and shakes off excess soil. A hydraulic lift then tilts and empties the rock bucket, usually along the perimeter of the farm. Washed and split, field rock is considered an attractive landscape and building material, and can be expensive at building supply stores.
Fieldstones and human settlement:
In New England Fieldstone became abundant throughout New England and Eastern Canada as European settlers began to clear-cut forests for timber, wood fuel, and agricultural expansion. Although settled agriculture and timber extraction began as early as 1620 in coastal areas, large-scale clear-cutting began in the late 18th century with increased immigration and inland settlement. Fuel and material demands led to the near-complete deforestation of the region. Cleared soils were subject to deeper freezing, which caused frost-churned stones to rise to the soil surfaces. When the virgin land was tilled, the fields were littered with rocks. Abundant, and not as portable or versatile as other fencing materials, these stones were moved to the edges of fields and stacked into stone walls, for which New England is now well known.
Fieldstones and human settlement:
Each spring, the stone walls were extended when the fields were plowed, as more stones were brought to the surface following the winter freeze and the spring thaw. Most such walls were stacked between 1775 and 1825, but efforts to repair and extend them continued throughout the 19th century. According to an 1871 agricultural census, more than 380,000 kilometres (240,000 mi) of fieldstone walls were constructed throughout the region, representing 40 million days of human labor. As agricultural production moved westward, areas of New England have since reforested.
Fieldstones and human settlement:
On the High Plains Fieldstone occurs extensively on the High Plains. On or near the surface, fieldstones come in many colors, and are limited in size to about 4 feet in diameter, although larger rocks are sometimes recovered. Attractive and colorful, fieldstones are used occasionally as building materials; some of the more stately homes on the Prairies are constructed of fieldstone and are over a century old. However, fieldstone as a building material is very much underused.
**Iron(II) fumarate**
Iron(II) fumarate:
Iron(II) fumarate, also known as ferrous fumarate, is the iron(II) salt of fumaric acid, occurring as a reddish-orange powder, used to supplement iron intake. It has the chemical formula C4H2FeO4. Pure ferrous fumarate has an iron content of 32.87%; therefore one 300 mg tablet of ferrous fumarate contains 98.6 mg of iron (548% of the Daily Value, based on an 18 mg RDI).
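The stated iron content follows directly from standard atomic weights; a quick check (my own arithmetic, using IUPAC standard atomic weights in g/mol):

```python
# Mass fraction of iron in ferrous fumarate, C4H2FeO4.
C, H, Fe, O = 12.011, 1.008, 55.845, 15.999  # standard atomic weights

molar_mass = 4 * C + 2 * H + Fe + 4 * O        # ~169.90 g/mol
iron_fraction = Fe / molar_mass                # ~0.3287, i.e. 32.87%
iron_per_300mg_tablet = 300 * iron_fraction    # ~98.6 mg of elemental iron

print(round(iron_fraction * 100, 2))  # 32.87
print(round(iron_per_300mg_tablet, 1))  # 98.6
```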
Iron supplement:
Ferrous fumarate is often taken orally as an iron supplement to treat or prevent iron deficiency anaemia.
**Kin recognition**
Kin recognition:
Kin recognition, also called kin detection, is an organism's ability to distinguish between close genetic kin and non-kin. In evolutionary biology and psychology, such an ability is presumed to have evolved for inbreeding avoidance, though animals do not typically avoid inbreeding. An additional adaptive function sometimes posited for kin recognition is a role in kin selection. There is debate over this, since in strict theoretical terms kin recognition is not necessary for kin selection or the cooperation associated with it. Rather, social behaviour can emerge by kin selection in the demographic conditions of 'viscous populations', with organisms interacting in their natal context without active kin discrimination, since social participants by default typically share recent common origin. Since kin selection theory emerged, much research has been produced investigating the possible role of kin recognition mechanisms in mediating altruism. Taken as a whole, this research suggests that active powers of recognition play a negligible role in mediating social cooperation relative to less elaborate cue-based and context-based mechanisms, such as familiarity, imprinting and phenotype matching.
Kin recognition:
Because cue-based 'recognition' predominates in social mammals, outcomes are non-deterministic in relation to actual genetic kinship, instead outcomes simply reliably correlate with genetic kinship in an organism's typical conditions. A well-known human example of an inbreeding avoidance mechanism is the Westermarck effect, in which unrelated individuals who happen to spend their childhood in the same household find each other sexually unattractive. Similarly, due to the cue-based mechanisms that mediate social bonding and cooperation, unrelated individuals who grow up together in this way are also likely to demonstrate strong social and emotional ties, and enduring altruism.
Theoretical background:
The English evolutionary biologist W. D. Hamilton's theory of inclusive fitness, and the related theory of kin selection, were formalized in the 1960s and 1970s to explain the evolution of social behaviours. Hamilton's early papers, as well as giving a mathematical account of the selection pressure, discussed possible implications and behavioural manifestations. Hamilton considered potential roles of cue-based mechanisms mediating altruism versus 'positive powers' of kin discrimination: The selective advantage which makes behaviour conditional in the right sense on the discrimination of factors which correlate with the relationship of the individual concerned is therefore obvious. It may be, for instance, that in respect of a certain social action performed towards neighbours indiscriminately, an individual is only just breaking even in terms of inclusive fitness. If he could learn to recognise those of his neighbours who really were close relatives and could devote his beneficial actions to them alone an advantage to inclusive fitness would at once appear. Thus, a mutation causing such discriminatory behaviour itself benefits inclusive fitness and would be selected. In fact, the individual may not need to perform any discrimination so sophisticated as we suggest here; a difference in the generosity of his behaviour according to whether the situations evoking it were encountered near to, or far from, his own home might occasion an advantage of a similar kind." (1996 [1964], 51) These two possibilities, altruism mediated via 'passive situation' or via 'sophisticated discrimination', stimulated a generation of researchers to look for evidence of any 'sophisticated' kin discrimination. 
However, Hamilton later (1987) developed his thinking to consider that "an innate kin recognition adaptation" was unlikely to play a role in mediating altruistic behaviours: But once again, we do not expect anything describable as an innate kin recognition adaptation, used for social behaviour other than mating, for the reasons already given in the hypothetical case of the trees.(Hamilton 1987, 425) The implication that the inclusive fitness criterion can be met by mediating mechanisms of cooperative behaviour that are context and location-based has been clarified by recent work by West et al.: In his original papers on inclusive fitness theory, Hamilton pointed out a sufficiently high relatedness to favour altruistic behaviours could accrue in two ways—kin discrimination or limited dispersal (Hamilton, 1964, 1971, 1972, 1975). There is a huge theoretical literature on the possible role of limited dispersal reviewed by Platt & Bever (2009) and West et al. (2002a), as well as experimental evolution tests of these models (Diggle et al., 2007; Griffin et al., 2004; Kümmerli et al., 2009 ). However, despite this, it is still sometimes claimed that kin selection requires kin discrimination (Oates & Wilson, 2001; Silk, 2002 ). Furthermore, a large number of authors appear to have implicitly or explicitly assumed that kin discrimination is the only mechanism by which altruistic behaviours can be directed towards relatives... [T]here is a huge industry of papers reinventing limited dispersal as an explanation for cooperation. The mistakes in these areas seem to stem from the incorrect assumption that kin selection or indirect fitness benefits require kin discrimination (misconception 5), despite the fact that Hamilton pointed out the potential role of limited dispersal in his earliest papers on inclusive fitness theory (Hamilton, 1964; Hamilton, 1971; Hamilton, 1972; Hamilton, 1975). (West et al. 2010, p. 
243 and supplement) For a recent review of the debates around kin recognition and their role in the wider debates about how to interpret inclusive fitness theory, including its compatibility with ethnographic data on human kinship, see Holland (2012).
Criticism:
Leading inclusive fitness theorists such as Alan Grafen have argued that the whole research program around kin recognition is somewhat misguided: "Do animals really recognise kin in a way that is different from the way they recognise mates, neighbours, and other organisms and objects? Certainly animals use recognition systems to recognise their offspring, their siblings and their parents. But to the extent that they do so in the same way that they recognise their mates and their neighbours, I feel it is unhelpful to say they have a kin recognition system." (Grafen 1991, 1095) Others have cast similar doubts over the enterprise: "[T]he fact that animals benefit from engaging in spatially mediated behaviors is not evidence that these animals can recognize their kin, nor does it support the conclusion that spatially based differential behaviors represent a kin recognition mechanism (see also discussions by Blaustein, 1983; Waldman, 1987; Halpin 1991). In other words, from an evolutionary perspective it may well be advantageous for kin to aggregate and for individuals to behave preferentially towards nearby kin, whether or not this behaviour is the result of kin recognition per se" (Tang-Martinez 2001, 25)
Experimental evidence:
Kin recognition is a behavioral adaptation noted in many species, but proximate-level mechanisms are not well documented. Recent studies have shown that kin recognition can result from a multitude of sensory inputs. Jill Mateo notes that there are three components prominent in kin recognition. First, "production of unique phenotypic cues or labels". Second, "perception of these labels and the degree of correspondence of these labels with a 'recognition template'", and finally the recognition of the phenotypes should lead to "action taken by the animal as a function of the perceived similarity between its template and an encountered phenotype". The three components allow for several possible mechanisms of kin recognition. Sensory information gathered from visual, olfactory and auditory stimuli is the most prevalent. Belding's ground squirrels produce odors that are more similar among kin than among non-kin. Mateo notes that the squirrels spent longer investigating non-kin scents, suggesting recognition of kin odor. It is also noted that Belding's ground squirrels produce at least two scents, arising from dorsal and oral secretions, giving two opportunities for kin recognition. In addition, the Black Rock Skink is also able to use olfactory stimuli as a mechanism of kin recognition. Egernia saxatilis have been found to discriminate kin from non-kin based on scent. Egernia striolata also use some form of scent, most likely through skin secretions. However, Black Rock Skinks discriminate based on familiarity rather than genotypic similarity. Juvenile E. saxatilis can recognize the difference between the scent of adults from their own family group and unrelated adults. Black Rock Skinks recognize their family groups based on prior association and not on how genetically related the other lizards are to themselves. Auditory distinctions have been noted among avian species. Long-tailed tits (Aegithalos caudatus) are capable of discriminating kin and non-kin based on contact calls.
Distinguishing calls are often learned from adults during the nestling period. Studies suggest that the bald-faced hornet, Dolichovespula maculata, can recognize nest mates by their cuticular hydrocarbon profile, which produces a distinct smell. Kin recognition in some species may also be mediated by immunogenetic similarity of the major histocompatibility complex (MHC). For a discussion of the interaction of these social and biological kin recognition factors, see Lieberman, Tooby, and Cosmides (2007). Some have suggested that, as applied to humans, this nature-nurture interactionist perspective allows a synthesis between theories and evidence of social bonding and cooperation across the fields of evolutionary biology, psychology (attachment theory) and cultural anthropology (nurture kinship).
In plants:
Kin recognition is an adaptive behavior observed in living beings to prevent inbreeding and to increase the fitness of populations, individuals and genes. Kin recognition is the key to successful reciprocal altruism, a behavior that increases the reproductive success of both organisms involved. Reciprocal altruism as a product of kin recognition has been observed and studied in many animals and, more recently, plants. Due to the nature of plant reproduction and growth, plants are more likely than animals to live in close proximity to family members, and therefore stand to gain more from the ability to differentiate kin from strangers. In recent years, botanists have been conducting studies to determine which plant species can recognize kin, and to discover the responses of plants to neighboring kin. Murphy and Dudley (2009) show that Impatiens pallida has the ability to recognize individuals closely related to them and those not related to them. The physiological response to this recognition is increasingly interesting. I. pallida responds to kin by increasing branchiness and stem elongation, to prevent shading relatives, and responds to strangers by increasing leaf-to-root allocation, as a form of competition. Root allocation has been a very common trait shown through research in plants. Limited amounts of biomass can cause trade-offs among the construction of leaves, stems, and roots overall. But, in plants that recognize kin, the movement of resources in the plant has been shown to be affected by proximity to related individuals. It is well documented that roots can emit volatile compounds into the soil and that interactions also occur below-ground between plant roots and soil organisms. This research has mainly focused on organisms in the kingdom Animalia, however.
In plants:
Regarding this, root systems are known to exchange carbon and defense-related molecular signals via connected mycorrhizal networks. For instance, it has been demonstrated that tobacco plants can detect the volatile chemical ethylene in order to form a “shade-avoidance phenotype.” Barley plants were also shown to allocate biomass to their roots when exposed to chemical signals from members of the same species, suggesting that, if plants can recognize those signals for competition, kin recognition could plausibly occur via a similar chemical response.
In plants:
Similarly, Bhatt et al. (2010) show that Cakile edentula, the American sea rocket, has the ability to allocate more energy to root growth, and competition, in response to growing next to a stranger, and allocates less energy to root growth when planted next to a sibling. This reduces competition between siblings and increases the fitness of relatives growing next to each other, while still allowing competition between non-relative plants. Little is known about the mechanisms involved in kin recognition. They most likely vary between species as well as within species. A study by Bierdrzycki et al. (2010) shows that root secretions are necessary for Arabidopsis thaliana to recognize kin vs. strangers, but not necessary to recognize self vs. non-self roots. This study was performed using secretion inhibitors, which disabled the mechanism responsible for kin recognition in this species, and showed growth patterns similar to those of Bhatt et al. (2010) and Murphy and Dudley (2009) in control groups. The most interesting result of this study was that inhibiting root secretions did not reduce the ability of Arabidopsis to recognize their own roots, which implicates a separate mechanism for self/non-self recognition than that for kin/stranger recognition. While this mechanism in the roots responds to exudates and involves competition over resources like nitrogen and phosphorus, another mechanism has been recently proposed, which involves competition over light, in which kin recognition takes place in leaves. In their 2014 study, Crepy and Casal conducted multiple experiments on different accessions of A. thaliana. These experiments showed that Arabidopsis accessions have distinct R:FR and blue light signatures, and that these signatures can be detected by photoreceptors, which allows the plant to recognize its neighbor as a relative or non-relative.
Not much is known about the pathway that Arabidopsis uses to associate these light patterns with kin; however, researchers ascertained that the photoreceptors phyB, cry1, cry2, phot1, and phot2 are involved in the process by performing a series of experiments with knock-out mutants. Researchers also concluded that the auxin-synthesis gene TAA1 is involved in the process, downstream of the photoreceptors, by performing similar experiments using Sav3 knock-out mutants. This mechanism leads to altered leaf direction to prevent shading of related neighbors and to reduce competition for sunlight.
Inbreeding avoidance:
When mice inbreed with close relatives in their natural habitat, there is a significant detrimental effect on progeny survival. Since inbreeding can be detrimental, it tends to be avoided by many species. In the house mouse, the major urinary protein (MUP) gene cluster provides a highly polymorphic scent signal of genetic identity that appears to underlie kin recognition and inbreeding avoidance. Thus there are fewer matings between mice sharing MUP haplotypes than would be expected if there were random mating. Another mechanism for avoiding inbreeding is evident when a female house mouse mates with multiple males. In such a case, there appears to be egg-driven sperm selection against sperm from related males. In toads, male advertisement vocalizations may serve as cues by which females recognize their kin and thus avoid inbreeding. In dioecious plants, the stigma may receive pollen from several different potential donors. As multiple pollen tubes from the different donors grow through the stigma to reach the ovary, the receiving maternal plant may carry out pollen selection favoring pollen from less related donor plants. Thus, kin recognition at the level of the pollen tube apparently leads to post-pollination selection to avoid inbreeding depression. Also, seeds may be aborted selectively depending on donor–recipient relatedness. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cementicle**
Cementicle:
A cementicle is a small, spherical or ovoid calcified mass embedded within or attached to the cementum layer on the root surface of a tooth, or lying free within the periodontal ligament. They tend to occur in elderly individuals. There are three types: free cementicle – not attached to cementum; attached (sessile) cementicle – attached to the cementum surface (also termed exocementosis); embedded (interstitial) cementicle – with advancing age the cementum thickens, and the cementicle may become incorporated into the cementum layer. They may be visible on a radiograph (x-ray). They may appear singly or in groups, and are most commonly found at the tip of the root. Their size is variable, but generally they are small (about 0.2 mm – 0.3 mm in diameter). Cementicles are usually acellular, and may contain either fibrillar or afibrillar cementum, or a mixture of both. Cementicles are the result of dystrophic calcification, but the reason why this takes place is unclear. Cementicles are thought to form when calcification occurs around a nidus, a precipitating center. Around this nidus they slowly enlarge by further deposition of calcium salts. Examples of how cementicles are thought to form include: calcification due to degenerative changes in the epithelial cell rests of Malassez; calcification of thrombosed (blocked) capillaries in the periodontal ligament (i.e. a phlebolith); microtrauma to Sharpey's fibres, causing small spicules of cementum or alveolar bone to splinter into the periodontal membrane. Some do not consider these to be true cementicles. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bilateral sound**
Bilateral sound:
Bilateral sound is a type of bilateral stimulation used in eye movement desensitization and reprocessing (EMDR) in the same manner as eye movement. It has been reported to enhance visualization and hypnosis, but this has received little attention in research. Essentially, the sound moves back and forth across the stereo field at a steady rhythm. In this regard, bilateral sound has been used in commercial recordings, and has been applied manually with the use of an electronic metronome or other means. Controversies regarding this and other forms of bilateral stimulation are discussed in the article on EMDR.
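The steady left-to-right movement described above can be sketched in code. The following is a minimal illustration (not taken from any EMDR product): it synthesizes a sine tone and sweeps it across the stereo field at a fixed rhythm using equal-power panning. NumPy is assumed, and all parameter values (a 440 Hz tone, one pan cycle per second) are invented for the example.

```python
import numpy as np

def bilateral_tone(duration=4.0, tone_hz=440.0, pan_hz=1.0, sr=44100):
    """Generate a stereo signal whose tone sweeps between the left and
    right channels at a steady rhythm (pan_hz cycles per second)."""
    t = np.arange(int(duration * sr)) / sr
    tone = np.sin(2 * np.pi * tone_hz * t)             # the sound itself
    pan = 0.5 * (1 + np.sin(2 * np.pi * pan_hz * t))   # 0 = left, 1 = right
    # Equal-power panning keeps perceived loudness roughly constant
    # while the energy alternates between the two channels.
    left = tone * np.cos(pan * np.pi / 2)
    right = tone * np.sin(pan * np.pi / 2)
    return np.stack([left, right], axis=1)

stereo = bilateral_tone()
print(stereo.shape)  # (176400, 2): samples x channels
```

The resulting array can be written to a WAV file or fed to any audio playback library; the same panning envelope could equally be applied to a recorded sound instead of a synthesized tone.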
Other fields:
In other fields, the words bilateral and sound may be found together, but not necessarily referring to a pattern of sound as in the above use of the phrase.
In medicine, bilateral sound can refer to a type of sound coming from both sides of the body, as in crepitus from temporomandibular joint (TMJ) syndrome when it occurs on both sides of the TMJ. In this case, bilateral is an anatomical term.
In cochlear implant technology, bilateral sound refers to the provision of a different sound input for each ear to help the patient localize and interpret the sound.
In analog optical sound recording technology applications such as that used to provide audio for film, bilateral sound production makes use of two mirror-image tracks of light and dark patterns that are designed so as to avoid sound distortion where light may not adequately illuminate the full width of an area of the tracks. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**European BEST Engineering Competition**
European BEST Engineering Competition:
European BEST Engineering Competition (EBEC) is an annual engineering competition organised by the Board of European Students of Technology (BEST). EBEC spans 32 countries, with the mission of developing students by offering them the opportunity to challenge themselves in solving a theoretical or a practical problem. Students form teams of four and are called upon to solve an interdisciplinary Team Design or Case Study task, thus addressing students from all fields of engineering.
European BEST Engineering Competition:
Bringing together students, universities, companies, institutions and NGOs, EBEC aims at drawing out students’ full range of multidisciplinary knowledge and personal skills and applying this potential to solving real-life problems by working in teams.
European BEST Engineering Competition:
EBEC is part of BEST's core service of providing complementary education. During the competition, active and inquisitive students have the chance to apply the knowledge gained through university, to challenge themselves, to broaden their horizons, and to develop their creativity and communication skills. These being fundamental elements of the competition, EBEC contributes to the support and advancement of technological education, as well as to the promotion of collaboration in a multicultural environment.
European BEST Engineering Competition:
In 2019, EBEC Final Round will be held in Turin.
History:
The idea of competitions was introduced in BEST through the Canadian Engineering Competitions (CEC) organized by the Canadian Federation of Engineering Students (CFES). Members of BEST visited CEC in 2002 as guests, and the idea of organising such competitions was discussed that same year during the BEST General Assembly. This is where the story of BEST Competitions begins, with the first BEST European Engineering Competition (BEEC) organised in Eindhoven in 2003, the first National Round taking place in Portugal in 2006, and the very first EBEC Final organised in Ghent in 2009, with finalists selected among 2300 participants from 51 universities in 18 countries, marking the completion of the EBEC Pyramid.
Structure:
EBEC develops through three levels of competitions that form the EBEC Pyramid. With 84 Local Rounds, 15 National/Regional Rounds and 1 Final Round, EBEC is one of the largest engineering competitions organized by students for students in Europe, with nearly 7,000 students participating every year.
Local Rounds Local Rounds (LRs) are held within one University with an established Local BEST Group (LBG). The winning team of each category proceeds to the next level.
National/Regional Rounds National/Regional Rounds (NRRs) are held within one country or a multinational region and are organised by one LBG of that country/region. The teams that won the local rounds compete in the same category, claiming a position in the EBEC Final. Currently, there is a total of 15 National/Regional Rounds with more than 700 students participating.
Structure:
EBEC Final The Final Round of the European BEST Engineering Competition, the EBEC Final, is one of the most prominent BEST events, organised by one LBG. Leading students, representing more than 80 of the greatest European universities, gather for 10 days to work on multiple tasks in an international environment. During the event, contestants also have the chance to meet people from different cultural backgrounds, to get a taste of the hosting city, and to come into contact with the high-profile companies present at the Job Fair held on the last day of the event.
Categories:
Since the advent of BEST Competitions, different competition categories, such as Debate and Negotiation, were introduced until EBEC developed to its final form, consisting of the Case Study and Team Design categories.
Case Study Case Study (CS) is a theoretical, problem-solving challenge that requires the analysis, research, deliberation, testing and presentation of a solution for a current economical, legal or social problem. The solution must be provided within a limited amount of time and be supported by restricted resources, such as time and money.
Team Design Team Design (TD) is a practical, hands-on, project-based challenge that requires the design, creation and presentation of a prototype model that can successfully meet specific construction and operation criteria. The model must be created within a limited amount of time and through the use of low-cost and limited resource materials.
Competition Overview:
To date, nine editions have been organised, as summarized below.
Competition Overview:
EBEC 2009 EBEC Final was organised for the first time by Local BEST Group Ghent in August 2009. 80 students participated, finding their way to the final from among more than 2300 participants at 51 universities in 18 countries. This event was supported by UNEP, which provided a real-life problem for the Team Design part, while EBEC was recognised as a partner of the European Year of Creativity and Innovation.
Competition Overview:
EBEC 2010 EBEC kept on developing with 71 Technical Universities embracing this venture. With a total of 5000 students participating in 31 countries, 104 finalists were selected and gathered in Cluj-Napoca to prove themselves.
EBEC 2011 In the 3rd edition of EBEC, 79 technical universities were involved with more than 5000 students participating in the first level of the competition, 104 students having the chance to meet in Istanbul and more than 200 BEST members contributing to the realisation and development of this project.
EBEC 2012 EBEC Final 2012 was organised in Zagreb, receiving the patronage of the President of Croatia. The event consisted of four working days, Official Opening/Closing Days and one free day, where participants had the opportunity to discover the city of Zagreb.
EBEC 2013 The 5th edition of EBEC Final took place in Warsaw and involved 83 Technical Universities in Europe, 15 National/Regional EBEC Rounds and more than 6500 students participating. The event was supported by Warsaw University of Technology, as well as important institutions, such as the Ministry of Science and Higher Education and Copernicus Science Centre.
EBEC 2014 87 Local Rounds, more than 6000 participants, 116 finalists and more than 500 BEST members across 32 countries contributed to the preparation and successful running of the 6th edition of the EBEC Final in Riga.
EBEC 2015 In 2015, EBEC Final was held in Porto, reaching the maximum number of participants so far (120) and setting high standards for the upcoming editions.
EBEC 2016 In 2016, the EBEC Final was held in Belgrade from 2 to 9 August.
EBEC 2017 In 2017, the EBEC Final was held in Brno, during August.
EBEC Challenge 2018 In 2018 there was no Final Round of EBEC; the edition was therefore called the EBEC Challenge, as its format differed slightly from the other editions.
Reception:
EBEC is a competition that spreads all over Europe, reaching thousands of students, universities and companies. What makes EBEC unique, however, is not only the numbers and the technical outcomes but the renowned "EBEC spirit": the atmosphere surrounding the competition, consisting of teamwork, unbound creativity interlinked with knowledge, and the striving to do one's best. This is what makes students passionate to participate and to work for the best solution, what brings professors and experts to offer their expertise and knowledge to ensure the transparency of the competition, what makes companies want to support the competition again and ensure that students deal with current technological problems, and of course what makes BEST members work ever more passionately to develop the competition. This is what brings all these people together for a common goal: to "Design the Future. Today." Awards and nominations EBEC Final 2015 in Porto was qualified as the best project of Portugal for the next round of the European Charlemagne Youth Prize.
Reception:
Patronages BEST always seeks support from institutions that recognise its efforts and share its vision to assist European students. So far, EBEC is supported by many institutions and bodies, such as UNESCO, Young in Action, European Society for Engineering Education (SEFI), Institute of Electrical and Electronics Engineers.
Universities that supported EBEC during recent years include: Aristotle University of Thessaloniki, Czech Technical University in Prague, Graz University of Technology, National Technical University of Athens (NTUA), Silesian University of Technology in Gliwice, Universidade do Porto, Yildiz Teknik Universitesi.
Accreditation EBEC is starting to be recognised by universities as a high-quality project that contributes to the education of its participants. The University of Porto was the first university to recognise the competition by awarding ECTS credits to participants. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Andrew G. Alleyne**
Andrew G. Alleyne:
Andrew G. Alleyne is the Dean of the College of Science and Engineering at the University of Minnesota. He was previously the Ralph M. and Catherine V. Fisher Professor in Engineering and Director of the National Science Foundation Engineering Research Center on Power Optimization of Electro Thermal Systems at the University of Illinois Urbana-Champaign. His work considers decision making in complex physical systems. He is a fellow of the American Society of Mechanical Engineers, Institute of Electrical and Electronics Engineers, and the American Association for the Advancement of Science.
Early life and education:
Alleyne attended Walt Whitman High School in Bethesda, MD and graduated in 1985. For his undergraduate education he studied mechanical and aerospace engineering at Princeton University and graduated magna cum laude in 1989. After college he joined NASA's Jet Propulsion Laboratory in Pasadena, CA where he worked on a comet nucleus sample return mission. On leave from JPL, he moved to the University of California, Berkeley, for his graduate studies, and earned his master's degree in 1992 and doctoral degree in 1994, both in mechanical engineering. Alleyne was appointed to the faculty of the Mechanical and Industrial Engineering department at University of Illinois Urbana-Champaign in 1994. The Mechanical and Industrial Engineering department was renamed the Mechanical Science and Engineering (MechSE) department.
Career:
In 2004 Alleyne was the youngest person in the Department of Mechanical Science and Engineering at the University of Illinois Urbana-Champaign to be promoted to Professor. He was also the youngest faculty member to hold a named professorship in the department. Alleyne held a visiting position as a Fulbright scholar at the Delft University of Technology. In 2008 he was appointed Associate Dean for Research in the College of Engineering at the University of Illinois, Urbana-Champaign. In 2011–2012, he held a National Research Council Fellowship and worked at the Air Force Research Laboratory in Dayton, OH.
Career:
Alleyne works on the dynamic modeling and simulation of complex systems as well as the development of algorithms for decision making in complex systems. The decision making usually occurs very rapidly while the systems are in operation. His work relies on control theory, a means to evaluate how systems behave with a series of inputs and desired outputs. This may include nanoscale motion control, vehicle systems dynamics and energy management (including heating, ventilation, and air conditioning systems). His work is a continuum from mathematical theory, through computational tools, to experimental validation in prototypes. For algorithms, he has made significant contributions to advances in Iterative Learning Control (ILC). Alleyne has created several high-precision algorithms that include design rules for ILC feedforward trajectories. For experimental systems, he has developed the platform and process control for electro-hydrodynamic jet printing, which allows for the precise printing of a wide variety of materials and has seen commercial adoption. He has developed commercial software called Thermosys that simulates transient thermal systems; it is a MATLAB/Simulink toolbox for modeling dynamic transients in HVAC systems. He has created ways to dynamically monitor and control thermal management systems for power electronics, which are used in planes, ships and cars. Alleyne worked with the Air Force Research Laboratory to create the Aircraft Transient Thermal Modeling and Optimization toolbox. A more complete listing of research efforts can be found at his departmental website at Illinois.
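The core idea of Iterative Learning Control mentioned above can be illustrated with a toy sketch. This is a generic textbook P-type ILC, not any of Alleyne's actual algorithms; the plant model, gains, and trajectory below are all invented for the example. After each complete trial of a repeated task, the stored input trajectory is corrected with that trial's recorded tracking error.

```python
import numpy as np

def run_plant(u, a=0.3, b=1.0):
    """Toy first-order discrete plant: y[t] = a*y[t-1] + b*u[t]."""
    y, out = 0.0, []
    for ut in u:
        y = a * y + b * ut
        out.append(y)
    return np.array(out)

def p_type_ilc(y_ref, trials=50, gamma=0.5):
    """P-type ILC update: u_{k+1} = u_k + gamma * e_k, where e_k is
    the tracking error recorded over the whole k-th trial."""
    u = np.zeros_like(y_ref)
    for _ in range(trials):
        e = y_ref - run_plant(u)   # error trajectory of this trial
        u = u + gamma * e          # feedforward correction for next trial
    return u

y_ref = np.sin(np.linspace(0, 2 * np.pi, 100))  # desired trajectory
u = p_type_ilc(y_ref)
err = np.max(np.abs(y_ref - run_plant(u)))
print(err)  # tracking error is driven far below its initial value of 1
```

The point of ILC is that the correction is computed between trials, offline, so the learned input acts as a feedforward signal; with this plant and gain the update is a contraction and the error shrinks geometrically over trials.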
Career:
In addition to his research efforts on campus, Alleyne has participated in several governmental efforts in service to the United States. After participating in the Defense Science Study Group, run by the Institute for Defense Analysis, he served on the U.S. Air Force Scientific Advisory Board as well as National Academies Board on Army Research and Development. He was also a member of a Quadrennial Technology Review from the Department of Energy.
Career:
Awards and honors His awards and honors include; 2005 Elected Fellow of the American Society of Mechanical Engineers 2008 American Society of Mechanical Engineers Gustus L. Larson Memorial Award 2012 Air Force Meritorious Civilian Service Award 2014 American Society of Mechanical Engineers Henry M. Paynter Outstanding Investigator Award 2016 American Society of Mechanical Engineers Charles Stark Draper Innovative Practice Award 2018 American Automatic Control Council Control Engineering Practice Award 2017 Elected Fellow of the Institute of Electrical and Electronics Engineers 2019 University of Illinois at Urbana–Champaign Innovation Transfer Award 2019 Elected Fellow of the American Association for the Advancement of Science 2019 American Society of Mechanical Engineers Milliken Award 2020 American Society of Mechanical Engineers Robert Henry Thurston Lecture Award 2020 Chief of Staff of the Air Force Award for Exceptional Public Service Academic service In addition to serving in numerous service leadership roles at Illinois and in the broader professional academic community, Alleyne has worked to improve inclusivity and gender balance within science and engineering. When he arrived at MechSE in 1994 there were no women faculty members in the department, and only one in ten members of faculty of the College of Engineering were women. Since becoming professor in 2004 Alleyne has served on several recruitment committees and transformed the MechSE faculty to 25% women. Alleyne has developed a ten step plan to improve recruitment of diverse candidates, which he has since shared with other universities. In 2017 he was awarded the Society of Women Engineers Advocating Women in Engineering Award in recognition of his commitment to gender equality.Alongside a commitment to inclusivity and gender equality, Alleyne has been dedicated to teaching and learning throughout his academic career. 
He was awarded the Engineering Council Award for Excellence in Advising in 1998 and 1999, and is consistently praised by his students. Due to his efforts inside and outside the classroom, as well as his commitment to building educational infrastructure, he has been recognized with the UIUC College of Engineering Teaching Excellence Award, the UIUC Campus Award for Excellence in Undergraduate Teaching, and the UIUC Campus Award for Excellence in Graduate Student Mentoring. His efforts toward teaching and mentoring diversity were recognized by the UIUC Larine Y. Cowan "Make a Difference" award in 2014. In 2016 he was awarded the University of Illinois at Urbana–Champaign Outstanding Advisor Award. For educational contributions outside of UIUC, he was presented with the American Society of Mechanical Engineers Yasundo Takahashi Education Award in 2017 for his contributions to education relevant to the Dynamic Systems and Control Division. Alleyne was named the new Dean of the College of Science and Engineering at the University of Minnesota in September 2021. He assumed the role on January 10, 2022.
Career:
Publications Alleyne, Andrew G. (2006-05-30). "A survey of iterative learning control". IEEE Control Systems Magazine. 26 (3): 96–114. doi:10.1109/MCS.2006.1636313. S2CID 45807426.
Alleyne, Andrew G. (2007). "High-resolution electrohydrodynamic jet printing". Nature Materials. 6 (10): 782–789. Bibcode:2007NatMa...6..782P. doi:10.1038/nmat1974. PMID 17676047. S2CID 29661877.
Alleyne, Andrew G. (1995). "Nonlinear adaptive control of active suspensions". IEEE Transactions on Control Systems Technology. 3: 94–101. doi:10.1109/87.370714.
Personal life:
Alleyne is married to Marianne Alleyne, an entomology professor at University of Illinois at Urbana–Champaign, with whom he has two children. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Xg antigen system**
Xg antigen system:
The XG antigen is a red blood cell surface antigen discovered in 1962 by researchers at the MRC Blood Group Unit. The PBDX gene that encodes the antigen is located on the short arm of the X chromosome. Since males normally have one X chromosome, they are considered hemizygotes. Since women have two copies of the gene and could be heterozygous for the presence or absence of the functioning gene, they could (through the process of lyonisation) express the functioning protein on just some of their red blood cells. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Concrete cone failure**
Concrete cone failure:
Concrete cone failure is one of the failure modes of anchors in concrete loaded by a tensile force. The failure is governed by crack growth in the concrete, which forms a typical cone shape with the anchor's axis as its axis of revolution.
Mechanical models:
ACI 349-85 Under tension loading, the concrete cone failure surface has a 45° inclination. A constant distribution of tensile stresses is then assumed. The concrete cone failure load N0 of a single anchor in uncracked concrete, unaffected by edge influences or overlapping cones of neighboring anchors, is given by: N0 = fct · AN [N], where fct is the tensile strength of the concrete and AN is the cone's projected area. Concrete capacity design (CCD) approach for fastening to concrete Under tension loading, the concrete capacity of a single anchor is calculated assuming an inclination between the failure surface and the surface of the concrete member of about 35°. The concrete cone failure load N0 of a single anchor in uncracked concrete, unaffected by edge influences or overlapping cones of neighboring anchors, is given by: N0 = k · √fcc · hef^1.5 [N], where k = 13.5 for post-installed fasteners and 15.5 for cast-in-place fasteners, fcc is the concrete compressive strength measured on cubes [MPa], and hef is the embedment depth of the anchor [mm]. The model is based on fracture mechanics theory and takes the size effect into account, in particular through the exponent 1.5, which differs from the hef^2 proportionality expected from the first model. In the case of concrete tensile failure with increasing member size, the failure load increases less than the available failure surface; that means the nominal stress at failure (peak load divided by failure area) decreases. Current codes take into account a reduction of the theoretical concrete cone capacity N0 considering: (i) the presence of edges; (ii) the overlapping cones due to group effect; (iii) the presence of an eccentricity of the tension load.
Mechanical models:
Difference between models The tension failure loads predicted by the CCD method fit experimental results over a wide range of embedment depths (e.g. 100 – 600 mm). The anchor load-bearing capacity given by ACI 349 does not consider the size effect; thus an overestimated (unconservative) value for the load-carrying capacity is obtained at large embedment depths.
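For illustration, the two capacity models can be evaluated side by side. This is a sketch under stated assumptions: the ACI 349-85 projected cone area is simplified here to π·hef² (ignoring the head-diameter term), and the input values (fct = 2.5 MPa, fcc = 30 MPa, hef = 100 mm) are invented for the example.

```python
import math

def aci_cone_capacity(f_ct, h_ef):
    """ACI 349-85 style estimate: constant tensile stress f_ct over a
    45-degree cone's projected area, simplified to pi * h_ef**2.
    Capacity therefore scales with h_ef**2.  [MPa, mm] -> N."""
    return f_ct * math.pi * h_ef ** 2

def ccd_cone_capacity(f_cc, h_ef, k=13.5):
    """CCD method: N0 = k * sqrt(f_cc) * h_ef**1.5, with k = 13.5 for
    post-installed and 15.5 for cast-in-place fasteners.  The exponent
    1.5 encodes the fracture-mechanics size effect.  [MPa, mm] -> N."""
    return k * math.sqrt(f_cc) * h_ef ** 1.5

n_aci = aci_cone_capacity(2.5, 100)   # ~78.5 kN
n_ccd = ccd_cone_capacity(30.0, 100)  # ~73.9 kN
print(round(n_aci / 1000, 1), round(n_ccd / 1000, 1))
# Quadrupling the embedment depth: the ACI estimate grows 16x, the
# CCD estimate only 8x, so the gap widens with depth (size effect).
print(aci_cone_capacity(2.5, 400) / n_aci, ccd_cone_capacity(30.0, 400) / n_ccd)
```

The second print makes the difference between the models concrete: with increasing hef the hef² model pulls away from the hef^1.5 model, which is why the simple 45° approach becomes unconservative at large embedment depths.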
Influence of the head size For large head sizes, the bearing pressure in the bearing zone diminishes, and an increase of the anchor's load-carrying capacity is observed. Different modification factors have been proposed in the technical literature.
Un-cracked and cracked concrete Anchors experimentally show a lower load-bearing capacity when installed in a cracked concrete member. The reduction is up to 40% with respect to the un-cracked condition, depending on the crack width. The reduction is due to the impossibility of transferring both normal and tangential stresses across the crack plane. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**TAUM system**
TAUM system:
TAUM (Traduction Automatique à l'Université de Montréal) is the name of a research group which was set up at the Université de Montréal in 1965. Most of its research was done between 1968 and 1980. It gave birth to the TAUM-73 and TAUM-METEO machine translation prototypes, using the Q-Systems programming language created by Alain Colmerauer, which were among the first attempts to perform automatic translation through linguistic analysis. The prototypes were never used in actual production.
TAUM system:
The TAUM-METEO name has been erroneously used for many years to designate the METEO System subsequently developed by John Chandioux. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Competitive yoga**
Competitive yoga:
Competitive yoga is the performance of asanas in sporting competitions. The activity is controversial as it appears to conflict with the nature of yoga.
History:
The International Federation of Sports Yoga has organised annual championships since 1989, and is led by Fernando Estevez-Griego (Swami Maitreyananda). These competitions are not restricted to asanas, but cover all eight limbs of yoga identified in the Yoga Sutras of Patanjali. The 1989 competition was held in Montevideo, with the asana competition in Pondicherry. Competitive yoga has been practised by adults in America since 2009 under the auspices of the nonprofit organisation USA Yoga; competitions were later introduced for children from the age of 7. The fiercely contested Bishnu Charan Ghosh Cup is held annually in Los Angeles. Ghosh inspired the yoga style of Bikram Choudhury, the founder of Bikram Yoga, and Choudhury has been closely associated with America's competitive yoga from its inception. The documentary film Posture by Nathan Bender and Daniel Nelson portrays competitors and detractors of the USA Yoga Federation National Championship.
Sport or spiritual:
The idea of competitive yoga seems an oxymoron to some people in the yoga community. The author Rajiv Malhotra described competitive yoga as "a form of misappropriation". The yoga teacher Loretta Turner called the term "offensive, because yoga is much more than posturing". The journalist Neal Pollack said that the goal of all types of yoga is samadhi, "enlightened bliss where the ego separates from the self and the practitioner realizes that he's powerless to control the vagaries of an endlessly shifting universe". Instead, Pollack continued, yoga competitions consist of the performance of asanas derived from hatha yoga. He concluded that he was not sure what he had witnessed, but he was glad to return to his usual modest yoga, free of competitiveness. Yoga practitioners and their instructors commonly work to avoid any feeling of competitiveness. The yoga instructor Tanya Boulton comments that yoga is challenging because it teaches people not to be competitive but to be at peace with themselves. Practitioners are advised not to compare themselves to other people in their class, and to accept that yoga is an inner thing, not a matter of physical perfection. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Royal Flush (game)**
Royal Flush (game):
Royal Flush is a solitaire card game which is played with a deck of 52 playing cards. The game is so called because the aim is to end up with a royal flush of any suit. The game is so mechanical in nature that there is currently no digital implementation.
Rules:
First, the deck is dealt into five piles: two piles of eleven cards and three piles of ten. The player first turns the first pile up and looks for an ace, a ten, a king, a queen, or a jack, the cards which comprise a royal flush. Once such a card is found, the search ends there, with the cards on top of it discarded and those under it left alone. The search then proceeds into the next pile, and the search for the other cards in the royal flush continues. Also, the suit of the first card found determines the suit of the entire royal flush. When a pile contains no card of the royal flush being searched for, the entire pile is discarded. When another royal-flush card is found in a pile, the search continues until all five piles are searched and royal-flush cards are at the top of their piles.
Rules:
Afterwards, the piles are placed on top of each other in the reverse direction of the deal: if the deal was from left to right, once the search ends, each pile is placed on its neighbor to the left. (Morehead and Mott-Smith's rules state that the piles should be turned face down before they are gathered.) The discarded cards are set aside.
Rules:
Then, without shuffling, the remaining cards are dealt into four piles. After this, the search for the cards of the royal flush begins again, following the same procedure stated above. Once all four piles are searched and regathered, the remaining cards (again without reshuffling) are dealt into three piles. The process continues: after the gathering of the piles, the cards are dealt into two piles and the procedure begins once more.
Rules:
When the two piles are regathered, the remaining cards at this point are then dealt into a single pile. Even at this point, the search stops when a royal-flush card is found and the cards on top of it are discarded. The game is won when the cards of the royal flush are the only ones left in the pile and are arranged in any order. If there are any other cards sandwiched among the royal flush cards, the game is lost.
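The search pass described in the rules above can be sketched in code. This is an illustrative model only, not a published implementation; the pile representation (index 0 as the top card) and all function names are assumptions.

```python
ROYAL_RANKS = {"A", "K", "Q", "J", "10"}

def search_pass(piles):
    """One search pass over the piles: in each pile, find the topmost
    still-needed royal-flush card, discard the cards on top of it, and
    leave the cards under it. A pile containing no such card is
    discarded entirely. Piles are lists of (rank, suit) tuples with
    index 0 as the top card."""
    suit = None    # fixed by the first royal-flush card found
    found = set()  # ranks already sitting at the top of a pile
    kept = []
    for pile in piles:
        hit = None
        for i, (rank, s) in enumerate(pile):
            if rank in ROYAL_RANKS and rank not in found and (suit is None or s == suit):
                suit, hit = s, i
                found.add(rank)
                break
        if hit is not None:
            kept.append(pile[hit:])  # cards above the hit are discarded
    return kept, suit, found

def gather(piles):
    """Regather the piles in the reverse direction of the deal."""
    cards = []
    for pile in reversed(piles):
        cards.extend(pile)
    return cards
```

Repeating `search_pass` and `gather` while redealing into five, four, three, two, and finally one pile mirrors the structure of the game; the game is won when the final pile contains exactly the five royal-flush cards.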
Variation:
To give the game some variation, Lee and Packard suggest that the player try other poker hands, such as four of a kind, a full house, or a straight flush. The player can simply look for a specific hand, or look for certain cards to include in their hand while playing the game.
External sources:
100 Best Solitaire Games. Cardoza Publishing. 1 September 2013. pp. 51–. ISBN 978-1-58042-562-9.
**Smelly**
Smelly:
Smelly may refer to something with a disagreeable odor (i.e., something that smells bad). Smelly may also refer to:
People:
Smelly (performer), or Dai Okazaki, a Japanese comedic performer
Erik Sandin, drummer with NOFX, nicknamed Smelly
**Superheavy element**
Superheavy element:
Superheavy elements, also known as transactinide elements, transactinides, or super-heavy elements, are the chemical elements with atomic number greater than 103. The superheavy elements are those beyond the actinides in the periodic table; the last actinide is lawrencium (atomic number 103). By definition, superheavy elements are also transuranium elements, i.e., having atomic numbers greater than that of uranium (92). Depending on the definition of group 3 adopted by authors, lawrencium may also be included to complete the 6d series. Glenn T. Seaborg first proposed the actinide concept, which led to the acceptance of the actinide series. He also proposed a transactinide series ranging from element 104 to 121 and a superactinide series approximately spanning elements 122 to 153 (although more recent work suggests the end of the superactinide series to occur at element 157 instead). The transactinide seaborgium was named in his honor. Superheavy elements are radioactive and have only been obtained synthetically in laboratories. No macroscopic sample of any of these elements has ever been produced. Superheavy elements are all named after physicists and chemists or important locations involved in the synthesis of the elements.
Superheavy element:
IUPAC defines an element to exist if its lifetime is longer than 10⁻¹⁴ seconds, which is the time it takes for the atom to form an electron cloud. The known superheavy elements form part of the 6d and 7p series in the periodic table. Except for rutherfordium and dubnium (and lawrencium if it is included), even the longest-lasting isotopes of superheavy elements have half-lives of minutes or less. The element naming controversy involved elements 102–109. Some of these elements thus used systematic names for many years after their discovery was confirmed. (Usually the systematic names are replaced with permanent names proposed by the discoverers relatively shortly after a discovery has been confirmed.)
Introduction:
Synthesis of superheavy nuclei A superheavy atomic nucleus is created in a nuclear reaction that combines two other nuclei of unequal size into one; roughly, the more unequal the two nuclei in terms of mass, the greater the possibility that the two react. The material made of the heavier nuclei is made into a target, which is then bombarded by the beam of lighter nuclei. Two nuclei can only fuse into one if they approach each other closely enough; normally, nuclei (all positively charged) repel each other due to electrostatic repulsion. The strong interaction can overcome this repulsion but only within a very short distance from a nucleus; beam nuclei are thus greatly accelerated in order to make such repulsion insignificant compared to the velocity of the beam nucleus. The energy applied to the beam nuclei to accelerate them can cause them to reach speeds as high as one-tenth of the speed of light. However, if too much energy is applied, the beam nucleus can fall apart. Coming close enough alone is not enough for two nuclei to fuse: when two nuclei approach each other, they usually remain together for approximately 10⁻²⁰ seconds and then part ways (not necessarily in the same composition as before the reaction) rather than form a single nucleus. This happens because during the attempted formation of a single nucleus, electrostatic repulsion tears apart the nucleus that is being formed. Each pair of a target and a beam is characterized by its cross section—the probability that fusion will occur if two nuclei approach one another, expressed in terms of the transverse area that the incident particle must hit in order for the fusion to occur. This fusion may occur as a result of the quantum effect in which nuclei can tunnel through electrostatic repulsion. If the two nuclei can stay close past that phase, multiple nuclear interactions result in redistribution of energy and an energy equilibrium.
Introduction:
The resulting merger is an excited state, termed a compound nucleus, and thus it is very unstable. To reach a more stable state, the temporary merger may fission without formation of a more stable nucleus. Alternatively, the compound nucleus may eject a few neutrons, which carry away the excitation energy; if the excitation energy is not sufficient to expel a neutron, the merger produces a gamma ray. This happens approximately 10⁻¹⁶ seconds after the initial nuclear collision and results in the creation of a more stable nucleus. The definition by the IUPAC/IUPAP Joint Working Party (JWP) states that a chemical element can only be recognized as discovered if a nucleus of it has not decayed within 10⁻¹⁴ seconds. This value was chosen as an estimate of how long it takes a nucleus to acquire its outer electrons and thus display its chemical properties.
Introduction:
Decay and detection The beam passes through the target and reaches the next chamber, the separator; if a new nucleus is produced, it is carried with this beam. In the separator, the newly produced nucleus is separated from other nuclides (that of the original beam and any other reaction products) and transferred to a surface-barrier detector, which stops the nucleus. The exact location of the upcoming impact on the detector is marked; also marked are its energy and the time of the arrival. The transfer takes about 10⁻⁶ seconds; in order to be detected, the nucleus must survive this long. The nucleus is recorded again once its decay is registered, and the location, the energy, and the time of the decay are measured. Stability of a nucleus is provided by the strong interaction. However, its range is very short; as nuclei become larger, its influence on the outermost nucleons (protons and neutrons) weakens. At the same time, the nucleus is torn apart by electrostatic repulsion between protons, whose range is not limited. Total binding energy provided by the strong interaction increases linearly with the number of nucleons, whereas electrostatic repulsion increases with the square of the atomic number, i.e. the latter grows faster and becomes increasingly important for heavy and superheavy nuclei. Superheavy nuclei are thus theoretically predicted and have so far been observed to predominantly decay via decay modes that are caused by such repulsion: alpha decay and spontaneous fission. Almost all alpha emitters have over 210 nucleons, and the lightest nuclide primarily undergoing spontaneous fission has 238. In both decay modes, nuclei are inhibited from decaying by corresponding energy barriers for each mode, but they can be tunnelled through.
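The scaling argument above (strong-interaction binding grows roughly linearly with nucleon number, while Coulomb repulsion grows roughly with the square of the atomic number) can be illustrated with the liquid-drop (semi-empirical mass formula) terms. The coefficients below are common textbook values, assumed here for illustration only, and the two nuclei compared are simply representative examples.

```python
# Liquid-drop model terms, coefficients in MeV (assumed textbook values).
A_V, A_S, A_C, A_A = 15.8, 18.3, 0.714, 23.2

def liquid_drop_terms(Z, A):
    """Return (volume term, Coulomb term, rough total binding energy)."""
    volume = A_V * A                            # strong-force binding: linear in A
    surface = A_S * A ** (2 / 3)
    coulomb = A_C * Z * (Z - 1) / A ** (1 / 3)  # repulsion: roughly Z**2
    N = A - Z
    asymmetry = A_A * (N - Z) ** 2 / A
    return volume, coulomb, volume - surface - coulomb - asymmetry

# The Coulomb term grows faster than the volume (binding) term from
# uranium-238 to a superheavy nucleus such as flerovium-289:
for Z, A in [(92, 238), (114, 289)]:
    v, c, b = liquid_drop_terms(Z, A)
    print(f"Z={Z}: Coulomb/volume ratio = {c / v:.3f}")
```

The rising Coulomb-to-volume ratio is the liquid-drop picture's version of why fission becomes ever more important for superheavy nuclei.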
Introduction:
Alpha particles are commonly produced in radioactive decays because the mass of an alpha particle per nucleon is small enough to leave some energy for the alpha particle to be used as kinetic energy to leave the nucleus. Spontaneous fission is caused by electrostatic repulsion tearing the nucleus apart and produces various nuclei in different instances of fission of identical nuclei. As the atomic number increases, spontaneous fission rapidly becomes more important: spontaneous fission partial half-lives decrease by 23 orders of magnitude from uranium (element 92) to nobelium (element 102), and by 30 orders of magnitude from thorium (element 90) to fermium (element 100). The earlier liquid drop model thus suggested that spontaneous fission would occur nearly instantly due to the disappearance of the fission barrier for nuclei with about 280 nucleons. The later nuclear shell model suggested that nuclei with about 300 nucleons would form an island of stability in which nuclei will be more resistant to spontaneous fission and will primarily undergo alpha decay with longer half-lives. Subsequent discoveries suggested that the predicted island might be further than originally anticipated; they also showed that nuclei intermediate between the long-lived actinides and the predicted island are deformed, and gain additional stability from shell effects. Experiments on lighter superheavy nuclei, as well as those closer to the expected island, have shown greater than previously anticipated stability against spontaneous fission, showing the importance of shell effects on nuclei. Alpha decays are registered by the emitted alpha particles, and the decay products are easy to determine before the actual decay; if such a decay or a series of consecutive decays produces a known nucleus, the original product of a reaction can be easily determined.
(That all decays within a decay chain were indeed related to each other is established by the location of these decays, which must be in the same place.) The known nucleus can be recognized by the specific characteristics of the decay it undergoes, such as decay energy (or more specifically, the kinetic energy of the emitted particle). Spontaneous fission, however, produces various nuclei as products, so the original nuclide cannot be determined from its daughters. The information available to physicists aiming to synthesize a superheavy element is thus the information collected at the detectors: the location, energy, and time of arrival of a particle to the detector, and those of its decay. The physicists analyze this data and seek to conclude that it was indeed caused by a new element and could not have been caused by a different nuclide than the one claimed. Often, the provided data is insufficient for a conclusion that a new element was definitely created and there is no other explanation for the observed effects; errors in interpreting data have been made.
History:
Early predictions The heaviest element known at the end of the 19th century was uranium, with an atomic mass of approximately 240 (now known to be 238) amu. Accordingly, it was placed in the last row of the periodic table; this fueled speculation about the possible existence of elements heavier than uranium and why A = 240 seemed to be the limit. Following the discovery of the noble gases, beginning with that of argon in 1895, the possibility of heavier members of the group was considered. Danish chemist Julius Thomsen proposed in 1895 the existence of a sixth noble gas with Z = 86, A = 212 and a seventh with Z = 118, A = 292, the last closing a 32-element period containing thorium and uranium. In 1913, Swedish physicist Johannes Rydberg extended Thomsen's extrapolation of the periodic table to include even heavier elements with atomic numbers up to 460, but he did not believe that these superheavy elements existed or occurred in nature. In 1914, German physicist Richard Swinne proposed that elements heavier than uranium, such as those around Z = 108, could be found in cosmic rays. He suggested that these elements may not necessarily have decreasing half-lives with increasing atomic number, leading to speculation about the possibility of some longer-lived elements at Z = 98–102 and Z = 108–110 (though separated by short-lived elements). Swinne published these predictions in 1926, believing that such elements might exist in the Earth's core, in iron meteorites, or in the ice caps of Greenland where they had been locked up from their supposed cosmic origin.
History:
Discoveries Work performed from 1961 to 2013 at four labs – the Lawrence Berkeley National Laboratory in the US, the Joint Institute for Nuclear Research in the USSR (later Russia), the GSI Helmholtz Centre for Heavy Ion Research in Germany, and Riken in Japan – identified and confirmed the elements lawrencium to oganesson according to the criteria of the IUPAC–IUPAP Transfermium Working Groups and subsequent Joint Working Parties. These discoveries complete the seventh row of the periodic table. The remaining two transactinides, ununennium (Z = 119) and unbinilium (Z = 120), have not yet been synthesized. They would begin an eighth period.
History:
List of elements
103 Lawrencium, Lr (for Ernest Lawrence); sometimes but not always included
104 Rutherfordium, Rf (for Ernest Rutherford)
105 Dubnium, Db (for the town of Dubna, near Moscow)
106 Seaborgium, Sg (for Glenn T. Seaborg)
107 Bohrium, Bh (for Niels Bohr)
108 Hassium, Hs (for Hassia [Hesse], location of Darmstadt)
109 Meitnerium, Mt (for Lise Meitner)
110 Darmstadtium, Ds (for Darmstadt)
111 Roentgenium, Rg (for Wilhelm Röntgen)
112 Copernicium, Cn (for Nicolaus Copernicus)
113 Nihonium, Nh (for Nihon [Japan], location of the Riken institute)
114 Flerovium, Fl (for Russian physicist Georgy Flyorov)
115 Moscovium, Mc (for Moscow)
116 Livermorium, Lv (for Lawrence Livermore National Laboratory)
117 Tennessine, Ts (for Tennessee, location of Oak Ridge National Laboratory)
118 Oganesson, Og (for Russian physicist Yuri Oganessian)
Characteristics:
Due to their short half-lives (for example, the most stable known isotope of seaborgium has a half-life of 14 minutes, and half-lives decrease gradually with increasing atomic number) and the low yield of the nuclear reactions that produce them, new methods have had to be created to determine their gas-phase and solution chemistry based on very small samples of a few atoms each. Relativistic effects become very important in this region of the periodic table, causing the filled 7s orbitals, empty 7p orbitals, and filling 6d orbitals to all contract inwards toward the atomic nucleus. This causes a relativistic stabilization of the 7s electrons and makes the 7p orbitals accessible in low excitation states. Elements 103 to 112, lawrencium to copernicium, form the 6d series of transition elements. Experimental evidence shows that elements 103–108 behave as expected for their position in the periodic table, as heavier homologues of lutetium through osmium. They are expected to have ionic radii between those of their 5d transition metal homologs and their actinide pseudohomologs: for example, Rf4+ is calculated to have ionic radius 76 pm, between the values for Hf4+ (71 pm) and Th4+ (94 pm). Their ions should also be less polarizable than those of their 5d homologs. Relativistic effects are expected to reach a maximum at the end of this series, at roentgenium (element 111) and copernicium (element 112). Nevertheless, many important properties of the transactinides are still not yet known experimentally, though theoretical calculations have been performed. Elements 113 to 118, nihonium to oganesson, should form a 7p series, completing the seventh period in the periodic table.
Their chemistry will be greatly influenced by the very strong relativistic stabilization of the 7s electrons and a strong spin–orbit coupling effect "tearing" the 7p subshell apart into two sections, one more stabilized (7p1/2, holding two electrons) and one more destabilized (7p3/2, holding four electrons). Lower oxidation states should be stabilized here, continuing group trends, as both the 7s and 7p1/2 electrons exhibit the inert-pair effect. These elements are expected to largely continue to follow group trends, though with relativistic effects playing an increasingly larger role. In particular, the large 7p splitting results in an effective shell closure at flerovium (element 114) and hence a much higher than expected chemical activity for oganesson (element 118). Element 118 is the last element that has been synthesized. The next two elements, element 119 and element 120, should form an 8s series and be an alkali and alkaline earth metal respectively. The 8s electrons are expected to be relativistically stabilized, so that the trend toward higher reactivity down these groups will reverse and the elements will behave more like their period 5 homologs, rubidium and strontium. However, the 7p3/2 orbital is still relativistically destabilized, potentially giving these elements larger ionic radii and perhaps even allowing it to participate chemically. In this region, the 8p electrons are also relativistically stabilized, resulting in a ground-state 8s28p1 valence electron configuration for element 121. Large changes are expected to occur in the subshell structure in going from element 120 to element 121: for example, the radius of the 5g orbitals should drop drastically, from 25 Bohr units in element 120 in the excited [Og] 5g1 8s1 configuration to 0.8 Bohr units in element 121 in the excited [Og] 5g1 7d1 8s1 configuration, in a phenomenon called "radial collapse".
Element 122 should add either a further 7d or a further 8p electron to element 121's electron configuration. Elements 121 and 122 should be similar to actinium and thorium respectively. At element 121, the superactinide series is expected to begin, when the 8s electrons and the filling 8p1/2, 7d3/2, 6f5/2, and 5g7/2 subshells determine the chemistry of these elements. Complete and accurate calculations are not available for elements beyond 123 because of the extreme complexity of the situation: the 5g, 6f, and 7d orbitals should have about the same energy level, and in the region of element 160 the 9s, 8p3/2, and 9p1/2 orbitals should also be about equal in energy. This will cause the electron shells to mix so that the block concept no longer applies very well, and will also result in novel chemical properties that will make positioning these elements in a periodic table very difficult; element 164 is expected to mix characteristics of the elements of groups 10, 12, and 18.
Beyond superheavy elements:
It has been suggested that elements beyond Z = 126 be called beyond superheavy elements.
**PC-LISP**
PC-LISP:
PC-LISP is an implementation of the Franz Lisp dialect by Peter Ashwood-Smith.
Version 2.11 was released on May 15, 1986. A current version may be downloaded from the external link below.
Currently, PC-LISP has been ported to 32- and 64-bit Linux, Mac, and Windows.
PC-LISP:
Note that the Franz LISP dialect was the immediate, portable successor to the ITS version of Maclisp, and it is perhaps the closest thing to the LISP of Steven Levy's book Hackers that is still practical to run. PC-LISP runs well in DOS emulators and on modern Windows versions. Because PC-LISP implements Franz LISP, it is a dynamically scoped predecessor to modern Common Lisp. It is therefore a historically important implementation.
Example:
The following code demonstrates dynamic scoping in Franz LISP. Note that PC-LISP does not implement the let special form that Emacs Lisp provides for local variables; instead, all variables are what an ALGOL-based language would call "global". The first dialect of Lisp to incorporate ALGOL scoping rules (called lexical scoping) was Scheme, although the Common Lisp language also added this feature.
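A minimal sketch of such a session, reconstructed here for illustration (the function names are assumptions; the behavior shown is the standard dynamic-scoping demonstration for any Franz LISP implementation):

```lisp
; x occurs free in g; under dynamic scoping, g sees whatever
; binding of x is active at the moment g is called.
(defun g () x)

; f binds x as a parameter; calling g from inside f therefore
; exposes f's binding of x to g.
(defun f (x) (g))

(setq x 'global)
(g)          ; => global
(f 'local)   ; => local  -- under lexical scoping this would be global
```

The last call is the crux: a lexically scoped Lisp would resolve g's free x to the global binding, while a dynamically scoped one resolves it to f's parameter.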
Example:
Another example shows the use of backquote and the power of LISP: symbolic differentiation.
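A sketch of what such a differentiation example might look like, written here for illustration (the function name deriv and the restriction to + and * are assumptions, not from the original distribution):

```lisp
; Symbolic differentiation of expressions built from + and *,
; using backquote/comma to construct the result expressions.
(defun deriv (e x)
  (cond ((atom e) (cond ((eq e x) 1) (t 0)))
        ((eq (car e) '+)
         `(+ ,(deriv (cadr e) x) ,(deriv (caddr e) x)))
        ((eq (car e) '*)
         `(+ (* ,(cadr e) ,(deriv (caddr e) x))      ; product rule
             (* ,(deriv (cadr e) x) ,(caddr e))))))

(deriv '(+ (* x x) x) 'x)
; => (+ (+ (* x 1) (* 1 x)) 1), i.e. 2x + 1 before simplification
```

The backquote builds the derivative expression as a template, with comma splicing in the recursively computed sub-derivatives.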
**Approximately finite-dimensional C*-algebra**
Approximately finite-dimensional C*-algebra:
In mathematics, an approximately finite-dimensional (AF) C*-algebra is a C*-algebra that is the inductive limit of a sequence of finite-dimensional C*-algebras. Approximate finite-dimensionality was first defined and described combinatorially by Ola Bratteli. Later, George A. Elliott gave a complete classification of AF algebras using the K0 functor whose range consists of ordered abelian groups with sufficiently nice order structure.
Approximately finite-dimensional C*-algebra:
The classification theorem for AF-algebras serves as a prototype for classification results for larger classes of separable simple amenable stably finite C*-algebras. Its proof divides into two parts. The invariant here is K0 with its natural order structure; this is a functor. First, one proves existence: a homomorphism between invariants must lift to a *-homomorphism of algebras. Second, one shows uniqueness: the lift must be unique up to approximate unitary equivalence. Classification then follows from what is known as the intertwining argument. For unital AF algebras, both existence and uniqueness follow from the fact that the Murray-von Neumann semigroup of projections in an AF algebra is cancellative.
Approximately finite-dimensional C*-algebra:
The counterpart of simple AF C*-algebras in the von Neumann algebra world are the hyperfinite factors, which were classified by Connes and Haagerup.
In the context of noncommutative geometry and topology, AF C*-algebras are noncommutative generalizations of C0(X), where X is a totally disconnected metrizable space.
Definition and basic properties:
Finite-dimensional C*-algebras An arbitrary finite-dimensional C*-algebra A takes the following form, up to isomorphism: A ≅ ⊕_k M_{n_k}, where M_i denotes the full matrix algebra of i × i matrices.
Definition and basic properties:
Up to unitary equivalence, a unital *-homomorphism Φ : M_i → M_j is necessarily of the form Φ(a) = a ⊗ I_r, where r·i = j. The number r is said to be the multiplicity of Φ. In general, a unital homomorphism between finite-dimensional C*-algebras Φ : ⊕_{k=1}^s M_{n_k} → ⊕_{l=1}^t M_{m_l} is specified, up to unitary equivalence, by a t × s matrix of partial multiplicities (r_{lk}) satisfying, for all l, ∑_k r_{lk} n_k = m_l.
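As a small worked example (constructed here for illustration, not taken from the text), a unital homomorphism between two concrete finite-dimensional algebras:

```latex
% A unital homomorphism \Phi : M_2 \oplus M_3 \to M_7 with partial
% multiplicity matrix (2 \;\; 1), consistent with 2\cdot 2 + 1\cdot 3 = 7:
\Phi(a \oplus b) \;=\;
\begin{pmatrix}
a & 0 & 0 \\
0 & a & 0 \\
0 & 0 & b
\end{pmatrix},
\qquad a \in M_2,\; b \in M_3 .
```

Here a appears twice (multiplicity 2) and b once (multiplicity 1) along the block diagonal of M_7.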
Definition and basic properties:
In the non-unital case, the equality is replaced by ≤. Graphically, Φ, equivalently (r_{lk}), can be represented by its Bratteli diagram. The Bratteli diagram is a directed graph with nodes corresponding to each n_k and m_l, and the number of arrows from n_k to m_l is the partial multiplicity r_{lk}.
Consider the category whose objects are isomorphism classes of finite-dimensional C*-algebras and whose morphisms are *-homomorphisms modulo unitary equivalence. By the above discussion, the objects can be viewed as vectors with entries in N and morphisms are the partial multiplicity matrices.
Definition and basic properties:
AF algebras A C*-algebra is AF if it is the direct limit of a sequence of finite-dimensional C*-algebras: A = lim→ ( A_1 → A_2 → ⋯ ), where each A_i is a finite-dimensional C*-algebra and the connecting maps α_i : A_i → A_{i+1} are *-homomorphisms. We will assume that each α_i is unital. The inductive system specifying an AF algebra is not unique; one can always drop to a subsequence. Suppressing the connecting maps, A can also be written as the closure A = (∪_n A_n)‾.
Definition and basic properties:
The Bratteli diagram of A is formed by the Bratteli diagrams of {α_i} in the obvious way. For instance, the Pascal triangle, with the nodes connected by appropriate downward arrows, is the Bratteli diagram of an AF algebra. A Bratteli diagram of the CAR algebra is given on the right. The two arrows between nodes mean that each connecting map is an embedding of multiplicity 2.
Definition and basic properties:
1 ⇉ 2 ⇉ 4 ⇉ 8 ⇉ … (A Bratteli diagram of the CAR algebra) If an AF algebra A = (∪_n A_n)‾, then an ideal J in A takes the form (∪_n (J ∩ A_n))‾. In particular, J is itself an AF algebra. Given a Bratteli diagram of A and some subset S of nodes, the subdiagram generated by S gives an inductive system that specifies an ideal of A. In fact, every ideal arises in this way.
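The dimension bookkeeping in a Bratteli diagram can be sketched in code: given the partial-multiplicity matrix at each level, the block sizes at the next level follow from the relation m_l = Σ_k r_{lk} n_k. The function below is an illustrative sketch; the names are assumptions.

```python
def next_dims(mult, dims):
    """Block sizes at the next level of a Bratteli diagram:
    m_l = sum_k r_lk * n_k, where mult[l][k] = r_lk is the number of
    arrows from node k to node l, and dims[k] = n_k."""
    return [sum(r * n for r, n in zip(row, dims)) for row in mult]

# CAR algebra: one block per level, every embedding of multiplicity 2,
# giving the diagram 1 -> 2 -> 4 -> 8 -> ...
dims = [1]
for _ in range(3):
    dims = next_dims([[2]], dims)  # dims is now [8]

# Pascal-triangle diagram: each node feeds the two nodes below it
# with multiplicity 1, so the block sizes are binomial coefficients.
pascal = next_dims([[1, 0, 0], [1, 1, 0], [0, 1, 1], [0, 0, 1]], [1, 2, 1])
# pascal == [1, 3, 3, 1]
```

Iterating `next_dims` down the diagram recovers the matrix-block sizes of each finite-dimensional algebra in the inductive system.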
Definition and basic properties:
Due to the presence of matrix units in the inductive sequence, AF algebras have the following local characterization: a C*-algebra A is AF if and only if A is separable and any finite subset of A is "almost contained" in some finite-dimensional C*-subalgebra.
The projections in ∪nAn in fact form an approximate unit of A.
It is clear that the extension of a finite-dimensional C*-algebra by another finite-dimensional C*-algebra is again finite-dimensional. More generally, the extension of an AF algebra by another AF algebra is again AF.
Classification:
K0 The K-theoretic group K0 is an invariant of C*-algebras. It has its origins in topological K-theory and serves as the range of a kind of "dimension function." For an AF algebra A, K0(A) can be defined as follows.
Let M_n(A) be the C*-algebra of n × n matrices whose entries are elements of A. M_n(A) can be embedded into M_{n+1}(A) canonically, into the "upper left corner". Consider the algebraic direct limit M∞(A) = lim→ ( M_n(A) → M_{n+1}(A) → ⋯ ).
Classification:
Denote the projections (self-adjoint idempotents) in this algebra by P(A). Two elements p and q are said to be Murray-von Neumann equivalent, denoted by p ~ q, if p = vv* and q = v*v for some partial isometry v in M∞(A). It is clear that ~ is an equivalence relation. Define a binary operation + on the set of equivalence classes P(A)/~ by [p] + [q] = [p ⊕ q], where ⊕ denotes the orthogonal direct sum of representatives of p and q, placed in matrices of sufficiently large dimension; the class of the result does not depend on the choice of representatives. This makes P(A)/~ a semigroup that has the cancellation property. We denote this semigroup by K0(A)+. Performing the Grothendieck group construction gives an abelian group, which is K0(A).
Classification:
K0(A) carries a natural order structure: we say [p] ≤ [q] if p is Murray-von Neumann equivalent to a subprojection of q. This makes K0(A) an ordered group whose positive cone is K0(A)+.
For example, for a finite-dimensional C*-algebra A = ⊕_{k=1}^m M_{n_k}, one has (K0(A), K0(A)+) = (Z^m, Z_+^m).
Two essential features of the mapping A ↦ K0(A) are: K0 is a (covariant) functor. A *-homomorphism α : A → B between AF algebras induces a group homomorphism α* : K0(A) → K0(B). In particular, when A and B are both finite-dimensional, α* can be identified with the partial multiplicities matrix of α.
K0 respects direct limits. If A = ∪nαn(An)−, then K0(A) is the direct limit ∪nαn*(K0(An)).
The dimension group Since M∞(M∞(A)) is isomorphic to M∞(A), K0 can only distinguish AF algebras up to stable isomorphism. For example, M2 and M4 are not isomorphic but are stably isomorphic; K0(M2) = K0(M4) = Z.
A finer invariant is needed to detect isomorphism classes. For an AF algebra A, we define the scale of K0(A), denoted by Γ(A), to be the subset whose elements are represented by projections in A: Γ(A)={[p]|p∗=p2=p∈A}.
When A is unital with unit 1A, the K0 element [1A] is the maximal element of Γ(A) and in fact, Γ(A)={x∈K0(A)|0≤x≤[1A]}.
The triple (K0(A), K0(A)+, Γ(A)) is called the dimension group of A.
If A = Ms, its dimension group is (Z, Z+, {1, 2,..., s}).
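For a less trivial example, stated here from the standard theory: since K0 respects direct limits, the multiplicity-2 connecting maps of the CAR algebra's diagram 1 ⇉ 2 ⇉ 4 ⇉ ⋯ act on K0 as multiplication by 2, and the limit is the group of dyadic rationals.

```latex
K_0(\mathrm{CAR}) \;=\; \varinjlim \bigl( \mathbb{Z}
  \xrightarrow{\times 2} \mathbb{Z} \xrightarrow{\times 2} \cdots \bigr)
  \;\cong\; \mathbb{Z}[\tfrac{1}{2}],
\qquad
\Gamma(\mathrm{CAR}) \;=\; \mathbb{Z}[\tfrac{1}{2}] \cap [0,1],
```

with positive cone Z[1/2] ∩ [0, ∞): the class of a projection in M_{2^n} corresponds to its trace, a dyadic rational in [0, 1].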
A group homomorphism between dimension groups is said to be contractive if it is scale-preserving. Two dimension groups are said to be isomorphic if there exists a contractive group isomorphism between them.
Classification:
The dimension group retains the essential properties of K0: A *-homomorphism α : A → B between AF algebras in fact induces a contractive group homomorphism α* on the dimension groups. When A and B are both finite-dimensional, corresponding to each partial multiplicities matrix ψ, there is a unique, up to unitary equivalence, *-homomorphism α : A → B such that α* = ψ.
Classification:
If A = ∪nαn(An)−, then the dimension group of A is the direct limit of those of An.
Elliott's theorem Elliott's theorem says that the dimension group is a complete invariant of AF algebras: two AF algebras A and B are isomorphic if and only if their dimension groups are isomorphic.
Two preliminary facts are needed before one can sketch a proof of Elliott's theorem. The first one summarizes the above discussion on finite-dimensional C*-algebras.
Lemma For two finite-dimensional C*-algebras A and B, and a contractive homomorphism ψ: K0(A) → K0(B), there exists a *-homomorphism φ: A → B such that φ* = ψ, and φ is unique up to unitary equivalence.
The lemma can be extended to the case where B is AF. A map ψ on the level of K0 can be "moved back", on the level of algebras, to some finite stage in the inductive system.
Lemma Let A be finite-dimensional and B AF, B = (∪nBn)−. Let βm be the canonical homomorphism of Bm into B. Then for any contractive homomorphism ψ: K0(A) → K0(B), there exists a *-homomorphism φ: A → Bm such that βm* φ* = ψ, and φ is unique up to unitary equivalence in B.
The proof of the lemma is based on the simple observation that K0(A) is finitely generated and, since K0 respects direct limits, K0(B) = ∪n βn* K0 (Bn).
Theorem (Elliott) Two AF algebras A and B are isomorphic if and only if their dimension groups (K0(A), K0+(A), Γ(A)) and (K0(B), K0+(B), Γ(B)) are isomorphic.
The crux of the proof has become known as Elliott's intertwining argument. Given an isomorphism between dimension groups, one constructs a diagram of commuting triangles between the direct systems of A and B by applying the second lemma.
We sketch the proof for the non-trivial part of the theorem, corresponding to the sequence of commutative diagrams on the right.
Let Φ: (K0(A), K0+(A), Γ(A)) → (K0(B), K0+(B), Γ(B)) be a dimension group isomorphism.
Consider the composition of maps Φ α1* : K0(A1) → K0(B). By the previous lemma, there exists B1 and a *-homomorphism φ1: A1 → B1 such that the first diagram on the right commutes.
The same argument applied to β1* Φ−1 shows that the second diagram commutes for some A2.
Comparing diagrams 1 and 2 gives diagram 3.
Using the property of the direct limit and moving A2 further down if necessary, we obtain diagram 4, a commutative triangle on the level of K0.
For finite-dimensional algebras, two *-homomorphisms induce the same map on K0 if and only if they are unitarily equivalent. So, by composing ψ1 with a unitary conjugation if needed, we have a commutative triangle on the level of algebras.
Classification:
By induction, we have a diagram of commuting triangles as indicated in the last diagram. The map φ: A → B is the direct limit of the sequence {φn}. Let ψ: B → A be the direct limit of the sequence {ψn}. It is clear that φ and ψ are mutual inverses. Therefore, A and B are isomorphic. Furthermore, on the level of K0, the adjacent diagram commutes for each k. By uniqueness of the direct limit of maps, φ* = Φ.
Classification:
The Effros-Handelman-Shen theorem The dimension group of an AF algebra is a Riesz group. The Effros-Handelman-Shen theorem says the converse is true. Every Riesz group, with a given scale, arises as the dimension group of some AF algebra. This specifies the range of the classifying functor K0 for AF-algebras and completes the classification.
Riesz groups A group G with a partial order is called an ordered group. The set G+ of elements ≥ 0 is called the positive cone of G. One says that G is unperforated if, for every positive integer k, k·g ∈ G+ implies g ∈ G+.
The following property is called the Riesz decomposition property: if x, yi ≥ 0 and x ≤ Σ yi, then there exist xi ≥ 0 such that x = Σ xi and xi ≤ yi for each i.
A Riesz group (G, G+) is an ordered group that is unperforated and has the Riesz decomposition property.
It is clear that if A is finite-dimensional, (K0(A), K0(A)+) is a Riesz group, where Zk is given the entrywise order. The two properties of Riesz groups are preserved by direct limits, assuming the order structure on the direct limit comes from those in the inductive system. So (K0(A), K0(A)+) is a Riesz group for an AF algebra A.
A key step towards the Effros-Handelman-Shen theorem is the fact that every Riesz group is the direct limit of Zk 's, each with the canonical order structure. This hinges on the following technical lemma, sometimes referred to as the Shen criterion in the literature.
Lemma Let (G, G+) be a Riesz group and ϕ: (Zk, Zk+) → (G, G+) a positive homomorphism. Then there exist maps σ and ψ, as indicated in the adjacent diagram, such that ker(σ) = ker(ϕ).
Corollary Every Riesz group (G, G+) can be expressed as a direct limit lim→ (Znk, Znk+), where all the connecting homomorphisms in the directed system on the right hand side are positive.
The theorem Theorem If (G, G+) is a countable Riesz group with scale Γ(G), then there exists an AF algebra A such that (K0(A), K0(A)+, Γ(A)) = (G, G+, Γ(G)). In particular, if Γ(G) = [0, uG] with maximal element uG, then A is unital with [1A] = uG.
Consider first the special case where Γ(G) = [0, uG] with maximal element uG. Suppose (G, G+) = lim→ (Hk, Hk+), where (Hk, Hk+) = (Znk, Znk+).
Dropping to a subsequence if necessary, we may assume ϕ1(u1) = uG for some element u1, and set Γ(H1) = {v ∈ H1+ | ϕ1(v) ∈ Γ(G)}. Now consider the order ideal G1 generated by u1. Because H1 has the canonical order structure, G1 is a direct sum of Z's (with the number of copies possibly less than that in H1). So this gives a finite-dimensional algebra A1 whose dimension group is (G1, G1+, [0, u1]). Next move u1 forward by defining u2 = ϕ12(u1). Again u2 determines a finite-dimensional algebra A2. There is a corresponding homomorphism α12 such that α12* = ϕ12. Induction gives a directed system lim→ Ak, whose K0 is lim→ (Gk, Gk+), with scale ∪k ϕk[0, uk] = [0, uG].
This proves the special case.
A similar argument applies in general. Observe that the scale is by definition a directed set. If Γ(G) = {vk}, one can choose uk ∈ Γ(G) such that uk ≥ v1, …, vk. The same argument as above proves the theorem.
Examples:
By definition, uniformly hyperfinite algebras are AF and unital. Their dimension groups are subgroups of Q. For example, for the 2 × 2 matrices M2, K0(M2) is the group of rational numbers of the form a/2 for a in Z. The scale is Γ(M2) = {0, 1/2, 1}. For the CAR algebra A, K0(A) is the group of dyadic rationals with scale K0(A) ∩ [0, 1], with 1 = [1A]. All such groups are simple, in a sense appropriate for ordered groups. Thus UHF algebras are simple C*-algebras. In general, the subgroups of Q that are not dense are the dimension groups of Mk for some k.
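As an illustration, the dyadic group of the CAR algebra can be computed directly from its inductive system (a standard calculation, sketched here in the notation above):

```latex
K_0(A) \;=\; \varinjlim\Bigl(\mathbb{Z} \xrightarrow{\ \times 2\ } \mathbb{Z} \xrightarrow{\ \times 2\ } \mathbb{Z} \xrightarrow{\ \times 2\ } \cdots\Bigr) \;\cong\; \mathbb{Z}\bigl[\tfrac{1}{2}\bigr] \subset \mathbb{Q},
```

where the k-th copy of Z is K0(M2k) and the connecting map ×2 is induced by the unital embedding M2k ↪ M2k+1, a ↦ diag(a, a); identifying the k-th copy with 2−kZ inside Q yields exactly the dyadic rationals.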
Commutative C*-algebras, which were characterized by Gelfand, are AF precisely when the spectrum is totally disconnected. The continuous functions C(X) on the Cantor set X is one such example.
Elliott's classification program:
It was proposed by Elliott that other classes of C*-algebras may be classifiable by K-theoretic invariants. For a C*-algebra A, the Elliott invariant is defined to be Ell(A) := ((K0(A), K0(A)+, Γ(A)), K1(A), T+(A), ρA), where T+(A) is the space of tracial positive linear functionals equipped with the weak-* topology, and ρA is the natural pairing between T+(A) and K0(A).
The original conjecture by Elliott stated that the Elliott invariant classifies simple unital separable amenable C*-algebras.
In the literature, one can find several conjectures of this kind with corresponding modified/refined Elliott invariants.
Von Neumann algebras:
In a related context, an approximately finite-dimensional, or hyperfinite, von Neumann algebra is one with a separable predual and contains a weakly dense AF C*-algebra. Murray and von Neumann showed that, up to isomorphism, there exists a unique hyperfinite type II1 factor. Connes obtained the analogous result for the II∞ factor. Powers exhibited a family of non-isomorphic type III hyperfinite factors with cardinality of the continuum. Today we have a complete classification of hyperfinite factors. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Yannis K. Semertzidis**
Yannis K. Semertzidis:
Yannis K. Semertzidis is a physicist exploring axions as a dark matter candidate and precision physics in storage rings, including the muon g−2 and the proton electric dipole moment (pEDM). The axion and the pEDM are intimately connected through the strong CP problem. Furthermore, if the pEDM is found to be non-zero, it can help resolve the matter-antimatter asymmetry mystery of our universe. During his research career, he held a number of positions in the Department of Physics at Brookhaven National Laboratory, including initiator and co-spokesperson of the Storage Ring Electric Dipole Moment Collaboration. He is the founding director of the Institute for Basic Science (IBS) Center for Axion and Precision Physics Research, a professor in the Physics Department of KAIST, and a Fellow of the American Physical Society.
Education:
Semertzidis received a Bachelor of Science in physics from the Aristotle University of Thessaloniki, Greece, in 1984. He then moved to New York and studied at the University of Rochester, obtaining a Master of Science and a Ph.D. in physics in 1987 and 1989, respectively.
Career:
From 1990 until 1992, he worked as a research associate at the University of Rochester. Staying in New York, he next worked as an assistant physicist in the Department of Physics at Brookhaven National Laboratory (BNL) from 1992. The next year he took a leave of absence to work as a Fellow in the PPE Division at CERN from 1993 to 1995. He returned to BNL and worked as a physicist in 1997, became a tenured physicist in March 2000, and finally a tenured senior scientist in September 2012. While at BNL he primarily focused on two experimental areas: precision physics experiments related to axions as a candidate for dark matter, and precision physics in storage rings, which included muons and searching for the electric dipole moment of protons with increased sensitivity. A non-zero proton electric dipole moment would violate the discrete symmetries of time reversal (T) and parity (P) in quantum mechanics. These symmetries are connected to the matter-antimatter asymmetry problem, and observing the proton electric dipole moment could help solve that mystery. While working at BNL, Semertzidis also mentored a number of students, several of whom went on to win awards. Since summer 2015, his center has hosted an annual summer science program (KUSP) aimed at young physics students. In October 2013, Semertzidis became the director of the Institute for Basic Science Center for Axion and Precision Physics Research and a physics professor at KAIST, where the center is located. The dark matter research focuses on the axion, a hypothetical elementary particle arising from the Peccei–Quinn theory, proposed in 1977 to resolve the strong CP problem in quantum chromodynamics. As the mass of the axion is unknown, they are searching in the mass range of 0.001 meV to 1 meV by converting axions into microwave photons inside a large volume with a high magnetic field, inside a microwave cavity, a technique invented by Pierre Sikivie.
If it is within this range, it is possible it will be discovered within the next ten years. Utilizing techniques created for the muon g−2 experiment and elsewhere, they are working towards improving the accuracy of electric dipole moment experiments to better than 10−29 e·cm. In 2023, the Center for Axion and Precision Physics Research utilized a 12 T magnet to search for axions at Dine–Fischler–Srednicki–Zhitnitsky sensitivity, becoming only the second group in the world to do so. The first group is the Axion Dark Matter eXperiment (ADMX), which uses an 8 T magnet.
Honors and awards:
2005: Fellow of the American Physical Society
2003: Brookhaven National Laboratory Science and Technology Award
**TIMM10B**
TIMM10B:
Mitochondrial import inner membrane translocase subunit Tim9 B is an enzyme that in humans is encoded by the FXC1 gene. FXC1, or TIMM10B, belongs to a family of evolutionarily conserved proteins that are organized in heterooligomeric complexes in the mitochondrial intermembrane space. These proteins mediate the import and insertion of hydrophobic membrane proteins into the mitochondrial inner membrane.
**Floating capital**
Floating capital:
Floating capital denotes currency in circulation and assets which can be used for many purposes. It is therefore opposed to "sunk capital", which can be used only for one purpose (for example, a mineshaft). It comprises the materials and components constantly supplied in the effecting of all manufactures; currency used for transactions, wages and salaries; products in transportation, or stored in the prospect of being eventually utilized; and the working, circulating capital, rather than that which is fixed as permanently stationary value.
**Trichlorophenylsilane**
Trichlorophenylsilane:
Trichlorophenylsilane is a compound with formula Si(C6H5)Cl3.
Similarly to other organochlorosilanes, trichlorophenylsilane is a possible precursor to silicones. It hydrolyses in water to give HCl and phenylsilanetriol, with the latter condensing to a polymeric substance.
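The hydrolysis step can be written as a balanced equation (assuming all three Si–Cl bonds are hydrolysed):

```latex
\mathrm{C_6H_5SiCl_3} \;+\; 3\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{C_6H_5Si(OH)_3} \;+\; 3\,\mathrm{HCl}
```

Subsequent condensation of the silanetriol, with loss of water, then builds up the polymeric siloxane network.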
**Rand index**
Rand index:
The Rand index or Rand measure (named after William M. Rand) in statistics, and in particular in data clustering, is a measure of the similarity between two data clusterings. A form of the Rand index may be defined that is adjusted for the chance grouping of elements; this is the adjusted Rand index. The Rand index is the accuracy of determining whether a pair of elements belongs to the same cluster or not.
Definition Given a set of n elements S = {o1, …, on} and two partitions of S to compare, X = {X1, …, Xr}, a partition of S into r subsets, and Y = {Y1, …, Ys}, a partition of S into s subsets, define the following: a, the number of pairs of elements in S that are in the same subset in X and in the same subset in Y; b, the number of pairs of elements in S that are in different subsets in X and in different subsets in Y; c, the number of pairs of elements in S that are in the same subset in X and in different subsets in Y; d, the number of pairs of elements in S that are in different subsets in X and in the same subset in Y. The Rand index, R, is:

R = \frac{a+b}{a+b+c+d} = \frac{a+b}{\binom{n}{2}}

Intuitively, a + b can be considered as the number of agreements between X and Y, and c + d as the number of disagreements between X and Y. Since the denominator is the total number of pairs, the Rand index represents the frequency of occurrence of agreements over the total pairs, or the probability that X and Y will agree on a randomly chosen pair.
\binom{n}{2} is calculated as n(n−1)/2. Similarly, one can also view the Rand index as a measure of the percentage of correct decisions made by the algorithm. It can be computed using the following formula:

RI = \frac{TP + TN}{TP + FP + FN + TN}

where TP is the number of true positives, TN is the number of true negatives, FP is the number of false positives, and FN is the number of false negatives.
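These pairwise counts translate directly into code. The following is an illustrative O(n²) sketch (the function name is our own), where each clustering is given as a list of cluster labels:

```python
from itertools import combinations

def rand_index(labels_x, labels_y):
    """Rand index of two clusterings, each given as a list of cluster labels.

    A pair (i, j) counts as an agreement when X and Y either both put the
    two elements in the same cluster (a) or both put them in different
    clusters (b); R = (a + b) / C(n, 2).
    """
    pairs = list(combinations(range(len(labels_x)), 2))
    agreements = sum(
        (labels_x[i] == labels_x[j]) == (labels_y[i] == labels_y[j])
        for i, j in pairs
    )
    return agreements / len(pairs)
```

For example, rand_index([0, 0, 1, 1], [0, 0, 1, 2]) counts 5 agreeing pairs out of 6, giving 5/6.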
Properties The Rand index has a value between 0 and 1, with 0 indicating that the two data clusterings do not agree on any pair of points and 1 indicating that the data clusterings are exactly the same.
In mathematical terms, a, b, c, d are defined as follows:

a = |S*|, where S* = {(oi, oj) | oi, oj ∈ Xk, oi, oj ∈ Yl}
b = |S*|, where S* = {(oi, oj) | oi ∈ Xk1, oj ∈ Xk2, oi ∈ Yl1, oj ∈ Yl2}
c = |S*|, where S* = {(oi, oj) | oi, oj ∈ Xk, oi ∈ Yl1, oj ∈ Yl2}
d = |S*|, where S* = {(oi, oj) | oi ∈ Xk1, oj ∈ Xk2, oi, oj ∈ Yl}

for some 1 ≤ i, j ≤ n, i ≠ j, 1 ≤ k, k1, k2 ≤ r, k1 ≠ k2, 1 ≤ l, l1, l2 ≤ s, l1 ≠ l2. Relationship with classification accuracy The Rand index can also be viewed through the prism of binary classification accuracy over the pairs of elements in S. The two class labels are "oi and oj are in the same subset in X and Y" and "oi and oj are in different subsets in X and Y".
In that setting, a is the number of pairs correctly labeled as belonging to the same subset (true positives), and b is the number of pairs correctly labeled as belonging to different subsets (true negatives).
Adjusted Rand index:
The adjusted Rand index is the corrected-for-chance version of the Rand index. Such a correction for chance establishes a baseline by using the expected similarity of all pair-wise comparisons between clusterings specified by a random model. Traditionally, the Rand Index was corrected using the Permutation Model for clusterings (the number and size of clusters within a clustering are fixed, and all random clusterings are generated by shuffling the elements between the fixed clusters). However, the premises of the permutation model are frequently violated; in many clustering scenarios, either the number of clusters or the size distribution of those clusters vary drastically. For example, consider that in K-means the number of clusters is fixed by the practitioner, but the sizes of those clusters are inferred from the data. Variations of the adjusted Rand Index account for different models of random clusterings.Though the Rand Index may only yield a value between 0 and +1, the adjusted Rand index can yield negative values if the index is less than the expected index.
The contingency table Given a set S of n elements, and two groupings or partitions (e.g. clusterings) of these elements, namely X = {X1, X2, …, Xr} and Y = {Y1, Y2, …, Ys}, the overlap between X and Y can be summarized in a contingency table [nij] where each entry nij = |Xi ∩ Yj| denotes the number of objects in common between Xi and Yj. The row sums ai = Σj nij and the column sums bj = Σi nij give the sizes of the clusters in X and Y respectively. Definition The original Adjusted Rand Index using the Permutation Model is

ARI = \frac{\sum_{ij} \binom{n_{ij}}{2} - \left[\sum_i \binom{a_i}{2} \sum_j \binom{b_j}{2}\right] / \binom{n}{2}}{\frac{1}{2}\left[\sum_i \binom{a_i}{2} + \sum_j \binom{b_j}{2}\right] - \left[\sum_i \binom{a_i}{2} \sum_j \binom{b_j}{2}\right] / \binom{n}{2}}

where nij, ai, bj are values from the contingency table.
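The formula can be evaluated directly from two label lists by building the contingency table with counters. An illustrative sketch (the function name is our own; requires Python 3.8+ for math.comb):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_x, labels_y):
    """ARI under the permutation model, computed from the contingency table."""
    n = len(labels_x)
    n_ij = Counter(zip(labels_x, labels_y))   # contingency table entries
    a_i = Counter(labels_x)                   # row sums (cluster sizes in X)
    b_j = Counter(labels_y)                   # column sums (cluster sizes in Y)
    index = sum(comb(v, 2) for v in n_ij.values())
    sum_a = sum(comb(v, 2) for v in a_i.values())
    sum_b = sum(comb(v, 2) for v in b_j.values())
    expected = sum_a * sum_b / comb(n, 2)     # expected index under the permutation model
    max_index = (sum_a + sum_b) / 2
    return (index - expected) / (max_index - expected)
```

Identical partitions give 1.0 regardless of label names, and the index can dip below 0 when agreement is worse than chance.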
**Macropore**
Macropore:
In soil, macropores are defined as cavities that are larger than 75 μm. Functionally, pores of this size host preferential soil solution flow and rapid transport of solutes and colloids. Macropores increase the hydraulic conductivity of soil, allowing water to infiltrate and drain quickly, and shallow groundwater to move relatively rapidly via lateral flow. In soil, macropores are created by plant roots, soil cracks, soil fauna, and by aggregation of soil particles into peds.
Macropores may be defined differently in other contexts. Within the context of porous solids (i.e., not porous aggregations such as soil), colloid and surface chemists define macropores as cavities that are larger than 50 nm.
**SLC22A8**
SLC22A8:
Solute carrier family 22 member 8, or organic anion transporter 3 (OAT3), is a protein that in humans is encoded by the SLC22A8 gene.
Function:
OAT3 is involved in the transport and excretion of organic ions some of which are drugs (e.g., penicillin G (benzylpenicillin), methotrexate (MTX), indomethacin (an NSAID), and ciprofloxacin (a fluoroquinolone antibiotic)) and some of which are pure toxicants. SLC22A8 (OAT3) is indirectly dependent on the inward sodium gradient, which is a driving force for reentry of dicarboxylates into the cytosol. Dicarboxylates, such as alpha-ketoglutarate generated within the cell, or recycled from the extracellular space, are used as exchange substrates to fuel the influx of organic anions against their concentration gradient. The encoded protein is an integral membrane protein and appears to be localized to the basolateral membrane of renal proximal tubule cells.
**Laser ablation synthesis in solution**
Laser ablation synthesis in solution:
Laser ablation synthesis in solution (LASiS) is a commonly used method for obtaining colloidal solutions of nanoparticles in a variety of solvents. Nanoparticles (NPs) are useful in chemistry, engineering and biochemistry due to their large surface-to-volume ratio, which gives them unique physical properties. LASiS is considered a "green" method because it avoids toxic chemical precursors in synthesizing nanoparticles. In the LASiS method, nanoparticles are produced by a laser beam hitting a solid target in a liquid; the nanoparticles form during the condensation of the resulting plasma plume. Since the ablation occurs in a liquid, rather than in air, vacuum or gas, the environment allows for plume expansion, cooling and condensation at higher temperature, pressure and density, creating a plume with stronger confinement. These environmental conditions allow for more refined and smaller nanoparticles. LASiS is usually considered a top-down physical approach. LASiS emerged as a reliable alternative to traditional chemical reduction methods for obtaining noble metal nanoparticles (NMNp). LASiS is also used for synthesis of silver nanoparticles (AgNPs), which are known for their antimicrobial effects. Production of AgNPs via LASiS yields nanoparticles with varying antimicrobial characteristics due to the different properties achieved via fine tuning of NP size in liquid ablation.
Pros and Cons:
LASiS has some limitations in the size control of NMNp, which can be overcome by laser treatments of NMNp. Other cons of LASiS include the slow rate of NP production, high consumption of energy, laser equipment cost, and decreased ablation efficiency with longer usage of the laser within a session. Pros of LASiS include minimal waste production, minimal manual operation, and refined size control of nanoparticles.
**Cumulus (software)**
Cumulus (software):
Cumulus is a digital asset management software designed for client/server systems, developed by Canto Software. The product makes use of metadata for indexing, organizing, and searching.
History:
Cumulus was first released as a Macintosh application in 1992, and was named by Apple Computer as the "Most Innovative Product of 1992". Cumulus introduced search capabilities beyond those available on the Macintosh at the time, particularly relating to thumbnails. Cumulus 1.0 was a single-user product with no network capabilities. Among the main features of Cumulus 1.0, the search function automatically generated previews, and the product included support for peer-to-peer networking over AppleTalk.
Cumulus 2.5 was available in five different languages and received the 1993 MacUser magazine Eddy award for "Best Publishing & Graphics Utility". In 1995, Canto introduced the scanner software "Cirrus" to focus on the development of Cumulus.
Cumulus 3, released in 1996, introduced a server version for the first time and made it possible to distribute files over the Internet via the "Web Publisher". Since Apple offered Cumulus 3 as a bundle with its "Workgroup Server", Cumulus became one of the leading digital asset management systems. Cumulus 4 was the first version that was network-ready, and was available for Macintosh, Windows and UNIX operating systems, allowing for cross-platform file sharing. It was released in 1998; support for Solaris was discontinued later.
Cumulus 5 modified the software core to use an open architecture providing an API to external systems and databases. The open architecture of Cumulus 5 also enabled a more functional bridge between Cumulus and the Internet.
Cumulus 6 introduced the Embedded Java Plugin (EJP), which allowed system integrators to build custom Java plug-ins in order to extend the functionality of the Cumulus client. Cumulus 6.5 marked the end of the Cumulus Single User Edition product, which was licensed to MediaDex for further development and distribution.
Cumulus 7 was introduced in the summer of 2006. Cumulus 8 was released in June 2009, with new indexing capabilities taking advantage of multicore/multiprocessor systems, and the ability to manage a wider variety of file formats. Cumulus 8.5 was released in May 2011. Support was added for multilingual metadata, sometimes referred to as "World Metadata." Cumulus Sites was updated to support metadata editing and file uploads. Cumulus 8.6 was released in July 2012, and contains an updated user interface for the administration of Cumulus Sites and additional features for web-based administration of Cumulus. Other additions include features for collaboration links, multi-language support and automated version control.
Cumulus 9 was released in September 2013 and introduced a new Web Client User Interface and the Cumulus Video Cloud. The Cumulus Web Client UI was redesigned to provide users with a modern, easy-to-use interface to support and guide the user while addressing modern business needs. The Cumulus Video Cloud extends the Cumulus video handling capabilities to add conversion and global streaming. Cumulus 9 also saw the addition of upload collection links which allow external collaborators to drag and drop files directly into Cumulus without needing a Cumulus account.
Cumulus 9.1 was released in May 2014 and introduced the Adobe Drive Adapter for Cumulus which allows users to browse and search digital assets in Cumulus directly from Adobe work environments such as Photoshop, InDesign, Illustrator, Premiere and other Adobe applications.
Cumulus 10 (Cumulus X) was released July 2015 and introduced two mobile-friendly products: the Cumulus app and Portals. The Cumulus app on iOS was designed to allow users to collaborate either on an iPhone or iPad. Portals is the read-only version of the Cumulus Web Client where users can work with assets that admins allow. Cumulus 10.1 was introduced in January 2016 and included the InDesign Client integration where users can work with Adobe InDesign while accessing their assets from Cumulus. Cumulus 10.2 was introduced in September 2016 and brought the Media Delivery Cloud using Amazon Web Services (AWS). It allows users to manage their media renditions in a single source and distribute media files globally across different channels and devices. Cumulus 10.2.3 was released in February 2017 and came with a "crop and customize photos" feature for Portals and the Web Client.
Product overview:
When a file is cataloged via upload into the archive, Cumulus extracts as much information about the file as possible from its metadata. For image or photo files, this is typically Exif and IPTC data. The metadata is mainly used to search the archive. The use of embargo data supports license management for copyrighted material.
The managed files can be cataloged and their usage conditions can be set. The indexing is based on a predefined taxonomy, which is governed by the internal rules of the organization or by industry standards. You can specify whether files can only be used for specific purposes or only by certain groups of people. The production management system includes version management for files. Via the publication function, the files can be distributed directly via links or e-mails. It is also possible to access the system from the outside via the Cumulus Portals web interface, which allows read access to released content from the catalog.
There are different variants, starting with the "Workgroup archive server" up to the "Enterprise Business Server" for large companies. Both server and client are extensible through a Java-based plug-in architecture. Since version 7.0, there is a web application based on Ajax with a separate user interface. For access to the Cumulus catalog on mobile, there has been an application for Apple devices based on iOS since 2010.
Miscellaneous:
In 2015, Cumulus developer Canto established the first Canto digital asset management (DAM) event. The event is held annually in Berlin.
The Henry Stewart team has been hosting DAM conferences since 2006.
**Backscattering cross section**
Backscattering cross section:
Backscattering cross section is a property of an object that determines what proportion of incident wave energy is scattered from the object, back in the direction of the incident wave. It is defined as the area which intercepts an amount of power in the incident beam which, if radiated isotropically, would yield a reflected signal strength at the transmitter of the same magnitude as the actual object produces.
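The definition can be rearranged into a one-line calculation: if a target intercepts power σ·S_i from an incident power density S_i and re-radiates it isotropically, the power density back at the transmitter at range R is σ·S_i/(4πR²), so σ = 4πR²·S_r/S_i. A minimal sketch (the function name and unit choices are our own; power densities in W/m², range in m):

```python
import math

def backscatter_cross_section(incident_density, returned_density, range_m):
    """Backscattering cross section (m^2) from the defining relation:
    returned_density = incident_density * sigma / (4 * pi * range_m**2)."""
    return 4 * math.pi * range_m**2 * returned_density / incident_density
```

By construction, an object returning a power density of S_i/(4πR²) at the transmitter has a cross section of exactly 1 m², and the cross section scales linearly with the returned power density.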
**White elephant**
White elephant:
A white elephant is a possession that its owner cannot dispose of, and whose cost, particularly that of maintenance, is out of proportion to its usefulness. In modern usage, it is a metaphor used to describe an object, construction project, scheme, business venture, facility, etc. considered expensive but without equivalent utility or value relative to its capital (acquisition) and/or operational (maintenance) costs.
Background:
The term derives from the sacred white elephants kept by Southeast Asian monarchs in Burma, Thailand (Siam), Laos and Cambodia. To possess a white elephant was regarded—and is still regarded in Thailand and Burma—as a sign that the monarch reigned with justice and power, and that the kingdom was blessed with peace and prosperity. The opulence expected of anyone who owned a beast of such stature was great. Monarchs often exemplified their possession of white elephants in their formal titles (e.g., Hsinbyushin, lit. 'Lord of the White Elephant' and the third monarch of the Konbaung dynasty). Because the animals were considered sacred and laws protected them from labor, receiving a gift of a white elephant from a monarch was simultaneously a blessing and a curse. It was a blessing because the animal was sacred and a sign of the monarch's favour, and a curse because the recipient now had an expensive-to-maintain animal he could not give away and could not put to much practical use.
In the West, the term "white elephant", relating to an expensive burden that fails to meet expectations, was first used in the 17th century and became widespread in the 19th century. According to one source it was popularized following P. T. Barnum's experience with an elephant named Toung Taloung that he billed as the "Sacred White Elephant of Burma". After much effort and great expense, Barnum finally acquired the animal from the King of Siam only to discover that his "white elephant" was actually dirty grey in color with a few pink spots. The expressions "white elephant" and "gift of a white elephant" came into common use in the middle of the nineteenth century. The phrase was attached to "white elephant swaps" and "white elephant sales" in the early twentieth century. Many church bazaars held "white elephant sales" where donors could unload unwanted bric-à-brac, generating profit from the phenomenon that "one man's trash is another man's treasure", and the term has continued to be used in this context. In modern usage, the term now often refers in addition to an extremely expensive building project that fails to deliver on its function or becomes very costly to maintain. Examples include prestigious but uneconomic infrastructure projects such as airports, dams, bridges, shopping malls and football stadiums. The American Oakland Athletics baseball team has used a white elephant as a symbol and usually its main or alternate logo since 1902, originally in sarcastic defiance of John McGraw's 1902 characterization of the new team as a "white elephant". The Al Maktoum International Airport on the outskirts of Dubai has also been named a white elephant.
Rail-related examples of white elephants include the Yurikamome at Odaiba in Japan, which it was feared would end up as a multibillion-yen white elephant, and the Buangkok MRT station on the North East Line in Singapore, where paper cutouts of white elephants were placed next to the completed but unopened station in 2005 in protest at its non-opening. The station eventually opened the following year. White Elephant is also the name of a former Polish astronomical observatory built in the Carpathian Mountains in 1938 (now in Ukraine).
The term has also been applied to outdated or underperforming military projects like the U.S. Navy's Alaska-class cruiser. In Austria, the term "white elephant" means workers who have little or no use, but cannot be dismissed. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Evolutionary taxonomy**
Evolutionary taxonomy:
Evolutionary taxonomy, evolutionary systematics or Darwinian classification is a branch of biological classification that seeks to classify organisms using a combination of phylogenetic relationship (shared descent), progenitor-descendant relationship (serial descent), and degree of evolutionary change. This type of taxonomy may consider whole taxa rather than single species, so that groups of species can be inferred as giving rise to new groups. The concept found its most well-known form in the modern evolutionary synthesis of the early 1940s.
Evolutionary taxonomy differs from strict pre-Darwinian Linnaean taxonomy (producing orderly lists only), in that it builds evolutionary trees. While in phylogenetic nomenclature each taxon must consist of a single ancestral node and all its descendants, evolutionary taxonomy allows for groups to be excluded from their parent taxa (e.g. dinosaurs are not considered to include birds, but to have given rise to them), thus permitting paraphyletic taxa.
Origin of evolutionary taxonomy:
Evolutionary taxonomy arose as a result of the influence of the theory of evolution on Linnaean taxonomy. The idea of translating Linnaean taxonomy into a sort of dendrogram of the Animal and Plant Kingdoms was formulated toward the end of the 18th century, well before Charles Darwin's book On the Origin of Species was published. The first to suggest that organisms had common descent was Pierre-Louis Moreau de Maupertuis in his 1751 Essai de Cosmologie. Transmutation of species entered wider scientific circles with Erasmus Darwin's 1796 Zoönomia and Jean-Baptiste Lamarck's 1809 Philosophie Zoologique. The idea was popularised in the English-speaking world by the speculative but widely read Vestiges of the Natural History of Creation, published anonymously by Robert Chambers in 1844. Following the appearance of On the Origin of Species, Tree of Life representations became popular in scientific works. In On the Origin of Species, the ancestor remained largely a hypothetical species; Darwin was primarily occupied with showing the principle, carefully refraining from speculating on relationships between living or fossil organisms and using theoretical examples only. In contrast, Chambers had proposed specific hypotheses, the evolution of placental mammals from marsupials, for example. Following Darwin's publication, Thomas Henry Huxley used the fossils of Archaeopteryx and Hesperornis to argue that birds are descendants of the dinosaurs. Thus, a group of extant animals could be tied to a fossil group. The resulting description, that of dinosaurs "giving rise to" or being "the ancestors of" birds, exhibits the essential hallmark of evolutionary taxonomic thinking.
Origin of evolutionary taxonomy:
The past three decades have seen a dramatic increase in the use of DNA sequences for reconstructing phylogeny and a parallel shift in emphasis from evolutionary taxonomy towards Hennig's 'phylogenetic systematics'. Today, with the advent of modern genomics, scientists in every branch of biology make use of molecular phylogeny to guide their research. One common method is multiple sequence alignment.
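Multiple sequence alignment tools rest on pairwise alignment at their core. Purely as an illustration (this is not the code of any particular phylogenetics package, and the scoring constants are arbitrary choices for the sketch), a minimal Needleman-Wunsch global aligner can be written in a few lines:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Globally align two sequences by dynamic programming.
    Returns the two gapped strings and the alignment score."""
    n, m = len(a), len(b)
    # Score table: score[i][j] = best score aligning a[:i] with b[:j].
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    # Trace back from the bottom-right corner to recover one optimal alignment.
    out_a, out_b = [], []
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i-1][j-1] + (
                match if a[i-1] == b[j-1] else mismatch):
            out_a.append(a[i-1]); out_b.append(b[j-1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i-1][j] + gap:
            out_a.append(a[i-1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j-1]); j -= 1
    return ''.join(reversed(out_a)), ''.join(reversed(out_b)), score[n][m]
```

Production phylogenetics uses far more elaborate scoring models and heuristics for many sequences, but the dynamic-programming idea is the same.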
Cavalier-Smith, G. G. Simpson and Ernst Mayr are some representative evolutionary taxonomists.
New methods in modern evolutionary systematics:
Efforts in combining modern methods of cladistics, phylogenetics, and DNA analysis with classical views of taxonomy have recently appeared. Certain authors have found that phylogenetic analysis is scientifically acceptable as long as paraphyly, at least for certain groups, is allowable. Such a stance is promoted in papers by Tod F. Stuessy and others. A particularly strict form of evolutionary systematics has been presented by Richard H. Zander in a number of papers, and summarized in his "Framework for Post-Phylogenetic Systematics". Briefly, Zander's pluralistic systematics is based on the incompleteness of each of the theories: a method that cannot falsify a hypothesis is as unscientific as a hypothesis that cannot be falsified. Cladistics generates only trees of shared ancestry, not serial ancestry; taxa evolving seriatim cannot be dealt with by analyzing shared ancestry with cladistic methods, and hypotheses such as adaptive radiation from a single ancestral taxon cannot be falsified with cladistics. Cladistics offers a way to cluster by trait transformations, but no evolutionary tree can be entirely dichotomous. Phylogenetics posits shared ancestral taxa as causal agents for dichotomies, yet there is no evidence for the existence of such taxa. Molecular systematics uses DNA sequence data for tracking evolutionary changes; thus paraphyly and sometimes phylogenetic polyphyly signal ancestor-descendant transformations at the taxon level, but otherwise molecular phylogenetics makes no provision for extinct paraphyly. Additional transformational analysis is needed to infer serial descent.
New methods in modern evolutionary systematics:
The Besseyan cactus or commagram is the best evolutionary tree for showing both shared and serial ancestry. First, a cladogram or natural key is generated. Generalized ancestral taxa are identified and specialized descendant taxa are noted as coming off the lineage with a line of one color representing the progenitor through time. A Besseyan cactus or commagram is then devised that represents both shared and serial ancestry. Progenitor taxa may have one or more descendant taxa. Support measures in terms of Bayes factors may be given, following Zander's method of transformational analysis using decibans.
New methods in modern evolutionary systematics:
Cladistic analysis groups taxa by shared traits but incorporates a dichotomous branching model borrowed from phenetics. It is essentially a simplified dichotomous natural key, although reversals are tolerated. The problem, of course, is that evolution is not necessarily dichotomous. An ancestral taxon generating two or more descendants requires a longer, less parsimonious tree. A cladogram node summarizes all traits distal to it, not of any one taxon, and continuity in a cladogram is from node to node, not taxon to taxon. This is not a model of evolution, but a variant of hierarchical cluster analysis (trait changes and non-ultrametric branches). This is why a tree based solely on shared traits is not called an evolutionary tree but merely a cladistic tree. This tree reflects to a large extent evolutionary relationships through trait transformations but ignores relationships made by species-level transformation of extant taxa.
New methods in modern evolutionary systematics:
Phylogenetics attempts to inject a serial element by postulating ad hoc, undemonstrable shared ancestors at each node of a cladistic tree. For a fully dichotomous cladogram, there is one fewer invisible shared ancestor than there are terminal taxa. We get, then, in effect a dichotomous natural key with an invisible shared ancestor generating each couplet. This cannot imply a process-based explanation without justification of the dichotomy, and supposition of the shared ancestors as causes. The cladistic form of analysis of evolutionary relationships cannot falsify any genuine evolutionary scenario incorporating serial transformation, according to Zander. Zander has detailed methods for generating support measures for molecular serial descent and for morphological serial descent using Bayes factors and sequential Bayes analysis through Turing deciban or Shannon informational bit addition.
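The count of hypothetical ancestors follows from the structure of a full binary tree: a fully dichotomous cladogram with n terminal taxa always has n − 1 internal nodes. A small sketch, using an invented nested-tuple encoding of a cladogram, verifies this:

```python
def count_nodes(tree):
    """Count (internal, leaf) nodes of a fully dichotomous tree encoded
    as nested 2-tuples, with strings standing for terminal taxa."""
    if isinstance(tree, str):      # a terminal taxon
        return (0, 1)
    left, right = tree             # every internal node has exactly two children
    li, ll = count_nodes(left)
    ri, rl = count_nodes(right)
    return (li + ri + 1, ll + rl)

# ((A,B),(C,(D,E))): five terminal taxa imply four hypothetical shared ancestors.
internal, leaves = count_nodes((("A", "B"), ("C", ("D", "E"))))
```

The invariant internal == leaves − 1 holds for any fully dichotomous topology, which is exactly the "one fewer invisible shared ancestor" observation above.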
The Tree of Life:
As more and more fossil groups were found and recognized in the late 19th and early 20th centuries, palaeontologists worked to understand the history of animals through the ages by linking together known groups. The Tree of Life was slowly being mapped out, with fossil groups taking up their positions in the tree as understanding increased.
The Tree of Life:
These groups still retained their formal Linnaean taxonomic ranks. Some of them are paraphyletic in that, although every organism in the group is linked to a common ancestor by an unbroken chain of intermediate ancestors within the group, some other descendants of that ancestor lie outside the group. The evolution and distribution of the various taxa through time is commonly shown as a spindle diagram (often called a Romerogram, after the American palaeontologist Alfred Romer), where various spindles branch off from each other, with each spindle representing a taxon. The width of the spindles is meant to indicate abundance (often the number of families) plotted against time. Vertebrate palaeontology had fairly well mapped out the evolutionary sequence of vertebrates as currently understood by the close of the 19th century, followed by a reasonable understanding of the evolutionary sequence of the plant kingdom by the early 20th century. The tying together of the various trees into a grand Tree of Life only really became possible with advancements in microbiology and biochemistry in the period between the World Wars.
Terminological difference:
The two approaches, evolutionary taxonomy and the phylogenetic systematics derived from Willi Hennig, differ in the use of the word "monophyletic". For evolutionary systematists, "monophyletic" means only that a group is derived from a single common ancestor. In phylogenetic nomenclature, there is an added caveat that the ancestral species and all descendants should be included in the group. The term "holophyletic" has been proposed for the latter meaning. As an example, amphibians are monophyletic under evolutionary taxonomy, since they have arisen from fishes only once. Under phylogenetic taxonomy, amphibians do not constitute a monophyletic group because the amniotes (reptiles, birds and mammals) have evolved from an amphibian ancestor and yet are not considered amphibians. Such paraphyletic groups are rejected in phylogenetic nomenclature, but are considered a signal of serial descent by evolutionary taxonomists.
**Branded content**
Branded content:
In marketing, branded content (also known as branded entertainment) is content produced by an advertiser or content whose creation was funded by an advertiser. In contrast to content marketing (in which content is presented first and foremost as a marketing ploy for a brand) and product placement (where advertisers pay to have references to their brands incorporated into outside creative works, such as films and television series), branded content is designed to build awareness for a brand by associating it with content that shares its values. The content does not necessarily need to be a promotion for the brand, although it may still include product placement.
Branded content:
Unlike conventional forms of editorial content, branded content is generally funded entirely by a brand or corporation rather than a studio or a group of solely artistic producers. Examples of branded content have appeared in television, film, online content, video games, events, and other installations. Modern branded marketing strategies are intended primarily to counter market trends, such as the decreasing acceptance of traditional commercials or low-quality advertorials.
History:
Early examples
The concept of branded content dates back to the early era of broadcasting; many early radio and television programs were controlled by their sponsors and branded with their names, including the Colgate Comedy Hour, Hallmark Hall of Fame, and Westinghouse Studio One. Typically, the sponsor coordinated the entire production of the program, with the broadcaster only providing studios and airtime. These programs featured segments that promoted the sponsor's products, typically featuring the brand's spokesperson and demonstrations of new products. Notable spokespeople often became celebrities in their own right, such as Betty Furness, a B-movie actress whose fame was elevated after becoming a spokesperson for Westinghouse appliances on Studio One (Furness would later work as a consumer affairs reporter for WNBC-TV in New York City). Many melodramatic serial dramas targeting women, such as As the World Turns, were produced by the consumer goods company Procter & Gamble; in reference to its products, the genre as a whole came to be dubbed the "soap opera". The Revlon cosmetics company gained significant prominence after sponsoring the quiz show The $64,000 Question—which was, for a time, the most-watched program on U.S. television. In 1956, the Ford Motor Company's new marque Edsel sponsored a CBS variety special, The Edsel Show, which starred Bing Crosby, Frank Sinatra, and Bob Hope. The special was a critical success and widely viewed, but its success did not transfer to Edsel itself, which was a high-profile commercial failure. By request of Crosby, the special was credited as a production of his alma mater Gonzaga University, with its revenues helping to fund the construction of a new campus library. In the late 1950s, the quiz show scandals exposed that several major television game shows had been manipulated, or outright rigged at the demand of their sponsors, in order to maintain viewer interest and ratings.
Dotto and Twenty One were at the center of the scandal, with both shows having been accused of presenting matches with pre-determined outcomes as if they were legitimate. Testimony by a producer of The $64,000 Question revealed that Revlon founder Charles Revson had personally exerted control over the program in order to favor specific contestants. The aftermath of the scandals, as well as increasing production costs due to factors such as the rollout of color television, prompted networks to begin asserting creative control over the production and scheduling of their programming. Broadcasters also phased out the "single sponsor" model in favor of having sponsors purchase blocks of time during breaks in a program to run commercials instead. Conventional product placement and cross-promotion still appeared in films and television, but it was often argued that overuse of placements can distract from the entertainment value of the work. The film Mac and Me was widely criticized for containing extensive placements of Coca-Cola and McDonald's as major plot elements (going as far as crediting the chain's mascot Ronald McDonald as appearing in the film "as himself"). Hallmark Hall of Fame still occasionally aired on broadcast TV until 2014, when it was announced that the franchise would move to Hallmark's co-owned cable channel Hallmark Channel in the future.
History:
Modern examples
After releasing its hockey-themed film The Mighty Ducks, Disney established a National Hockey League expansion team known as the Mighty Ducks of Anaheim, which was named in reference to the film. Disney subsequently produced two Mighty Ducks film sequels, and an animated series inspired by the team and set in a fictional version of Anaheim. The films and cartoon series also featured cameos by Mighty Ducks players. These works bolstered the Mighty Ducks' brand, and created synergies between the team and Disney's core entertainment business. The NHL felt that the Mighty Ducks cartoon could help to promote the game of hockey among a younger audience, and counter the stereotype of hockey being associated with Canada and the U.S. northeast. The team's merchandise, which was sold at Disney Parks and Disney Store locations in addition to the NHL's main retail channels, was the best-selling among all teams for a period. In 2001, automaker BMW began a marketing campaign entitled The Hire, in which it produced a series of short films that prominently featured its vehicles, staffed by prominent directors (such as Guy Ritchie) and talent. The films were advertised through television, print, and online marketing which directed viewers to a BMW Films website, where they could stream the films and access ancillary information, such as details about their featured vehicles. BMW also distributed the films on DVD with Vanity Fair magazine to increase their distribution among the company's target audience. By the end of the campaign in 2005, the eight-film series had amassed over 100 million views, and several of the films had received both advertising-related and short film awards. In 2010, Procter & Gamble and Walmart began to fund a series of made-for-TV films, distributed through the former's Procter & Gamble Productions division, such as The Jensen Project and Secrets of the Mountain.
They were all targeted towards family viewing, aired primarily on NBC as time-buys, and featured product placement for P&G brands and Walmart's store brand Great Value. In turn, Walmart erected promotional displays of P&G products related to each film, and sold the films on DVD immediately after their broadcast. Both companies used exclusive advertising time during the films to promote their products. P&G reported that the favorability of the products featured in Secrets of the Mountain increased by 26% among mothers who saw the film. Advertising Age felt that despite lukewarm reception and viewership, "as case studies for successful branded entertainment, they've become the holy grail of how networks and marketers can use entertainment to achieve scalable audiences, measurable product sales and active fan communities." The Canadian beer brand Kokanee (owned by Anheuser-Busch InBev) partnered with its agency Grip and Alliance Films to produce The Movie Out Here, a feature-length comedy film set in the brand's home province of British Columbia. The film was released in April 2013, after being featured at the 2012 Whistler Film Festival. Kokanee beer, along with characters from its past advertising campaigns, makes appearances in the film, and an accompanying campaign allowed bars in Western Canada to compete to be a filming location, and users to vote on the film's soundtrack and have a chance to be listed as a "fan" in the credits. Grip's creative director Randy Stein stated that viewers had become more accepting of branded content, and that there would be a larger focus on the emotional aspects of Kokanee as a brand as opposed to the number of placements. In 2018, Pepsi similarly backed the comedy film Uncle Drew, a feature adapted from a character from a Pepsi Max ad campaign. The energy drink company Red Bull has relied heavily on branded content as part of its marketing strategies.
The company operates several Media House studios, which coordinate the production and distribution of original content targeted towards the interests of young adults—particularly music and extreme sports. Alongside digital media content such as online video (via platforms such as Red Bull TV) and print media such as The Red Bulletin, Red Bull has also organized events and sports competitions which carry its name, such as the Red Bull Air Race World Championship, Crashed Ice, and Flugtag competitions, music festivals and events, and a skydive from the Earth's stratosphere by Felix Baumgartner. These ventures are consistent with the company's image, bolster Red Bull as a lifestyle brand in these categories, and build awareness of Red Bull without necessarily promoting the product itself. An executive for Red Bull Media House North America remarked that the growth of digital media platforms had made it easier for brands to produce and distribute their own content, and stressed that branded content was most effective when it is "authentic" and high-quality. In 2019, the housing rentals service Airbnb premiered a self-produced documentary, Gay Chorus Deep South, at the Tribeca Film Festival, which documented a 2017 tour of the Southeastern United States by the San Francisco Gay Men's Chorus. The company's head of creative James Goode stated that the film was consistent with the company's values of "telling stories of belonging and acceptance", and its involvement in and support of the LGBT community. Goode did not consider the film to be branded content, stating that it was an effort to "support the chorus and make the highest-quality piece of content we could." Some branded content efforts have not been as successful. The association football (soccer) sanctioning body FIFA financed the 2014 film United Passions, a dramatization of the organization's history.
The film was released to negative reviews that focused primarily on its poor writing and self-serving nature, with many considering it one of the worst films of all time. The film's North American release also coincided with the indictment of FIFA officials by U.S. federal prosecutors on charges of corruption, leading critics to point out the irony in its depiction of FIFA president Sepp Blatter. The film took in only $918 at the U.S. box office, making it the lowest-grossing film of all time.
Research and issues:
In 2003, the Branded Content Marketing Association was formed in order to promote branded content to a wider, international audience. In January 2008, the BCMA conducted a study intending to analyze the efficacy of branded content compared to traditional advertising. Reportedly, over one-third of people were skeptical about traditional ads, and only one-tenth trusted the companies producing such adverts. The study concluded that "in the overwhelming majority of cases consumers preferred the more innovative approach compared with traditional advertising". Over 95% of the time, web sites that featured branded content were more successful than web sites featuring typical advertisements, and were 24% more effective at increasing the purchase intent of viewers. Branded content is most effective in the 18-34 age group, who tend to react with more positive opinions and to be more responsive overall to branded sites. Online Publishers Association's President Pam Horan concluded, "In nearly every category measured, ad effectiveness scores on branded content sites were numerically higher than on the web in general, on portals or on ad networks." These positive results, however, having come from an organization which endeavors to promote the marketing practice, are subject to criticisms of bias.
Award community:
The Webby and Lovie Awards, among others, had recognized branded content as a category in prior instances, but awards within the advertising community began more broadly to include branded content in 2012, when "Branded Content/Entertainment" became a category at EuroBest, Dubai Lynx, Spikes Asia and the Cannes Lions International Festival of Creativity.
**Conrad discontinuity**
Conrad discontinuity:
The Conrad discontinuity corresponds to the sub-horizontal boundary in the continental crust at which the seismic wave velocity increases in a discontinuous way. This boundary is observed in various continental regions at a depth of 15 to 20 km, but it is not found in oceanic regions.
Conrad discontinuity:
The Conrad discontinuity (named after the seismologist Victor Conrad) is considered to be the border between the upper and lower continental crust. It is not as pronounced as the Mohorovičić discontinuity, and is absent in some continental regions. Until the mid-20th century, the upper crust in continental regions was seen to consist of felsic rocks such as granite (sial, for silica-aluminium), and the lower one of more magnesium-rich mafic rocks like basalt (sima, for silica-magnesium). Therefore, the seismologists of that time considered that the Conrad discontinuity should correspond to a sharply defined contact between two chemically distinct layers, sial and sima. However, from the 1960s onward this theory was strongly contested among geologists. The exact geological significance of the Conrad discontinuity is still not clarified. The possibility that it represents the transition from amphibolite facies to granulite facies metamorphism has been given some support from observations of the uplifted central part of the Vredefort impact structure and the surrounding Kaapvaal Craton.
**Dastgird**
Dastgird:
Dastgird or Dastgerd or Dastjerd or Dastjird (Persian: دستگرد), also rendered as Dast-i-Jird or Dasteh Jerd or Dashtgerd or Dashtgird, may refer to:
Chaharmahal and Bakhtiari Province:
Dastgerd, Borujen, a village in Borujen County
Dastgerd, Kiar, a village in Kiar County
Dastgerd Rural District (Chaharmahal and Bakhtiari Province), in Kiar County
Dastgerd, Lordegan, a village in Lordegan County
East Azerbaijan Province:
Dastjerd, Azarshahr, a village in Azarshahr County
Dastjerd, Meyaneh, a village in Meyaneh County
Dastjerd, Varzaqan, a village in Varzaqan County
Dastjerd Rural District (East Azerbaijan Province), in Azarshahr County
Fars Province:
Dastjerd, Estahban, a village in Estahban County
Dastjerd, Khorrambid, a village in Khorrambid County
Hamadan Province:
Dastjerd, Bahar, a village in Bahar County
Dastjerd, Kabudarahang, a village in Kabudarahang County
Hormozgan Province:
Dastgerd-e Dargaz, a village in Bashagard County
Dastgerd-e Nagerd, a village in Bashagard County
Isfahan Province:
Dastjerd, Ardestan, a village in Ardestan County
Dastgerd, a city in Borkhar County
Dastgerd, Fereydunshahr, a village in Fereydunshahr County
Dastjerd, Isfahan, a village in Isfahan County
Dastgerd, Isfahan, a village in Isfahan County
Dastgerd, alternate name of Dastgerdu, a village in Isfahan County
Dastgerd-e Mar, a village in Isfahan County
Dastgerd, Mobarakeh, a village in Mobarakeh County
Dastjerd, Natanz, a village in Natanz County
Kerman Province:
Dastjerd, Bardsir, a village in Bardsir County
Dastjerd, Kerman, a village in Kerman County
Dastjerd, Ravar, a village in Ravar County
Kermanshah Province:
Dastjerd-e Olya, Kermanshah, a village in Sahneh County
Dastjerd-e Sofla, Kermanshah, a village in Sahneh County
Kohgiluyeh and Boyer-Ahmad Province:
Dastgerd, Kohgiluyeh, a village in Kohgiluyeh County
Dastgerd, Charusa, a village in Kohgiluyeh County
Lorestan Province:
Dastgerd, Besharat, a village in Besharat District, Aligudarz County
Dastgerd, Zaz va Mahru, a village in Zaz va Mahru District, Aligudarz County
Markazi Province:
Dastjerd, Ashtian, a village in Ashtian County
Dastjerd, Shazand, a village in Shazand County
North Khorasan Province:
Dastjerd, North Khorasan, a village in North Khorasan Province, Iran
Qazvin Province:
Dastjerd, Qazvin, a village in Qazvin Province, Iran
Dastjerd-e Olya, a village in Qazvin Province, Iran
Dastjerd-e Sofla, Qazvin, a village in Qazvin Province, Iran
Dastjerd Rural District (Qazvin Province), in Qazvin County, Qazvin Province, Iran
Qom Province:
Dastjerd, a city in Qom County, Qom Province, Iran
Dastgerd, Qom, a village in Qom County, Qom Province, Iran
Dastjerd Rural District (Qom Province), in Qom County, Qom Province, Iran
Razavi Khorasan Province:
Dastgerd, Razavi Khorasan, a village in Chenaran County
Dastjerd, Firuzeh, a village in Firuzeh County
Dastjerd-e Aqa Bozorg, a village in Mashhad County
Dastjerd, Rashtkhvar, a village in Rashtkhvar County
Semnan Province:
Dastjerd, Semnan, a village in Shahrud County
South Khorasan Province:
Dastgerd, Birjand, a village in Birjand County
Dastgerd, Darmian, a village in Darmian County
Dastgerd, Khusf, a village in Khusf County
Dastjerd, Qaen, a village in Qaen County
Dastgerd, Sarbisheh, a village in Sarbisheh County
Dastgerd, Mud, a village in Sarbisheh County
West Azerbaijan Province:
Dastjerd, West Azerbaijan, a village in Urmia County
Dastjerd-e Abbasabad, a village in Urmia County
Yazd Province:
Dastjerd, Yazd, a village in Taft County
**Mixed media**
Mixed media:
In visual art, mixed media describes artwork in which more than one medium or material has been employed.
Assemblages, collages, and sculpture are three common examples of art using different media. Materials used to create mixed media art include, but are not limited to, paint, cloth, paper, wood and found objects. Mixed media art is distinguished from multimedia art, which combines visual art with non-visual elements such as recorded sound, literature, drama, dance, motion graphics, music, or interactivity.
History of mixed media:
The first modern artwork to be considered mixed media is Pablo Picasso's 1912 collage Still Life with Chair Caning, which used paper, cloth, paint and rope to create a pseudo-3D effect. The influence of movements like Cubism and Dada contributed to mixed media's growth in popularity throughout the 20th century, with artists like Henri Matisse, Joseph Cornell, Jean Dubuffet, and Ellsworth Kelly adopting it. This led to further innovations, like installations, in the late 20th century. Mixed media continues to be a popular form for artists, with different forms like wet media and markings being explored.
Types of mixed media art:
Mixed media art can be differentiated into distinct types, some of which are:
Collage: an art form which involves combining different materials, such as ribbons, newspaper clippings and photographs, to create a new whole. While it was a sporadic practice in antiquity, it became a fundamental part of modern art in the early 20th century, due to the efforts of Braque and Picasso.
Assemblage: a 3-dimensional variant of the collage, with elements jutting in or out of a defined substrate, or an entirely 3-D arrangement of objects and/or sculptures.
Found object art: objects that are found by artists and incorporated into artworks because of their perceived artistic value. It was popularized by the conceptual artist Marcel Duchamp.
Altered books: a specific form in which the artist reuses a book by modifying or altering it physically for use in the work. This can involve physically cutting and pasting pages to change the contents of the book, or using the materials of the book as contents for an art piece.
Wet and dry media: wet media consists of materials such as paints and inks that use some sort of liquidity in their usage or composition; dry materials (such as pencils, charcoal, and crayons) lack this inherent liquidity. Using wet and dry media in conjunction is considered mixed media for its combination of inherently differing media to create a finalized piece.
Expansion is a mixed media sculpture by Paige Bradley which combined bronze and electricity. The Expansion sculpture is thought to be the first bronze sculpture to be illuminated from within.
Examples of mixed media artwork:
Still Life with Chair Caning: Picasso's piece depicts what can be seen as a table with a cut lemon, a knife, a napkin and a newspaper among other discernible objects. It is elliptical (with speculation that the work itself could be depicting a porthole) and uses a piece of rope to form its edge. Paper and cloth are used for the objects present on the table.
Angel of Anarchy: Eileen Agar's 1937 sculpture is a modified bust of Joseph Bard, which was covered by paper and fur. When this was lost, she made a 1940 variation which shrouded and blinded the figure with feathers, beads and cloth, creating an entirely different perspective on the sculpture.
**Synesthesia Mandala Drums**
Synesthesia Mandala Drums:
The Synesthesia Mandala Drum is a patented electronic drum pad developed by Vince DeFranco and drummer Danny Carey from Tool. It has 128 strike position detection rings from its center to its edge, along with 127 levels of velocity sensitivity. In its current iteration, mk2.9, both values are transmitted via USB MIDI to a computer, where they can be interpreted by any MIDI software. The Mandala also includes its own "Virtual Brain" software. The current USB/software system replaces a hardware brain that the version 1.0 Mandala system had employed. The pad can be struck with drum sticks or fingers and hands.
Technology:
The Mandala pad represents patented membrane sensor technology that was developed over several years by Vince DeFranco and was released to the public in May 2006. Because a Mandala surface is divided into 128 position rings and can detect 127 strike velocities (greater than 0), it can produce up to 16,256 (128 × 127) individual triggers. For practical playing, the pad can be concentrically divided into as few as one or as many as six playing zones via its Virtual Brain software. The individual zones, position rings, and velocity levels can be used to trigger different instruments, effects parameters, or volume changes.
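As a rough sketch of what concentric zoning amounts to (the zone boundaries and function name below are illustrative inventions, not the Virtual Brain's actual configuration format):

```python
def ring_to_zone(ring, boundaries):
    """Map one of the 128 concentric position rings (0 = center,
    127 = edge) to a playing-zone index. `boundaries` lists the first
    ring of each zone after the innermost one, so it defines
    len(boundaries) + 1 zones (the software allows one to six)."""
    if not 0 <= ring <= 127:
        raise ValueError("ring must be a 7-bit MIDI value (0-127)")
    zone = 0
    for b in boundaries:
        if ring >= b:
            zone += 1
    return zone

# Three roughly equal zones: rings 0-41, 42-84, and 85-127.
THREE_ZONES = [42, 85]
```

Since every strike carries one of 128 positions together with one of 127 nonzero velocities, the 16,256 trigger count quoted above is simply 128 × 127.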
Technology:
The newest version of the Mandala, mk2.9, is a standard class-compliant USB MIDI controller (no special drivers required) which uses a computer as its sound source. When the pad is struck it sends a MIDI trigger note with velocity as well as a position value (in the form of a MIDI continuous controller) across a USB cable into a computer. A Virtual Brain program is included with mk2.9 and has a set of included instrument samples and multiple effects (filters, delay, distortion, etc.) which can be applied to individual zones or to the overall surface. Unlimited user samples can also be added to the included sound library. Any parameter of any effect can be controlled by the position (0-127) or velocity (0-127) of a surface strike, resulting in effects such as bending pitch, changing delay time, or increasing reverb as the pad is played from center to edge. The position controller and velocity controller can be scaled to the player's liking. Factory preset sound configurations are included with the Virtual Brain as well as unlimited empty slots for user created presets. Drummers can also modify panning and note settings for each zone. The Virtual Brain program is not required to play the Mandala because the pad is a MIDI controller that can trigger any software that accepts MIDI input.
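The position-to-parameter scaling described above is plain linear mapping of a 7-bit MIDI value. A minimal sketch of how a receiving program might turn one strike into synthesis parameters (the parameter names and ranges are hypothetical, not taken from the Virtual Brain):

```python
def scale(cc_value, lo, hi):
    """Linearly map a 7-bit MIDI value (0-127) onto the range [lo, hi]."""
    return lo + (hi - lo) * cc_value / 127.0

def strike_params(position, velocity):
    """Interpret one Mandala strike, given the position continuous
    controller (0 = center, 127 = edge) and the note-on velocity:
    gain follows velocity, and the delay time is swept from 50 ms at
    the center to 500 ms at the edge, as an example effect mapping."""
    return {
        "gain": velocity / 127.0,
        "delay_ms": scale(position, 50.0, 500.0),
    }
```

Any MIDI-aware host could apply the same idea to filter cutoff, pitch bend, or reverb amount instead of delay time.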
Technology:
Mk2.9 demonstrates the resolution of its sensitivity by including a bonus library of over 1500 position- and velocity-based samples of a single vintage snare drum, which are laid out over the Mandala surface for an accurate playing representation.
Version 1.0 of the Mandala included a standalone hardware 'brain' with an onboard sound chip and effects. That version is no longer available.
Endorsers:
Danny Carey (Tool)
Pat Mastelotto (King Crimson)
Will Calhoun (Living Colour)
Matt Chamberlain (Pearl Jam/Bowie/Peter Gabriel/Tori Amos/Critters Buggin)
Igor Cavalera (Sepultura/Cavalera Conspiracy/Mixhell)
Aaron Harris (ISIS)
Joe Barresi (producer/engineer: The Melvins/Tool/Queens of the Stone Age/Weezer/Wolfmother)
Jaron Lanier (computer scientist/composer/visual artist/author)
Lol Tolhurst (The Cure/Levinhurst)
**Sulfuryl diazide**
Sulfuryl diazide:
Sulfuryl diazide or sulfuryl azide is a chemical compound with the molecular formula SO2(N3)2. It was first described in the 1920s, when its reactions with benzene and p-xylene were studied by Theodor Curtius and Karl Friedrich Schmidt. The compound is reported as having "exceedingly explosive, unpredictable properties", and "in many cases very violent explosions occurred without any apparent reason". It was not until 2011 that sulfuryl diazide was isolated in a pure enough state to be fully characterized. It was characterized by infrared and Raman spectroscopy; its structure in the solid state was determined by X-ray crystallography. Its melting point is −15 °C. It was prepared by the reaction of sulfuryl chloride (SO2Cl2) with sodium azide (NaN3) using acetonitrile as solvent: SO2Cl2 + 2 NaN3 → SO2(N3)2 + 2 NaCl. Sulfuryl diazide has been used as a reagent to perform reactions that remove nitrogen from heterocyclic compounds: R1−NH−R2 + SO2(N3)2 → R1−R2 + SO2 + 2 N2 + HN3
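As a quick arithmetic check on the preparation equation above, the following sketch counts atoms on each side to confirm that SO2Cl2 + 2 NaN3 → SO2(N3)2 + 2 NaCl is balanced (element counts entered by hand):

```python
# Verify that SO2Cl2 + 2 NaN3 -> SO2(N3)2 + 2 NaCl conserves every element.
from collections import Counter

def atoms(molecules):
    """Sum per-element atom counts over a list of molecules."""
    total = Counter()
    for counts in molecules:
        total.update(counts)
    return total

# element counts per molecule, written out by hand
SO2Cl2  = {"S": 1, "O": 2, "Cl": 2}
NaN3    = {"Na": 1, "N": 3}
SO2N3_2 = {"S": 1, "O": 2, "N": 6}
NaCl    = {"Na": 1, "Cl": 1}

lhs = atoms([SO2Cl2, NaN3, NaN3])   # SO2Cl2 + 2 NaN3
rhs = atoms([SO2N3_2, NaCl, NaCl])  # SO2(N3)2 + 2 NaCl
print(lhs == rhs)  # True: the equation is balanced
```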
**Sexological testing**
Sexological testing:
Sexuality can be inscribed in a multidimensional model comprising different aspects of human life: biology, reproduction, culture, entertainment, relationships and love. In recent decades, a growing interest in sexuality and a greater quest to acknowledge a "right to sexuality" has emerged both in society and in individuals. The consequence of this evolution has been a renewed and more explicit call for intervention from those who suffer, or think they suffer, from alterations of their sexual and relational sphere.
This has produced increased attention from medicine and psychology towards sexual dysfunctions and the problems they cause in individuals and couples. Science has gradually adapted existing research tools, mostly used in other fields of clinical research, to the field of sexology, completing and increasing the number of tools in the "toolkit" of the various branches of sexological diagnosis. Psychological measurements cannot be considered as accurate as physical ones (weight, height, mass, etc.), as the former evaluate aspects and variables pertaining to an "individual" whose individuality refers to his or her own psychological, personological and environmental constituents: emotions, expressiveness, senses, feelings and experiences, which can vary greatly between subjects and change over short periods or across different settings, even in the same individual.
What is expected of psychological measurements is "sufficient" accuracy and reliability, i.e. the capability to express an indication or focus which clinicians can use as a "guideline" to rapidly and accurately deepen the aspects highlighted by the measurements and check them together with their patients. For this purpose, several statistical validation indexes of psychodiagnostic tests are provided: from standardization to the various forms of validity (internal, external, face, construct, convergent, content, discriminant, etc.).
There are several sexual dysfunctions, and each of them has a different cause. The field of sexology therefore provides different psychological evaluation devices to examine the various aspects of the discomfort, problem or dysfunction, whether individual or relational. The number of psychodiagnostic instruments is certainly wide and heterogeneous; nevertheless, the number of tests specifically meant for the field of sexology is quite limited. The following list (in alphabetical order) is not exhaustive but shows the best known and/or most used instruments in the field of sexological and relational psychodiagnosis.
Index:
ASEX (Arizona Sexual Experience Scale): This test is intended for the assessment of sexual dysfunctions in psychiatric patients and people with health problems (men and women). It particularly evaluates modifications and alterations of sexual function in relation to the intake of medicines or psychotropic substances.
This self-report questionnaire can be either administered by a clinician or self-administered. It is made up of five items rated on a 6-point Likert scale.
Each item explores a particular aspect of sexuality: 1. Sexual drive; 2. Arousal; 3a. Penile erection / 3b. Vaginal lubrication; 4. Ability to reach orgasm; 5. Satisfaction from orgasm. Only one item of the scale has separate male and female versions (3a/3b). This test provides good reliability indexes, with a Cronbach's coefficient alpha of 0.90 and test-retest correlation (at intervals of 1 and 2 weeks) of r = 0.80. Construct validity has been evaluated by several studies through differences in the scores obtained by sample groups (dysfunctional patients) and control groups.
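Several reliability figures quoted in this list (such as the ASEX's Cronbach's alpha of 0.90) come from the standard alpha formula, alpha = k/(k−1) · (1 − Σ item variances / variance of total scores). A minimal sketch of that computation, using fabricated Likert data:

```python
# Minimal Cronbach's alpha, the reliability coefficient cited for the ASEX.
def cronbach_alpha(items):
    """items: one list of scores per item, all over the same respondents."""
    k = len(items)        # number of items
    n = len(items[0])     # number of respondents

    def variance(xs):     # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = sum(variance(it) for it in items)
    totals = [sum(it[i] for it in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# 3 items x 4 respondents of fabricated Likert scores
scores = [[1, 2, 3, 4],
          [1, 2, 3, 4],
          [2, 2, 3, 4]]
print(round(cronbach_alpha(scores), 3))  # 0.98
```

Highly consistent items, as in this toy data, push alpha towards 1.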
Convergent and discriminant validity have been measured by comparing the results obtained with ASEX to those obtained with other tests. In particular, a significant correlation has been found between ASEX and the BISF (Brief Index of Sexual Functioning), while little correlation has been noticed between ASEX and the HRSD (Hamilton Rating Scale for Depression) or the BDI (Beck Depression Inventory).
ASKAS (Aging Sexuality Knowledge and Attitudes Scale): This questionnaire is aimed at assessing knowledge of and attitudes towards sexuality in the elderly. It is made up of 61 items divided into two subscales: a "Knowledge" subscale of 35 items with "True/False" and "I don't know" answers, and an "Attitudes" subscale of 26 items rated on a 7-point Likert scale. Both subscales provide good reliability indexes (from 0.97 to 0.72) for Cronbach's alpha, test-retest and split-half methods, measured on different types of groups: nursing home residents, community older adults, families of older adults, persons who work with older adults, and nursing home staff. According to several studies carried out by the same author, sexual behaviour and attitudes during older age reflect those adopted during younger age: people who were sexually active during youth tend to maintain this behaviour during older age, and negative attitudes towards sex learned during youth can significantly affect the ability to have good sexuality during older age. ASKAS has been used to study the effects of sexual education on the attitudes of nursing home residents, their relatives and nursing home staff towards sexuality in the elderly. It has been noted that, after receiving sexual education, nursing home staff and relatives were more tolerant towards sexual intercourse in older age. Moreover, there was a significant increase in sexual activity and satisfaction among those elderly people who had been given sexual education. An Italian survey carried out through a translated version of ASKAS among general practitioners found that almost the entire sample (N=95) knew that sexuality is a lifelong need and is not hazardous to elderly people's health but, at the same time, revealed many fallacies, confusion, stereotypes and lack of accurate knowledge of sexuality in old men and women.
Several studies carried out in the fields of medicine and psychology throughout the world, have confirmed that this test can be used in order to assess elderly people and to survey their relatives and those professionals (helping profession) working close to them: doctors, psychologists and social workers.
BSRI (Bem Sex-Role Inventory): A self-administered questionnaire (60 items in all) that measures masculinity (20 items), femininity (20 items) and androgyny (20 items), using the masculinity and femininity scales.
The concept of psychological androgyny implies that it is possible for an individual to be both compassionate and assertive, both expressive and instrumental, both feminine and masculine, depending upon the situational appropriateness of these various modalities.
PSESQ33 (Parental Sexual Education Styles Questionnaire): This questionnaire was first developed by Abdollahzadeh and Keykhosravi (2020).
The attitude of parents to their children's sexual education has an effect on their sexual behavior and interaction with their children. No specific measurement tool had previously been developed to evaluate and measure this matter; the aim of the study was to develop a parental sexual education styles questionnaire and determine its psychometric criteria. Three factors were extracted from the results of confirmatory factor analysis: strict sexual education style, permissive sexual education style and authoritative education style. In general, the three factors explained 50.32% of the variance of the 33 items of the questionnaire. The Cronbach's alpha coefficient was 0.751 for the questionnaire as a whole, and 0.739, 0.765 and 0.751 respectively for the first three components. The Varimax rotation matrix showed that all questions are applicable to the extracted styles.
DAS (Dyadic Adjustment Scale): This scale is made up of 32 items which explore four interdependent dimensions in order to evaluate relational adaptation between husband and wife: agreement between husband and wife on important matters, cohesion of the couple on common activities, satisfaction of the couple with the progress of their relationship, and expression of satisfaction with their affective and sexual life.
DIQ (Diagnostic Impotence Questionnaire): This questionnaire (35 items) evaluates the different components in male erectile dysfunction: Vascular (V), Neurogenic (N), Hormonal (H) and Psychogenic (P).
The scores of the V-N-H components provide information about the organic factors responsible for the dysfunction; the score of the P component indicates the influence of the psychogenic component. If the total score of the V-N-H components is higher than the score of the P component, then the organic etiology prevails over the psychogenic one (and vice versa).
This device is useful in the clinical setting. However, because it is neither validated nor standardised, it must be used carefully in research and screening.
DSFI (Derogatis Sexual Function Inventory): A standardised self-evaluation questionnaire made up of 258 items (245 in the original version published in 1975). It produces nine sexual dimensions (information, experience, sexual drive, attitudes, affectivity, sexual gender and role, sexual fantasies, body image and sexual satisfaction), a dimension about psychopathological symptoms (anxiety, depression and somatizations) and an SFI index (sexual functioning index). Due to the high number of items, it requires a considerable amount of time to fill in.
EDITS (Erectile Dysfunction Inventory of Treatment Satisfaction): A self-evaluation questionnaire on erectile dysfunction which is meant for male patients (13 items) and their partners (5 items). It explores achievements, perceived satisfaction, and treatment effectiveness.
The items meant for male patients study expectations, effectiveness, side effects and willingness to continue with the treatment. The items meant for their partners explore the changes that have occurred in the couple's sexual activity and make it possible to check the concordance between the subjective answers of the patients and the objective ones provided by their partners.
EPES (Erotic Preferences Examination Scheme): This is one of the oldest self-report questionnaire measures of the various paraphilias listed in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM).
The EPES includes scales for sexual masochism (11 items), sexual sadism (20 items), fetishism (8 items), cross-gender fetishism—transvestism (11 items), autogynephilia—a man's tendency to be erotically aroused by the thought or image of himself as a woman (8 items), pedophilia (18 items), hebephilia—the erotic preference for pubescent, as opposed to prepubescent, children (9 items), voyeurism (6 items), and exhibitionism (13 items). The alpha reliability coefficients for these scales run from 0.74 to 0.98.
The EPES is not copyrighted and can be used without special permission.
FACES (Family Adaptability and Cohesion Evaluation Scales): This scale is made up of 111 items exploring family relationships (including children above 12) with regard to four degrees of "cohesion", i.e. the emotional link between family members (regressed, attached, parted, disengaged), and four degrees of "adaptability" (chaotic, flexible, structured, rigid), i.e. the capability of family members to reorganize in response to changes in the situations involving the family.
A first revision (1993) called FACES-II reduced the scale to 30 items, whereas a second one (1995) called FACES-III further reduced the number of items to 20.
FGIS (Feminine Gender Identity Scale): This scale is made up of 29 items. The questions touch on topics such as childhood playmate preference, adolescent sexual experience, and sexual activity preference in detail.
GRIMS (Golombok Rust Inventory of Marital State): A 28-item questionnaire which is intended to analyse the features of dyadic relationships. It is often used in marriage counselling and couples therapy. This inventory shows good psychometric features and is often used together with GRISS, which is its individual version.
GRISS (Golombok Rust Inventory of Sexual Satisfaction): It consists of two questionnaires (a male and a female questionnaire) with 28 items each. It studies sexual dysfunctions in heterosexual subjects. It provides a total score and subscale scores: intercourse frequency, incommunicability, dissatisfaction, avoidance of sexual intercourse, absence of sexuality, anorgasmia and vaginismus (only in the female version), impotence and premature ejaculation (only in the male version).
It has good psychometric features and is easy to administer due to the limited number of items. However, this feature limits its illustrative and diagnostic function.
HSAS (Hendrick Sexual Attitude Scale): A 43-item self-evaluation scale which explores subjects' attitudes towards sexuality. The scale examines four sexuality-related factors: permissiveness, sexual practices, communion (i.e. participation and involvement) and instrumentality (i.e. pleasure-oriented sexuality).
IIEF (International Index of Erectile Function): This standardised and validated 15-item self-evaluation scale provides pre- and post-treatment clinical evaluations of erectile function, orgasmic function, sexual desire, satisfaction with sexual intercourse and general satisfaction.
The IIEF-5 (Sexual Health Inventory for Men), an abridged version of the IIEF, contains a shorter questionnaire of five items which covers the last six months instead of the last four weeks considered by the IIEF.
ISS (Index of Sexual Satisfaction): A 25-item questionnaire which psychometrically evaluates the preponderance of sexual components in the problems of a couple. Sexuality-related aspects in the couple are measured with regard to the feelings, attitudes, and events occurring during the relationship.
MAT (Marital Adjustment Test): A 15-item questionnaire which evaluates intrarelational adaptation and the agreement between husband and wife about the behaviours they consider sensible and suitable for their marital life. Its psychometric capabilities are limited due to its obsolescence.
MCI (Marital Communication Inventory): This scale is made up of two questionnaires (a male and a female questionnaire) with 42 items each, which provide a total score on intra-couple communication and scores for six dimensions: communication, adjustment, intimacy and sexuality, children, jobs and income, and religious beliefs.
This device shows good reliability and internal consistency of the global score in comparison to the sub-dimensions.
MMPI-2 (Minnesota Multiphasic Personality Inventory): A test published in 1942 by the University of Minnesota, it was revised in 1989, when the current version MMPI-2 was created (latest release: the Restructured Form, 2003).
The MMPI-2 is made up of a considerable number of items (567) which explore several features of personality pertaining to psychology and psychiatry. There are also an abridged version (370 items) and a version called MMPI-A of 478 items (350 items in the short form) aimed at evaluating adolescents between the ages of 14 and 18.
The dimensions taken into account are divided into: Basic Scales (which evaluate the most relevant features of personality), Content Scales (which analyse different variables of personality), Supplementary Scales (which further investigate some of the issues in the basic scales), Validity Scales (which define the degree of sincerity and accuracy in filling the questionnaire).
The evaluation of sexual and relational settings takes into account the following aspects: masculinity-femininity (those aspects typically viewed as masculine or feminine, considered as a whole), masculine and feminine gender role (perception of gender role), marital distress and family discord (conflicts within the couple), and social introversion (difficulties in social relations). Criticisms of this device relate to the amount of time required to fill it in (60–120 minutes) and to the fact that some of the Restructured Clinical Scales, although regarded as clearer and easier to interpret, raised controversies in the academic world because they were modified compared to the original version.
MPT (Marital Patterns Test): This test is made up of two questionnaires (a male and a female questionnaire) with 24 pairs of items each. They measure dominance and willingness within the couple. Its validity has been improved thanks to a revision by Scott-Heyes (1982) titled RSMPT (Ryle/Scott-Heyes Marital Patterns Questionnaire).
MSI (Marital Satisfaction Inventory): A 280-item inventory which evaluates marital satisfaction with regard to 12 dimensions, especially conventionalism, affective communication, amount of time spent together, disagreement on financial problems, disagreement on child management and sexual satisfaction. A total score of the scales provides a "global discomfort" index defined by couple dissatisfaction, whereas a reduced version of this device (made up of 44 items) shows the "indifference" degree and the "disharmony" degree of the relationship.
PEQUEST (Premature Ejaculation Questionnaire): A 36-item self-evaluation questionnaire for evaluating premature ejaculation. The ejaculatory/orgasmic behaviour is explored in its various problematic aspects: persistence, significance, frequency, situational factors, psychological reaction of both partners, techniques adopted by the patient in order to cope with the problem, adaptation and interference levels of the disturbance, performance anxiety, and partner's behaviour during sexual intercourse.
PREPARE-ENRICH (Premarital Personal and Relationship Evaluation): This inventory is made up of 125 items, subdivided into 14 subscales, which explore sexual intercourse, personal difficulties, marital satisfaction, couple cohesion, dyadic adaptability, communication, conflict resolution, equality of roles, children and marital life, family and friends, financial management, leisure activities, religious orientation, and idealistic distortion. This inventory requires elaborate preparation in order to be used and results from the combination of three previous scales: PREPARE (Premarital Personal and Relationship Evaluation, for couples planning to marry who do not have children); PREPARE-MC (Marriage with Children, for couples planning to marry who have children, either together or from previous relationships); and ENRICH (Evaluating Nurturing Relationship Issues, Communication and Happiness, for married couples seeking empowerment and counselling).
SAI (Sexual Arousability Inventory): A 28-item questionnaire that psychometrically evaluates the level of arousability produced by sexual experiences; the expanded SAI-E (Sexual Arousability Inventory Expanded) measures anxiety as well as arousability and is meant for men and women regardless of their psychosexual orientation.
SAS (Sexual Attitude Scale): A 25-item scale aiming at identifying subjects' attitude (liberal or conservative) towards different forms of sexuality. This questionnaire is not meant to study sexual disturbances; it simply explores the subjects' attitude towards sexuality and its numerous expressions.
SBI (Sexual Behavior Inventory, Males/Females): A self-evaluation scale in two versions (male and female). Both versions are made up of 21 items. The questionnaire evaluates the kind of involvement of subjects in heterosexual activities.
SESAMO_Win (Sexrelation Evaluation Schedule Assessment Monitoring on Windows): A standardised and validated self-administering and self-evaluation questionnaire. It studies the dysfunctional aspects in individual and couple sexuality, in addition to family, social, affective and relational aspects.
It consists of two questionnaires (a male and a female questionnaire), each divided into two subsections: one for singles and one for people with a partner.
The number of items in each questionnaire is variable: 135 items for singles and 173 for people with a partner. The explored dimensions are 16 for singles and 18 for people living a dyadic situation.
This questionnaire can be self-administered directly on the computer (self-assessment); the software then processes the answers and produces a report made up of nine sections. Each of these sections has several levels of further diagnostic analysis.
A short version of this questionnaire, called Sexuality Evaluation Schedule Assessment Monitoring, has a lower number of items and can be administered only through the paper and pencil method.
The disadvantages of this evaluation/research tool are the time required to fill in the questionnaire and the fact that the complete report can be generated only by the software.
SESII–W (Sexual Excitation/Sexual Inhibition Inventory for Women): This test investigates sexual arousal and inhibition in women through a 115-item questionnaire rated on a 4-point Likert scale. The areas concerning sexual arousal are: Arousability (arousal and sexual stimulation); Sexual power dynamics (power dynamics in sexuality); Smell (arousing smells); Partner characteristics; and Setting (unusual or unconcealed settings). Sexual inhibition factors are: Relationship importance; Arousal contingency (arousal-related factors); and Concerns about sexual function (concerns about the consequences of sexual activity). This test is based on the conditioning of the sexual response: sexual arousal is controlled by the balance of several factors, all of which contribute to arousal or inhibition. Validation of this test is based on a sample of 655 women with an average age of 33.09. Statistical calculations have shown good reliability measured by the test-retest method and good discriminant and convergent validity, determined through the consistency of the results obtained by this test with those obtained by the BIS/BAS (Behavioral Inhibition Scale/Behavioral Activation Scale), SOS (Sexual Opinion Survey) and SSS (Sexual Sensation Seeking).
SFQ (Sexual Functioning Questionnaire): A standardised questionnaire which studies sexual impotence problems. It is made up of 62 items (48 of them are meant for both partners, while 14 are meant exclusively for the dysfunctional patient). Scoring and clinical evaluation must be done with the traditional (manual) method.
SHQ–R (Clarke Sex History Questionnaire for Males–Revised): The Clarke Sex History Questionnaire for Males was created in 1977 by clinicians from the Centre for Addiction and Mental Health (the former Clarke Institute of Psychiatry) in Toronto, Canada. The SHQ-R is a fully validated and standardised self-report questionnaire, revised in 2002. It is composed of 508 items exploring several areas of male sexuality: I. Childhood and Adolescent Sexual Experiences (a scale to measure sexual experiences and sexual abuse during childhood and adolescence); II. Sexual Dysfunction (a scale which evaluates sexual dysfunctions such as impotence, hypersexuality and premature or retarded ejaculation); III. Adult Age/Gender Sexual Outlets (seven scales measuring the frequency of various sexual activities with adults, children and adolescents); IV. Fantasy and Pornography (three scales measuring sexual fantasies involving women, men and the use of pornography); V. Transvestism, Fetishism, and Feminine Gender Identity (three scales which evaluate personal experiences with regard to transvestism, sexual fetishes and identification with female gender traits); VI. Courtship Disorders (six scales which take into consideration several aspects of "disturbed courtship": voyeurism, exhibitionism, obscene telephone calls, frotteurism/toucherism and sexual assault). This test also includes two validity indicators: a "Lie scale" (insincere answers) and an "Infrequency scale" (infrequent answers).
SII (Sexual Interaction Inventory): A standardised self-evaluation questionnaire made up of 17 items with 6 answers each. It gathers information about sexual interactions within heterosexual or homosexual couples. The result is obtained through a cross evaluation of the answers both partners have separately given in their respective questionnaires.
Scoring is manual: raw scores are converted into percentages, which are then used to create a diagram showing a sexual interaction profile for each couple.
SOC (Spouse Observation Checklist): A 400-item checklist of partner behaviours to be filled in by husband and wife over two weeks.
It takes into account 12 behavioural categories: love, solidarity, consideration, sexuality, communication, couple's activities, children's care, home management, decisions about financial matters, job, personal habits and independence of both partners.
It is similar to the MAP (Marital Agendas Protocol) in many respects. Daily diaries of this type are chiefly used in marriage counselling in order to evaluate conflict management and couple satisfaction or dissatisfaction.
SOS (Sexual Opinion Survey): A 21-item scale which explores subjects' attitude towards several sexual aspects: heterosexuality, homosexuality, erotic fantasies, sexual stimuli, etc.
TIPE (Test di Induzione Psico Erotica): The Psycho-Erotic Induction Test is a projective test, standardised for evaluating erotic imagery. It is made up of eight tables concerning four specific issues: situations during childhood, initiative in love relationships, competitiveness, and the function of the group.
WIQ (Waring Intimacy Questionnaire): This scale is made up of 90 items analysing nine aspects of a couple's intimacy: sexuality, love, expressiveness, marital cohesion, couple compatibility, partners' independence, conflicts, social identity and desirability bias. The scale seems to be reliable and free from sexual preconceptions, although somewhat verbose in the conceptualisation of some items.
**Aluminium amalgam**
Aluminium amalgam:
Aluminium can form an amalgam in solution with mercury. Aluminium amalgam may be prepared either by grinding aluminium pellets or wire in mercury, or by allowing aluminium wire to react with a solution of mercury(II) chloride in water. This amalgam is used as a chemical reagent to reduce compounds, as in the reduction of imines to amines. The aluminium is the ultimate electron donor, and the mercury serves to mediate the electron transfer.
The reaction and the waste from it contains mercury, so special safety precautions and disposal methods are needed. As an environmentally friendlier alternative, hydrides or other reducing agents can often be used to accomplish the same synthetic result. An alloy of aluminium and gallium was proposed as a method of hydrogen generation, as the gallium renders the aluminium more reactive by preventing it from forming an oxide layer. Mercury has this same effect on aluminium, but also serves additional functions related to electron transfer that make aluminium amalgams useful for some reactions that would not be possible with gallium.
Reactivity:
Aluminium exposed to air is ordinarily protected by a molecule-thin layer of its own oxide. This aluminium oxide layer serves as a protective barrier to the underlying unoxidized aluminium and prevents amalgamation from occurring. No reaction takes place when oxidized aluminium is exposed to mercury. However, if any elemental aluminium is exposed (even by a recent scratch), the mercury may combine with it to form the amalgam. This amalgamation can continue well beyond the point where the vulnerable aluminium was first exposed, potentially reacting with a large amount of the raw aluminium before it finally ends. The net result is similar to that of the mercury electrodes often used in electrochemistry; however, instead of being provided by an electrical supply, the electrons are provided by the aluminium, which becomes oxidized in the process. The reaction that occurs at the surface of the amalgam may actually be a hydrogenation rather than a reduction.
The presence of water in the solution is reportedly necessary; the electron rich amalgam will oxidize aluminium and reduce H+ from water, creating aluminium hydroxide (Al(OH)3) and hydrogen gas (H2). The electrons from the aluminium reduce mercuric Hg2+ ion to metallic mercury. The metallic mercury can then form an amalgam with the exposed aluminium metal. The amalgamated aluminium then is oxidized by water, converting the aluminium to aluminium hydroxide and releasing free metallic mercury. The generated mercury then cycles through these last two steps until the aluminium metal supply is exhausted.
Al + Hg → Al(Hg)
2 Al(Hg) + 6 H2O → 2 Al(OH)3 + 3 H2 + 2 Hg
Due to the reactivity of aluminium amalgam, restrictions are placed on the use and handling of mercury in proximity to aluminium. In particular, large amounts of mercury are not allowed aboard aircraft under most circumstances because of the risk of it forming an amalgam with exposed aluminium parts in the aircraft. Even the transportation and packaging of mercury-containing thermometers and barometers is severely restricted. Accidental mercury spills in aircraft do sometimes result in insurance write-offs.
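Since the cycle described above amounts to the overall reaction 2 Al + 6 H2O → 2 Al(OH)3 + 3 H2 (mercury is merely recycled), the hydrogen yield per gram of aluminium follows directly from the stoichiometry. A small illustrative calculation, assuming the standard molar mass of aluminium and the ideal-gas molar volume at STP:

```python
# Each mole of Al yields 1.5 mol of H2 in the amalgam-mediated oxidation
# 2 Al + 6 H2O -> 2 Al(OH)3 + 3 H2.
M_AL = 26.98     # g/mol, aluminium (standard textbook value)
VM_STP = 22.4    # L/mol, ideal-gas molar volume at STP

def h2_liters_per_gram_al(grams):
    moles_al = grams / M_AL
    moles_h2 = moles_al * 1.5   # 3 H2 per 2 Al
    return moles_h2 * VM_STP

print(round(h2_liters_per_gram_al(1.0), 2))  # 1.25
```

So roughly 1.25 L of hydrogen gas per gram of aluminium consumed, which helps explain the aviation restrictions mentioned above.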
**Tableau économique**
Tableau économique:
The Tableau économique (French pronunciation: [tablo ekɔnɔmik]) or Economic Table is an economic model first described by French economist François Quesnay in 1758, which laid the foundation of the Physiocratic school of economics. Quesnay believed that trade and industry were not sources of wealth, and instead argued in his 1758 manuscript Tableau économique (Economic Table) that agricultural surpluses, by flowing through the economy in the form of rent, wages, and purchases, were the real economic movers.
The model:
The model Quesnay created consisted of three economic movers. The "Proprietary" class consisted of only landowners. The "Productive" class consisted of all agricultural laborers. The "Sterile" class is made up of artisans and merchants. The flow of production and/or cash between the three classes started with the Proprietary class because they own the land and they buy from both of the other classes. The process has these steps (consult Figure 1).
The model:
The farmer produces 1,500 food on land leased from the landlord. Of that 1,500, he retains 600 food to feed himself, his livestock, and any laborers he hires. He sells the remaining 900 in the market for $1 per unit of food. He keeps $300 ($150 for himself, $150 for his laborer) to buy non-farm goods (clothes, household goods, etc.) from the merchants and artisans. This produces $600 of net profit, which Quesnay refers to as the produit net.
The model:
The artisan produces 750 units of crafts. To produce at that level, he needs 300 units of food and 150 units of foreign goods. He also has subsistence need of 150 units of food and 150 units of crafts to keep himself alive during the year. The total is 450 units of food, 150 units of crafts, and 150 units of foreign goods. He buys $450 of food from the farmer and $150 of goods from the merchant, and he sells 600 units of crafts at the market for $600. Because the artisan must use the cash he made selling his crafts to buy raw materials for the next year’s production, he has no net profit.
The model:
The landlord is only a consumer of food and crafts and produces no product. His contribution to the production process is the redistribution of $600 in land rent the farmer pays for the use of naturally occurring land. The landlord uses $300 of the rent to buy food from the farmer in the market and $300 to buy crafts from the artisan. Because he is purely a consumer, Quesnay considers the landlord the prime mover of economic activity. It is his desire to consume which causes him to expend his entire lease income on food and crafts and which provides income to the other classes.
The model:
The merchant is the mechanism for exporting food in exchange for foreign imports. The merchant uses the $150 he received from the artisan to buy food from the market, and it is assumed that he takes the food out of the country to exchange it for more foreign goods.

[Figure 1: Production Flow Diagram for Quesnay's Tableau (4)]

The Tableau shows the reason why the Physiocrats disagreed with Cantillon about exporting food. The economy produces a surplus of food, and neither the farmer nor the artisan can afford to consume more than a subsistence level of food. The landlord is assumed to be consuming at a level of satiation; therefore, he cannot consume any more. Since food cannot be stored easily, it is necessary to sell it to someone who can use it. This is where the merchant provides value.
Physiocratic interpretation:
The merchant is not a source of wealth, however. The Physiocrats believed that “neither industry nor commerce generates wealth.” A “plausible explanation is that the Physiocrats developed their theory in light of the actual situation of the French economy…” France was an absolute monarchy with the land owners constituting 6-8% of the population and owning 50% of the land. (5, p. 859) Agriculture contributed to 80% of the country’s wealth, and the non-land owning segment of the population “practises a subsistence agriculture that produces the essential minimum, with virtually all income being absorbed by food requirements.” Additionally, exports consisted mostly of agricultural-based products, e.g. wine. Given the massive effect of agriculture on France’s economy, it was more likely they would develop an economic model that used it to the king’s advantage.
Physiocratic interpretation:
The Physiocrats were at the beginning of the anti-mercantilist movement. Quesnay’s argument against industry and international trade as alternatives to his doctrine is twofold. First, industry produces no gain in wealth; therefore, redirecting labor from agriculture to industry will in effect decrease the nation’s overall wealth. Additionally, population expands to fill available land and food supply; therefore, population must go down if the use of land does not produce food. Second, the basic premise of the Mercantilists is that a country must export more than it imports to gain wealth, but that assumes it has more of a tradeable resource than it needs for internal consumption. France did not have a colony with the ability to produce finished or semi-finished goods like England (e.g. India) or Holland (e.g. North America, Africa, South America). Its main colonial presence was in the Caribbean, southern North America, and southeast Asia, and like France, the colonies had agricultural-based economies. The only good which France had in enough excess to export was food; therefore, international trade based on industrial production would not yield as much wealth.
Physiocratic interpretation:
Quesnay was not anti-industry, however. He was just realistic in his assessment that France was not in good position to incubate a strong industrial market. His argument was that artisans and manufacturers would come to France only in proportion to the size of the internal market for their goods. Quesnay believed “a country should concentrate on manufacturing only to the extent that the local availability of raw materials and suitable labor enabled it to have a cost advantage over its overseas competitors.” Anything above that amount should be purchased through trade.
Legacy:
The tableau économique is credited as the "first precise formulation" of interdependent systems in economics and the origin of the theory of the multiplier in economics. An analogous table is used in the theory of money creation under fractional-reserve banking by relending of deposits, leading to the money multiplier.
The wage-fund doctrine was derived from the tableau, then later rejected.
Karl Marx used Quesnay's Tableau as a basis for his theory of circulation in Capital volume 2.
**Climacteric year**
Climacteric year:
In Ancient Greek philosophy and astrology, the climacterics (Latin: annus climactericus, from the Greek κλιμακτηρικός, klimaktērikós) were certain purportedly critical years in a person's life, marking turning points.
Historic use:
According to the astrologers, the person would see some very notable alterations to the body and be at great risk of death during these years. Authors on the subject include Plato, Cicero, Macrobius, and Aulus Gellius among the ancients, as well as Argol, Maginus, and Salmasius. Augustine, Ambrose, Bede, and Boetius all countenanced the belief.
The first climacteric occurs in the seventh year of a person's life; the rest are multiples of the first, such as 21, 49, 56, and 63. The grand climacteric usually refers to the 63rd year, with the dangers here being supposedly more imminent; but may refer to the 49th (7 × 7) or the 81st (9 × 9).
The belief has a great deal of antiquity on its side. Aulus Gellius says that it was borrowed from the Chaldeans; who might probably receive it from Pythagoras, whose philosophy (Pythagoreanism) was based in numbers, and who imagined an extraordinary virtue in the number 7.
These turning points were viewed as changes from one kind of life, and attitude toward life, to another in the mind of the subject: the locus classicus is Ptolemy, Tetrabiblos C204–207, which in turn gave rise to Shakespeare's delineation of the Seven Ages of Man.
Historic use:
They were also viewed, logically within the framework of ancient medicine and its ties to astrology, as dangerous years from a medical standpoint. In this sense, the word has been used by medicine of more recent times; in the 16th through the 18th centuries, it often refers to the day on which a fever was thought to break (see quartan fever, quintan fever).
Historic use:
Marsilius Ficinus gives a foundation for the belief: he states that there is a year assigned for each planet to rule over the body of man, each in his turn. Now, Saturn being the most malefic planet of all, every seventh year, which falls to his lot, becomes very dangerous; especially that of 63, since the person is already of old age.
Historic use:
Some hold, according to this doctrine, every seventh year to be an established climacteric; but others only allow the title to those years produced by multiplying the climacterical space by an odd number, 3, 5, 7, 9, etc. Others observe every ninth year as a climacteric, in which case the 81st year is the grand climacteric. Some also believed that climacteric years are fatal to political bodies and governments.
Historic use:
The Roman emperor Augustus refers to having passed his own grand climacteric, about which he had been apprehensive. The astronomer Johannes Hevelius wrote a volume under the title Annus climactericus (1685), describing the loss he sustained in the burning of his observatory in 1679, which he considered climacteric because it was 49 years after the beginning of his observing career.
The legacy of these climacteric years is still with us to some extent: the age of reason is often taken to be when a child reaches 7, and in many countries the age of full adulthood is taken as 21.
Historic use:
For astrologers, the discovery of the planet Uranus in 1781 confirmed what may have originated as the mundane observations of the ancients on living things. Uranus has an orbital period of 84 years, which divides into four periods of 21 years each. Astrologers take Uranus as the bringer of abrupt changes, so when it forms a square (90 degree) relationship with its original position in a nativity at age 21, this brings the change from irresponsible youth to responsible adulthood. The opposition occurs at age 42, traditionally the time of midlife crisis. Thus the second square at age 63, the climacteric, would be the most dangerous for the ancients, who rarely lived long enough to see the conjunction (return to the natal position) at age 84. This combination of the periods of Saturn and Uranus is a powerful indicator of life changes for astrologers.
**Grid fabric**
Grid fabric:
The Wireless Grid Fabric in communication is a MIMOS Berhad innovation for WiMAX multi-hop relay networks (IEEE802.16j) for rural area communication.
Grid fabric:
The idea of the Wireless Grid Fabric involves using multihop relay base stations (MR-BS) to forward messages to and from the network. Each relay station (RS) covers approximately two square kilometers of area with omnidirectional antennas. Each such square is called a cell. In this scheme, the network's scalability depends not on the number of nodes but on the number of cells, each of which contains several nodes.
Grid fabric:
In any rural area community supported by a Wireless Grid Fabric, it is assumed that the main traffic (content) is self-created by the population (peer-to-peer), such as video streaming, VOIP, IPTV, and others, which are all multicast-based. The Wireless Grid Fabric network has many advantages over other mesh technologies (e.g. WiFi-Mesh and Fixed WiMAX-Mesh), as it achieves hundreds of Mbit/s with mobility for hundreds of mobiles per service deployment.
**Lobster clasp**
Lobster clasp:
A lobster clasp, also known as a lobster hook, lobster claw, trigger clasp, or bocklebee clasp, is a fastener that is held closed by a spring. The lobster clasp is opened by pressing a small lever, usually with a fingernail; while the lever is held open, the clasp can be attached to (or removed from) a short link-chain or a ring-like structure. Lobster clasps are often used for necklaces, bracelets, and keychains.
Lobster clasp:
Lobster clasps are named as such because of their "pinching" mechanism, and they are often shaped like a lobster's claw.
**Jv16 powertools**
Jv16 powertools:
jv16 PowerTools, developed by Macecraft Software, is a utility software suite for the Microsoft Windows operating system designed to fix common Windows errors, clean old, unneeded junk from the system, and make computers start faster. jv16 PowerTools has been reviewed by Chip.de, PC World, Tech Advisor, Laptop Mag, Softpedia, and various tech sites and blogs.
Features:
jv16 PowerTools’ main features are System Cleaner (which includes registry cleaner functionality) and an uninstaller called Software Uninstaller. In addition, the software has features such as Finder, Big File Finder, Duplicate Finder, File Renamer, File Splitter, File Merger, File Deleter, File Wiper, Task Manager, Web Blocker, and Internet Optimizer. jv16 PowerTools is available in 19 languages.
Development:
jv16 PowerTools was developed in 2003 by the founder of Macecraft Software, Jouni Vuorio (who later changed his name to Jouni Flemming), after he developed a freeware program called RegCleaner.
Crowdfunding campaign In December 2013, Macecraft created a crowdfunding campaign on Indiegogo aiming to make jv16 PowerTools available as open source. However, the campaign did not reach its financial goal, so jv16 PowerTools continued as shareware instead.
Critical reception:
PC World’s Steve Bass gave an expert rating of 3.5/5 stars in 2008 and commented that jv16 PowerTools ‘will tell you all you ever wanted to know about Windows Registry, but you probably won't need all of its tools’. In 2010, Ian Harac called it a “Swiss Army Knife” of utilities, but also commented that 'despite a wide array of useful features, it's somewhat hampered by a clumsy and uninformative interface'. Laptop Mag gave an editor's rating of 4.5/5 stars in 2009, and the verdict was that jv16 PowerTools is ‘a solid solution provided that its somewhat intimidating interface doesn't turn you off.' Softpedia gave jv16 PowerTools an editor rating of 4.5/5 stars in 2017 and commented that the software is ‘a good place to start tweaking some of your system's components.' Tech Advisor's 2019 verdict was that jv16 PowerTools is a little variable in quality, but also commented that ‘the sheer weight of features means it's still worth a look’. Chip.de’s editorial team rated jv16 PowerTools as satisfying and mentioned ‘sensible tuning modules and monitoring of LAN traffic’ as its advantages. However, they noted ‘excessive price’ as its disadvantage.
**Rosacea**
Rosacea:
Rosacea is a long-term skin condition that typically affects the face. It results in redness, pimples, swelling, and small and superficial dilated blood vessels. Often, the nose, cheeks, forehead, and chin are most involved. A red, enlarged nose may occur in severe disease, a condition known as rhinophyma. The cause of rosacea is unknown. Risk factors are believed to include a family history of the condition. Factors that may potentially worsen the condition include heat, exercise, sunlight, cold, spicy food, alcohol, menopause, psychological stress, or steroid cream on the face. Diagnosis is based on symptoms. While not curable, treatment usually improves symptoms. Treatment is typically with metronidazole, doxycycline, minocycline, or tetracycline. When the eyes are affected, azithromycin eye drops may help. Other treatments with tentative benefit include brimonidine cream, ivermectin cream, and isotretinoin. Dermabrasion or laser surgery may also be used. The use of sunscreen is typically recommended. Rosacea affects between 1% and 10% of people. Those affected are most often 30 to 50 years old and female. People with paler skin or European ancestry are more frequently affected. The condition was described in The Canterbury Tales in the 1300s, and possibly as early as the 200s BC by Theocritus.
Signs and symptoms:
Rosacea typically begins with reddening (flushing) of the skin in symmetrical patches near the center of the face. Common signs can depend on age and sex: flushing and red swollen patches are common in the young, small and visible dilated blood vessels in older individuals, and swelling of the nose is common in men. Other signs include lumps on the skin (papules or pustules) and swelling of the face. Many people experience stinging or burning pain, and rarely itching. Skin problems tend to be aggravated by particular trigger factors that differ from person to person. Common triggers are ultraviolet light, heat, cold, or certain foods or beverages.
Signs and symptoms:
Erythematotelangiectatic rosacea Erythematotelangiectatic rosacea (also known as "vascular rosacea") is characterized by a prominent history of prolonged (over 10 minutes) flushing reactions to various stimuli, such as emotional stress, hot drinks, alcohol, spicy foods, exercise, cold or hot weather, or hot baths and showers.
Glandular rosacea Glandular rosacea predominantly affects men with thick sebaceous skin; the papules are edematous, and the pustules are often 0.5 to 1.0 cm in size, with nodulocystic lesions often present.
Cause:
The exact cause of rosacea is unknown. Triggers that cause episodes of flushing and blushing play a part in its development. Exposure to temperature extremes, strenuous exercise, heat from sunlight, severe sunburn, stress, anxiety, cold wind, and moving to a warm or hot environment from a cold one, such as heated shops and offices during the winter, can each cause the face to become flushed. Certain foods and drinks can also trigger flushing, such as alcohol, foods and beverages containing caffeine (especially hot tea and coffee), foods high in histamines, and spicy foods. Medications and topical irritants have also been known to trigger rosacea flares. Some acne and wrinkle treatments reported to cause rosacea include microdermabrasion and chemical peels, as well as high dosages of isotretinoin, benzoyl peroxide, and tretinoin.
Cause:
Steroid-induced rosacea is caused by the use of topical steroids. These steroids are often prescribed for seborrheic dermatitis. Dosage should be slowly decreased and not immediately stopped to avoid a flare-up.
Cathelicidins In 2007, Richard Gallo and colleagues noticed that patients with rosacea had high levels of cathelicidin, an antimicrobial peptide, and elevated levels of stratum corneum tryptic enzymes (SCTEs). Antibiotics have been used in the past to treat rosacea, but they may only work because they inhibit some SCTEs.
Cause:
Demodex folliculitis and Demodex mites Studies of rosacea and Demodex mites have revealed that some people with rosacea have increased numbers of the mite, especially those with steroid-induced rosacea. Demodex folliculitis (demodicidosis, also known as "mange" in animals) is a condition that may have a "rosacea-like" appearance. A 2007 National Rosacea Society–funded study demonstrated that Demodex folliculorum mites may be a cause or exacerbating factor in rosacea. The researchers identified Bacillus oleronius as a distinct bacterium associated with Demodex mites. When analyzing blood samples using a peripheral blood mononuclear cell proliferation assay, they discovered that B. oleronius stimulated an immune system response in 79% of 22 patients with subtype 2 (papulopustular) rosacea, compared with only 29% of 17 subjects without the disorder. They concluded, "The immune response results in inflammation, as evident in the papules (bumps) and pustules (pimples) of subtype 2 rosacea. This suggests that the B. oleronius bacteria found in the mites could be responsible for the inflammation associated with the condition." Intestinal bacteria Small intestinal bacterial overgrowth (SIBO) was demonstrated to have greater prevalence in rosacea patients, and treating it with locally acting antibiotics led to rosacea lesion improvement in two studies. Conversely, in rosacea patients who were SIBO-negative, antibiotic therapy had no effect. The effectiveness of treating SIBO in rosacea patients may suggest that gut bacteria play a role in the pathogenesis of rosacea lesions.
Diagnosis:
Most people with rosacea have only mild redness and are never formally diagnosed or treated. No test for rosacea is known. In many cases, simple visual inspection by a trained health-care professional is sufficient for diagnosis. In other cases, particularly when pimples or redness on less-common parts of the face is present, a trial of common treatments is useful for confirming a suspected diagnosis. The disorder can be confused or co-exist with acne vulgaris or seborrheic dermatitis. The presence of a rash on the scalp or ears suggests a different or co-existing diagnosis because rosacea is primarily a facial diagnosis, although it may occasionally appear in these other areas.
Diagnosis:
Classification Four rosacea subtypes exist, and a patient may have more than one subtype: Erythematotelangiectatic rosacea exhibits permanent redness (erythema) with a tendency to flush and blush easily. Small, widened blood vessels visible near the surface of the skin (telangiectasias) and possibly intense burning, stinging, and itching are also common. People with this type often have sensitive skin. Skin can also become very dry and flaky. In addition to the face, signs can also appear on the ears, neck, chest, upper back, and scalp.
Diagnosis:
Papulopustular rosacea presents with some permanent redness with red bumps (papules); some pus-filled pustules can last 1–4 days or longer. This subtype is often confused with acne.
Phymatous rosacea is most commonly associated with rhinophyma, an enlargement of the nose. Signs include thickening skin, irregular surface nodularities, and enlargement. Phymatous rosacea can also affect the chin (gnathophyma), forehead (metophyma), cheeks, eyelids (blepharophyma), and ears (otophyma). Telangiectasias may be present.
Diagnosis:
In ocular rosacea, affected eyes and eyelids may appear red due to telangiectasias and inflammation, and may feel dry, irritated, or gritty. Other symptoms include foreign-body sensations, itching, burning, stinging, and sensitivity to light. Eyes can become more susceptible to infection. About half of the people with subtypes 1–3 also have eye symptoms. Blurry vision and vision loss can occur if the cornea is affected.
Diagnosis:
Variants Variants of rosacea include: Pyoderma faciale, also known as rosacea fulminans, is a conglobate, nodular disease that arises abruptly on the face.
Rosacea conglobata is a severe rosacea that can mimic acne conglobata, with hemorrhagic nodular abscesses and indurated plaques.
Phymatous rosacea is a cutaneous condition characterized by overgrowth of sebaceous glands. Phyma is Greek for swelling, mass, or bulb, and these can occur on the face and ears.
Treatment:
The type of rosacea a person has informs the choice of treatment. Mild cases are often not treated at all, or are simply covered up with normal cosmetics.
Treatment:
Therapy for the treatment of rosacea is not curative, and is best measured in terms of reduction in the amount of facial redness and inflammatory lesions, a decrease in the number, duration, and intensity of flares, and concomitant symptoms of itching, burning, and tenderness. The two primary modalities of rosacea treatment are topical and oral antibiotic agents. Laser therapy has also been classified as a form of treatment. While medications often produce a temporary remission of redness within a few weeks, the redness typically returns shortly after treatment is suspended. Long-term treatment, usually 1–2 years, may result in permanent control of the condition for some patients. Lifelong treatment is often necessary, although some cases resolve after a while and go into a permanent remission. Other cases, if left untreated, worsen over time. Some people have also reported better results after changing their diet; this is not confirmed by medical studies, although some studies relate histamine production to rosacea outbreaks.
Treatment:
Behavior Avoiding triggers that worsen the condition can help reduce the onset of rosacea, but alone will not normally lead to remission except in mild cases. Keeping a journal is sometimes recommended to help identify and reduce food and beverage triggers.
Because sunlight is a common trigger, avoiding excessive exposure to the sun is widely recommended. Some people with rosacea benefit from daily use of a sunscreen; others opt for wearing hats with broad brims. Like sunlight, emotional stress can also trigger rosacea. People who develop infections of the eyelids must practice frequent eyelid hygiene.
Managing pretrigger events such as prolonged exposure to cool environments can directly influence warm-room flushing.
Treatment:
Medications Medications with good evidence include topical ivermectin and azelaic acid creams and brimonidine, and doxycycline and isotretinoin by mouth. Lesser evidence supports topical metronidazole cream and tetracycline by mouth. Metronidazole is thought to act through anti-inflammatory mechanisms, while azelaic acid is thought to decrease cathelicidin production. Oral antibiotics of the tetracycline class such as doxycycline, minocycline, and oxytetracycline are also commonly used and thought to reduce papulopustular lesions through anti-inflammatory actions rather than through their antibacterial capabilities. Using alpha-hydroxy acid peels may help relieve redness caused by irritation, and reduce papules and pustules associated with rosacea. Oral antibiotics may help to relieve symptoms of ocular rosacea. If papules and pustules persist, then sometimes isotretinoin can be prescribed. The flushing and blushing that typically accompany rosacea are typically treated with the topical application of alpha agonists such as brimonidine and, less commonly, oxymetazoline or xylometazoline. A review found that ivermectin was more effective than alternatives for treatment of papulopustular acne rosacea. An ivermectin cream has been approved by the FDA, as well as in Europe, for the treatment of inflammatory lesions of rosacea. The treatment is based upon the hypothesis that parasitic mites of the genus Demodex play a role in rosacea. In a clinical study, ivermectin reduced lesions by 83% over 4 months, compared to 74% under a metronidazole standard therapy. Quassia amara extract at 4% was demonstrated to have clinical efficacy for rosacea. When compared to metronidazole 0.75% as usual care in a randomized, double-blinded clinical trial, Quassia amara extract at 4% demonstrated earlier onset of action, including improvement in telangiectasia, flushing, and papules.
Quassia amara showed a sustained reduction of global assessment score at day 42 (reduced to 4.68 from baseline 7.65) compared to metronidazole at day 42 (reduced to 6.32 from baseline 7.2), p<0.001.
Treatment:
Laser Evidence for the use of laser and intense pulsed-light therapy in rosacea is poor.
Outcomes:
The highly visible nature of rosacea symptoms is often psychologically challenging for those affected. People with rosacea can experience issues with self-esteem, socializing, and changes to their thoughts, feelings, and coping mechanisms.
Epidemiology:
Rosacea affects around 5% of people worldwide. Incidence varies by ethnicity, and is particularly prevalent in those with Celtic heritage. Men and women are equally likely to develop rosacea.
**Scandium chloride**
Scandium chloride:
Scandium(III) chloride is the inorganic compound with the formula ScCl3. It is a white, high-melting ionic compound, which is deliquescent and highly water-soluble. This salt is mainly of interest in the research laboratory. Both the anhydrous form and hexahydrate (ScCl3•6H2O) are commercially available.
Structure:
ScCl3 crystallises in the layered BiI3 motif, which features octahedral scandium centres. Monomeric ScCl3 is the predominant species in the vapour phase at 900 K; the dimer Sc2Cl6 accounts for approximately 8%. Electron diffraction indicates that the monomer is planar and that the dimer has two bridging Cl atoms, with each Sc being 4-coordinate.
Reactions:
ScCl3 is a Lewis acid that absorbs water to give aquo complexes. According to X-ray crystallography, one such hydrate is the salt trans-[ScCl2(H2O)4]Cl·2H2O. With the less basic ligand tetrahydrofuran, ScCl3 yields the adduct ScCl3(THF)3 as white crystals. This THF-soluble complex is used in the synthesis of organoscandium compounds. ScCl3 has been converted to its dodecyl sulfate salt, which has been investigated as a "Lewis acid-surfactant combined catalyst" (LASC) in aldol-like reactions.
Reactions:
Reduction Scandium(III) chloride was used by Fischer et al., who first prepared metallic scandium by electrolysis of a eutectic melt of scandium(III) chloride and other salts at 700–800 °C. ScCl3 reacts with scandium metal to give a number of chlorides where scandium has an oxidation state <+3: ScCl, Sc7Cl10, Sc2Cl3, Sc5Cl8, and Sc7Cl12. For example, reduction of ScCl3 with scandium metal in the presence of caesium chloride gives the compound CsScCl3, which contains linear chains of composition ScIICl3−, built from face-sharing ScIICl6 octahedra.
Uses:
Scandium(III) chloride is found in some halide lamps, optical fibers, electronic ceramics, and lasers.
**C date and time functions**
C date and time functions:
The C date and time functions are a group of functions in the standard library of the C programming language implementing date and time manipulation operations. They provide support for time acquisition, conversion between date formats, and formatted output to strings.
Overview of functions:
The C date and time operations are defined in the time.h header file (ctime header in C++).
The timespec and related types were originally proposed by Markus Kuhn to provide a variety of time bases, but only TIME_UTC was accepted. The functionalities were, however, added to C++ in 2020 in std::chrono.
Example:
The following C source code prints the current time to the standard output stream; the output is the formatted local date and time.
**Spin quantum number**
Spin quantum number:
In physics, the spin quantum number is a quantum number (designated s) that describes the intrinsic angular momentum (or spin angular momentum, or simply spin) of an electron or other particle. It has the same value for all particles of the same type, such as s = 1/2 for all electrons. It is an integer for all bosons, such as photons, and a half-odd-integer for all fermions, such as electrons and protons. The component of the spin along a specified axis is given by the spin magnetic quantum number, conventionally written ms. The value of ms is the component of spin angular momentum, in units of the reduced Planck constant ħ, parallel to a given direction (conventionally labelled the z–axis). It can take values ranging from +s to −s in integer increments. For an electron, ms can be either +1/2 or −1/2.
Spin quantum number:
The phrase spin quantum number was originally used to describe the fourth of a set of quantum numbers (the principal quantum number n, the azimuthal quantum number ℓ, the magnetic quantum number m, and the spin magnetic quantum number ms), which completely describe the quantum state of an electron in an atom. Some introductory chemistry textbooks describe ms as the spin quantum number, and s is not mentioned since its value 1/2 is a fixed property of the electron, sometimes using the variable s in place of ms. Some authors discourage this usage as it causes confusion. At a more advanced level where quantum mechanical operators or coupled spins are introduced, s is referred to as the spin quantum number, and ms is described as the spin magnetic quantum number or as the z-component of spin sz. Spin quantum numbers apply also to systems of coupled spins, such as atoms that may contain more than one electron. Capitalized symbols are used: S for the total electronic spin, and mS or MS for the z-axis component. A pair of electrons in a spin singlet state has S = 0, and a pair in the triplet state has S = 1, with mS = −1, 0, or +1. Nuclear-spin quantum numbers are conventionally written I for spin, and mI or MI for the z-axis component.
Spin quantum number:
The name "spin" comes from a geometrical spinning of the electron about an axis, as proposed by Uhlenbeck and Goudsmit. However, this simplistic picture was quickly realized to be physically unrealistic, because it would require the electrons to rotate faster than the speed of light. It was therefore replaced by a more abstract quantum-mechanical description.
Magnetic nature of atoms and molecules:
The spin quantum number helps to explain the magnetic properties of atoms and molecules. A spinning electron behaves like a micromagnet with a definite magnetic moment. If an atomic or molecular orbital contains two electrons, then their magnetic moments oppose and cancel each other.
If all orbitals are doubly occupied by electrons, the net magnetic moment is zero and the substance behaves as diamagnetic; it is repelled by the external magnetic field. If some orbitals are half filled (singly occupied), the substance has a net magnetic moment and is paramagnetic; it is attracted by the external magnetic field.
History:
Early attempts to explain the behavior of electrons in atoms focused on solving the Schrödinger wave equation for the hydrogen atom, the simplest possible case, with a single electron bound to the atomic nucleus. This was successful in explaining many features of atomic spectra.
History:
The solutions required each possible state of the electron to be described by three "quantum numbers". These were identified as, respectively, the electron "shell" number n, the "orbital" number ℓ, and the "orbital angular momentum" number m. Angular momentum is a so-called "classical" concept measuring the momentum of a mass in circular motion about a point. The shell numbers start at 1 and increase indefinitely. Each shell of number n contains n2 orbitals. Each orbital is characterized by its number ℓ, where ℓ takes integer values from 0 to n − 1, and its angular momentum number m, where m takes integer values from +ℓ to −ℓ. By means of a variety of approximations and extensions, physicists were able to extend their work on hydrogen to more complex atoms containing many electrons.
History:
Atomic spectra measure radiation absorbed or emitted by electrons "jumping" from one "state" to another, where a state is represented by values of n, ℓ, and m. The so-called "transition rule" limits what "jumps" are possible. In general, a jump or "transition" is allowed only if all three numbers change in the process. This is because a transition will be able to cause the emission or absorption of electromagnetic radiation only if it involves a change in the electromagnetic dipole of the atom.
History:
However, it was recognized in the early years of quantum mechanics that atomic spectra measured in an external magnetic field (see Zeeman effect) cannot be predicted with just n, ℓ, and m.
History:
In January 1925, when Ralph Kronig was still a Columbia University Ph.D. student, he first proposed electron spin after hearing Wolfgang Pauli in Tübingen. Werner Heisenberg and Pauli immediately hated the idea: They had just ruled out all imaginable actions from quantum mechanics. Now Kronig was proposing to set the electron rotating in space. Pauli especially ridiculed the idea of spin, saying that "it is indeed very clever but of course has nothing to do with reality". Faced with such criticism, Kronig decided not to publish his theory and the idea of electron spin had to wait for others to take the credit. Ralph Kronig had come up with the idea of electron spin several months before George Uhlenbeck and Samuel Goudsmit, but most textbooks credit these two Dutch physicists with the discovery.
History:
Pauli subsequently proposed (also in 1925) a new quantum degree of freedom (or quantum number) with two possible values, in order to resolve inconsistencies between observed molecular spectra and the developing theory of quantum mechanics.
Shortly thereafter Uhlenbeck and Goudsmit identified Pauli's new degree of freedom as electron spin.
Electron spin:
A spin-1/2 particle is characterized by an angular momentum quantum number for spin s = 1/2. In solutions of the Schrödinger–Pauli equation, angular momentum is quantized according to this number, so that the magnitude of the spin angular momentum is ‖S‖ = √(s(s+1)) ħ = (√3/2) ħ. The hydrogen spectrum fine structure is observed as a doublet corresponding to two possibilities for the z-component of the angular momentum, where for any given direction z: Sz = ms ħ = ±ħ/2, which has only two possible z-components for the electron. In the electron, the two different spin orientations are sometimes called "spin-up" or "spin-down".
Electron spin:
The spin property of an electron would give rise to a magnetic moment, which was a requisite for the fourth quantum number. The magnetic moment vector of an electron spin is given by μs = −(gs e / 2m) S, where −e is the electron charge, m is the electron mass, and gs is the electron spin g-factor, which is approximately 2.0023.
Its z-axis projection is given by the spin magnetic quantum number ms according to μz = −gs μB ms, where μB is the Bohr magneton.
Electron spin:
When atoms have even numbers of electrons the spin of each electron in each orbital has opposing orientation to that of its immediate neighbor(s). However, many atoms have an odd number of electrons or an arrangement of electrons in which there is an unequal number of "spin-up" and "spin-down" orientations. These atoms or electrons are said to have unpaired spins that are detected in electron spin resonance.
Detection of spin:
When lines of the hydrogen spectrum are examined at very high resolution, they are found to be closely spaced doublets. This splitting is called fine structure, and was one of the first experimental evidences for electron spin. The direct observation of the electron's intrinsic angular momentum was achieved in the Stern–Gerlach experiment.
Stern–Gerlach experiment The theory of spatial quantization of the spin angular momentum of electrons in atoms situated in a magnetic field needed to be proven experimentally. In 1922 (two years before the theoretical description of spin was created), Otto Stern and Walter Gerlach observed it in the experiment they conducted.
Detection of spin:
Silver atoms were evaporated using an electric furnace in a vacuum. Using thin slits, the atoms were guided into a flat beam and the beam sent through an in-homogeneous magnetic field before colliding with a metallic plate. The laws of classical physics predict that the collection of condensed silver atoms on the plate should form a thin solid line in the same shape as the original beam. However, the in-homogeneous magnetic field caused the beam to split in two separate directions, creating two lines on the metallic plate.
Detection of spin:
The phenomenon can be explained with the spatial quantization of the spin moment of momentum. In atoms the electrons are paired such that one spins upward and one downward, neutralizing the effect of their spin on the action of the atom as a whole. But in the valence shell of silver atoms, there is a single electron whose spin remains unbalanced.
Detection of spin:
The unbalanced spin creates a spin magnetic moment, making the electron act like a very small magnet. As the atoms pass through the in-homogeneous magnetic field, the force moment in the magnetic field influences the electron's dipole until its position matches the direction of the stronger field. The atom would then be pulled toward or away from the stronger magnetic field a specific amount, depending on the value of the valence electron's spin. When the spin of the electron is +1/2 the atom moves away from the stronger field, and when the spin is −1/2 the atom moves toward it. Thus the beam of silver atoms is split while traveling through the in-homogeneous magnetic field, according to the spin of each atom's valence electron.
Detection of spin:
In 1927 Phipps and Taylor conducted a similar experiment, using atoms of hydrogen, with similar results. Later scientists conducted experiments using other atoms that have only one electron in their valence shell: copper, gold, sodium, and potassium. Every time, two lines formed on the metallic plate.
The atomic nucleus also may have spin, but protons and neutrons are much heavier than electrons (about 1836 times), and the magnetic dipole moment is inversely proportional to the mass. So the nuclear magnetic dipole moment is much smaller than that of the whole atom. This small magnetic dipole was later measured by Stern, Frisch and Estermann.
Detection of spin:
Electron paramagnetic resonance For atoms or molecules with an unpaired electron, transitions in a magnetic field can also be observed in which only the spin quantum number changes, without change in the electron orbital or the other quantum numbers. This is the method of electron paramagnetic resonance (EPR) or electron spin resonance (ESR), used to study free radicals. Since only the magnetic interaction of the spin changes, the energy change is much smaller than for transitions between orbitals, and the spectra are observed in the microwave region.
Derivation:
For a solution of either the nonrelativistic Pauli equation or the relativistic Dirac equation, the quantized angular momentum (see angular momentum quantum number) can be written as ‖s‖ = √(s(s+1)) ħ, where s is the quantized spin vector or spinor, ‖s‖ is the norm of the spin vector, s is the spin quantum number associated with the spin angular momentum, and ħ is the reduced Planck constant. Given an arbitrary direction z (usually determined by an external magnetic field), the spin z-projection is given by sz = ms ħ, where ms is the secondary spin quantum number, ranging from −s to +s in steps of one. This generates 2s + 1 different values of ms.
Derivation:
The allowed values for s are non-negative integers or half-integers. Fermions have half-integer values, including the electron, proton and neutron, which all have s = 1/2. Bosons (such as the photon and all mesons) have integer spin values.
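The allowed ms values and the resulting spin magnitudes can be sketched in a few lines of Python (the helper names ms_values and spin_magnitude are illustrative choices, not from the article):

```python
from fractions import Fraction
import math

def ms_values(s):
    """Allowed secondary spin quantum numbers: -s, -s+1, ..., +s."""
    s = Fraction(s)
    return [-s + k for k in range(int(2 * s) + 1)]

def spin_magnitude(s, hbar=1.0):
    """Magnitude of the spin angular momentum: hbar * sqrt(s(s+1))."""
    s = float(Fraction(s))
    return hbar * math.sqrt(s * (s + 1))

# An electron (s = 1/2) has ms = -1/2 or +1/2, i.e. 2s + 1 = 2 values.
print(ms_values(Fraction(1, 2)))   # [Fraction(-1, 2), Fraction(1, 2)]
print(len(ms_values(1)))           # an s = 1 boson gives 3 values
```

Using Fraction keeps half-integer spins exact instead of relying on floating point.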
Algebra:
The algebraic theory of spin is a carbon copy of the theory of angular momentum in quantum mechanics.
First of all, spin satisfies the fundamental commutation relation [Si, Sj] = iħ εijk Sk, where εijk is the (antisymmetric) Levi-Civita symbol. This means that it is impossible to know two components of the spin at the same time, because of the restriction of the uncertainty principle.
Next, the eigenvectors of S² and Sz satisfy: S² |s, ms⟩ = ħ² s(s+1) |s, ms⟩, Sz |s, ms⟩ = ħ ms |s, ms⟩, and S± |s, ms⟩ = ħ √(s(s+1) − ms(ms ± 1)) |s, ms ± 1⟩, where S± = Sx ± iSy are the ladder (or "raising" and "lowering") operators.
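For spin-1/2 these relations can be checked numerically with the 2×2 Pauli matrices — a NumPy sketch, setting ħ = 1 for convenience:

```python
import numpy as np

hbar = 1.0
# Spin-1/2 operators: S_i = (hbar/2) * sigma_i, built from the Pauli matrices.
Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = (hbar / 2) * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = (hbar / 2) * np.array([[1, 0], [0, -1]], dtype=complex)

# Fundamental commutation relation: [Sx, Sy] = i hbar Sz (and cyclic permutations).
assert np.allclose(Sx @ Sy - Sy @ Sx, 1j * hbar * Sz)

# S^2 = s(s+1) hbar^2 I with s = 1/2.
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz
assert np.allclose(S2, 0.75 * hbar**2 * np.eye(2))

# Ladder operators raise/lower ms by one unit.
Sp, Sm = Sx + 1j * Sy, Sx - 1j * Sy
up = np.array([1, 0], dtype=complex)    # |s=1/2, ms=+1/2>
down = np.array([0, 1], dtype=complex)  # |s=1/2, ms=-1/2>
assert np.allclose(Sp @ down, hbar * up)   # raising |down> gives hbar |up>
assert np.allclose(Sm @ up, hbar * down)   # lowering |up> gives hbar |down>
assert np.allclose(Sp @ up, 0)             # ms = +1/2 cannot be raised further
```

The ladder coefficient ħ√(s(s+1) − ms(ms ± 1)) evaluates to exactly ħ here, matching the assertions.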
Energy levels from the Dirac equation:
In 1928, Paul Dirac developed a relativistic wave equation, now termed the Dirac equation, which predicted the spin magnetic moment correctly, and at the same time treated the electron as a point-like particle. Solving the Dirac equation for the energy levels of an electron in the hydrogen atom, all four quantum numbers including s occurred naturally and agreed well with experiment.
Total spin of an atom or molecule:
For some atoms the spins of several unpaired electrons (s1, s2, ...) are coupled to form a total spin quantum number S. This occurs especially in light atoms (or in molecules formed only of light atoms) when spin–orbit coupling is weak compared to the coupling between spins or the coupling between orbital angular momenta, a situation known as LS coupling because L and S are constants of motion. Here L is the total orbital angular momentum quantum number. For atoms with a well-defined S, the multiplicity of a state is defined as 2S + 1. This is equal to the number of different possible values of the total (orbital plus spin) angular momentum J for a given (L, S) combination, provided that S ≤ L (the typical case). For example, if S = 1, there are three states which form a triplet. The eigenvalues of Sz for these three states are +ħ, 0, and −ħ. The term symbol of an atomic state indicates its values of L, S, and J.
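The multiplicity rule can be made concrete in a short Python sketch (the function names are illustrative assumptions):

```python
from fractions import Fraction

def multiplicity(S):
    """Spin multiplicity, 2S + 1."""
    return int(2 * Fraction(S)) + 1

def j_values(L, S):
    """Allowed total angular momenta J = |L - S|, |L - S| + 1, ..., L + S."""
    L, S = Fraction(L), Fraction(S)
    lo, hi = abs(L - S), L + S
    return [lo + k for k in range(int(hi - lo) + 1)]

# Triplet example from the text: S = 1 gives multiplicity 3,
# and with L = 1 the possible J values are 0, 1, 2.
print(multiplicity(1))   # 3
print(j_values(1, 1))    # [Fraction(0, 1), Fraction(1, 1), Fraction(2, 1)]
```

When S ≤ L, the number of J values equals the multiplicity 2S + 1, as the text states.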
Total spin of an atom or molecule:
As examples, the ground states of both the oxygen atom and the dioxygen molecule have two unpaired electrons and are therefore triplet states. The atomic state is described by the term symbol 3P, and the molecular state by the term symbol 3Σg−.
Nuclear spin:
Atomic nuclei also have spins. The nuclear spin I is a fixed property of each nucleus and may be either an integer or a half-integer. The component mI of nuclear spin parallel to the z–axis can have (2I + 1) values I, I−1, ..., −I. For example, a 14N nucleus has I = 1, so that there are 3 possible orientations relative to the z–axis, corresponding to states mI = +1, 0 and −1. The spins I of different nuclei are interpreted using the nuclear shell model. Even-even nuclei with even numbers of both protons and neutrons, such as 12C and 16O, have spin zero. Odd mass number nuclei have half-integer spins, such as 3/2 for 7Li, 1/2 for 13C and 5/2 for 17O, usually corresponding to the angular momentum of the last nucleon added. Odd-odd nuclei with odd numbers of both protons and neutrons have integer spins, such as 3 for 10B, and 1 for 14N. Values of nuclear spin for a given isotope are found in the lists of isotopes for each element. (See isotopes of oxygen, isotopes of aluminium, etc.) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Aircraft spotting**
Aircraft spotting:
Aircraft spotting, or planespotting, is a hobby consisting of tracking the movement of aircraft, which is usually accomplished by photography or videography. Besides monitoring aircraft, planespotting enthusiasts (who are usually called planespotters) also record information regarding airports, air traffic control communications, airline routes, and more.
History and evolution:
Aviation enthusiasts have been watching airplanes and other aircraft since aviation began. However, as a hobby (distinct from active/wartime work), planespotting did not appear until the second half of the 20th century.
History and evolution:
During World War II and the subsequent Cold War some countries encouraged their citizens to become "planespotters" in an "observation corps" or similar public body for reasons of public security. Britain had the Royal Observer Corps, which operated between 1925 and 1995. A journal called The Aeroplane Spotter was published in January 1940. The publication included a glossary that was refined in 2010 and published online. The development of technology and global resources enabled a revolution in plane-spotting. Point-and-shoot cameras, DSLRs, and walkie-talkies significantly changed the hobby. With the help of the internet, websites such as FlightAware and Flightradar24 have made it possible for spotters to track and locate specific aircraft from all across the world. Websites specifically for aircraft, such as airliners.net, and social networking services, such as Twitter, Facebook and Instagram, allow spotters to record their sightings and upload their photos or see pictures of aircraft spotted by other people worldwide.
Techniques:
When spotting aircraft, observers generally notice the key attributes of an aircraft, such as a distinctive noise from its engine, the number of contrails it is producing, or its callsign. Observers can also assess the size of the aircraft and the number, type, and position of its engines. Another distinctive attribute is the position of wings relative to the fuselage and the degree to which they are swept rearwards. The wings may be above the fuselage, below it, or fixed at midpoint. The number of wings indicates whether it is a monoplane, biplane or triplane. The position of the tailplane relative to the fin(s) and the shape of the fin are other attributes. The configuration of the landing gear can be distinctive, as well as the size and shape of the cockpit and passenger windows along with the layout of emergency exits and doors.
Techniques:
Other features include the speed, cockpit placement, colour scheme or special equipment that changes the silhouette of the aircraft. Taken together these traits will enable the identification of an aircraft. If the observer is familiar with the airfield being used by the aircraft and its normal traffic patterns, he or she is more likely to leap quickly to a decision about the aircraft's identity – they may have seen the same type of aircraft from the same angle many times. This is particularly prevalent if the aircraft spotter is spotting commercial aircraft, operated by airlines that have a limited fleet.
Techniques:
Spotters use equipment such as ADS-B decoders to track the movements of aircraft. The two most famous devices used are the AirNav Systems RadarBox and Kinetic Avionics SBS series. Both of them read and process the radar data and show the movements on a computer screen. Another tool that spotters can use are apps such as FlightRadar24 or Flightaware, where they can look at arrival and departure schedules and track the location of aircraft that have their transponder on. Most of the decoders also allow the exporting of logs from a certain route or airport.
Spotting styles:
Some spotters will note and compile the markings, a national insignia or airline livery or logo, a squadron badge or code letters in the case of a military aircraft. Published manuals allow more information to be deduced, such as the delivery date or the manufacturer's construction number. Camouflage markings differ, depending on the surroundings in which that aircraft is expected to operate.
Spotting styles:
In general, most spotters attempt to see as many aircraft of a given type, a particular airline, or a particular subset of aircraft such as business jets, commercial airliners, military and/or general aviation aircraft. Some spotters attempt to see every airframe and are known as "frame spotters." Others are keen to see every registration worn by each aircraft.
Spotting styles:
Ancillary activities might include listening-in to air traffic control transmissions (using radio scanners, where that is legal), liaising with other "spotters" to clear up uncertainties as to what aircraft have been seen at specific times or in particular places. Several internet mailing list groups have been formed to help communicate aircraft seen at airports, queries and anomalies. These groups can cater to certain regions, certain aircraft types, or may appeal to a wider audience. The result is that information on aircraft movements can be delivered worldwide in a real-time fashion to spotters.
Spotting styles:
The hobbyist might travel long distances to visit different airports, to see an unusual aircraft, or to view the remains of aircraft withdrawn from use. Air shows usually draw large numbers of spotters as they are opportunities to enter airfields and air bases worldwide that are usually closed to the public and to see displayed aircraft at close range. Some aircraft may be placed in the care of museums (see Aviation archaeology) – or perhaps be cannibalized in order to repair a similar aircraft already preserved.
Spotting styles:
Aircraft registrations can be found in books, with online resources, or in monthly magazines from enthusiast groups. Most spotters maintained books of different aircraft fleets and would underline or check each aircraft seen. Each year, a revised version of the books would be published and the spotter would need to re-underline every aircraft seen. With the development of commercial aircraft databases spotters were finally able to record their sightings in an electronic database and produce reports that emulated the underlined books.
Legal ramifications:
The legal repercussions of the hobby were dramatically shown in November 2001 when fourteen aircraft spotters (twelve British, two Dutch) were arrested by Greek police after being observed at an open day at the Greek Air Force base at Kalamata. They were charged with espionage and faced a possible 20-year prison sentence if found guilty. After being held for six weeks, they were eventually released on $11,696 (£9,000) bail, and the charges reduced to the misdemeanor charge of illegal information collection. Confident of their innocence they returned for their trial in April, 2002 and were stunned to be found guilty, with eight of the group sentenced to three years, the rest for one year. At their appeal a year later, all were acquitted.
As airport watch groups:
In the wake of the targeting of airports by terrorists, enthusiasts' organisations and police in the UK have cooperated in creating a code of conduct for planespotters, in a similar vein to guidelines devised for train spotters. By asking enthusiasts to contact police if spotters believe they see or hear something suspicious, this is an attempt to allow enthusiasts to continue their hobby while increasing security around airports. Birmingham and Stansted pioneered this approach in Britain and prior to the 2012 London Olympics, RAF Northolt introduced a Flightwatch scheme based on the same cooperative principles. These changes are also being made abroad in countries such as Australia, where aviation enthusiasts are reporting suspicious or malicious actions to police.
As airport watch groups:
The organisation of such groups has now been echoed in parts of North America. For example, the Bensenville, Illinois police department have sponsored an Airport Watch group at the Chicago O'Hare Airport. Members are issued identification cards and given training to accurately record and report unusual activities around the airport perimeter. (Members are not permitted airside.) Meetings are attended and supported by the FBI, Chicago Department of Aviation and the TSA, who also provide regular training to group members. The Bensenville program was modeled on similar programs in Toronto, Ottawa and Minneapolis. In 2009, a similar airport watch group was organized between airport security and local aircraft spotters at Montréal–Pierre Elliott Trudeau International Airport. As of 2016, the group has 46 members and a special phone number to use to contact police if suspicious activity is seen around the airport area.
Extraordinary rendition:
Following the events of 9/11, information collected by planespotters helped uncover what is known as extraordinary rendition by the CIA. Information on unusual movements of rendition aircraft provided data that was mapped by critical geographers such as Trevor Paglen and the Institute for Applied Autonomy. These data and maps led first to news reports and then to a number of governmental and inter-governmental investigations.
**Decision desk**
Decision desk:
A decision desk is a team of experts that one or many US news organizations assemble to analyze incoming data about election results and project winners on election day. Decision desks use exit polling data as well as officially reported results as they come in, to project and then "call" the winners of elections on election night.
History:
Exit polling data was gathered by Voter News Service, which existed from 1990 to 2003 and was disbanded due to disastrous mistakes in the 2000 presidential election and in the 2002 elections. Afterward they formed the National Election Pool, which produced skewed results in the 2004 US presidential election and in the 2016 presidential elections. Megyn Kelly was made famous when she walked backstage to Fox News' decision desk team during the broadcast of the 2012 US presidential election results, when Karl Rove contradicted the team's prediction that Obama would win.
**Normal number (computing)**
Normal number (computing):
In computing, a normal number is a non-zero number in a floating-point representation which is within the balanced range supported by a given floating-point format: it is a floating point number that can be represented without leading zeros in its significand.
Normal number (computing):
The magnitude of the smallest normal number in a format is given by b^Emin, where b is the base (radix) of the format (commonly 2 or 10, for the binary and decimal number systems) and Emin depends on the size and layout of the format. Similarly, the magnitude of the largest normal number in a format is given by b^Emax × (b − b^(1−p)), where p is the precision of the format in digits and Emin is related to Emax as Emin = 1 − Emax. In the IEEE 754 binary and decimal formats, b, p, Emin, and Emax have the following values: binary32 (b = 2, p = 24, Emin = −126, Emax = 127), binary64 (b = 2, p = 53, Emin = −1022, Emax = 1023), binary128 (b = 2, p = 113, Emin = −16382, Emax = 16383), decimal32 (b = 10, p = 7, Emin = −95, Emax = 96), decimal64 (b = 10, p = 16, Emin = −383, Emax = 384), and decimal128 (b = 10, p = 34, Emin = −6143, Emax = 6144). For example, in the smallest decimal format in the table (decimal32), the range of positive normal numbers is 10^−95 through 9.999999 × 10^96.
Normal number (computing):
Non-zero numbers smaller in magnitude than the smallest normal number are called subnormal numbers (or denormal numbers).
Zero is considered neither normal nor subnormal. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
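These formulas can be checked against Python's built-in binary64 floats (the function name normal_range is an illustrative choice, not part of IEEE 754):

```python
import sys

def normal_range(b, p, emax):
    """Smallest and largest positive normal numbers of a floating-point
    format with base b, precision p digits, and maximum exponent emax."""
    emin = 1 - emax                       # IEEE 754 relation: Emin = 1 - Emax
    smallest = b ** emin                  # b^Emin
    largest = b ** emax * (b - b ** (1 - p))
    return smallest, largest

# IEEE 754 binary64: b = 2, p = 53, Emax = 1023.
lo, hi = normal_range(2.0, 53, 1023)
print(lo == sys.float_info.min)   # True: 2**-1022
print(hi == sys.float_info.max)   # True: (2 - 2**-52) * 2**1023
```

Both values match the platform's `sys.float_info` exactly, since every intermediate quantity is representable in binary64.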
**Aesthetics of science**
Aesthetics of science:
Aesthetics of science is the study of beauty and matters of taste within the scientific endeavour. Aesthetic features like simplicity, elegance and symmetry are sources of wonder and awe for many scientists, thus motivating scientific pursuit. Conversely, theories that have been empirically successful may be judged to lack aesthetic merit, which contributes to the desire to find a new theory that subsumes the old. The topic has been addressed by several publications discussing how aesthetic values are related to scientific experiments and theories.
**Release liner**
Release liner:
A release liner or release paper is a paper or plastic-based film sheet (usually applied during the manufacturing process) used to prevent a sticky surface from prematurely adhering. It is coated on one or both sides with a release agent, which provides a release effect against any type of a sticky material such as an adhesive or a mastic. Release liners are available in different colors, with or without printing under the low surface energy coating or on the backside of the liner. Release is separation of the liner from a sticky material; liner is the carrier for the release agent.
Industry segmentation:
Globally there are between 400 and 500 companies involved in making or dealing with release liner products on an industrial scale. In general there are two types of companies which are manufacturing release liner.
Industry segmentation:
Liner producer Commercial coating companies deal with a lot of different end uses of this industry. They provide unique solutions to their customers, based on a wide variety of substrates and an endless combination of release agents with specialized properties. Commercial coaters usually do not make finished products, just the release liner itself and then their customers will coat a sticky material on this liner and then apply the end product to it.
Industry segmentation:
In-house producer An in-house producer makes the release liner and uses it internally to manufacture the final product. In-house producers are typically focused on a very narrow range of products e.g. labels or tapes. They use a limited amount of substrates and release materials, which are specialized for their end applications.
Liner materials:
As liner material, the industry uses a wide variety of so-called substrates, which are the carrier materials of the release agent and which are needed to transport a sticky material from the manufacturer to an industrial or private end user. Typical liner materials are:
Paper
SCK – Super calendered Kraft paper, typically used for labels in the USA
Glassine – Also an SCK paper, but typically with a polyvinyl alcohol (PVOH) top coat; typically used for labels in Europe
CCK – Clay coated Kraft paper, also just called coated paper
MFK – Machine finished Kraft paper, which is the paper as it comes from a standard paper machine
MG – Machine glazed paper, which has been glazed, e.g. on a Yankee cylinder of a paper machine
Plastic film
BO-PET – A biaxially oriented PET film; a very high temperature resistant and tough film liner
BOPP – A biaxially oriented polypropylene (PP) film
Other polyolefins – Typically made out of high density polyethylene (HDPE), low density polyethylene (LDPE), or PP plastic resins
Plastic films in general are made out of plastic resins by a plastics extrusion process and can be made out of one single type of plastic material, a blend of different plastic materials, or multilayered coextrusions, providing them with unique and adjusted features for the application they are targeted for.
Liner materials:
Others Poly coated Kraft papers, which are typically MFK papers which have a polyolefin coating on one or both sides, to make them very smooth, moisture resistant and dimensionally stable.
Poly coated BO-PET film, which is a BO-PET film that has been coated on both sides with a polyolefin material. This way the tough and dimensionally stable PET film is combined with cheap polyolefin resin which makes the film a better carrier web for specialty applications.
Release agents:
Commonly used release agents for release liner can be crosslinkable silicone, other coatings, and materials that have a low surface energy.
Applications:
There are hundreds of different applications where release liner materials are being used, such as:
Pressure-sensitive labels
Pressure-sensitive tape
Self-adhesive plastic sheet
Embossed release paper, known as casting paper, used in the manufacture of textured materials, including bicast leather and artificial leather
**Eigenmoments**
Eigenmoments:
EigenMoments are a set of moments that are orthogonal, noise-robust, invariant to rotation, scaling and translation, and sensitive to the distribution of the signal. Their application can be found in signal processing and computer vision as descriptors of the signal or image. The descriptors can later be used for classification purposes.
They are obtained by performing orthogonalization, via eigen analysis, on geometric moments.
Framework summary:
EigenMoments are computed by performing eigen analysis on the moment space of an image by maximizing signal-to-noise ratio in the feature space in form of Rayleigh quotient.
This approach has several benefits in Image processing applications: Dependency of moments in the moment space on the distribution of the images being transformed, ensures decorrelation of the final feature space after eigen analysis on the moment space.
The ability of EigenMoments to take into account distribution of the image makes it more versatile and adaptable for different genres.
Generated moment kernels are orthogonal and therefore analysis on the moment space becomes easier. Transformation with orthogonal moment kernels into moment space is analogous to projection of the image onto a number of orthogonal axes.
Noisy components can be removed. This makes EigenMoments robust for classification applications.
Optimal information compaction can be obtained, and therefore only a few moments are needed to characterize the images.
Problem formulation:
Assume that a signal vector s ∈ R^n is taken from a certain distribution having correlation C ∈ R^(n×n), i.e. C = E[ss^T], where E[·] denotes expected value.
The dimension of the signal space, n, is often too large to be useful for practical applications such as pattern classification, so we need to transform the signal space into a space with lower dimensionality.
Problem formulation:
This is performed by a two-step linear transformation: q = W^T X^T s, where q = [q1, ..., qk]^T ∈ R^k is the transformed signal, X = [x1, ..., xm] ∈ R^(n×m) a fixed transformation matrix which transforms the signal into the moment space, and W = [w1, ..., wk] ∈ R^(m×k) the transformation matrix which we are going to determine by maximizing the SNR of the feature space resided by q. For the case of geometric moments, X would be the monomials. If m = k = n, a full-rank transformation would result; however, usually we have m ≤ n and k ≤ m. This is especially the case when n is of high dimension.
We seek the $W$ that maximizes the SNR of the feature space: $\mathrm{SNR}_{\text{transform}} = \frac{w^T X^T C X w}{w^T X^T N X w}$, where $N$ is the correlation matrix of the noise signal. The problem can thus be formulated as $\{w_1, \ldots, w_k\} = \operatorname*{argmax}_w \frac{w^T X^T C X w}{w^T X^T N X w}$ subject to the constraints $w_i^T X^T N X w_j = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta.
This maximization is a Rayleigh quotient: letting $A = X^T C X$ and $B = X^T N X$, it can be written as $\{w_1, \ldots, w_k\} = \operatorname*{argmax}_w \frac{w^T A w}{w^T B w}$, with $w_i^T B w_j = \delta_{ij}$.
Rayleigh quotient: Optimization of the Rayleigh quotient has the form $\max_w \frac{w^T A w}{w^T B w}$, where $A$ and $B$ are both symmetric and $B$ is positive definite and therefore invertible.
Scaling $w$ does not change the value of the objective function, hence an additional scalar constraint $w^T B w = 1$ can be imposed on $w$ without losing any solution of the optimization.
This constrained optimization problem can be solved using a Lagrange multiplier: $\max_w w^T A w$ subject to $w^T B w = 1$, i.e. $\max_w \left(w^T A w - \lambda w^T B w\right)$. Setting the first derivative to zero yields $Aw = \lambda Bw$, which is an instance of the generalized eigenvalue problem (GEP).
The GEP has the form $Aw = \lambda Bw$; for any pair $(w, \lambda)$ that solves this equation, $w$ is called a generalized eigenvector and $\lambda$ a generalized eigenvalue.
Finding a $w$ and $\lambda$ that satisfy this equation produces the result that optimizes the Rayleigh quotient.
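As a concrete illustration, the symmetric-definite GEP $Aw = \lambda Bw$ above can be solved numerically. The sketch below uses random stand-ins for $A = X^T C X$ and $B = X^T N X$ (illustrative only, not real image moments) and SciPy's `eigh`, which returns eigenvectors normalized so that $W^T B W = I$ — exactly the constraint stated above.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
m, k = 6, 4

# Random stand-ins for A = X^T C X (symmetric PSD) and
# B = X^T N X (symmetric positive definite).
A = rng.standard_normal((m, m)); A = A @ A.T
B = rng.standard_normal((m, m)); B = B @ B.T + m * np.eye(m)

# eigh solves the symmetric-definite GEP  A w = lambda B w,
# returning eigenvectors normalized so that W^T B W = I.
eigvals, W = eigh(A, B)

# Keep the k directions with the largest generalized eigenvalues,
# i.e. the highest Rayleigh quotients (SNR).
order = np.argsort(eigvals)[::-1]
Wk = W[:, order[:k]]

# The Rayleigh quotient of each kept column equals its eigenvalue.
for w, lam in zip(Wk.T, eigvals[order[:k]]):
    assert np.isclose((w @ A @ w) / (w @ B @ w), lam)
```

This direct GEP route and the simultaneous-diagonalization route described next produce the same feature space.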
One way of maximizing the Rayleigh quotient is thus to solve the generalized eigenvalue problem. Dimension reduction is performed by choosing, out of the $m$ components, the first $k$ components $w_i$, $i = 1, \ldots, k$, with the highest values of $R(w)$, and discarding the rest. This transformation can be interpreted as rotating and scaling the moment space into a feature space with maximized SNR; the first $k$ components are therefore the components with the $k$ highest SNR values.
The other way to arrive at this solution is to use simultaneous diagonalization instead of the generalized eigenvalue problem.
Simultaneous diagonalization: Let $A = X^T C X$ and $B = X^T N X$ as before. $W$ can be written as the product of two transformation matrices: $W = W_1 W_2$.
$W_1$ can be found by first diagonalizing $B$: $P^T B P = D_B$, where $D_B$ is a diagonal matrix with entries sorted in increasing order. Since $B$ is positive definite, $D_B > 0$. The large eigenvalues can be discarded and those close to $0$ retained, since the energy of the noise is close to $0$ in the retained subspace; at this stage the eigenvectors with large eigenvalues are discarded as well.
Let $\hat{P}$ be the first $k$ columns of $P$; then $\hat{P}^T B \hat{P} = \hat{D}_B$, where $\hat{D}_B$ is the $k \times k$ principal submatrix of $D_B$. Let $W_1 = \hat{P}\hat{D}_B^{-1/2}$; then $W_1^T B W_1 = (\hat{P}\hat{D}_B^{-1/2})^T B (\hat{P}\hat{D}_B^{-1/2}) = I$. $W_1$ whitens $B$ and reduces the dimensionality from $m$ to $k$. The transformed space occupied by $q' = W_1^T X^T s$ is called the noise space.
Then, $W_1^T A W_1$ is diagonalized: $W_2^T W_1^T A W_1 W_2 = D_A$, where $W_2^T W_2 = I$ and $D_A$ is the diagonal matrix of eigenvalues of $W_1^T A W_1$. All eigenvalues and their corresponding eigenvectors may be retained, since most of the noise has already been discarded in the previous step.
Finally the transformation is given by $W = W_1 W_2$, where $W$ diagonalizes both the numerator and the denominator of the SNR: $W^T A W = D_A$, $W^T B W = I$, and the transformation of the signal $s$ is defined as $q = W^T X^T s = W_2^T W_1^T X^T s$.
Information loss: The information lost by discarding some of the eigenvalues and eigenvectors is $\eta = 1 - \frac{\operatorname{trace}(W_1^T A W_1)}{\operatorname{trace}(D_B^{-1/2} P^T A P D_B^{-1/2})} = 1 - \frac{\operatorname{trace}(\hat{D}_B^{-1/2} \hat{P}^T A \hat{P} \hat{D}_B^{-1/2})}{\operatorname{trace}(D_B^{-1/2} P^T A P D_B^{-1/2})}.$
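The two-step simultaneous diagonalization and the information-loss measure $\eta$ can be sketched numerically. Random matrices again stand in for $A$ and $B$; this is an illustration of the procedure, not real moment data.

```python
import numpy as np

rng = np.random.default_rng(1)
m, k = 6, 4

# Random stand-ins for A = X^T C X and positive-definite B = X^T N X.
A = rng.standard_normal((m, m)); A = A @ A.T
B = rng.standard_normal((m, m)); B = B @ B.T + m * np.eye(m)

# Step 1: diagonalize B (eigenvalues ascending), keep the k columns of P
# with the smallest noise energy, and whiten: W1 = P_hat D_hat^{-1/2}.
d_B, P = np.linalg.eigh(B)          # P^T B P = diag(d_B)
P_hat, d_hat = P[:, :k], d_B[:k]
W1 = P_hat / np.sqrt(d_hat)         # column-wise scaling; W1^T B W1 = I

# Step 2: diagonalize W1^T A W1 with an orthogonal W2.
d_A, W2 = np.linalg.eigh(W1.T @ A @ W1)

# Combined transform diagonalizes numerator and denominator of the SNR.
W = W1 @ W2                         # W^T A W = diag(d_A), W^T B W = I

# Information loss from discarding the m - k noisiest directions.
full = P / np.sqrt(d_B)             # full.T @ A @ full = D_B^{-1/2} P^T A P D_B^{-1/2}
eta = 1 - np.trace(W1.T @ A @ W1) / np.trace(full.T @ A @ full)
```

Because $A$ is positive semi-definite, the retained trace is a subset of the full diagonal, so $\eta$ always lies in $[0, 1]$.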
Eigenmoments:
EigenMoments are derived by applying the above framework to geometric moments. They can be derived for both 1D and 2D signals.
1D signal: If we let $X = [1, x, x^2, \ldots, x^{m-1}]$, i.e. the monomials, the transformation $X^T$ yields the geometric moments, denoted by the vector $M$, of the signal $s = [s(x)]$: $M = X^T s$. In practice it is difficult to estimate the correlation of the signal due to an insufficient number of samples, so parametric models are used instead.
One such model is $r(x_1, x_2) = r(0,0)\,e^{-c(x_1 - x_2)^2}$, where $r(0,0) = E[\operatorname{tr}(ss^T)]$. This correlation model can be replaced by others, but it covers general natural images.
Since $r(0,0)$ does not affect the maximization, it can be dropped.
$A = X^T C X = \int_{-1}^{1}\int_{-1}^{1} \left[x_1^j x_2^i\, e^{-c(x_1 - x_2)^2}\right]_{i,j=0}^{m-1} dx_1\, dx_2.$ The correlation of the noise can be modelled as $\sigma_n^2\,\delta(x_1, x_2)$, where $\sigma_n^2$ is the energy of the noise. Again $\sigma_n^2$ can be dropped, because this constant has no effect on the maximization problem.
$B = X^T N X = \int_{-1}^{1}\int_{-1}^{1} \left[x_1^j x_2^i\, \delta(x_1, x_2)\right]_{i,j=0}^{m-1} dx_1\, dx_2 = \int_{-1}^{1} \left[x_1^{j+i}\right]_{i,j=0}^{m-1} dx_1 = X^T X.$ Using the computed $A$ and $B$ and applying the algorithm of the previous section, we find $W$ and a set of transformed monomials $\Phi = [\phi_1, \ldots, \phi_k] = XW$, which produces the moment kernels of EigenMoments. These kernels decorrelate the correlation in the image.
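Assembling $A$ and $B$ for the 1D case can be sketched as follows (the values of $c$ and $m$ are illustrative). Note that $B$ has the closed form $B_{ij} = 2/(i+j+1)$ for even $i+j$ and $0$ otherwise, since odd powers integrate to zero on $[-1, 1]$.

```python
import numpy as np
from scipy.integrate import dblquad

c, m = 0.5, 4   # illustrative parameters; small m keeps the integration cheap

# A[i, j] = double integral of x1^j x2^i exp(-c (x1 - x2)^2) over [-1, 1]^2
A = np.empty((m, m))
for i in range(m):
    for j in range(m):
        A[i, j], _ = dblquad(
            lambda x1, x2, i=i, j=j: x1**j * x2**i * np.exp(-c * (x1 - x2)**2),
            -1, 1, -1, 1)

# B = X^T X:  B[i, j] = integral of x^(i+j) over [-1, 1]
ii, jj = np.indices((m, m))
B = np.where((ii + jj) % 2 == 0, 2.0 / (ii + jj + 1), 0.0)
```

With these matrices in hand, the GEP or simultaneous-diagonalization procedure from the previous section yields $W$ and the kernels $\Phi = XW$.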
$\Phi^T C \Phi = (XW)^T C (XW) = D_C$, and the kernels are orthogonal: $\Phi^T \Phi = (XW)^T(XW) = W^T X^T X W = W^T X^T N X W = W^T B W = I$.
Example computation: Taking $c = 0.5$, the dimension of the moment space as $m = 6$ and the dimension of the feature space as $k = 4$, numerical values for $W$ and the moment kernels $\Phi$ are obtained. (The tabulated matrices are omitted.)
2D signal: The derivation for a 2D signal is the same as for a 1D signal, except that conventional geometric moments are directly employed to obtain the set of 2D EigenMoments.
The geometric moment of order $(p+q)$ of a 2D image signal is $m_{pq} = \int_{-1}^{1}\int_{-1}^{1} x^p y^q f(x,y)\, dx\, dy$, collected in the matrix $M = \{m_{j,i}\}_{i,j=0}^{m-1}$. The set of 2D EigenMoments is then $\Omega = W^T M W$, where $\Omega = \{\Omega_{j,i}\}_{i,j=0}^{k-1}$ is the matrix containing the set of EigenMoments.
$\Omega_{j,i} = \sum_{r=0}^{m-1}\sum_{s=0}^{m-1} w_{r,j}\, w_{s,i}\, m_{r,s}.$
EigenMoment invariants (EMI): To obtain a set of moment invariants, normalized geometric moments $\hat{M}$ can be used instead of $M$. Normalized geometric moments are invariant to rotation, scaling and translation, and are defined by $\hat{m}_{pq} = \alpha^{p+q+2} \int_{-1}^{1}\int_{-1}^{1} \left[(x - x_c)\cos\theta + (y - y_c)\sin\theta\right]^p \times \left[-(x - x_c)\sin\theta + (y - y_c)\cos\theta\right]^q f(x,y)\, dx\, dy,$ where $(x_c, y_c) = (m_{10}/m_{00},\, m_{01}/m_{00})$ is the centroid of the image $f(x,y)$, $\theta = \frac{1}{2}\tan^{-1}\!\left(\frac{2\,m_{11}/m_{00}}{m_{20}/m_{00} - m_{02}/m_{00}}\right)$, and $\alpha = \left[S/m_{00}\right]^{1/2}$. $S$ in this equation is a scaling factor depending on the image; it is usually set to 1 for binary images.
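The element-wise double sum for $\Omega_{j,i}$ is simply the $(j,i)$ entry of the matrix product $W^T M W$; the quick numerical check below uses random stand-ins for the moment matrix $M$ and transform $W$ (hypothetical values, for illustration only).

```python
import numpy as np

rng = np.random.default_rng(2)
m_dim, k = 5, 3

# Hypothetical stand-ins for the moment matrix M and transform W.
M = rng.standard_normal((m_dim, m_dim))
W = rng.standard_normal((m_dim, k))

# Matrix form of the 2D EigenMoments.
Omega = W.T @ M @ W

# Element-wise double sum: Omega[j, i] = sum_{r,s} w_{r,j} w_{s,i} m_{r,s}
Omega_sum = np.array([[sum(W[r, j] * W[s, i] * M[r, s]
                           for r in range(m_dim) for s in range(m_dim))
                       for i in range(k)]
                      for j in range(k)])
```

The matrix form is the one to use in practice; the double sum is shown only to make the index convention explicit.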
**Synthesis of carbon nanotubes**
Techniques have been developed to produce carbon nanotubes in sizable quantities, including arc discharge, laser ablation, high-pressure carbon monoxide disproportionation, and chemical vapor deposition (CVD). Most of these processes take place in a vacuum or with process gases. CVD growth of CNTs can occur in vacuum or at atmospheric pressure. Large quantities of nanotubes can be synthesized by these methods; advances in catalysis and continuous growth are making CNTs more commercially viable.
Types:
Arc discharge: Nanotubes were observed in 1991 in the carbon soot of graphite electrodes during an arc discharge, using a current of 100 amps, that was intended to produce fullerenes. However, the first macroscopic production of carbon nanotubes was achieved in 1992 by two researchers at NEC's Fundamental Research Laboratory, using the same method as in 1991. During this process, the carbon contained in the negative electrode sublimates because of the high discharge temperatures.
The yield for this method is up to 30% by weight and it produces both single- and multi-walled nanotubes with lengths of up to 50 micrometers with few structural defects.
The arc-discharge technique uses higher temperatures (above 1,700 °C) for CNT synthesis, which typically produces CNTs with fewer structural defects in comparison with other methods.
Laser ablation In laser ablation, a pulsed laser vaporizes a graphite target in a high-temperature reactor while an inert gas is led into the chamber. Nanotubes develop on the cooler surfaces of the reactor as the vaporized carbon condenses. A water-cooled surface may be included in the system to collect the nanotubes.
This process was developed by Richard Smalley and co-workers at Rice University, who, at the time of the discovery of carbon nanotubes, were blasting metals with a laser to produce various metal molecules. When they heard of the existence of nanotubes they replaced the metals with graphite to create multi-walled carbon nanotubes. Later that year the team used a composite of graphite and metal catalyst particles (the best yield was from a cobalt and nickel mixture) to synthesize single-walled carbon nanotubes. The laser ablation method yields around 70% and produces primarily single-walled carbon nanotubes with a controllable diameter determined by the reaction temperature. However, it is more expensive than either arc discharge or chemical vapor deposition.
Plasma torch: Single-walled carbon nanotubes can also be synthesized by a thermal plasma method, first invented in 2000 at INRS (Institut national de la recherche scientifique) in Varennes, Canada, by Olivier Smiljanic. In this method, the aim is to reproduce the conditions prevailing in the arc-discharge and laser-ablation approaches, but a carbon-containing gas is used instead of graphite vapor to supply the necessary carbon. As a result, the growth of SWNTs is more efficient: decomposing the gas can be 10 times less energy-consuming than graphite vaporization. The process is also continuous and low-cost. A gaseous mixture of argon, ethylene and ferrocene is introduced into a microwave plasma torch, where it is atomized by the atmospheric-pressure plasma, which has the form of an intense 'flame'. The fumes created by the flame contain SWNTs, metallic and carbon nanoparticles, and amorphous carbon. Another way to produce single-walled carbon nanotubes with a plasma torch is the induction thermal plasma method, implemented in 2005 by groups from the University of Sherbrooke and the National Research Council of Canada. The method is similar to arc discharge in that both use ionized gas to reach the high temperature necessary to vaporize carbon-containing substances and the metal catalysts necessary for the ensuing nanotube growth. The thermal plasma is induced by high-frequency oscillating currents in a coil and is maintained in flowing inert gas. Typically, a feedstock of carbon black and metal catalyst particles is fed into the plasma and then cooled down to form single-walled carbon nanotubes. Different single-wall carbon nanotube diameter distributions can be synthesized.
The induction thermal plasma method can produce up to 2 grams of nanotube material per minute, which is higher than the arc discharge or the laser ablation methods.
Chemical vapor deposition (CVD): The catalytic vapor-phase deposition of carbon was reported in 1952 and 1959, but it was not until 1993 that carbon nanotubes were formed by this process. In 2007, researchers at the University of Cincinnati (UC) developed a process to grow aligned carbon nanotube arrays of length 18 mm on a FirstNano ET3000 carbon nanotube growth system. During CVD, a substrate is prepared with a layer of metal catalyst particles, most commonly nickel, cobalt, iron, or a combination. The metal nanoparticles can also be produced in other ways, including reduction of oxides or oxide solid solutions. The diameters of the nanotubes that are to be grown are related to the size of the metal particles. This can be controlled by patterned (or masked) deposition of the metal, annealing, or by plasma etching of a metal layer. The substrate is heated to approximately 700 °C. To initiate the growth of nanotubes, two gases are bled into the reactor: a process gas (such as ammonia, nitrogen or hydrogen) and a carbon-containing gas (such as acetylene, ethylene, ethanol or methane). Nanotubes grow at the sites of the metal catalyst: the carbon-containing gas is broken apart at the surface of the catalyst particle, and the carbon is transported to the edges of the particle, where it forms the nanotubes. This mechanism is still being studied. The catalyst particles can stay at the tips of the growing nanotubes during growth, or remain at the nanotube base, depending on the adhesion between the catalyst particle and the substrate. Thermal catalytic decomposition of hydrocarbons has become an active area of research and can be a promising route for the bulk production of CNTs. The fluidised-bed reactor is the most widely used reactor for CNT preparation; scale-up of the reactor is the major challenge. CVD is the most developed method for commercial production of CNTs, giving more flexibility in terms of diameter, length and morphology of the nanotubes.
The CVD method enabled OCSiAl, the world's leading manufacturer of single-wall carbon nanotubes, to produce high-quality single-walled carbon nanotubes at a scale of hundreds of tonnes annually at a market-accepted price for industrial use. CVD is the most widely used method for the production of carbon nanotubes. For this purpose, the metal nanoparticles are mixed with a catalyst support such as MgO or Al2O3 to increase the surface area for higher yield of the catalytic reaction of the carbon feedstock with the metal particles. One issue in this synthesis route is the removal of the catalyst support via an acid treatment, which can sometimes destroy the original structure of the carbon nanotubes. However, alternative catalyst supports that are soluble in water have proven effective for nanotube growth. If a plasma is generated by the application of a strong electric field during growth (plasma-enhanced chemical vapor deposition), then the nanotube growth will follow the direction of the electric field. By adjusting the geometry of the reactor it is possible to synthesize vertically aligned carbon nanotubes (i.e., perpendicular to the substrate), a morphology that has been of interest to researchers studying electron emission from nanotubes. Without the plasma, the resulting nanotubes are often randomly oriented. Under certain reaction conditions, even in the absence of a plasma, closely spaced nanotubes will maintain a vertical growth direction, resulting in a dense array of tubes resembling a carpet or forest.
Of the various means for nanotube synthesis, CVD shows the most promise for industrial-scale deposition, because of its price/unit ratio, and because CVD is capable of growing nanotubes directly on a desired substrate, whereas the nanotubes must be collected in the other growth techniques. The growth sites are controllable by careful deposition of the catalyst. In 2007, a team from Meijo University demonstrated a high-efficiency CVD technique for growing carbon nanotubes from camphor. Researchers at Rice University, until recently led by the late Richard Smalley, have concentrated upon finding methods to produce large, pure amounts of particular types of nanotubes. Their approach grows long fibers from many small seeds cut from a single nanotube; all of the resulting fibers were found to be of the same diameter as the original nanotube and are expected to be of the same type as the original nanotube.
Super-growth CVD: Super-growth CVD (water-assisted chemical vapor deposition) was developed by Kenji Hata, Sumio Iijima and co-workers at AIST, Japan. In this process, the activity and lifetime of the catalyst are enhanced by the addition of water into the CVD reactor. Dense millimeter-tall vertically aligned nanotube arrays (VANTAs) or "forests", aligned normal to the substrate, were produced. The forest height could be expressed as $H(t) = \beta\tau_0\left(1 - e^{-t/\tau_0}\right)$.
In this equation, $\beta$ is the initial growth rate and $\tau_0$ is the characteristic catalyst lifetime. The nanotubes' specific surface area exceeds 1,000 m2/g (capped) or 2,200 m2/g (uncapped), surpassing the value of 400–1,000 m2/g for HiPco samples. The synthesis efficiency is about 100 times higher than for the laser ablation method. The time required to make SWNT forests 2.5 mm tall by this method was 10 minutes in 2004. Those SWNT forests can be easily separated from the catalyst, yielding clean SWNT material (purity >99.98%) without further purification. For comparison, as-grown HiPco CNTs contain about 5–35% metal impurities; the material is therefore purified through dispersion and centrifugation, which damages the nanotubes. Super-growth avoids this problem. Patterned, highly organized single-walled nanotube structures were successfully fabricated using the super-growth technique.
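The growth law $H(t) = \beta\tau_0(1 - e^{-t/\tau_0})$ can be sketched as follows: early growth is nearly linear at rate $\beta$, and the height saturates at $\beta\tau_0$ as the catalyst loses activity. The parameter values below are illustrative placeholders, not measured values.

```python
import numpy as np

# H(t) = beta * tau0 * (1 - exp(-t / tau0)); beta and tau0 are
# illustrative values, not measured ones.
def forest_height(t_min, beta_um_per_min=250.0, tau0_min=10.0):
    """Forest height in micrometres after t_min minutes of growth."""
    return beta_um_per_min * tau0_min * (1.0 - np.exp(-t_min / tau0_min))

# Growth slows as t exceeds the catalyst lifetime tau0 and the
# height approaches the asymptote beta * tau0.
heights = forest_height(np.array([1.0, 10.0, 100.0]))
```

Fitting measured height-versus-time data to this curve gives estimates of $\beta$ and $\tau_0$ for a particular catalyst.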
The super-growth method is basically a variation of CVD. Therefore, it is possible to grow material containing SWNTs, DWNTs and MWNTs, and to alter their ratios by tuning the growth conditions. Their ratios change with the thickness of the catalyst layer: a thicker catalyst yields more MWNTs with wider tube diameters. The vertically aligned nanotube forests originate from a "zipping effect" when they are immersed in a solvent and dried. The zipping effect is caused by the surface tension of the solvent and the van der Waals forces between the carbon nanotubes. It aligns the nanotubes into a dense material, which can be formed in various shapes, such as sheets and bars, by applying weak compression during the process. Densification increases the Vickers hardness by about 70 times, and the density is 0.55 g/cm3. The packed carbon nanotubes are more than 1 mm long and have a carbon purity of 99.9% or higher; they also retain the desirable alignment properties of the nanotube forest.
Liquid electrolysis method: In 2015, researchers at George Washington University discovered a new pathway to synthesize MWCNTs by electrolysis of molten carbonates. The mechanism is similar to CVD: some metal ions are reduced to metallic form and attach to the cathode as nucleation points for the growth of CNTs. The reaction at the cathode is $\mathrm{Li_2CO_3 \rightarrow Li_2O + C_{(CNTs)} + O_2}$. The formed lithium oxide can absorb carbon dioxide in situ (if present) and form lithium carbonate, as shown in the following equation.
$\mathrm{Li_2O + CO_2 \rightarrow Li_2CO_3}$. Thus the net reaction is $\mathrm{CO_2 \rightarrow C_{(CNTs)} + O_2}$. In other words, the only reactant is the greenhouse gas carbon dioxide, while the product is high-value CNTs. This discovery was highlighted as a possible technology for carbon dioxide capture and conversion.
Natural, incidental, and controlled flame environments: Fullerenes and carbon nanotubes are not necessarily products of high-tech laboratories; they are commonly formed in such mundane places as ordinary flames, produced by burning methane, ethylene, and benzene, and they have been found in soot from both indoor and outdoor air. However, these naturally occurring varieties can be highly irregular in size and quality because the environment in which they are produced is often highly uncontrolled. Thus, although they can be used in some applications, they can lack the high degree of uniformity necessary to satisfy the many needs of both research and industry. Recent efforts have focused on producing more uniform carbon nanotubes in controlled flame environments. Such methods have promise for large-scale, low-cost nanotube synthesis based on theoretical models, though they must compete with rapidly developing large-scale CVD production.
Purification:
Removal of catalysts Nanoscale metal catalysts are important ingredients for fixed- and fluidized-bed CVD synthesis of CNTs. They allow increasing the growth efficiency of CNTs and may give control over their structure and chirality. During synthesis, catalysts can convert carbon precursors into tubular carbon structures but can also form encapsulating carbon overcoats. Together with metal oxide supports they may therefore attach to or become incorporated into the CNT product. The presence of metal impurities can be problematic for many applications. Especially catalyst metals like nickel, cobalt or yttrium may be of toxicological concern. While unencapsulated catalyst metals may be readily removable by acid washing, encapsulated ones require oxidative treatment for opening their carbon shell. The effective removal of catalysts, especially of encapsulated ones, while preserving the CNT structure is a challenge and has been addressed in many studies. A new approach to break carbonaceous catalyst encapsulations is based on rapid thermal annealing.
Application-related issues:
Many electronic applications of carbon nanotubes crucially rely on techniques of selectively producing either semiconducting or metallic CNTs, preferably of a certain chirality. Several methods of separating semiconducting and metallic CNTs are known, but most of them are not yet suitable for large-scale technological processes. The most efficient method relies on density-gradient ultracentrifugation, which separates surfactant-wrapped nanotubes by the minute difference in their density. This density difference often translates into a difference in nanotube diameter and (semi)conducting properties. Another method of separation uses a sequence of freezing, thawing, and compression of SWNTs embedded in agarose gel. This process results in a solution containing 70% metallic SWNTs and leaves a gel containing 95% semiconducting SWNTs. The diluted solutions separated by this method show various colors. The carbon nanotubes separated by this method have been applied to electrodes, e.g. electric double-layer capacitors. Moreover, SWNTs can be separated by column chromatography; yields are 95% for semiconducting SWNTs and 90% for metallic SWNTs. In addition to separation of semiconducting and metallic SWNTs, it is possible to sort SWNTs by length, diameter, and chirality. The highest-resolution length sorting, with length variation of <10%, has thus far been achieved by size exclusion chromatography (SEC) of DNA-dispersed carbon nanotubes (DNA-SWNT). SWNT diameter separation has been achieved by density-gradient ultracentrifugation (DGU) using surfactant-dispersed SWNTs and by ion-exchange chromatography (IEC) for DNA-SWNT. Purification of individual chiralities has also been demonstrated with IEC of DNA-SWNT: specific short DNA oligomers can be used to isolate individual SWNT chiralities. Thus far, 12 chiralities have been isolated at purities ranging from 70% for (8,3) and (9,5) SWNTs to 90% for (6,5), (7,5) and (10,5) SWNTs.
Alternatively, carbon nanotubes have been successfully sorted by chirality using the aqueous two-phase extraction method. There have been successful efforts to integrate these purified nanotubes into electronic devices, such as field-effect transistors. An alternative to separation is the development of selective growth of semiconducting or metallic CNTs. This can be achieved by CVD that involves a combination of ethanol and methanol gases on a quartz substrate, resulting in horizontally aligned arrays of 95–98% semiconducting nanotubes. Nanotubes are usually grown on nanoparticles of magnetic metal (Fe, Co), which facilitates production of electronic (spintronic) devices. In particular, control of the current through a field-effect transistor by a magnetic field has been demonstrated in such a single-tube nanostructure.
**Tribold**
Tribold was a company that operated in the Operational support systems market, producing Enterprise Product Management (EPM) software specifically for telecommunications service providers. Tribold EPM was a single, integrated suite of Enterprise Product Management applications.
Tribold EPM was based on a Centralized Product & Service Catalog (CPC) and a Product lifecycle management (PLM) solution.
Tribold was headquartered in London, United Kingdom, with offices in North America and Asia. Tribold was founded in 2003 as a privately held company. It was purchased by Sigma Systems in 2013.
History:
Tribold Limited was established in October 2003 by former Accenture Communications executives. It was founded as a commercial software company to make and sell Product Catalog/Product Lifecycle software to Communication Service Providers.
In 2004, Tribold launched the first generally available version of its software, Tribold Product Portfolio Manager ('PPM'). In 2005 the company secured its first institutional investment from Eden Ventures. In 2006, Tribold contracted with its first major Tier 1 customer, Telstra in Australia, and this was closely followed by its first major European Tier 1 customer, Telekom Austria. In 2007 it also secured a US$15 million institutional funding round from Draper Fisher Jurvetson Esprit and Eden Ventures. In 2009, it entered another institutional funding round, securing a further US$11 million from Draper Fisher Jurvetson Esprit, Eden Ventures and Intel Capital. In 2011, it opened a research and development centre in Cwmbran, with financial support from the Welsh government.
**Carvel (boat building)**
Carvel built or carvel planking is a method of boat building in which hull planks are laid edge to edge and fastened to a robust frame, thereby forming a smooth surface. Traditionally the planks are neither attached to, nor slotted into, each other, having only a caulking sealant between the planks to keep water out. Modern carvel builders may attach the planks to each other with glues and fixings. It is a "frame first" method of hull construction, where the shape is determined by the framework onto which the planks are fixed. This is in contrast to "plank first" or "shell first" methods, where the outer skin of the hull is made and then reinforced by the insertion of timbers that are fitted to that shape. The most common modern "plank first" method is clinker construction; in the classical period "plank first" involved joining the edges of planks with mortise and tenon joints within the thickness of the timbers, superficially giving the smooth-hull appearance of carvel construction, but achieved by entirely different means.
Carvel (boat building):
Compared to clinker-built hulls, carvel construction allowed larger ships to be built. This is because the fastenings of a clinker hull took all the hogging and sagging forces imposed by the ship moving through large waves. In carvel construction, these forces are also taken by the edge-to-edge contact of the hull planks.
Etymology:
From Middle English carvel, carvelle, carvile, kervel (“small ship; caravel”); from Old French caruelle, carvelle, kirvelle. The term was used in English when caravels became popular in Northern European waters from c. 1440 onwards, and the method of hull construction took the name of the first vessel type made in that way in English and European shipyards.
History:
Carvel construction originated in the Mediterranean during the first millennium CE. It gradually replaced the edge-to-edge joining of hull planks by mortises and tenons – a "planking first" technique – which had been used by ancient Egyptians, Phoenicians and for much of the classical period. Archaeological evidence for this transition suggests it took place from c. 500 CE to the 9th century. Its slow adoption involved some variation and experimentation. Some ships were built using "framing-first", as opposed to the full "frame-first" system. In "framing-first", some of the framing is installed in the lower part of the hull, followed by the planking of that area, more framing is added to increase the height of the hull, and then more planking added to that. (The Romano-Celtic ship-building tradition of Northern Europe used "framing-first", but this part of Europe did not adopt the full "frame-first" method until much later, as discussed below.): 101 The changeover from planking-first to frame-first happened over the same period that the Mediterranean Square Sail rig was being replaced by lateen rig. That change has been suggested to save building, fitting out and maintenance costs (though previously it was thought to be to achieve better sailing performance – something which, against the presumptions of many maritime historians, can be shown not to have happened). The move to carvel construction is believed to be another cost-saving measure (though it is felt that this is not well understood by marine archaeologists). The difficult skill of mortising planks at precisely the right angle (where the hull is curved at the turn of the bilges) is avoided. Carvel construction allows hull shape to be determined by design, whilst planking-first relies on the "eye" of the builder. Therefore fewer very highly skilled personnel are needed.: 101 One of the transitional ships is the Yassi Ada ship (7th century CE), which was excavated between 1960 and 1965. 
This had the lower strakes of planking fastened edge-to-edge with mortises and tenons, then the floors were added, followed by more planking joined with tenons. This brought the planking up to the waterline. Further frames were added to this next set of planking, but these continued up to the height of the intended sheerline. The strakes from the waterline up were then fastened on as carvel planking (with some wales interspersed with the regular strakes).: 61 Northern Europe used clinker construction for the period discussed above, and into the 15th century (and continued to do so for many small craft into the present day). The different methods were known of by mariners in both places, but when, for instance, Mediterranean galleys were employed by the French and English during the Hundred Years' War, shipwrights familiar with carvel work had to be recruited to carry out maintenance and repairs.: 51 In the 1440s interest in the caravel grew in northern waters and shipyards there started building caravels in carvel construction.
Relationship between clinker and carvel Clinker was the predominant method of ship construction used in Northern Europe before the carvel. In clinker built hulls, the planked edges overlap; carvel construction with its strong framing gives a heavier but more rigid hull, capable of taking a variety of sail rigs. Clinker (lapstrake) construction involves longitudinal overlapping "riven timber" (split wood) planks that are fixed together over very light scantlings. A carvel boat has a smoother surface which gives the impression that it is more hydrodynamically efficient since the exposed edges of the clinker planking appear to disturb the streamline and cause drag. A clinker certainly has a slightly larger wetted area, but a carvel hull is not necessarily more efficient: for given hull strength, the clinker boat is overall lighter, and displaces less water than a heavily-framed carvel hull.
As cargo vessels become bigger, the vessel's weight becomes small in comparison with total displacement; and for a given external volume, there is greater internal hull space available. A clinker vessel whose ribs occupy less space than a carvel vessel's is more suitable for cargo which is bulky rather than dense.
A structural benefit of clinker construction is that it produces a vessel that can safely twist and flex around its long axis (running from bow to stern). This is an advantage in North Atlantic rollers, provided the vessel has a small overall displacement. Due to the light nature of the construction method, increasing the beam did not commensurately increase the vessel's survivability under the twisting forces arising if, for example, when sailing downwind, the wave-train impinges on the quarter rather than dead astern. In these conditions greater beam widths may have made clinker vessels more vulnerable. As torsional forces increased in proportion to displaced (or cargo) weight, the forces incident on the hull imposed an upper limit on the size of clinker-built vessels. The greater rigidity of carvel construction became necessary for larger offshore cargo vessels. Later carvel-built sailing vessels exceeded the maximum size of clinker-built ships several times over.
History:
A further clinker limitation is that it does not readily support the point loads associated with lateen or sloop sailing rigs. At least some fore-and-aft sails are desirable for manoeuvrability. The same problem in providing for concentrated loads creates difficulties in siting and supporting a centerboard or deep keel, which is much needed when sailing across or close to the wind. Timbers can be added as a necessary compromise, but always with some loss of the fundamental benefits of the construction method. Clinker construction remains a useful method of construction for small wooden vessels, especially for sea-going dinghies which need to be light enough to be readily moved and stored when out of the water.
History:
Modern carvel methods Traditional carvel methods leave a small gap between each plank that is caulked with any suitable soft, flexible, fibrous material, sometimes combined with a thick binding substance; the caulking would gradually wear out and the hull would leak. When the boat was beached for a length of time, the planks would dry and shrink, so when first refloated, the hull would leak badly unless re-caulked, a time-consuming and physically demanding job. The modern variation is to use much narrower planks that are edge-glued instead of being caulked. With modern power sanders a much smoother hull is produced, as all the small ridges between the planks can be removed. This method started to become more common in the 1960s with the more widespread availability of waterproof glues, such as resorcinol (red glue) and then epoxy resin. Modern waterproof glues, especially epoxy resin, have caused revolutionary changes in carvel and clinker construction. Traditionally, nails provided the fastening strength; now it is the glue. It has become quite common since the 1980s for carvel and clinker construction to rely almost completely on glue for fastening. Many small boats, especially light plywood skiffs, are built without any mechanical fasteners such as nails and lag screws at all, as the glue is far stronger. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Industrial furnace**
Industrial furnace:
An industrial furnace, also known as a direct heater or a direct fired heater, is a device used to provide heat for an industrial process, typically at higher than 400 degrees Celsius. Furnaces are used to provide heat for a process, or can serve as a reactor which provides heat of reaction. Furnace designs vary as to their function, heating duty, type of fuel and method of introducing combustion air. Heat is generated by an industrial furnace by mixing fuel with air or oxygen, or from electrical energy. The residual heat exits the furnace as flue gas. Furnaces are designed to international codes and standards, the most common of which are ISO 13705 (Petroleum and natural gas industries — Fired heaters for general refinery service) and American Petroleum Institute (API) Standard 560 (Fired Heater for General Refinery Service). Types of industrial furnaces include batch ovens, metallurgical furnaces, vacuum furnaces, and solar furnaces. Industrial furnaces are used in applications such as chemical reactions, cremation, oil refining, and glasswork.
Overview:
Fuel flows into the burner and is burnt with air provided by an air blower. There can be more than one burner in a particular furnace, and the burners can be arranged in cells which heat a particular set of tubes. Burners can also be floor mounted, wall mounted or roof mounted depending on design. The flames heat up the tubes, which in turn heat the fluid inside, in the first part of the furnace known as the radiant section or firebox. In this chamber, where combustion takes place, heat is transferred mainly by radiation to the tubes around the fire.
Overview:
The fluid to be heated passes through the tubes and is thus heated to the desired temperature. The gases from the combustion are known as flue gas. After the flue gas leaves the firebox, most furnace designs include a convection section where more heat is recovered before venting to the atmosphere through the flue gas stack. (HTF = heat transfer fluid. Industries also use their furnaces to heat a secondary fluid with special additives, such as anti-rust agents, chosen for high heat-transfer efficiency. This heated fluid is then circulated around the whole plant to heat exchangers, to be used wherever heat is needed, instead of directly heating the product line, as the product or material may be volatile or prone to cracking at the furnace temperature.)
Components:
Radiant section The radiant section is where the tubes receive almost all of their heat by radiation from the flame. In a vertical, cylindrical furnace, the tubes are vertical. Tubes can be vertical or horizontal, placed along the refractory wall, in the middle, etc., or arranged in cells. Studs are used to hold the insulation together and on the wall of the furnace; they are placed about 1 ft (300 mm) apart.
Components:
The tubes, reddish brown from corrosion, are carbon steel and run the height of the radiant section. The tubes are kept a distance away from the insulation so that radiation can be reflected to the back of the tubes, maintaining a uniform tube wall temperature. Tube guides at the top, middle and bottom hold the tubes in place.
Components:
Convection section The convection section is located above the radiant section, where it is cooler, to recover additional heat. Heat transfer takes place by convection here, and the tubes are finned to increase heat transfer. The first three tube rows at the bottom of the convection section and at the top of the radiant section are an area of bare tubes (without fins) known as the shield section ("shock tubes"), so named because they are still exposed to plenty of radiation from the firebox and because they act to shield the convection section tubes, which are normally made of less resistant material, from the high temperatures in the firebox.
Components:
The area of the radiant section just before the flue gas enters the shield section and passes into the convection section is called the bridge zone. A crossover is the tube that connects the convection section outlet to the radiant section inlet. The crossover piping is normally located outside so that the temperature can be monitored and the efficiency of the convection section can be calculated. The sightglass at the top allows personnel to see the flame shape and pattern from above and to visually inspect whether flame impingement is occurring. Flame impingement happens when the flame touches the tubes and causes small isolated spots of very high temperature.
Components:
Radiant coil This is a series of tubes, either horizontal or vertical hairpin type connected at the ends (with 180° bends) or helical in construction. The radiant coil absorbs heat through radiation. Coils can be single-pass or multi-pass depending upon the process-side pressure drop allowed. The radiant coils and bends are housed in the radiant box. Radiant coil materials vary from carbon steel for low temperature services to high alloy steels for high temperature services. The coils are supported from the radiant side walls or hung from the radiant roof. The material of these supports is generally high alloy steel. While designing the radiant coil, care is taken to provide for expansion in hot conditions.
Components:
Burner The burner in the vertical, cylindrical furnace described above is located in the floor and fires upward. Some furnaces have side-fired burners, such as in train locomotives. The burner tile is made of high temperature refractory and is where the flame is contained. Air registers, located below the burner and at the outlet of the air blower, are devices with movable flaps or vanes that control the shape and pattern of the flame, whether it spreads out or even swirls around. Flames should not spread out too much, as this will cause flame impingement. Air registers can be classified as primary, secondary and, if applicable, tertiary, depending on when their air is introduced.
Components:
The primary air register supplies primary air, which is the first to be introduced in the burner. Secondary air is added to supplement primary air. Burners may include a pre-mixer to mix the air and fuel for better combustion before introducing into the burner. Some burners even use steam as premix to preheat the air and create better mixing of the fuel and heated air. The floor of the furnace is mostly made of a different material from that of the wall, typically hard castable refractory to allow technicians to walk on its floor during maintenance.
Components:
A furnace can be lit by a small pilot flame or in some older models, by hand. Most pilot flames nowadays are lit by an ignition transformer (much like a car's spark plugs). The pilot flame in turn lights up the main flame. The pilot flame uses natural gas while the main flame can use both diesel and natural gas. When using liquid fuels, an atomizer is used, otherwise, the liquid fuel will simply pour onto the furnace floor and become a hazard. Using a pilot flame for lighting the furnace increases safety and ease compared to using a manual ignition method (like a match).
Components:
Sootblower Sootblowers are found in the convection section. As this section is above the radiant section and air movement is slower because of the fins, soot tends to accumulate here. Sootblowing is normally done when the efficiency of the convection section is decreased. This can be calculated by looking at the temperature change from the crossover piping and at the convection section exit.
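The temperature-based check described above can be sketched as a short calculation. This is a minimal sketch, not a design method: the flue-gas mass flow, heat capacity, fouling threshold, and all temperatures below are illustrative assumptions.

```python
# Rough check of convection-section heat recovery from flue-gas temperatures.
# All numeric values are illustrative assumptions, not real design data.

def convection_duty(t_in_c, t_out_c, mass_flow_kg_s=10.0, cp_kj_kg_k=1.1):
    """Heat recovered from the flue gas (kW) as it cools across the section."""
    return mass_flow_kg_s * cp_kj_kg_k * (t_in_c - t_out_c)

def needs_sootblowing(current_duty_kw, clean_duty_kw, threshold=0.85):
    """Flag when recovered duty falls below a fraction of the clean baseline."""
    return current_duty_kw < threshold * clean_duty_kw

clean = convection_duty(800.0, 350.0)   # baseline with freshly cleaned tubes
fouled = convection_duty(800.0, 450.0)  # soot build-up -> hotter exit gas
print(needs_sootblowing(fouled, clean))  # -> True
```

In practice the clean baseline would be recorded just after a sootblowing cycle, so that later readings from the crossover and convection-section exit thermocouples can be compared against it.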
Components:
Sootblowers utilize a flowing medium such as water, air or steam to remove deposits from the tubes. Sootblowing is typically done during maintenance with the air blower turned on. There are several different types of sootblowers used. Wall blowers of the rotary type are mounted on furnace walls, protruding between the convection tubes. Each lance is connected to a steam source and has holes drilled into it at intervals along its length. When it is turned on, it rotates and blows the soot off the tubes and out through the stack.
Components:
Stack The flue gas stack is a cylindrical structure at the top of all the heat transfer chambers. The breeching directly below it collects the flue gas and brings it up high into the atmosphere where it will not endanger personnel.
Components:
The stack damper contained within works like a butterfly valve and regulates draft (the pressure difference between air intake and air exit) in the furnace, which is what pulls the flue gas through the convection section. The stack damper also regulates the heat lost through the stack. As the damper closes, the amount of heat escaping the furnace through the stack decreases, but the pressure or draft in the furnace increases, which poses risks to those working around it: if there are air leakages in the furnace, the flames can escape out of the firebox, or the furnace can even explode if the pressure is too great.
Components:
Insulation Insulation is an important part of the furnace because it improves efficiency by minimizing heat escape from the heated chamber. Refractory materials such as firebrick, castable refractories and ceramic fibre are used for insulation. The floor of the furnace is normally a castable-type refractory, while the refractories on the walls are nailed or glued in place. Ceramic fibre is commonly used for the roof and walls of the furnace and is graded by its density and then its maximum temperature rating. For example, 8# 2,300 °F means 8 lb/ft3 density with a maximum temperature rating of 2,300 °F. The actual service temperature rating for ceramic fibre is a bit lower than the maximum rated temperature (i.e. fibre rated to 2,300 °F is only good to 2,145 °F before permanent linear shrinkage).
Components:
Foundations Concrete pillars are the foundation on which the heater is mounted. There can be four of them for smaller heaters and up to 24 for large heaters. The design of the pillars and the entire foundation is based on the load-bearing capacity of the soil and the seismic conditions prevailing in the area. Foundation bolts are grouted into the foundation after installation of the heater.
Components:
Access doors The heater body is provided with access doors at various locations. Access doors are to be used only during shutdown of the heater. The normal size of an access door is 600 × 400 mm, which is sufficient for movement of people and material into and out of the heater. During operation the access doors are kept properly bolted using leak-proof high-temperature gaskets.
**Break a leg**
Break a leg:
"Break a leg" is a typical English idiom used in the context of theatre or other performing arts to wish a performer "good luck". An ironic or non-literal saying of uncertain origin (a dead metaphor), "break a leg" is commonly said to actors and musicians before they go on stage to perform or before an audition. Though the term likely originates in German, the English expression is first attributed in the 1930s or possibly 1920s, originally documented without specifically theatrical associations. Among professional dancers, the traditional saying is not "break a leg", but the French word "merde".
Non-theatrical origins:
Yiddish-German pun theory Most commonly favored as a credible theory by etymologists and other scholars, the term was possibly a loan translation from the German phrase Hals- und Beinbruch, literally "neck and leg(bone) break", itself a loan translation from, and pun on, a Yiddish phrase (Yiddish: הצלחה און ברכה, romanized: hatsloche un broche, lit. 'success and blessing', Hebrew: hatzlacha u-bracha), a wish for good luck, because of the Yiddish phrase's humorously similar pronunciation to the unrelated German phrase. For example, Luftwaffe pilots are reported as using the phrase Hals- und Beinbruch to wish each other luck. The German-language term continues to mean "good luck" but is still not specific to the theatre.
Non-theatrical origins:
Superstition theory The urbane Irish nationalist Robert Wilson Lynd published an article, "A Defence of Superstition", in the 1 October 1921 edition of the New Statesman, a British liberal political and cultural magazine, regarding the theatre as the second-most superstitious institution in England, after horse racing. In horse racing, Lynd asserted that to wish a man luck is considered unlucky and so "You should say something insulting such as, 'May you break your leg!'" Thus, the expression could reflect a now-forgotten superstition (perhaps a theatrical superstition, though Lynd's 1921 mention is non-theatrical) in which directly wishing a person "good luck" would be considered bad luck, therefore an alternative way of wishing luck was employed. Lynd did not attribute the phrase in any way to theatre people, but he was familiar with many of them and frequently mingled with actors backstage.
Theatrical origins:
The aforementioned theory regarding Hals- und Beinbruch, a German saying via Yiddish origins, suggests that the term transferred from German aviation to German society at large and then, as early as the 1920s, into the American (or British and then American) theatre. The English translation of the term is probably explained by German-speaking Jewish immigrants entering the American entertainment industry after the First World War. The alternative theory that the term reflects an ironic superstition would date the term as originating around the same time.
Theatrical origins:
The earliest published example in writing specifically within a theatre context comes from American writer Edna Ferber's 1939 autobiography A Peculiar Treasure, in which she writes about the fascination in the theatre of "all the understudies sitting in the back row politely wishing the various principals would break a leg." American playwright Bernard Sobel's 1948 The Theatre Handbook and Digest of Plays describes theatrical superstitions: "before a performance actors never wish each other good luck, but say 'I hope you break a leg.'" There is some anecdotal evidence from theatrical memoirs and personal letters as early as the 1920s.
Theatrical origins:
Other popular but implausible theories The performer bowing: The term "break a leg" may refer to a performer bowing or curtsying to the audience in the metaphorical sense of bending one's leg to do so.
Theatrical origins:
The performer breaking the leg line: The edge of a stage just beyond the vantage point of the audience forms a line, imaginary or actually marked, that can be referred to as the "leg line," named after a type of concealing stage curtain: a leg. For an unpaid stand-by performer to cross or "break" this line would mean that the performer was getting an opportunity to go onstage and be paid; therefore, "break a leg" might have shifted from a specific hope for this outcome to a general hope for any performer's good fortune. Even less plausible, the saying could originally express the hope that an enthusiastic audience repeatedly calls for further bows or encores. This might cause a performer to repeatedly "break" the leg line, or, alternatively, it might even cause the leg curtains themselves to break from overuse.
Theatrical origins:
Alluding to David Garrick: During a performance of Shakespeare's Richard III, the famed 18th-century British actor David Garrick became so entranced in the performance that he was supposedly unaware of a literal fracture in his leg.
The audience breaking legs: Various folk-theories propose that Elizabethan or even Ancient Greek theatrical audiences either stomped their literal legs or banged chair legs to express applause.
Theatrical origins:
Alluding to John Wilkes Booth: One popular etymology derives the phrase from the 1865 assassination of Abraham Lincoln, during which John Wilkes Booth, the actor-turned-assassin, claimed in his diary that he broke his leg leaping to the stage of Ford's Theatre after murdering the president. The fact that actors did not start wishing each other to "break a leg" until as early as the 1920s (more than 50 years later) makes this an unlikely source. Furthermore, Booth often exaggerated and falsified his diary entries to make them more dramatic.
Alternative meanings:
There is an older, likely unrelated meaning of "break a leg" going back to the 17th and 18th centuries that refers to having "a bastard / natural child."
Alternative terms:
Professional dancers do not wish each other good luck by saying "break a leg"; instead they say "Merde!", the French word for "shit". In turn, theatre people have picked up this usage and may wish each other "merde", alone or in combination with "break a leg". In Spanish, the phrase is "mucha mierda", or "lots of shit", and in Portuguese it is "muita merda", with the same meaning. This term refers to the times when carriages would take the audience to the theatre. A quick look at the street in front of the venue would tell if the play was successful: a lot of horse dung would mean many carriages had stopped to leave spectators. Opera singers use "Toi toi toi", an idiom used to ward off a spell or hex, often accompanied by knocking on wood and onomatopoeic spitting (or imitating the sound of spitting). Saliva traditionally was supposed to have demon-banishing powers. The phrase derives from Rotwelsch tof, from Yiddish tov ("good", derived from the Hebrew טוב and with phonetic similarities to the Old German word for "Devil"). One explanation sees "toi toi toi" as the onomatopoeic rendition of spitting three times. Spitting three times over someone's head or shoulder is a gesture to ward off evil spirits. A similar-sounding expression for verbal spitting occurs in modern Hebrew as "Tfu, tfu" (here, only twice), which some say Hebrew-speakers borrowed from Russian. An alternate operatic good luck charm, originating from Italy, is the phrase "in bocca al lupo!" ("In the mouth of the wolf"), with the response "Crepi il lupo!" ("May the wolf die") (see Standard Dictionary of Folklore, Myth & Legend).
Alternative terms:
In Australia, the term "chookas" has also been used. According to one oral tradition, one of the company would check audience numbers. If there were not many in the seats, the performers would have bread to eat following the performance. If the theatre was full, they could then have "chook" (Australian slang for chicken) for dinner. Therefore, if it was a full house, the performer would call out "Chook it is!", which became abbreviated to "Chookas!" It is now used by performers prior to a show regardless of the number of patrons, and may be a wish for a successful turnout.
Alternative terms:
In Russian, a similar tradition existed for hunters, with one being told "Ни пуха, ни пера!" (romanized: Ni pukha, ni pera, "Neither fur nor feather") before the hunt, with the reply being "К чёрту" (romanized: K chiortu, "Go to hell"). Today, this exchange is customary for students before an exam.
In popular culture:
The 2001 Broadway musical comedy The Producers features a song titled "It's Bad Luck To Say 'Good Luck' On Opening Night," in which the novice producer Leo Bloom is instructed that the proper way to wish someone good luck on Broadway is to say "Break a leg." Moments later, the show's star is seen to break his leg—preventing him from performing—and in a later scene he breaks his other leg. The number also appears in the 2005 film version of the musical.
**Transcription activator-like effector**
Transcription activator-like effector:
TAL (transcription activator-like) effectors (often referred to as TALEs, but not to be confused with the three amino acid loop extension homeobox class of proteins) are proteins secreted by some β- and γ-proteobacteria, most of them Xanthomonads. Plant pathogenic Xanthomonas bacteria are especially known for TALEs, produced via their type III secretion system. These proteins can bind promoter sequences in the host plant and activate the expression of plant genes that aid bacterial infection. The TALE domain responsible for binding to DNA contains between 1.5 and 33.5 copies of a short sequence repeated in tandem (tandem repeats). Each of these repeats was found to be specific for a certain base pair of the DNA. These repeats contain repeat variable diresidues (RVDs) that recognize specific DNA base pairs. TALEs thus recognize plant DNA sequences through a central repeat domain consisting of a variable number of ~34 amino acid repeats. There appears to be a one-to-one correspondence between the identity of two critical amino acids in each repeat and each DNA base in the target sequence. These proteins are interesting to researchers both for their role in disease of important crop species and for the relative ease of retargeting them to bind new DNA sequences. Similar proteins can be found in the pathogenic bacteria Ralstonia solanacearum and Burkholderia rhizoxinica, as well as in as-yet unidentified marine microorganisms. The term TALE-likes is used to refer to the putative protein family encompassing the TALEs and these related proteins.
Function in plant pathogenesis:
Xanthomonas Xanthomonas are Gram-negative bacteria that can infect a wide variety of plant species including pepper/capsicum, rice, citrus, cotton, tomato, and soybeans. Some types of Xanthomonas cause localized leaf spot or leaf streak while others spread systemically and cause black rot or leaf blight disease. They inject a number of effector proteins, including TAL effectors, into the plant via their type III secretion system. TAL effectors have several motifs normally associated with eukaryotes, including multiple nuclear localization signals and an acidic activation domain. When injected into plants, these proteins can enter the nucleus of the plant cell, bind plant promoter sequences, and activate transcription of plant genes that aid in bacterial infection. Plants have developed a defense mechanism against type III effectors that includes R (resistance) genes triggered by these effectors. Some of these R genes appear to have evolved to contain TAL-effector binding sites similar to the site in the intended target gene. This competition between pathogenic bacteria and the host plant has been hypothesized to account for the apparently malleable nature of the TAL effector DNA binding domain.
Function in plant pathogenesis:
Non-Xanthomonas R. solanacearum, B. rhizoxinica, and banana blood disease (a bacterium not yet definitively identified, in the R. solanacearum species group).
DNA recognition:
The most distinctive characteristic of TAL effectors is a central repeat domain containing between 1.5 and 33.5 repeats that are usually 34 residues in length (the C-terminal repeat is generally shorter and referred to as a “half repeat”). A typical repeat sequence is LTPEQVVAIASHDGGKQALETVQRLLPVLCQAHG, but the residues at the 12th and 13th positions are hypervariable (these two amino acids are also known as the repeat variable diresidue or RVD). There is a simple relationship between the identity of these two residues in sequential repeats and sequential DNA bases in the TAL effector's target site. The crystal structure of a TAL effector bound to DNA indicates that each repeat comprises two alpha helices and a short RVD-containing loop, where the second residue of the RVD makes sequence-specific DNA contacts while the first residue of the RVD stabilizes the RVD-containing loop. Target sites of TAL effectors also tend to include a thymine flanking the 5’ base targeted by the first repeat; this appears to be due to a contact between this T and a conserved tryptophan in the region N-terminal of the central repeat domain. However, this "zero" position does not always contain a thymine, as some scaffolds are more permissive. The TAL-DNA code was broken by two separate groups in 2010. The first group, headed by Adam Bogdanove, broke the code computationally by searching for patterns in protein sequence alignments and DNA sequences of target promoters derived from a database of genes upregulated by TALEs. The second group (Boch) deduced the code through molecular analysis of the TAL effector AvrBs3 and its target DNA sequence in the promoter of a pepper gene activated by AvrBs3. In the experimentally validated code, the most common RVDs NI, HD, NG and NN preferentially specify the bases A, C, T and G (or A), respectively.
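The RVD-to-base correspondence can be sketched as a simple lookup table. This is a simplified sketch covering only the four most common RVDs; degenerate and rarer RVDs (such as NS and N*) are omitted, and NN is collapsed to G even though it also binds A.

```python
# The four most commonly cited RVD-to-nucleotide associations from the
# two 2010 studies that broke the TAL-DNA code. NN is degenerate
# (binds G or A); "G" is used here for simplicity.
RVD_CODE = {
    "NI": "A",
    "HD": "C",
    "NG": "T",
    "NN": "G",
}

def predict_target(rvd_sequence):
    """Predict the DNA target of a TAL effector from its RVD sequence.

    Natural target sites are usually preceded by a thymine at position 0,
    contacted by the region N-terminal of the repeat domain, so a leading
    "T" is prepended here.
    """
    return "T" + "".join(RVD_CODE[rvd] for rvd in rvd_sequence)

print(predict_target(["HD", "NG", "NI", "NN"]))  # -> TCTAG
```

Real prediction tools score degenerate RVDs probabilistically rather than using a one-to-one table like this.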
Target genes:
TAL effectors can induce susceptibility genes that are members of the NODULIN3 (N3) gene family. These genes are essential for the development of the disease. In rice two genes, Os-8N3 and Os-11N3, are induced by TAL effectors. Os-8N3 is induced by PthXo1 and Os-11N3 is induced by PthXo3 and AvrXa7.
Two hypotheses exist about possible functions for N3 proteins: They are involved in copper transport, resulting in detoxification of the environment for bacteria. The reduction in copper level facilitates bacterial growth.
They are involved in glucose transport, facilitating glucose flow. This mechanism provides nutrients to bacteria and stimulates pathogen growth and virulence.
Engineering TAL effectors:
This simple correspondence between amino acids in TAL effectors and DNA bases in their target sites makes them useful for protein engineering applications. Numerous groups have designed artificial TAL effectors capable of recognizing new DNA sequences in a variety of experimental systems. Such engineered TAL effectors have been used to create artificial transcription factors that can target and activate or repress endogenous genes in tomato, Arabidopsis thaliana, and human cells. Genetic constructs to encode TAL effector-based proteins can be made using either conventional gene synthesis or modular assembly. A plasmid kit for assembling custom TALEN and other TAL effector constructs is available through the public, not-for-profit repository Addgene. Webpages providing access to public software, protocols, and other resources for TAL effector-DNA targeting applications include the TAL Effector-Nucleotide Targeter and taleffectors.com.
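Retargeting works from the inverse mapping: choosing one repeat (by its RVD) for each base of the desired target. The sketch below uses common RVD choices for illustration only; real modular-assembly kits impose additional design rules (repeat scaffolds, length limits, composition guidelines) that are not modeled here.

```python
# Inverse of the TAL code: pick an RVD for each base of a desired target.
# The RVD choices below are common conventions, not the only options
# (e.g. NN or NH may be used for G depending on specificity requirements).
BASE_TO_RVD = {"A": "NI", "C": "HD", "T": "NG", "G": "NN"}

def design_rvds(target):
    """Return an RVD array for a DNA target sequence.

    The leading T at position 0 is contacted by the N-terminal region of
    the protein rather than by a repeat, so it gets no RVD.
    """
    if not target.startswith("T"):
        raise ValueError("natural TALE targets are usually preceded by a T")
    return [BASE_TO_RVD[base] for base in target[1:]]

print(design_rvds("TCATG"))  # -> ['HD', 'NI', 'NG', 'NN']
```

A gene-synthesis or Golden Gate assembly step would then build the repeat array encoding this RVD sequence.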
Applications:
Engineered TAL effectors can also be fused to the cleavage domain of FokI to create TAL effector nucleases (TALENs) or to meganucleases (nucleases with longer recognition sites) to create "megaTALs." Such fusions share some properties with zinc finger nucleases and may be useful for genetic engineering and gene therapy applications. TALEN-based approaches are used in the emerging fields of gene editing and genome engineering. TALEN fusions show activity in a yeast-based assay, at endogenous yeast genes, in a plant reporter assay, at an endogenous plant gene, at endogenous zebrafish genes, at an endogenous rat gene, and at endogenous human genes. The human HPRT1 gene has been targeted at detectable, but unquantified, levels. In addition, TALEN constructs containing the FokI cleavage domain fused to a smaller portion of the TAL effector still containing the DNA binding domain have been used to target the endogenous NTF3 and CCR5 genes in human cells with efficiencies of up to 25%. TAL effector nucleases have also been used to engineer human embryonic stem cells and induced pluripotent stem cells (iPSCs) and to knock out the endogenous ben-1 gene in C. elegans. TALE-induced non-homologous end joining modification has been used to produce novel disease resistance in rice.
**Typewriter**
Typewriter:
A typewriter is a mechanical or electromechanical machine for typing characters. Typically, a typewriter has an array of keys, and each one causes a different single character to be produced on paper by striking an inked ribbon selectively against the paper with a type element. At the end of the nineteenth century, the term 'typewriter' was also applied to a person who used such a device. The first commercial typewriters were introduced in 1874, but did not become common in offices in the United States until after the mid-1880s. The typewriter quickly became an indispensable tool for practically all writing other than personal handwritten correspondence. It was widely used by professional writers, in offices, for business correspondence in private homes, and by students preparing written assignments.
Typewriter:
Typewriters were a standard fixture in most offices up to the 1980s. Thereafter, they began to be largely supplanted by personal computers running word processing software. Nevertheless, typewriters remain common in some parts of the world. For example, typewriters are still used in many Indian cities and towns, especially in roadside and legal offices, due to a lack of continuous, reliable electricity. The QWERTY keyboard layout, developed for typewriters in the 1870s, remains the de facto standard for English-language computer keyboards. The origins of this layout remain in dispute. Similar typewriter keyboards, with layouts optimised for other languages and orthographies, emerged soon afterwards, and their layouts have also become standard for computer keyboards in their respective markets.
Typewriter:
Notable typewriter manufacturers included E. Remington and Sons, IBM, Godrej, Imperial Typewriter Company, Oliver Typewriter Company, Olivetti, Royal Typewriter Company, Smith Corona, Underwood Typewriter Company, Facit, Adler, and Olympia-Werke.
History:
Although many modern typewriters have one of several similar designs, their invention was incremental, developed by numerous inventors working independently or in competition with each other over a series of decades. As with the automobile, telephone, and telegraph, a number of people contributed insights and inventions that eventually resulted in ever more commercially successful instruments. Historians have estimated that some form of typewriter was invented 52 times as thinkers tried to come up with a workable design. Some early typing instruments include: In 1575, an Italian printmaker, Francesco Rampazetto, invented the scrittura tattile, a machine to impress letters in papers.
In 1714, Henry Mill obtained a patent in Britain for a machine that, from the patent, appears to have been similar to a typewriter. The patent shows that this machine was actually created: "[he] hath by his great study and paines & expence invented and brought to perfection an artificial machine or method for impressing or transcribing of letters, one after another, as in writing, whereby all writing whatsoever may be engrossed in paper or parchment so neat and exact as not to be distinguished from print; that the said machine or method may be of great use in settlements and public records, the impression being deeper and more lasting than any other writing, and not to be erased or counterfeited without manifest discovery." In 1802, Italian Agostino Fantoni developed a particular typewriter to enable his blind sister to write.
Between 1801 and 1808, Italian Pellegrino Turri invented a typewriter for his blind friend Countess Carolina Fantoni da Fivizzano.
In 1823, Italian Pietro Conti da Cilavegna invented a new model of typewriter, the tachigrafo, also known as tachitipo.
In 1829, American William Austin Burt patented a machine called the "Typographer" which, in common with many other early machines, is listed as the "first typewriter". The London Science Museum describes it merely as "the first writing mechanism whose invention was documented", but even that claim may be excessive, since Turri's invention pre-dates it. By the mid-19th century, the increasing pace of business communication had created a need for mechanization of the writing process. Stenographers and telegraphers could take down information at rates up to 130 words per minute, whereas a writer with a pen was limited to a maximum of 30 words per minute (the 1853 speed record). From 1829 to 1870, many printing or typing machines were patented by inventors in Europe and America, but none went into commercial production.
American Charles Thurber obtained multiple patents, the first in 1843 for a machine developed as an aid to the blind; a later design was the 1845 Chirographer.
In 1855, the Italian Giuseppe Ravizza created a prototype typewriter called Cembalo scrivano o macchina da scrivere a tasti ("Scribe harpsichord, or machine for writing with keys"). It was an advanced machine that let the user see the writing as it was typed.
In 1861, Father Francisco João de Azevedo, a Brazilian priest, made his own typewriter with basic materials and tools, such as wood and knives. In that same year, the Brazilian emperor D. Pedro II presented a gold medal to Father Azevedo for this invention. Many Brazilians, as well as the Brazilian federal government, recognize Fr. Azevedo as the inventor of the typewriter, a claim that has been the subject of some controversy.
In 1865, John Jonathon Pratt, of Centre, Alabama (US), built a machine called the Pterotype which appeared in an 1867 Scientific American article and inspired other inventors.
Between 1864 and 1867, Peter Mitterhofer, a carpenter from South Tyrol (then part of Austria) developed several models and a fully functioning prototype typewriter in 1867.
Hansen Writing Ball:
In 1865, Rev. Rasmus Malling-Hansen of Denmark invented the Hansen Writing Ball, which went into commercial production in 1870 and was the first commercially sold typewriter. It was a success in Europe and was reported as being used in offices on the European continent as late as 1909. Malling-Hansen used a solenoid escapement to return the carriage on some of his models, which makes him a candidate for the title of inventor of the first "electric" typewriter. The Hansen Writing Ball was produced with only upper-case characters. The Writing Ball was used as a template for inventor Frank Haven Hall to create a derivative that would produce letter prints cheaper and faster. Malling-Hansen developed his typewriter further through the 1870s and 1880s and made many improvements, but the writing head remained the same. On the first model of the writing ball from 1870, the paper was attached to a cylinder inside a wooden box. In 1874, the cylinder was replaced by a carriage moving beneath the writing head. Then, in 1875, the well-known "tall model" was patented, which was the first of the writing balls that worked without electricity. Malling-Hansen attended the world exhibitions in Vienna in 1873 and Paris in 1878, and he received the first prize for his invention at both exhibitions.
Sholes and Glidden typewriter:
The first typewriter to be commercially successful was patented in 1868 by Americans Christopher Latham Sholes, Frank Haven Hall, Carlos Glidden and Samuel W. Soule in Milwaukee, Wisconsin, although Sholes soon disowned the machine and refused to use or even recommend it. The working prototype was made by clock-maker and machinist Matthias Schwalbach. Hall, Glidden and Soule sold their shares in the patent (US 79,265) to Densmore and Sholes, who made an agreement with E. Remington and Sons (then famous as a manufacturer of sewing machines) to commercialize the machine as the Sholes and Glidden Type-Writer. This was the origin of the term typewriter.
Remington began production of its first typewriter on March 1, 1873, in Ilion, New York. It had a QWERTY keyboard layout, which, because of the machine's success, was slowly adopted by other typewriter manufacturers. As with most other early typewriters, because the typebars struck upwards, the typist could not see the characters as they were typed.
Index typewriter:
The index typewriter came into the market in the early 1880s. The index typewriter uses a pointer or stylus to choose a letter from an index. The pointer is mechanically linked so that the letter chosen could then be printed, most often by the activation of a lever. The index typewriter was briefly popular in niche markets. Although they were slower than keyboard machines, index typewriters were mechanically simpler and lighter; they were therefore marketed as suitable for travellers and, because they could be produced more cheaply than keyboard machines, as budget machines for users who needed to produce only small quantities of typed correspondence.
For example, the Simplex Typewriter Company made index typewriters that cost one-fortieth as much as a Remington typewriter. The index typewriter's niche appeal, however, soon disappeared, as on the one hand new keyboard typewriters became lighter and more portable, and on the other refurbished second-hand machines began to become available. The last widely available Western index machine was the Mignon, produced by AEG until 1934. Considered one of the very best of the index typewriters, the Mignon owed part of its popularity to its interchangeable indexes and type, which allowed the use of different fonts and character sets, something very few keyboard machines allowed, and then only at considerable added cost. Although pushed out of the market in most of the world by keyboard machines, successful Japanese and Chinese typewriters are of the index type, albeit with a very much larger index and number of type elements. Embossing tape label makers are the most common index typewriters today, and perhaps the most common typewriters of any kind still being manufactured.
Other typewriters:
1884 – Hammond "Ideal" typewriter with case, by Hammond Typewriter Company Limited, United States. Despite an unusual, curved keyboard (see picture in citation), the Hammond became popular because of its superior print quality and changeable typeface. Invented by James Hammond of Boston, Massachusetts in 1880, and commercially released in 1884. The type is carried on a pair of interchangeable rotating sectors, one controlled by each half of the keyboard. A small hammer pushes the paper against the ribbon and type sector to print each character. The mechanism was later adapted to give a straight QWERTY keyboard and proportional spacing.
1891 – Fitch typewriter – No. 3287, typebar class, on a base board, made by the Fitch Typewriter Company (UK) in London. Operators of the early typewriters had to work "blind": the typed text emerged only after several lines had been completed. The Fitch was one of the first machines to allow prompt correction of mistakes; it was said to be the second machine operating on the visible writing system. The typebars were positioned behind the paper and the writing area faced upwards so that the result could be seen instantly. A curved frame kept the emerging paper from obscuring the keyboard, but the Fitch was soon eclipsed by machines in which the paper could be fed more conveniently at the rear.
1893 – Gardner typewriter. This typewriter, patented by Mr J Gardner in 1893, was an attempt to reduce the size and cost. Although it prints 84 symbols, it has only 14 keys and two change-case keys. Several characters are indicated on each key and the character printed is determined by the position of the case keys, which choose one of six cases.
1897 – The "Underwood 1 typewriter, 10" Pica, No. 990". This was the first typewriter with the typing area fully visible to the typist as the text was typed. This feature, copied by all subsequent typewriters, allowed the typist to see and if necessary correct the typing as it proceeded. The mechanism was developed in the US by Franz X. Wagner from about 1892 and taken up, in 1895, by John T. Underwood (1857–1937), a producer of office supplies.
Standardization:
By about 1910, the "manual" or "mechanical" typewriter had reached a somewhat standardized design. There were minor variations from one manufacturer to another, but most typewriters followed the concept that each key was attached to a typebar that had the corresponding letter molded, in reverse, into its striking head. When a key was struck briskly and firmly, the typebar hit a ribbon (usually made of inked fabric), making a printed mark on the paper wrapped around a cylindrical platen. The platen was mounted on a carriage that moved horizontally to the left, automatically advancing the typing position, after each character was typed. The carriage-return lever at the far left was then pressed to the right to return the carriage to its starting position and rotate the platen to advance the paper vertically. A small bell was struck a few characters before the right-hand margin was reached to warn the operator to complete the word and then use the carriage-return lever. Typewriters for languages written right-to-left operate in the opposite direction.
Frontstriking:
In most of the early typewriters, the typebars struck upward against the paper, pressed against the bottom of the platen, so the typist could not see the text as it was typed. What was typed was not visible until a carriage return caused it to scroll into view.
The difficulty with any other arrangement was ensuring the typebars fell back into place reliably when the key was released. This was eventually achieved with various ingenious mechanical designs and so-called "visible typewriters" which used frontstriking, in which the typebars struck forward against the front side of the platen, became standard.
One of the first was the Daugherty Visible, introduced in 1893, which also introduced the four-bank keyboard that became standard, although the Underwood which came out two years later was the first major typewriter with these features.
Shift key:
A significant innovation was the shift key, introduced with the Remington No. 2 in 1878. This key physically "shifted" either the basket of typebars, in which case the typewriter is described as "basket shift", or the paper-holding carriage, in which case the typewriter is described as "carriage shift". Either mechanism caused a different portion of the typebar to come in contact with the ribbon/platen.
The result is that each typebar could type two different characters, cutting the number of keys and typebars in half (and simplifying the internal mechanisms considerably). The obvious use for this was to allow letter keys to type both upper and lower case, but normally the number keys were also duplexed, allowing access to special symbols such as percent, %, and ampersand, &. Before the shift key, typewriters had to have a separate key and typebar for upper-case letters; in essence, the typewriter had two keyboards, one above the other. With the shift key, manufacturing costs (and therefore purchase price) were greatly reduced, and typist operation was simplified; both factors contributed greatly to mass adoption of the technology.
Three-bank typewriters:
Certain models further reduced the number of keys and typebars by making each key perform three functions – each typebar could type three different characters. These little three-row machines were portable and could be used by journalists; they were popular with WWI journalists because they were lighter and more compact than four-bank typewriters, while they could type just as fast and use just as many symbols. Three-row machines such as the Bar-Let and the Corona No. 3 Typewriter have two separate shift keys, a "CAP" shift (for uppercase) and a "FIG" shift (for numbers and symbols). The Murray code was developed for a teletypewriter with a similar three-row typewriter keyboard.
Tab key:
To facilitate typewriter use in business settings, a tab (tabulator) key was added in the late nineteenth century. Before using the key, the operator had to set mechanical "tab stops", pre-designated locations to which the carriage would advance when the tab key was pressed. This facilitated the typing of columns of numbers, freeing the operator from the need to manually position the carriage. The first models had one tab stop and one tab key; later ones allowed as many stops as desired, and sometimes had multiple tab keys, each of which moved the carriage a different number of spaces ahead of the decimal point (the tab stop), to facilitate the typing of columns with numbers of different length ($1.00, $10.00, $100.00, etc.).
Dead keys:
Languages such as French, Spanish, and German required diacritics, special signs attached to or on top of the base letter: for example, a combination of the acute accent ´ plus e produced é; ~ plus n produced ñ. In metal typesetting, ⟨é⟩, ⟨ñ⟩, and others were separate sorts. With mechanical typewriters, the number of whose characters (sorts) was constrained by the physical limits of the machine, the number of keys required was reduced by the use of dead keys. Diacritics such as ´ (acute accent) would be assigned to a dead key, which did not move the platen forward, permitting another character to be imprinted at the same location; thus a single dead key such as the acute accent could be combined with a, e, i, o and u to produce á, é, í, ó and ú, reducing the number of sorts needed from 5 to 1. The typebars of "normal" characters struck a rod as they moved the metal character desired toward the ribbon and platen, and each rod depression moved the platen forward the width of one character. Dead keys had a typebar shaped so as not to strike the rod.
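The dead-key behaviour described above can be sketched as a small simulation. This is an illustrative model only, assuming a made-up accent table for the acute accent; it is not the character set of any particular machine.

```python
# Minimal sketch of dead-key composition: the dead key prints without
# advancing the carriage, so the next letter lands on the same spot and
# the two impressions combine. ACUTE is an illustrative accent table.
ACUTE = {"a": "á", "e": "é", "i": "í", "o": "ó", "u": "ú"}

def type_keys(keys):
    out = []
    pending_accent = False
    for k in keys:
        if k == "´":                 # dead key: remember it, don't advance
            pending_accent = True
        elif pending_accent:         # overstrike -> combined glyph
            out.append(ACUTE.get(k, k))
            pending_accent = False
        else:                        # normal key: print and advance
            out.append(k)
    return "".join(out)

print(type_keys(["c", "a", "f", "´", "e"]))  # prints café
```

One dead key thus covers all five accented vowels, which is exactly the sort-count saving the paragraph describes.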
Character sizes:
In English-speaking countries, ordinary typewriters printing fixed-width characters were standardized to print six horizontal lines per vertical inch, and had either of two variants of character width, one called pica, for ten characters per horizontal inch, and the other elite, for twelve. This differed from the use of these terms in printing, where pica is a linear unit (approximately 1⁄6 of an inch) used for any measurement, the most common one being the height of a type face.
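The fixed-pitch figures above make page capacity a matter of simple arithmetic. The sketch below assumes US letter paper (8.5 × 11 in) with 1-inch margins, which are illustrative choices, not part of any typewriter standard.

```python
# Back-of-envelope page capacity for the standard fixed-pitch sizes:
# 6 lines per vertical inch; pica = 10 and elite = 12 chars per inch.
LINES_PER_INCH = 6
PITCH = {"pica": 10, "elite": 12}

def page_capacity(pitch, width_in=8.5, height_in=11.0, margin_in=1.0):
    """Return (characters per line, lines per page) for the given pitch."""
    cols = int((width_in - 2 * margin_in) * PITCH[pitch])
    rows = int((height_in - 2 * margin_in) * LINES_PER_INCH)
    return cols, rows

print(page_capacity("pica"))   # (65, 54)
print(page_capacity("elite"))  # (78, 54)
```

Note that switching from pica to elite changes only the line length; the vertical spacing of six lines per inch is the same in both.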
Color:
Some ribbons were inked in black and red stripes, each being half the width and running the entire length of the ribbon. A lever on most machines allowed switching between colors, which was useful for bookkeeping entries where negative amounts were highlighted in red. The red color was also used on some selected characters in running text, for emphasis. When a typewriter had this facility, it could still be fitted with a solid black ribbon; the lever was then used to switch to fresh ribbon when the first stripe ran out of ink. Some typewriters also had a third position which stopped the ribbon being struck at all. This enabled the keys to hit the paper unobstructed, and was used for cutting stencils for stencil duplicators (aka mimeograph machines).
"Noiseless" designs:
In the early part of the 20th century, a typewriter was marketed under the name Noiseless and advertised as "silent". It was developed by Wellington Parker Kidder and the first model was marketed by the Noiseless Typewriter Company in 1917. Noiseless portables sold well in the 1930s and 1940s, and noiseless standards continued to be manufactured until the 1960s. In a conventional typewriter the typebar reaches the end of its travel simply by striking the ribbon and paper. A "noiseless" typewriter has a complex lever mechanism that decelerates the typebar mechanically before pressing it against the ribbon and paper, in an attempt to dampen the noise.
Electric designs:
Although electric typewriters would not achieve widespread popularity until nearly a century later, the basic groundwork for the electric typewriter was laid by the Universal Stock Ticker, invented by Thomas Edison in 1870. This device remotely printed letters and numbers on a stream of paper tape from input generated by a specially designed typewriter at the other end of a telegraph line.
Early electric models:
Some electric typewriters were patented in the 19th century, but the first machine known to be produced in series is the Cahill of 1900. Another electric typewriter was produced by the Blickensderfer Manufacturing Company, of Stamford, Connecticut, in 1902. Like the manual Blickensderfer typewriters, it used a cylindrical typewheel rather than individual typebars. The machine was produced in several variants but apparently it was not a commercial success, for reasons that are unclear. The next step in the development of the electric typewriter came in 1910, when Charles and Howard Krum filed a patent for the first practical teletypewriter. The Krums' machine, named the Morkrum Printing Telegraph, used a typewheel rather than individual typebars. This machine was used for the first commercial teletypewriter system on Postal Telegraph Company lines between Boston and New York City in 1910. James Fields Smathers of Kansas City invented what is considered the first practical power-operated typewriter in 1914. In 1920, after returning from Army service, he produced a successful model and in 1923 turned it over to the Northeast Electric Company of Rochester for development. Northeast was interested in finding new markets for their electric motors and developed Smathers's design so that it could be marketed to typewriter manufacturers, and from 1925 Remington Electric typewriters were produced powered by Northeast's motors. After some 2,500 electric typewriters had been produced, Northeast asked Remington for a firm contract for the next batch. However, Remington was engaged in merger talks, which would eventually result in the creation of Remington Rand, and no executives were willing to commit to a firm order.
Northeast instead decided to enter the typewriter business for itself, and in 1929 produced the first Electromatic Typewriter. In 1928, Delco, a division of General Motors, purchased Northeast Electric, and the typewriter business was spun off as Electromatic Typewriters, Inc. In 1933, Electromatic was acquired by IBM, which then spent $1 million on a redesign of the Electromatic Typewriter, launching the IBM Electric Typewriter Model 01. In 1931, an electric typewriter was introduced by Varityper Corporation. It was called the Varityper, because a narrow cylinder-like wheel could be replaced to change the font. In 1941, IBM announced the Electromatic Model 04 electric typewriter, featuring the revolutionary concept of proportional spacing. By assigning varied rather than uniform spacing to different sized characters, the Model 04 recreated the appearance of a typeset page, an effect that was further enhanced by the 1937 innovation of carbon-film ribbons that produced clearer, sharper words on the page.
IBM Selectric:
IBM introduced the IBM Selectric typewriter in 1961, which replaced the typebars with a spherical element (or typeball) slightly smaller than a golf ball, with reverse-image letters molded into its surface. The Selectric used a system of latches, metal tapes, and pulleys driven by an electric motor to rotate the ball into the correct position and then strike it against the ribbon and platen. The typeball moved laterally in front of the paper, instead of the previous designs using a platen-carrying carriage moving the paper across a stationary print position. Due to the physical similarity, the typeball was sometimes referred to as a "golfball". The typeball design had many advantages, especially the elimination of "jams" (when more than one key was struck at once and the typebars became entangled) and in the ability to change the typeball, allowing multiple fonts to be used in a single document. The IBM Selectric became a commercial success, dominating the office typewriter market for at least two decades. IBM also gained an advantage by marketing more heavily to schools than did Remington, with the idea that students who learned to type on a Selectric would later choose IBM typewriters over the competition in the workplace as businesses replaced their old manual models. Later models of IBM Executives and Selectrics replaced inked fabric ribbons with "carbon film" ribbons that had a dry black or colored powder on a clear plastic tape. These could be used only once, but later models used a cartridge that was simple to replace.
A side effect of this technology was that the text typed on the machine could be easily read from the used ribbon, raising issues where the machines were used for preparing classified documents (ribbons had to be accounted for to ensure that typists did not carry them from the facility). A variation known as "Correcting Selectrics" introduced a correction feature, where a sticky tape in front of the carbon film ribbon could remove the black-powdered image of a typed character, eliminating the need for little bottles of white dab-on correction fluid and for hard erasers that could tear the paper. These machines also introduced selectable "pitch" so that the typewriter could be switched between pica type (10 characters per inch) and elite type (12 per inch), even within one document. Even so, all Selectrics were monospaced – each character and letterspace was allotted the same width on the page, from a capital "W" to a period. IBM did produce a successful typebar-based machine with five levels of proportional spacing, called the IBM Executive.
The only fully electromechanical Selectric typewriter with fully proportional spacing that used a Selectric type element was the expensive Selectric Composer, which was capable of right-margin justification (typing each line twice was required, once to calculate and again to print) and was considered a typesetting machine rather than a typewriter. Composer typeballs physically resembled those of the Selectric typewriter but were not interchangeable. In addition to its electronic successors, the Magnetic Tape Selectric Composer (MT/SC), the Mag Card Selectric Composer, and the Electronic Selectric Composer, IBM also made electronic typewriters with proportional spacing using the Selectric element that were considered typewriters or word processors instead of typesetting machines. The first of these was the relatively obscure Mag Card Executive, which used 88-character elements. Later, some of the same typestyles used for it were used on the 96-character elements used on the IBM Electronic Typewriter 50 and the later models 65 and 85. By 1970, as offset printing began to replace letterpress printing, the Composer would be adapted as the output unit for a typesetting system. The system included a computer-driven input station to capture the key strokes on magnetic tape and insert the operator's format commands, and a Composer unit to read the tape and produce the formatted text for photo reproduction. The IBM 2741 terminal was a popular example of a Selectric-based computer terminal, and similar mechanisms were employed as the console devices for many IBM System/360 computers. These mechanisms used "ruggedized" designs compared to those in standard office typewriters.
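The Composer's "type each line twice" justification can be illustrated in miniature. The sketch below is a simplified monospace analogue of the idea, not IBM's actual proportional-spacing mechanism: a first pass measures the line, and a second pass distributes the leftover width into the word gaps.

```python
# Two-pass right-margin justification, simplified to fixed-width
# characters: pass 1 measures the slack, pass 2 pads the word gaps.
def justify(line, width):
    words = line.split()
    if len(words) < 2:                 # nothing to pad between
        return line.ljust(width)
    slack = width - sum(len(w) for w in words)   # pass 1: calculate
    gaps = len(words) - 1
    base, extra = divmod(slack, gaps)  # spread slack as evenly as possible
    out = []
    for i, w in enumerate(words[:-1]): # pass 2: "print" with padding
        out.append(w + " " * (base + (1 if i < extra else 0)))
    out.append(words[-1])
    return "".join(out)

print(justify("the quick brown fox", 22))  # prints "the  quick  brown  fox"
```

On the real Composer the operator supplied the second pass manually, retyping the line after the machine had measured it; here both passes happen in one function call.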
Later electric models:
Some of IBM's advances were later adopted in less expensive machines from competitors. For example, Smith-Corona electric typewriters introduced in 1973 switched to interchangeable Coronamatic (SCM-patented) ribbon cartridges, including fabric, film, erasing, and two-color versions. At about the same time, the advent of photocopying meant that carbon copies, correction fluid and erasers were less and less necessary; only the original needed to be typed, and photocopies made from it.
Electronic typewriters:
The final major development of the typewriter was the electronic typewriter. Most of these replaced the typeball with a plastic or metal daisy wheel mechanism (a disk with the letters molded on the outside edge of the "petals"). The daisy wheel concept first emerged in printers developed by Diablo Systems in the 1970s. The first electronic daisywheel typewriter marketed in the world (in 1976) was the Olivetti Tes 501, followed in 1978 by the Olivetti ET101 (with function display) and Olivetti TES 401 (with text display and floppy disk for memory storage); Olivetti continued to release increasingly advanced electronic models in the following years. Unlike the Selectrics and earlier models, these really were "electronic" and relied on integrated circuits and electromechanical components. These typewriters were sometimes called display typewriters, dedicated word processors or word-processing typewriters, though the latter term was also frequently applied to less sophisticated machines that featured only a tiny, sometimes just single-row display. Sophisticated models were also called word processors, though today that term almost always denotes a type of software program. Manufacturers of such machines included Olivetti (TES501, first totally electronic Olivetti word processor with daisywheel and floppy disk in 1976; TES621 in 1979, etc.), Brother (Brother WP1 and WP500, etc., where WP stood for word processor), Canon (Canon Cat), Smith-Corona (PWP, i.e. Personal Word Processor line) and Philips/Magnavox (VideoWriter).
Decline:
The pace of change was so rapid that it was common for clerical staff to have to learn several new systems, one after the other, in just a few years. While such rapid change is commonplace today, and is taken for granted, this was not always so; in fact, typewriting technology changed very little in its first 80 or 90 years. Due to falling sales, IBM sold its typewriter division in 1991 to the newly formed Lexmark, completely exiting from a market it once dominated. The increasing dominance of personal computers, desktop publishing, the introduction of low-cost, truly high-quality laser and inkjet printer technologies, and the pervasive use of web publishing, email, text messaging, and other electronic communication techniques have largely replaced typewriters in the United States. Still, as of 2009, typewriters continued to be used by a number of government agencies and other institutions in the US, where they are primarily used to fill preprinted forms. According to a Boston typewriter repairman quoted by The Boston Globe, "Every maternity ward has a typewriter, as well as funeral homes". A rather specialized market for typewriters exists due to the regulations of many correctional systems in the US, where prisoners are prohibited from having computers or telecommunication equipment, but are allowed to own typewriters. The Swintec corporation (headquartered in Moonachie, New Jersey), which, as of 2011, still produced typewriters at its overseas factories (in Japan, Indonesia, and/or Malaysia), manufactures a variety of typewriters for use in prisons, made of clear plastic (to make it harder for prisoners to hide prohibited items inside them). As of 2011, the company had contracts with prisons in 43 US states. In April 2011, Godrej and Boyce, a Mumbai-based manufacturer of mechanical typewriters, closed its doors, leading to a flurry of news reports that the "world's last typewriter factory" had shut down.
The reports were quickly contested, with opinion eventually settling on the view that it was indeed the world's last producer of manual typewriters. In November 2012, Brother's UK factory manufactured what it claimed to be the last typewriter ever made in the UK; the typewriter was donated to the London Science Museum. Russian typewriters use Cyrillic, which has made the ongoing Azerbaijani reconversion from Cyrillic to the Latin alphabet more difficult. In 1997, the government of Turkey offered to donate western typewriters to the Republic of Azerbaijan in exchange for more zealous and exclusive promotion of the Latin alphabet for the Azerbaijani language; this offer, however, was declined. In Latin America and Africa, mechanical typewriters are still common because they can be used without electrical power. In Latin America, the typewriters used are most often Brazilian models; Brazil continues to produce mechanical (Facit) and electronic (Olivetti) typewriters to the present day. The early 21st century saw a revival of interest in typewriters among certain subcultures, including makers, steampunks, hipsters, and street poets.
Correction technologies:
According to the standards taught in secretarial schools in the mid-20th century, a business letter was supposed to have no mistakes and no visible corrections.
Typewriter erasers:
The traditional erasing method involved the use of a special typewriter eraser made of hard rubber that contained an abrasive material. Some were thin, flat disks, pink or gray, approximately 2 inches (51 mm) in diameter by 1⁄8 inch (3.2 mm) thick, with a brush attached from the center, while others looked like pink pencils, with a sharpenable eraser at the "lead" end and a stiff nylon brush at the other end. Either way, these tools made possible erasure of individual typed letters. Business letters were typed on heavyweight, high-rag-content bond paper, not merely to provide a luxurious appearance, but also to stand up to erasure. Typewriter eraser brushes were necessary for clearing eraser crumbs and paper dust, and using the brush properly was an important element of typewriting skill; if erasure detritus fell into the typewriter, a small buildup could cause the typebars to jam in their narrow supporting grooves.
Erasing shield:
Erasing a set of carbon copies was particularly difficult, and called for the use of a device called an erasing shield or eraser shield (a thin stainless-steel rectangle about 2 by 3 inches (51 by 76 mm) with several tiny holes in it) to prevent the pressure of erasing on the upper copies from producing carbon smudges on the lower copies. To correct copies, typists had to go from one carbon copy layer to the next, trying not to get their fingers dirty as they leafed through the carbon papers, and moving and repositioning the eraser shield and eraser for each copy.
Correction technologies:
Erasable bond Paper companies produced a special form of typewriter paper called erasable bond (for example, Eaton's Corrasable Bond). This incorporated a thin layer of material that prevented ink from penetrating and was relatively soft and easy to remove from the page. An ordinary soft pencil eraser could quickly produce perfect erasures on this kind of paper. However, the same characteristics that made the paper erasable made the characters subject to smudging due to ordinary friction and deliberate alteration after the fact, making it unacceptable for business correspondence, contracts, or any archival use.
Correction technologies:
Correction fluid In the 1950s and 1960s, correction fluid made its appearance, under brand names such as Liquid Paper, Wite-Out and Tipp-Ex; it was invented by Bette Nesmith Graham. Correction fluid was a kind of opaque, white, fast-drying paint that produced a fresh white surface onto which, when dry, a correction could be retyped. However, when held to the light, the covered-up characters were visible, as was the patch of dry correction fluid (which was never perfectly flat, and frequently not a perfect match for the color, texture, and luster of the surrounding paper). The standard trick for solving this problem was photocopying the corrected page, but this was possible only with high quality photocopiers. A different fluid was available for correcting stencils; it sealed up the stencil ready for retyping but did not attempt to color match.
Legacy:
Keyboard layouts QWERTY The 1874 Sholes & Glidden typewriters established the "QWERTY" layout for the letter keys. During the period in which Sholes and his colleagues were experimenting with the invention, other keyboard arrangements were apparently tried, but these are poorly documented. The QWERTY layout has become the de facto standard for English-language typewriter and computer keyboards. Other languages written in the Latin alphabet sometimes use variants of QWERTY, such as the French AZERTY, the Italian QZERTY and the German QWERTZ layouts. The QWERTY layout is not the most efficient layout possible for the English language: touch-typists must move their fingers between rows to type the most common letters. Although QWERTY was the most commonly used layout on typewriters, less strenuous alternatives were sought throughout the late 1900s. One popular but incorrect explanation for the QWERTY arrangement is that it was designed to reduce the likelihood of internal clashing of typebars by placing commonly used combinations of letters farther from each other inside the machine.
Legacy:
Other layouts for English A number of radically different layouts, such as Dvorak, have been proposed to reduce the perceived inefficiencies of QWERTY; their proponents claim considerable advantages, but so far none has been widely used or able to displace the QWERTY layout. The Blickensderfer typewriter, with its DHIATENSOR layout, may have been the first attempt at optimizing the keyboard layout for efficiency. On modern keyboards, the exclamation point is the shifted character on the 1 key, because these were among the last characters to become "standard" on keyboards. Holding the spacebar down usually suspended the carriage advance mechanism (a so-called "dead key" feature), allowing one to superimpose multiple keystrikes on a single location. The ¢ symbol (meaning cents) was located above the number 6 on American electric typewriters, whereas ANSI-INCITS-standard computer keyboards have ^ there instead.
Legacy:
Keyboards for other languages The keyboards for other Latin-script languages are broadly similar to QWERTY but are optimised for the relevant orthography. In addition to some changes in the order of letters, perhaps the most obvious difference is the presence of precomposed characters and diacritics. Many non-Latin alphabets have keyboard layouts that have nothing to do with QWERTY. The Russian layout, for instance, puts the common trigrams ыва, про, and ить on adjacent keys so that they can be typed by rolling the fingers. Typewriters were also made for East Asian languages with thousands of characters, such as Chinese or Japanese. They were not easy to operate, but professional typists used them for a long time until the development of electronic word processors and laser printers in the 1980s.
Legacy:
Typewriter conventions A number of typographical conventions stem from the typewriter's characteristics and limitations. For example, the QWERTY keyboard typewriter did not include keys for the en dash and the em dash. To overcome this limitation, users typically typed more than one adjacent hyphen to approximate these symbols. This typewriter convention is still sometimes used today, even though modern computer word processing applications can input the correct en and em dashes for each font type. Other examples of typewriter practices that are sometimes still used in desktop publishing systems include inserting a double space between sentences, and the use of the typewriter apostrophe, ', and straight quotes, ", as quotation marks and prime marks. The practice of underlining text in place of italics and the use of all capitals to provide emphasis are additional examples of typographical conventions that derived from the limitations of the typewriter keyboard and still carry on today. Many older typewriters did not include a separate key for the numeral 1 or the exclamation point !, and some even older ones also lacked the numeral zero, 0. Typists who trained on these machines learned the habit of using the lowercase letter l ("ell") for the digit 1, and the uppercase O ("oh") for the zero. A cents symbol, ¢, was created by combining (over-striking) a lowercase c with a slash character (typing c, then backspace, then /). Similarly, the exclamation point was created by combining an apostrophe and a period (' + . ≈ !).
Legacy:
Terminology repurposed for the computer age Some terminology from the typewriter age has survived into the computer era.
Legacy:
backspace (BS) – a keystroke that moved the cursor backwards one position (on a typewriter, this moved the physical platen backwards), to enable a character to be overtyped. Originally this was used to combine characters (for example, the sequence ', backspace, . to make !). Subsequently it facilitated "erase and retype" corrections (using correction tape or fluid). Only the latter concept has survived into the computer age.
Legacy:
carriage return (CR) – return to the first column of text. (Most typewriters switched automatically to the next line. In computer systems, "line feed" (see below) is a function that is controlled independently.)
cursor – a marker used to indicate where the next character will be printed. "Cursor" was originally a term for the clear slider on a slide rule; on typewriters, it was the paper that moved while the insertion point was fixed.
Legacy:
cut and paste – taking text, a numerical table, or an image and pasting it into a document. The term originated when such compound documents were created using manual paste up techniques for typographic page layout. Actual brushes and paste were later replaced by hot-wax machines equipped with cylinders that applied melted adhesive wax to developed prints of "typeset" copy. This copy was then cut out with knives and rulers, and slid into position on layout sheets on slanting layout tables. After the "copy" had been correctly positioned and squared up using a T-square and set square, it was pressed down with a brayer, or roller. The whole point of the exercise was to create so-called "camera-ready copy" which existed only to be photographed and then printed, usually by offset lithography.
Legacy:
dead key – a key that, when typed, does not advance the typing position, thus allowing another character to be overstruck on top of the original character. This was typically used to combine diacritical marks with letters they modified (e.g. è can be generated by first pressing ` and then e). In Europe, where most languages have diacritics, a typical mechanical arrangement meant that hitting the accent key typed the symbol but did not advance the carriage, consequently the next character to be typed 'landed' on the same position. It was this method that carried across to the computer age whereas an alternative method (press the space bar simultaneously) did not.
Legacy:
line feed (LF), also called "newline" – whereas most typewriters rolled the paper forward automatically on a "carriage return", this is an explicit control character on computer systems that moves the cursor to the next on-screen line of text (but not to the beginning of that line – a CR is also needed if that effect is desired).
shift – a modifier key used to type capital letters and other alternate "upper case" characters; when pressed and held down, it would shift a typewriter's mechanism to allow a different typebar impression (such as 'D' instead of 'd') to press into the ribbon and print on the page. The concept of a shift key or modifier key was later extended to the Ctrl, Alt, AltGr and Super ("Windows" or "Apple") keys on modern computer keyboards. The generalized concept of a shift key reached its apex in the MIT space-cadet keyboard.
Legacy:
tab (HT), shortened from "horizontal tab" or "tabulator stop" – caused the print position to advance horizontally to the next pre-set "tab stop". This was used for typing lists and tables with vertical columns of numbers or words. The vertical tab (VT) control character, named by analogy with HT, was designed for use with early computer line printers, and would cause the fan-fold paper to be fed until the next line's position.
Legacy:
tty, short for teletypewriter – used in Unix-like operating systems to designate a given "terminal".
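The typewriter-derived control functions above survive as fixed code points in the ASCII standard; a minimal Python sketch of that mapping (the dictionary itself is illustrative, the code points are standard):

```python
# ASCII control characters inherited from typewriter terminology.
CONTROL_CHARS = {
    "BS (backspace)":       0x08,
    "HT (horizontal tab)":  0x09,
    "LF (line feed)":       0x0A,
    "VT (vertical tab)":    0x0B,
    "CR (carriage return)": 0x0D,
}

# Python's escape sequences map to the same code points.
assert ord("\b") == CONTROL_CHARS["BS (backspace)"]
assert ord("\t") == CONTROL_CHARS["HT (horizontal tab)"]
assert ord("\n") == CONTROL_CHARS["LF (line feed)"]
assert ord("\v") == CONTROL_CHARS["VT (vertical tab)"]
assert ord("\r") == CONTROL_CHARS["CR (carriage return)"]

# A DOS/Windows-style line ending is CR followed by LF, echoing the
# typewriter sequence "return the carriage, then feed the line".
assert "\r\n" == chr(0x0D) + chr(0x0A)
```

This is why, on systems that keep CR and LF separate, a bare LF moves down a line without returning to column one, exactly as described above.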
Social effects:
When Remington started marketing typewriters, the company assumed the machine would not be used for composing but for transcribing dictation, and that the person typing would be a woman; the 1800s Sholes and Glidden typewriter even had floral ornamentation on its case. During World Wars I and II, increasing numbers of women were entering the workforce, and in the United States women often started in the professional workplace as typists. Contemporary anxieties about morals made the salacious businessman making sexual advances to a female typist a cliché of office life, appearing in vaudeville and movies. Being a typist was considered the right choice for a "good girl", meaning a woman who presented herself as chaste and well behaved. According to the 1900 census, 94.9% of stenographers and typists were unmarried women. The "Tijuana bibles" – adult comic books produced in Mexico for the American market, starting in the 1930s – often featured women typists. In one panel, a businessman in a three-piece suit, ogling his secretary's thigh, says, "Miss Higby, are you ready for—ahem!—er—dictation?" The typewriter was also a useful machine during the censorship era of the Soviet government, starting during the Russian Civil War (1917–1922). Samizdat was a form of surreptitious self-publication used when the government was censoring what literature the public could see. The Soviet government signed a Decree on Press which prohibited the publishing of any written work that had not been previously officially reviewed and approved. Unapproved work was copied manually, most often on typewriters. In 1983, a new law required anyone who wanted a typewriter to get police permission to buy or keep one. In addition, the owner had to register a typed sample of all its letters and numbers, to ensure that any illegal literature typed with it could be traced back to its source. The typewriter became increasingly popular as interest in prohibited books grew.
Writers with notable associations with typewriters:
Early adopters Henry James dictated to a typist.
Mark Twain claimed in his autobiography that he was the first important writer to present a publisher with a typewritten manuscript, for The Adventures of Tom Sawyer (1876). Research showed that Twain's memory was incorrect and that the first book submitted in typed form was Life on the Mississippi (1883, also by Twain).
Writers with notable associations with typewriters:
Others William S. Burroughs wrote in some of his novels – and possibly believed – that "a machine he called the 'Soft Typewriter' was writing our lives, and our books, into existence", according to a book review in The New Yorker. In the film adaptation of his novel Naked Lunch, his typewriter is a living, insect-like entity (voiced by North American actor Peter Boretski) and actually dictates the book to him.
Writers with notable associations with typewriters:
J. R. R. Tolkien was accustomed to typing from awkward positions: "balancing his typewriter on his attic bed, because there was no room on his desk".
Writers with notable associations with typewriters:
Jack Kerouac, a fast typist at 100 words per minute, typed On the Road on a roll of paper so he would not be interrupted by having to change the paper. Within two weeks of starting to write On the Road, Kerouac had one single-spaced paragraph, 120 feet (37 m) long. Some scholars say the scroll was shelf paper; others contend it was a Thermal-fax roll; another theory is that the roll consisted of sheets of architect's paper taped together. Kerouac himself stated that he used 100-foot (30 m) rolls of teletype paper.
Writers with notable associations with typewriters:
Don Marquis purposely used the limitations of a typewriter (or more precisely, a particular typist) in his archy and mehitabel series of newspaper columns, which were later compiled into a series of books. According to his literary conceit, a cockroach named "Archy" was a reincarnated free-verse poet, who would type articles overnight by jumping onto the keys of a manual typewriter. The writings were typed completely in lower case, because of the cockroach's inability to generate the heavy force needed to operate the shift key. The lone exception is the poem "CAPITALS AT LAST" from archys life of mehitabel, written in 1933.
Writers with notable associations with typewriters:
Late users Richard Polt, a philosophy professor at Xavier University in Cincinnati, collects typewriters, edits ETCetera, a quarterly magazine about historic writing machines, and is the author of the book The Typewriter Revolution: A Typist's Companion for the 21st Century.
William Gibson used a Hermes 2000 model manual typewriter to write Neuromancer and half of Count Zero before a mechanical failure and lack of replacement parts forced him to upgrade to an Apple IIc computer.
Writers with notable associations with typewriters:
Harlan Ellison used typewriters for his entire career, and when he was no longer able to have them repaired, learned to do it himself; he repeatedly stated his belief that computers are bad for writing, maintaining that "Art is not supposed to be easier!" Cormac McCarthy wrote his novels on an Olivetti Lettera 32 typewriter until his death. In 2009, the Lettera he obtained from a pawn shop in 1963, on which nearly all his novels and screenplays have been written, was auctioned for charity at Christie's for US$254,500; McCarthy obtained an identical replacement for $20 to continue writing on.
Writers with notable associations with typewriters:
Will Self explains why he uses a manual typewriter: "I think the computer user does their thinking on the screen, and the non-computer user is compelled, because he or she has to retype a whole text, to do a lot more thinking in the head." Ted Kaczynski (the "Unabomber") infamously used two old manual typewriters to write his polemic essays and messages.
Writers with notable associations with typewriters:
Actor Tom Hanks uses and collects manual typewriters. To control the size of his collection, he gifts autographed machines to appreciative fans and repair shops around the world.
Typewriters in popular culture:
In music Erik Satie's 1917 score for the ballet Parade includes a "Mach. à écrire" as a percussion instrument, along with (elsewhere) a roulette wheel and a pistol.
Typewriters in popular culture:
The composer Leroy Anderson wrote The Typewriter (1950) for orchestra and typewriter, and it has since been used as the theme for numerous radio programs. The solo instrument is a real typewriter played by a percussionist. The piece was later made famous by comedian Jerry Lewis as part of his regular routine both on screen and stage, most notably in the 1963 film Who's Minding the Store?.
Typewriters in popular culture:
The Boston Typewriter Orchestra (BTO), a comedic musical percussion group, has performed at numerous art festivals, clubs, and parties since 2004.
South Korean improviser Ryu Hankil frequently performs on typewriters, most prominently in his 2009 album Becoming Typewriter.
Other The 2012 French comedy movie Populaire, starring Romain Duris and Déborah François, centers on a young secretary in the 1950s striving to win typewriting speed competitions.
The manga (2015–2020) and anime (2018) Violet Evergarden series follows a disabled war veteran who learns to type because her handwriting has been impaired, and soon she becomes a popular typist.
California Typewriter, a 2016 documentary film, investigates the culture of typewriter enthusiasts, including an eponymous repair store in Berkeley, California.
Forensic examination:
Typewritten documents may be examined by forensic document examiners, primarily to determine (1) the make and/or model of the typewriter used to produce a document, or (2) whether a particular suspect typewriter might have been used to produce a document. The determination of a make and/or model of typewriter is a 'classification' problem, and several systems have been developed for this purpose. These include the original Haas Typewriter Atlases (Pica and Non-Pica versions), the TYPE system developed by Philip Bouffard, the Royal Canadian Mounted Police's Termatrex typewriter classification system, and Interpol's typewriter classification system, among others. The earliest reference in fictional literature to the potential identification of a typewriter as having produced a document was by Sir Arthur Conan Doyle, who wrote the Sherlock Holmes short story "A Case of Identity" in 1891. In non-fiction, the first document examiner to describe how a typewriter might be identified was William E. Hagan, who wrote in 1894: "All typewriter machines, even when using the same kind of type, become more or less peculiar by use as to the work done by them". Other early discussions of the topic were provided by A. S. Osborn in his 1908 treatise, Typewriting as Evidence, and again in his 1929 textbook, Questioned Documents. A modern description of the examination procedure is laid out in ASTM Standard E2494-08 (Standard Guide for Examination of Typewritten Items). Typewriter examination was used in the Leopold and Loeb and Alger Hiss cases.
Forensic examination:
In the Eastern Bloc, typewriters (together with printing presses, copy machines, and later computer printers) were a controlled technology, with secret police in charge of maintaining records of the typewriters and their owners. In the Soviet Union, the First Department of each organization sent data on the organization's typewriters to the KGB. This posed a significant risk for dissidents and samizdat authors. In Romania, according to State Council Decree No. 98 of March 28, 1983, owning a typewriter, whether by a business or by a private person, was subject to approval by the local police authorities. People previously convicted of any crime, or those who because of their behaviour were considered "a danger to public order or to the security of the state", were refused approval. In addition, once a year, typewriter owners had to take the typewriter to the local police station, where they would be asked to type a sample of all the typewriter's characters. It was also forbidden to borrow, lend, or repair typewriters other than at places that had been authorized by the police.
Collections:
Public and private collections of typewriters exist around the world, including: Schreibmaschinenmuseum Peter Mitterhofer (Parcines, Italy); Museo della Macchina da Scrivere (Milan, Italy); Martin Howard Collection of Early Typewriters (Toronto, Canada); Liverpool Typewriter Museum (Liverpool, England); Museum of Printing – MoP (Haverhill, Massachusetts, US); Chestnut Ridge Typewriter Museum (Fairmont, West Virginia, US); Technical Museum of the Empordà (Figueres, Girona, Spain); Musée de la machine à écrire (Lausanne, Switzerland); Lu Hanbin Typewriter Museum Shanghai (Shanghai, China); Wattens Typewriter Museum (Wattens, Austria); German Typewriter Museum (Bayreuth, Germany); and the Tayfun Talipoğlu Typewriter Museum (Odunpazarı, Eskişehir, Turkey). Several online-only virtual museums collect and display information about typewriters and their history: the Virtual Typewriter Museum; Chuck & Rich's Antique Typewriter Website; and Mr. Martin's Typewriter Museum.
**Airfield rubber removal**
Airfield rubber removal:
Airfield rubber removal, also known as runway rubber removal, is the use of high pressure water, abrasives, chemicals and other mechanical means to remove the rubber from tires that builds up on airport runways. In the United States, the Federal Aviation Administration (FAA) specifies friction levels for safe operation of planes and measures friction coefficients for the evaluation of appropriate friction levels. Individual airports incorporate rubber removal into their maintenance schedules based on the number of takeoffs and landings each airport experiences.
Source of airfield rubber build-up:
When a plane lands, the tires are not spinning. The time it takes for the tires to get up to speed is referred to as "spin up time" (Speidel, 2002). During this time the tires effectively drag on the runway while also being pressed into it by the weight of the airplane; this can be seen in the slight puff of smoke that comes from a landing aircraft's tires as they first touch the runway surface. The friction generated causes the rubber to polymerize and harden onto the runway surface.
Effects of rubber buildup:
The buildup of rubber affects the level of friction of the runway, most noticeably as a reduction in braking and ground handling performance. This can lead to incidents such as runway overrun or a lateral slide off the runway.
Effects of rubber buildup:
The contributing factors for viscous hydroplaning are a damp or wet pavement, medium to high speed, poor pavement texture, and worn tire tread. If a runway has good microtexture and grooving and the aircraft tires have a good tread design, viscous hydroplaning can be alleviated (NTSB, p. 92). Macrotexture is visible to the naked eye; microtexture, by contrast, can only be felt without the aid of a microscope. The buildup of rubber directly affects these variables and therefore reduces the friction available for landing, which increases the possibility of hydroplaning of the landing airplane.
Methods:
High pressure water and ultra high pressure water Sometimes referred to as hydrocleaning, high pressure and ultra high pressure removal work on the same principle: applying a spinning jet or set of jets to the surface to break the hardened rubber free from the runway. The main difference between the two is pressure and flow. High pressure removal uses water at 2,000–15,000 psi (14,000–103,000 kPa) at up to 30 US gallons per minute (1.9 L/s), while ultra high pressure removal uses up to 40,000 psi (280,000 kPa) with a water usage between 8 and 16 US gallons per minute (0.50 and 1.01 L/s) (Speidel, 2002, p. 4). Both rely on the impact of water alone, with no chemicals used. However, high pressure and ultra high pressure water operations can destroy the integrity of the airfield and break down the surface of the runway after repeated use.
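The unit conversions quoted above can be cross-checked against the standard factors (1 psi ≈ 6.895 kPa; 1 US gallon ≈ 3.785 L); a quick arithmetic sketch:

```python
PSI_TO_KPA = 6.894757            # kilopascals per psi
GPM_TO_LPS = 3.785411784 / 60.0  # litres per second per US gallon/minute

def psi_to_kpa(psi):
    """Convert pounds per square inch to kilopascals."""
    return psi * PSI_TO_KPA

def gpm_to_lps(gpm):
    """Convert US gallons per minute to litres per second."""
    return gpm * GPM_TO_LPS

# High pressure range: 2,000-15,000 psi is roughly 14,000-103,000 kPa.
assert round(psi_to_kpa(2_000), -3) == 14_000
assert round(psi_to_kpa(15_000), -3) == 103_000
# Ultra high pressure: 40,000 psi is roughly 280,000 kPa (2 significant figures).
assert round(psi_to_kpa(40_000), -4) == 280_000
# Flow: 30 US gal/min is roughly 1.9 L/s.
assert round(gpm_to_lps(30), 1) == 1.9
```

The rounded figures in the text match the exact conversions to the precision quoted.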
Methods:
Chemical removal Cleaning the runway using chemicals involves the application of proprietary cleaners that are brushed into the surface of the runway and then washed off using low pressure water. Between applying the chemicals and washing the runway there is a dwell period during which the chemicals are allowed to react with and break down the rubber; during this time the runway is unusable for landing operations and therefore cannot be used safely in an emergency. The chemicals used can be expensive and require special handling. A newer generation of non-caustic rubber removal chemicals is now available that is environmentally superior, non-corrosive, non-toxic and relatively inexpensive to use. Some runway maintenance crews like the ability to do rubber removal in-house using existing or lower cost machinery on a more regular, preventive maintenance schedule. Chemicals for rubber removal have been shown to have no long-term effect on either asphalt or concrete runways, and using equipment to recover the waste water/chemical/rubber mixture allows controlled, safe disposal with no runoff.
Methods:
High velocity impact removal This method employs the principle of throwing abrasive particles at a very high velocity at the runway pavement surface, thus blasting the contaminants from the surface (Speidel, 2002). Also known as sandblasting, this method can introduce foreign object debris to the runway if not contained and cleaned up properly.
Mechanical removal This type of removal involves the milling of the first 1⁄8–3⁄16 inch (3–5 mm) off the runway surface.
**Pane sciocco**
Pane sciocco:
Pane sciocco (pronounced [ˈpaːne ʃˈʃɔkko]; also called pane toscano outside Tuscany) is a variety of bread commonly found in Tuscany, Umbria, and the Marches, three regions of Italy. Sciocco means "without salt", but is also a synonym for "stupid" in Italian.
Tu proverai sì come sa di sale lo pane altrui, ...
You will experience how salty is the others' bread, ...
Dante Alighieri from the Divine Comedy
Saltless:
Being different from other kinds of Italian bread, pane sciocco does not have any salt added. According to legend, bakers created a saltless bread so they would not have to pay an increased salt tax. Pane sciocco is often eaten with Tuscan accompaniments such as Pecorino Toscano cheese, ham, sausages, and prosciutto.
**Pairing strategy**
Pairing strategy:
In a positional game, a pairing strategy is a strategy that a player can use to guarantee victory, or at least force a draw. It is based on dividing the positions on the game-board into disjoint pairs. Whenever the opponent picks a position in a pair, the player picks the other position in the same pair.
Example:
Consider the 5-by-5 variant of Tic-tac-toe. We can create 12 pairwise-disjoint pairs of board positions, denoted by 1, ..., 12. (The pairing diagram is not reproduced here.) Note that the central element (denoted by *) does not belong to any pair; it is not needed in this strategy.
Example:
Each horizontal, vertical or diagonal line contains at least one pair. Therefore the following pairing strategy can be used to force a draw: "whenever your opponent chooses an element of pair i, choose the other element of pair i". At the end of the game, you have an element of each winning-line. Therefore, you guarantee that the other player cannot win.
Example:
Since both players can use this strategy, the game is a draw.
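The responding rule described above is purely mechanical and can be sketched in a few lines of Python (the class name and the fallback rule for unpaired squares are illustrative, not from the source):

```python
class PairingResponder:
    """Respond to each opponent move with the partner from its pair."""

    def __init__(self, pairs):
        # pairs: disjoint 2-element tuples of board positions.
        self.partner = {}
        for a, b in pairs:
            self.partner[a] = b
            self.partner[b] = a

    def respond(self, opponent_move, free_positions):
        """Return our reply to opponent_move.

        If the move belongs to a pair and its partner is still free,
        take the partner; otherwise (e.g. the unpaired centre square)
        fall back to any free position.
        """
        p = self.partner.get(opponent_move)
        if p is not None and p in free_positions:
            return p
        return next(iter(free_positions))
```

After every exchange the responder holds one element of each completed pair, so every winning line that contains a pair is blocked, which is exactly the draw-forcing argument above.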
This example is generalized below for an arbitrary Maker-Breaker game. In such a game, the goal of Maker is to occupy an entire winning-set, while the goal of Breaker is to prevent this by owning an element in each winning-set.
Pairing strategy for Maker:
A pairing-strategy for Maker requires a set of element-pairs such that: (1) all pairs are pairwise-disjoint; (2) every set that contains at least one element from each pair contains some winning-set. Whenever Breaker picks an element of a pair, Maker picks the other element of the same pair. At the end, Maker's set contains at least one element from each pair; by condition 2, he occupies an entire winning-set (this is true even when Maker plays second).
Pairing strategy for Maker:
As an example, consider a game-board containing all vertices in a perfect binary tree except the root. The winning-sets are all the paths from the leaf to one of the two children of the root. We can partition the elements into pairs by pairing each element with its sibling. The pairing-strategy guarantees that Maker wins even when playing second. If Maker plays first, he can win even when the game-board contains also the root: in the first step he just picks the root, and from then on plays the above pairing-strategy.
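The sibling pairing on the perfect binary tree can be written down explicitly using the standard array numbering, in which the root is node 1 and node i has children 2i and 2i+1 (an illustrative sketch, not from the source):

```python
def sibling_pairs(depth):
    """Pair each non-root node with its sibling in a perfect binary
    tree whose levels run from 0 (the root, node 1) down to `depth`.
    Node i has children 2*i and 2*i + 1, which are siblings."""
    return [(2 * i, 2 * i + 1) for i in range(1, 2 ** depth)]

# For depth 2, the non-root nodes 2..7 fall into three sibling pairs.
pairs = sibling_pairs(2)
assert pairs == [(2, 3), (4, 5), (6, 7)]

# The pairs are disjoint and cover every non-root node; any set with
# one element from each pair contains a path from a child of the root
# down to a leaf, i.e. a winning-set for Maker.
flat = [node for pair in pairs for node in pair]
assert sorted(flat) == list(range(2, 2 ** 3))
```

Condition 2 holds by induction: whichever child of the root Maker owns, the pair of its children guarantees him one of them, and so on down to a leaf.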
Pairing strategy for Breaker:
A pairing-strategy for Breaker requires a set of element-pairs such that: (1) all pairs are pairwise-disjoint; (2) every winning-set contains at least one pair. Whenever Maker picks an element of a pair, Breaker picks the other element of the same pair. At the end, Breaker has an element in each pair; by condition 2, he has an element in each winning-set.
An example of such a pairing-strategy for 5-by-5 tic-tac-toe is shown above; other examples exist for 4x4 and 6x6 tic-tac-toe.
Another simple case when Breaker has a pairing-strategy is when all winning-sets are pairwise-disjoint and their size is at least 2.
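For this last case the pairing can be constructed directly: take any two elements from each winning-set. A minimal sketch (the function name is illustrative):

```python
def breaker_pairing(winning_sets):
    """Build a Breaker pairing when the winning-sets are pairwise
    disjoint and each has at least two elements: pair the first two
    elements of every set. Disjointness of the sets makes the pairs
    disjoint, and each winning-set contains its own pair, so both
    conditions of the Breaker pairing-strategy hold."""
    for ws in winning_sets:
        if len(ws) < 2:
            raise ValueError("every winning-set needs at least 2 elements")
    return [(ws[0], ws[1]) for ws in winning_sets]

# Three disjoint winning-sets yield three disjoint pairs.
assert breaker_pairing([[1, 2, 3], [4, 5], [6, 7, 8]]) == [(1, 2), (4, 5), (6, 7)]
```

Playing the pairing-responder rule with these pairs, Breaker ends up holding an element inside every winning-set.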
**V-Bor**
V-Bor:
V-Bor is a commercially packaged form of borax pentahydrate (Na2B4O7·5H2O). It is produced by the Searles Valley Minerals company from minerals mined at Searles Lake, and has most of the same uses as borax. It is also used to neutralize skins and hides in leather tanning, to correct boron deficiency in plants, to reduce melting temperatures in glass processes, as a fire retardant in cellulose insulation, and in making a bleaching agent for home laundry.
**AX architecture**
AX architecture:
AX (Architecture eXtended) was a Japanese computing initiative starting in around 1986 to allow PCs to handle double-byte (DBCS) Japanese text via special hardware chips, whilst allowing compatibility with software written for foreign IBM PCs.
History:
The idea was conceived by Kazuhiko Nishi before he resigned his position as vice president of Microsoft. Microsoft Japan took over the project, and in July 1987 the Preparatory Committee of the AX Consortium started developing its specification. The AX Consortium officially started in October 1987, including ASCII Corporation, Sony, Hitachi, Sharp, Oki, Casio, Canon, Kyocera, Sanyo, and Mitsubishi Electric, but notably excluding Toshiba and Fujitsu (who were hence the 'opposition'). At that time, the NEC PC-9801 was the dominant PC architecture in the Japanese market because the IBM PC/AT and its clones could not display Japanese text. However, NEC did not tolerate PC-9801 compatible machines and was fighting court battles with Epson, the only vendor of PC-9801 compatible machines. Other vendors therefore desperately needed a standard specification for Japanese-capable PCs.
History:
Eventually two standards were developed: JEGA and AX-VGA.
History:
Due to less available software and its higher cost compared to the PC-9801 series, AX failed to break into the Japanese market. The Nikkei Personal Computing journal reported in 1989 that only 18 out of 36,165 PCs used in 937 companies were AX machines, and 90% of companies had no plan to purchase one. In 1990, IBM Japan unveiled DOS/V, which enabled the IBM PC/AT and its clones to display Japanese text without any additional hardware, using a standard VGA card. Soon after, AX disappeared and the decline of the NEC PC-9801 began.
History:
AX architecture machines: Several companies released AX computers: Oki Electric Industry if386AX30 / 50 series; Casio Computer AX-8000D / 8000L; Canon Axi DX-20 / 20P / 10 / 10P; Kyocera AX386 model A; Sanyo Electric MCB-17 / 18 series; Sharp AX286D / 286L / AX386 (MZ-8000); Sony Quarter L (PCX-300 series); Acer ACER1100 / 1200 / 1170; NCR PC-AXL / PC-AX32; Hitachi FLORA 3010 / 3020 series; Mitsubishi Electric MAXY (M3201 / M3202 / M3205); Yokogawa-Hewlett-Packard Vectra-AX series.
JEGA:
To display Kanji characters with sufficient clarity, AX machines had JEGA screens with a resolution of 640 x 480 rather than the 640 x 350 standard EGA resolution prevalent elsewhere at the time. JEGA was developed jointly by ASCII and Chips & Technologies, combining the P82C435 and V6367 video chips. Users could typically switch between Japanese and English modes by typing JP and US, which would also invoke the AX-BIOS and an IME enabling the input of Japanese characters.
JEGA:
In addition to the modes provided by EGA, JEGA supports the following display modes as standard: 80 x 25 character text display with an effective resolution of 640 x 480 pixels and 8 pages, overwriting modes 2h (graphic screen and overlaid display) and 3h of EGA; and 640 x 480 pixel graphics with 1 page, or 1 page overlaid with the text screen.
AX-VGA:
IBM released the VGA standard soon after AX was introduced. Since the AX architecture was not compatible with the new standard, the AX consortium had to design a VGA-compatible chipset. This was called AX-VGA and could be implemented in two ways: AX-VGA/H, a hardware implementation based on the AX-BIOS; and AX-VGA/S, a software emulation. Development of the AX-VGA chipset was delayed, and its first implementation came out in 1991. By that time, DOS/V was already available, allowing standard IBM PC compatibles to display Japanese text using a VGA card. The need for AX was gone and further development was discontinued.
**GHK algorithm**
GHK algorithm:
The GHK algorithm (Geweke, Hajivassiliou and Keane) is an importance sampling method for simulating choice probabilities in the multivariate probit model. These simulated probabilities can be used to recover parameter estimates from the maximized likelihood equation using any one of the usual well-known maximization methods (Newton's method, BFGS, etc.). Train has well-documented steps for implementing this algorithm for a multinomial probit model. What follows here applies to the binary multivariate probit model.
GHK algorithm:
Consider the case where one is attempting to evaluate the choice probability of $\Pr(\mathbf{y}_i \mid \mathbf{X}_i\beta, \Sigma)$, where $\mathbf{y}_i = (y_1, \dots, y_J)$ for $i = 1, \dots, N$, and where we can take $j$ as choices and $i$ as individuals or observations; $\mathbf{X}_i\beta$ is the mean and $\Sigma$ is the covariance matrix of the model. The probability of observing choice $\mathbf{y}_i$ is

$$\Pr(\mathbf{y}_i \mid \mathbf{X}_i\beta, \Sigma) = \int \mathbf{1}_{\mathbf{y}^* \in A}\, f_N(\mathbf{y}_i^* \mid \mathbf{X}_i\beta, \Sigma)\, d\mathbf{y}_i^*$$

where $A = A_1 \times \cdots \times A_J$ and

$$A_j = \begin{cases} (-\infty, 0] & y_j = 0 \\ (0, \infty) & y_j = 1 \end{cases}$$

Unless $J$ is small (less than or equal to 2), there is no closed-form solution for the integrals defined above (some work has been done with $J = 3$). The alternative to evaluating these integrals in closed form or by quadrature methods is to use simulation. GHK is a simulation method that approximates the probability above using importance sampling.
GHK algorithm:
Evaluating $\Pr(\mathbf{y}_i \mid \mathbf{X}_i\beta, \Sigma) = \int \mathbf{1}_{\mathbf{y}^* \in A}\, f_N(\mathbf{y}_i^* \mid \mathbf{X}_i\beta, \Sigma)\, d\mathbf{y}_i^*$ is simplified by recognizing that the latent data model $\mathbf{y}_i^* = \mathbf{X}_i\beta + \epsilon$ can be rewritten using a Cholesky factorization, $\Sigma = CC'$. This gives $\mathbf{y}_i^* = \mathbf{X}_i\beta + C\eta_i$, where the $\eta_i$ terms are distributed $N(0, I)$. Using this factorization and the fact that the $\eta_i$ are distributed independently, one can simulate draws from a truncated multivariate normal distribution using draws from univariate random normals.
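The factorization step above can be made concrete with a small sketch. The following is not from the original text: a minimal, standard-library-only Python illustration of computing $\Sigma = CC'$ for a 2x2 covariance matrix (the function name and the hand-derived 2x2 formulas are assumptions of this example, not a reference implementation).

```python
import math

def cholesky2(s11, s21, s22):
    """Cholesky factor C (lower triangular) of the 2x2 covariance
    matrix Sigma = [[s11, s21], [s21, s22]], so that Sigma = C C'."""
    c11 = math.sqrt(s11)                 # first diagonal entry
    c21 = s21 / c11                      # off-diagonal entry
    c22 = math.sqrt(s22 - c21 * c21)     # second diagonal entry
    return [[c11, 0.0], [c21, c22]]
```

With $C$ in hand, $\mathbf{y}^* = \mathbf{X}\beta + C\eta$ turns independent standard-normal draws $\eta$ into draws with covariance $\Sigma$.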
GHK algorithm:
For example, if the region of truncation $A$ has lower and upper limits equal to $[a, b]$ (including $a, b = \pm\infty$), then the task becomes

$$a < y_1^* < b,\quad a < y_2^* < b,\quad \dots,\quad a < y_J^* < b.$$

Note $\mathbf{y}_i^* = \mathbf{X}_i\beta + C\eta_i$; substituting,

$$a < x_1\beta_1 + c_{11}\eta_1 < b$$
$$a < x_2\beta_2 + c_{21}\eta_1 + c_{22}\eta_2 < b$$
$$\vdots$$
$$a < x_J\beta_J + \sum_{k=1}^{J} c_{J,k}\eta_k < b$$

Rearranging the above,

$$\frac{a - x_1\beta_1}{c_{11}} < \eta_1 < \frac{b - x_1\beta_1}{c_{11}}$$
$$\frac{a - (x_2\beta_2 + c_{21}\eta_1)}{c_{22}} < \eta_2 < \frac{b - (x_2\beta_2 + c_{21}\eta_1)}{c_{22}}$$
$$\vdots$$
$$\frac{a - \left(x_J\beta_J + \sum_{k=1}^{J-1} c_{J,k}\eta_k\right)}{c_{J,J}} < \eta_J < \frac{b - \left(x_J\beta_J + \sum_{k=1}^{J-1} c_{J,k}\eta_k\right)}{c_{J,J}}$$

Now all one needs to do is iteratively draw from the truncated univariate normal distribution with the bounds given above. This can be done by the inverse CDF method, noting that the CDF of the truncated normal distribution is given by

$$u = \frac{\Phi\!\left(\frac{x-\mu}{\sigma}\right) - \Phi\!\left(\frac{a-\mu}{\sigma}\right)}{\Phi\!\left(\frac{b-\mu}{\sigma}\right) - \Phi\!\left(\frac{a-\mu}{\sigma}\right)}$$

where $u$ will be a number between 0 and 1 because the above is a CDF. To generate random draws from the truncated distribution, one solves for $x$, giving

$$x = \sigma\,\Phi^{-1}\!\big(u\,(\Phi(\beta) - \Phi(\alpha)) + \Phi(\alpha)\big) + \mu$$

where $\alpha = \frac{a-\mu}{\sigma}$, $\beta = \frac{b-\mu}{\sigma}$, and $\Phi$ is the standard normal CDF. With such draws one can reconstruct $\mathbf{y}_i^*$ by its simplified equation using the Cholesky factorization. These draws are conditional on the draws coming before them, and using properties of normals, the product of the conditional PDFs will be the joint distribution of the $\mathbf{y}_i^*$:

$$q(\mathbf{y}_i^* \mid \mathbf{X}_i\beta, \Sigma) = q(y_1^* \mid \mathbf{X}_i\beta, \Sigma)\, q(y_2^* \mid y_1^*, \mathbf{X}_i\beta, \Sigma) \cdots q(y_J^* \mid y_1^*, \dots, y_{J-1}^*, \mathbf{X}_i\beta, \Sigma)$$

where $q(\cdot)$ is the (truncated) multivariate normal density.
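The inverse-CDF step above can be sketched in a few lines of Python. This example is not from the original text; the helper name and argument defaults are my own, and it relies only on the standard library's `statistics.NormalDist`, which supplies both $\Phi$ (`cdf`) and $\Phi^{-1}$ (`inv_cdf`).

```python
import random
from statistics import NormalDist

def truncnorm_draw(mu, sigma, a, b, u=None):
    """Draw x ~ N(mu, sigma^2) truncated to (a, b) via the inverse-CDF
    method: x = sigma * Phi^{-1}(u*(Phi(beta) - Phi(alpha)) + Phi(alpha)) + mu."""
    n = NormalDist()
    alpha = (a - mu) / sigma
    beta = (b - mu) / sigma
    f_a, f_b = n.cdf(alpha), n.cdf(beta)   # Phi(alpha), Phi(beta)
    if u is None:
        u = random.random()                # uniform draw on [0, 1)
    return sigma * n.inv_cdf(u * (f_b - f_a) + f_a) + mu
```

Passing $u = 0.5$ returns the median of the truncated distribution, and every draw lands strictly inside $(a, b)$ by construction.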
GHK algorithm:
Because $y_j^*$ conditional on $y_k^*, k < j$ is restricted to the set $A$ by the setup using the Cholesky factorization, we know that $q(\cdot)$ is a truncated multivariate normal. The density function of a truncated normal is

$$\frac{\phi\!\left(\frac{x-\mu}{\sigma}\right)}{\sigma\left(\Phi\!\left(\frac{b-\mu}{\sigma}\right) - \Phi\!\left(\frac{a-\mu}{\sigma}\right)\right)}$$

Therefore, $\mathbf{y}_i^*$ has density

$$q(\mathbf{y}_i^* \mid \mathbf{X}_i\beta, \Sigma) = \prod_{j=1}^{J} \frac{\frac{1}{c_{jj}}\,\phi_j\!\left(\frac{y_j^* - \left(x_j\beta + \sum_{k<j} c_{jk}\eta_k\right)}{c_{jj}}\right)}{\Phi_j\!\left(\frac{b - \left(x_j\beta + \sum_{k<j} c_{jk}\eta_k\right)}{c_{jj}}\right) - \Phi_j\!\left(\frac{a - \left(x_j\beta + \sum_{k<j} c_{jk}\eta_k\right)}{c_{jj}}\right)}$$

where $\phi_j$ is the standard normal pdf for choice $j$. Because $y_j^* \mid \{y_{k<j}^*\} \sim N\!\left(x_j\beta + \sum_{k<j} c_{jk}\eta_k,\; c_{jj}^2\right)$, the above standardization makes each term mean 0, variance 1.
GHK algorithm:
Let the denominator be

$$\prod_{j=1}^{J}\left(\Phi_j\!\left(\frac{b - \left(x_j\beta + \sum_{k<j} c_{jk}\eta_k\right)}{c_{jj}}\right) - \Phi_j\!\left(\frac{a - \left(x_j\beta + \sum_{k<j} c_{jk}\eta_k\right)}{c_{jj}}\right)\right) = \prod_{j=1}^{J} l_{jj}$$

and the numerator

$$\prod_{j=1}^{J} \frac{1}{c_{jj}}\,\phi_j\!\left(\frac{y_j^* - \left(x_j\beta + \sum_{k<j} c_{jk}\eta_k\right)}{c_{jj}}\right) = f_N(\mathbf{y}_i^* \mid \mathbf{X}_i\beta, \Sigma)$$

where $f_N(\cdot)$ is the multivariate normal PDF.
Going back to the original goal, to evaluate

$$\Pr(\mathbf{y}_i \mid \mathbf{X}_i\beta, \Sigma) = \int_A f_N(\mathbf{y}_i^* \mid \mathbf{X}_i\beta, \Sigma)\, d\mathbf{y}_i^*$$

using importance sampling we can rewrite this integral as

$$\Pr(\mathbf{y}_i \mid \mathbf{X}_i\beta, \Sigma) = \int_A \frac{f_N(\mathbf{y}_i^* \mid \mathbf{X}_i\beta, \Sigma)}{q(\mathbf{y}_i^* \mid \mathbf{X}_i\beta, \Sigma)}\, q(\mathbf{y}_i^* \mid \mathbf{X}_i\beta, \Sigma)\, d\mathbf{y}_i^* = \int_A \frac{f_N(\mathbf{y}_i^* \mid \mathbf{X}_i\beta, \Sigma)}{\dfrac{f_N(\mathbf{y}_i^* \mid \mathbf{X}_i\beta, \Sigma)}{\prod_{j=1}^{J} l_{jj}}}\, q(\mathbf{y}_i^* \mid \mathbf{X}_i\beta, \Sigma)\, d\mathbf{y}_i^* = E_q\!\left[\prod_{j=1}^{J} l_{jj}\right]$$

This is well approximated by $\frac{1}{S}\sum_{s=1}^{S} \prod_{j=1}^{J} l_{jj}^{(s)}$.
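The steps above can be combined into a single simulator. The following is a hedged, standard-library-only Python sketch of GHK for one observation of a binary multivariate probit (the function name, argument layout, and fixed seed are assumptions of this example, not a reference implementation). It accumulates the weight $\prod_j l_{jj}$ over $S$ draws while sequentially drawing each $\eta_j$ from its truncated distribution:

```python
import math
import random
from statistics import NormalDist

def ghk_probability(xb, L, y, S=1000, rng=None):
    """GHK simulator for Pr(y | Xb, Sigma) in a binary multivariate probit.
    xb: list of means x_j * beta; L: lower-triangular Cholesky factor of Sigma;
    y[j] in {0, 1} selects the region (-inf, 0] or (0, inf) for y*_j."""
    n = NormalDist()
    rng = rng or random.Random(0)
    J = len(xb)
    total = 0.0
    for _ in range(S):
        eta = [0.0] * J
        weight = 1.0
        for j in range(J):
            # Conditional mean of y*_j given the earlier truncated draws
            mean = xb[j] + sum(L[j][k] * eta[k] for k in range(j))
            # Region bounds for y*_j implied by the observed choice y[j]
            a, b = (0.0, math.inf) if y[j] == 1 else (-math.inf, 0.0)
            f_a = n.cdf((a - mean) / L[j][j]) if a > -math.inf else 0.0
            f_b = n.cdf((b - mean) / L[j][j]) if b < math.inf else 1.0
            weight *= (f_b - f_a)          # l_jj for this draw
            # Draw eta_j from the truncated standard normal via inverse CDF
            u = rng.random()
            eta[j] = n.inv_cdf(u * (f_b - f_a) + f_a)
        total += weight
    return total / S
```

For $\Sigma = I$ and $X\beta = 0$, each $l_{jj} = \tfrac{1}{2}$ exactly, so the estimate is $2^{-J}$ regardless of the draws; with correlated errors the weights vary across draws and the average converges to the choice probability as $S$ grows.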
**Eclipse Theia**
Eclipse Theia:
Eclipse Theia is a free and open-source framework for building IDEs and tools based on modern web technologies. Theia-based applications can be deployed as desktop and web applications. It is implemented in TypeScript, is based on Visual Studio Code, and emphasizes extensibility.
History:
Theia was originally developed by TypeFox and Ericsson, and continues to receive contributions from EclipseSource, Red Hat, IBM, Google, and Arm Holdings, as well as from individual contributors. It was first launched in March 2017. Since May 2018, Theia has been a project of the Eclipse Foundation.
About:
Theia is built on the Language Server Protocol (LSP) and supports a variety of programming languages. It can be used as a desktop application, a web application, or a hybrid application with separate front and back ends. All of Theia's features are implemented as extensions, which allows third-party developers to modify Theia's functionality by using the same application programming interfaces (APIs) as the application's default components. Theia's layout consists of draggable docks. Theia is a free and open-source software project under the Eclipse Foundation and is licensed under the Eclipse Public License 2.0 (EPL2).
Usage:
Eclipse Che uses Eclipse Theia as its default IDE starting from version 7. In September 2018, the online IDE Gitpod, which was based on Theia, was released.
(In 2021, Gitpod switched to Visual Studio Code.) Arduino IDE 2.0 is based on Eclipse Theia, replacing the Processing-based IDE.
Reception:
In January 2019, JAXenter, a website and blog about coding, ranked Theia as the third-most popular JavaScript integrated development environment of 2018 according to GitHub metrics, behind Visual Studio Code and Atom.
**Carbon respiration**
Carbon respiration:
Carbon respiration (also called carbon emissions and carbon releases) is used in combination with carbon fixation to gauge carbon flux (as CO2) between atmospheric carbon and the global carbon cycle.
Basic process:
Carbon is released to the atmosphere through the burning of fossil fuels, organic respiration, wood burning, and volcanic eruptions; release also occurs when carbohydrates are converted into carbon dioxide. The uptake of carbon from the atmosphere occurs through carbon dissolution into the oceans, photosynthesis, and the consequent storage of carbon in various forms such as peat bogs, oil accumulation, and the formation of deposits such as coal.
Carbon flux ratio:
The annual net difference between carbon release and carbon storage constitutes the annual global atmospheric carbon accumulation rate. Using this method, the annual carbon flux ratio has been calculated as approaching zero, implying that the carbon respiration rate and carbon storage rate are roughly in balance worldwide. This finding is contradicted by measurements of carbon dioxide concentrations in the atmosphere, an important indication that the balance is tipped toward emissions: atmospheric concentrations have increased rapidly over the past 100 years and are currently higher than at any point in human history, suggesting that more carbon is being released than can be absorbed on Earth.
**NeuroMat**
NeuroMat:
The Research, Innovation, and Dissemination Center for Neuromathematics (RIDC NeuroMat, or simply NeuroMat) is a Brazilian research center established in 2013 at the University of São Paulo that is dedicated to integrating mathematical modeling and theoretical neuroscience. Among the core missions of NeuroMat are the creation of a new mathematical framework for understanding neural data and the development of open-source neuroscientific computational tools, maintaining an active role in the context of open knowledge, open science, and scientific dissemination. The research center is headed by Antonio Galves, from USP's Institute of Mathematics and Statistics, and is funded by the São Paulo Research Foundation (FAPESP). As of 2019, the co-principal investigators are Oswaldo Baffa Filho (USP), Pablo A. Ferrari (USP/UBA), Fernando da Paixão (UNICAMP), Antonio Carlos Roque (USP), Jorge Stolfi (UNICAMP), and Cláudia D. Vargas (UFRJ). Ernst W. Hamburger (USP) was the former director of scientific dissemination. NeuroMat's International Advisory Board consists of David R. Brillinger (UC Berkeley), Leonardo G. Cohen (NIH), Markus Diesmann (Jülich), Francesco Guerra (La Sapienza), and Wojciech Szpankowski (Purdue).
Research:
NeuroMat has been involved in the development of what has been called the Galves-Löcherbach model, a model with intrinsic stochasticity for biological neural nets, in which the probability of a future spike depends on the evolution of the complete system since the last spike.[1] This model of spiking neurons was developed by mathematicians Antonio Galves and Eva Löcherbach. In the first article on the model, in 2013, they called it a model of a "system with interacting stochastic chains with memory of variable length."
**Exopolyphosphatase**
Exopolyphosphatase:
Exopolyphosphatase (PPX) is a phosphatase enzyme which catalyzes the hydrolysis of inorganic polyphosphate, a linear molecule composed of up to 1000 or more monomers linked by phospho-anhydride bonds. PPX is a processive exophosphatase, which means that it begins at the ends of the polyphosphate chain and cleaves the phospho-anhydride bonds to release orthophosphate as it moves along the polyphosphate molecule. PPX has several characteristics which distinguish it from other known polyphosphatases, namely that it does not act on ATP, has a strong preference for long-chain polyphosphate, and has a very low affinity for polyphosphate molecules with fewer than 15 phosphate monomers. PPX plays an important role in the metabolism of phosphate and energy in all living organisms. It is especially important for maintenance of appropriate levels of intracellular polyphosphate, which has been implicated in a variety of cellular functions, including the response to stressors such as deficiencies in amino acids, orthophosphate, or nitrogen, changes in pH, nutrient downshift, and high salt, and as an inorganic molecular chaperone. PPX is classified as a polyphosphatase, part of the large DHH phosphoesterase family. Both subfamilies within this superfamily share four N-terminus motifs but have different C-terminus moieties. PPX activity is quantified by measuring the loss of radioactively labeled 32P polyphosphate: PPX is mixed with a known quantity of labeled polyphosphate, the hydrolysis reaction is stopped with perchloric acid (HClO4), and the amount of remaining labeled polyphosphate is then measured by liquid scintillation counting.
History:
PPX was discovered by the lab of Nobel laureate Arthur Kornberg in 1993 and is part of the polyphosphate operon along with polyphosphate kinase, the enzyme which synthesizes polyphosphate. The Kornberg lab was very interested in polyphosphate and published a series of papers elucidating the metabolism and roles of polyphosphate in vivo. Their interest in polyphosphate led them to identify and characterize the polyphosphate operon (which includes polyphosphate kinase [PPK] and PPX) and develop a wide variety of assays and techniques for quantification of polyphosphate production and degradation, in vitro and in vivo. The results of these studies of polyphosphate by the Kornberg lab led Kornberg to speculate that due to its high energy and phosphate content and the degree to which it is conserved across species, polyphosphate may have been the precursor to RNA, DNA, and proteins.
Structure:
The structure of PPX is characterized by the actin-like ATPase domain that is a part of this superfamily. In Aquifex aeolicus it contains a ribonuclease H-like motif that is made up of a five-stranded β-sheet with the second strand antiparallel to the rest. A few of the strands are connected by helical segments that are longer in the C-terminal domain than in the N-terminal domain. Five alpha-helices are located in the C-terminal domain and only two are located in the N-terminal domain. The closed configuration of the enzyme is referred to as the type I structure. This configuration shares similar features with other members of this superfamily, including the N-terminal and C-terminal domains being separated by two alpha-helices centered on the structure. The more open arrangement of the domains displays rotational movement of the two domains around a single hinge region. The structural flexibility has been described as a "butterfly-like" cleft opening around the active site. In E. coli, exopolyphosphatase exists as a dimer, with each monomer consisting of four domains. The first two domains consist of three beta-sheets followed by an alpha-beta-alpha-beta-alpha fold. This is different from the previously described Aquifex aeolicus homolog, which lacks the third and fourth domains. To date, 4 structures have been solved for this class of enzymes, with Protein Data Bank accession codes 1T6C, 1T6D, 1U6Z, and 2FLO.
Structure:
Active site: The active site of exopolyphosphatase is located in the cleft between domains I and II. In E. coli, this region contains a loop between strands beta-1 and beta-2 with the amino acids glutamate and aspartate (E121, D143, and E150). These residues, along with K197, are critical for phosphate binding and ion binding, as is commonly seen among other ASKHA superfamily members (acetate and sugar kinases, Hsp70, actin). In A. aeolicus, the active site of the enzyme exists in a cleft between the two domains. Catalytic carboxyl groups in this cleft, specifically Asp141 and Glu148, are important for the enzyme activity. The preference of exopolyphosphatase for binding polyphosphate rather than ATP has been attributed to the clashing that would occur between the ribose and adenosine of ATP and the side chains of N21, C169, and R267.
Mechanism:
Exopolyphosphatase cleaves a terminal phosphate off of polyphosphate through the amino acid side chains of glutamate and lysine. Glutamate activates water, allowing it to act as a nucleophile and attack the terminal phosphate. The oxygen that was previously bridging the two phosphate atoms then abstracts a hydrogen from the nearby lysine residue.
Function:
Polyphosphates are utilized by exopolyphosphatase enzymes, which cleave portions of the chain of phosphates. These proteins play an essential role in the metabolism and maintenance of polyphosphates. Polyphosphate is located throughout the cytosol of each cell and is also present in the cell's organelles. There are many classes of exopolyphosphatases, each with their own unique localization and properties. It has been speculated that once the polyphosphates are broken down, they are involved with signaling, acting as secondary messengers. In E. coli, the regulation of polyphosphate metabolism is poorly understood. Polyphosphate is a linear chain of phosphates linked together by phosphoanhydride bonds. Polyphosphate is found in all living organisms and plays an essential role in the organism's survival. In bacteria, polyphosphate is used to store energy to replace adenosine triphosphate. It has also been shown to be involved with cell membrane formation and function, enzyme regulation, and gene transcriptional control. In mammals, polyphosphates are involved with blood coagulation and inflammation, immune response, bone tissue development, and brain function. It has been shown in a yeast model that mutant yeast deficient in exopolyphosphatase activity had problems in respiration functions and metabolism of inorganic polyphosphates. Conversely, yeast strains that have higher levels of the exopolyphosphatase enzyme show no obvious growth defects under phosphate-deficient or excess-phosphate conditions; however, the level of polyphosphate in these strains is much lower due to the increased number of enzymes breaking the polyphosphate chains down.
Potential Clinical/Industrial Relevance:
E. coli mutants which are unable to synthesize polyphosphate die after only a few days in stationary phase. Strategies to inhibit polyphosphate accumulation in bacteria are therefore of interest as potential antibacterial treatments. This can be accomplished via inhibition of polyphosphate kinase, enhancement of exopolyphosphatase activity, or both.
Potential Clinical/Industrial Relevance:
Polyphosphate accumulation is also of interest for a variety of industrial applications, including removal of Pi from aquatic environments via enhanced biological phosphorus removal and for its role as a molecular chaperone in expression of recombinant protein. Because of the activity of polyphosphate as a molecular chaperone, strains of E. coli which accumulate polyphosphate could be used to increase the yield of soluble recombinant protein. Recombinant exopolyphosphatase from Saccharomyces cerevisiae protects against mortality and restores protective immune responses in pre-clinical sepsis models.
**Acuminite**
Acuminite:
Acuminite is a rare halide mineral with chemical formula: SrAlF4(OH)·(H2O). Its name comes from the Latin word acumen, meaning "spear point". Its Mohs scale rating is 3.5.
Acuminite has only been described from its type locality, the cryolite deposit at Ivigtut, Greenland.