Uriah Atherton Boyden (February 17, 1804 – October 17, 1879) was an American civil and mechanical engineer and inventor from Foxborough, Massachusetts, best known for the development, around 1844, of a water turbine that later became known as the Boyden turbine, while working for the Appleton Company in Lowell, Massachusetts. Boyden improved upon a turbine developed by the French engineer Fourneyron by adding a conical approach passage for the incoming water, submerged diffusers, guide vanes, and a diverting exit passage. [1] Uriah was also the younger brother of Seth Boyden, a notable inventor who perfected a process for making patent leather, among other developments.

Uriah Atherton Boyden was born in Foxborough, Massachusetts on February 17, 1804, the son of Seth Boyden and Susannah Atherton. His father was a farmer and blacksmith who had invented a machine to split leather. [2] In 1813, Uriah moved to Newark, New Jersey to work in his elder brother Seth's leather shop. [3] Around 1828, Uriah returned to Massachusetts, where he worked on the early surveys for the Boston and Providence Railroad. He also worked under Loammi Baldwin on the dry dock at the Boston Navy Yard, as well as on other mills in Lowell and on the Boston and Lowell Railroad. While at Lowell, Boyden worked with the British-born engineer James B. Francis, who in 1848 developed the Francis turbine, which superseded Boyden's earlier invention. However, Boyden-type turbines continued to be manufactured, including those installed at Harmony Mills in Cohoes, New York in the early 1870s, and those used at the first Niagara Falls hydroelectric plant in 1895. [4]

In 1850, Boyden settled in Boston and devoted himself to the study of chemistry and physics. He never married. He died in Boston on October 17, 1879. [5]

Upon his death, Boyden's will left about a quarter of a million dollars to a suitable astronomical institution that would build an observatory on a mountain, for better atmospheric seeing conditions than those available at lower altitudes. His heirs challenged the will, but it was found valid. In 1887, Edward Charles Pickering convinced the trustees of Boyden's will to award the Boyden Fund to Harvard College Observatory, of which he was director. Although Pickering initially planned to establish an observatory at Mount Wilson, those plans were abandoned (the Mount Wilson Observatory was later built by a different group). Instead, needing an observation station for southern-hemisphere skies, Harvard College Observatory established the "Boyden Station" at Arequipa, Peru in 1889. In 1927, Boyden Station was moved to South Africa for its better weather conditions and became known as the Boyden Observatory.

Uriah also contributed to the Boyden Public Library in his hometown of Foxborough, Massachusetts. The National Museum of American History in Washington, DC is home to the Uriah A. Boyden Papers. [6] His maternal ancestors had resided in Lancaster, Massachusetts, having been pioneer settlers of the area. He is a direct descendant of James Atherton, [7] who arrived in Dorchester, Massachusetts in the 1630s. [8] The Atherton family ancestry originated in Lancashire, England.
https://en.wikipedia.org/wiki/Uriah_A._Boyden
Uridine diphosphate, abbreviated UDP, is an organic compound. It is an ester of pyrophosphoric acid with the nucleoside uridine. UDP consists of the pyrophosphate group, the pentose sugar ribose, and the nucleobase uracil. UDP is an important factor in glycogenesis. Before glucose can be stored as glycogen in the liver and muscles, the enzyme UDP-glucose pyrophosphorylase forms a UDP-glucose unit by combining glucose 1-phosphate with uridine triphosphate, cleaving a pyrophosphate ion in the process. Then, the enzyme glycogen synthase combines UDP-glucose units to form a glycogen chain. The UDP molecule is cleaved from the glucose ring during this process and can be reused by UDP-glucose pyrophosphorylase. [1] [2]
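A minimal reaction scheme for the two glycogenesis steps just described, summarizing the text above (PP_i denotes the cleaved pyrophosphate):

$$\text{glucose 1-phosphate} + \text{UTP} \longrightarrow \text{UDP-glucose} + \text{PP}_i$$
$$\text{UDP-glucose} + (\text{glycogen})_n \longrightarrow (\text{glycogen})_{n+1} + \text{UDP}$$

The first reaction is catalyzed by UDP-glucose pyrophosphorylase and the second by glycogen synthase, after which the released UDP can be recycled.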
https://en.wikipedia.org/wiki/Uridine_diphosphate
Uridine diphosphate N-acetylglucosamine or UDP-GlcNAc is a nucleotide sugar and a coenzyme in metabolism. It is used by glycosyltransferases to transfer N-acetylglucosamine residues to substrates. UDP-GlcNAc is used for making glycosaminoglycans, proteoglycans, and glycolipids. [1]

D-Glucosamine is made naturally in the form of glucosamine 6-phosphate and is the biochemical precursor of all nitrogen-containing sugars. [2] Specifically, glucosamine 6-phosphate is synthesized from fructose 6-phosphate and glutamine [3] as the first step of the hexosamine biosynthesis pathway. [4] The end-product of this pathway is UDP-GlcNAc. Some enzymes involved in the biosynthesis of UDP-GlcNAc vary between prokaryotic and eukaryotic organisms, serving as potential drug targets for antibiotic development. [5]

UDP-GlcNAc is extensively involved in intracellular signaling as a substrate for O-linked N-acetylglucosamine transferases (OGTs) to install the O-GlcNAc post-translational modification in a wide range of species. It is also involved in nuclear pore formation and nuclear signalling. OGTs and O-GlcNAcases play an important role in the structure of the cytoskeleton. In mammals, there is enrichment of OGT transcripts in pancreatic beta cells, and UDP-GlcNAc is thought to be part of the glucose-sensing mechanism. There is also evidence that it plays a part in insulin sensitivity in other cells. In plants, it is involved in the control of gibberellin production. [6] In eukaryotic stem cells, the presence of UDP-GlcNAc is essential for maintaining pluripotency, which is sustained through O-GlcNAcylation. [7] Clostridium novyi type A alpha-toxin is an O-linked N-acetylglucosamine transferase acting on Rho proteins and causing the collapse of the cytoskeleton. There is a possible relationship between the inhibition of oxidative phosphorylation and reduced UDP-GlcNAc levels. [7]

UDP-GlcNAc biosynthesis is not regulated by the same enzymes in prokaryotic and eukaryotic organisms. The lack of the bifunctional GlmU acetyltransferase and pyrophosphorylase in eukaryotes makes it a possible target for blocking UDP-GlcNAc synthesis (an essential precursor for peptidoglycan synthesis) in bacteria without affecting host cells. [5]
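As a sketch, the entry step of the hexosamine biosynthesis pathway described above can be written as follows (that glutamate is the by-product of this amidotransferase step is standard biochemistry, assumed here rather than stated in the text):

$$\text{fructose 6-phosphate} + \text{glutamine} \longrightarrow \text{glucosamine 6-phosphate} + \text{glutamate}$$

Subsequent acetylation, isomerization, and uridylylation steps then yield UDP-GlcNAc, the end-product of the pathway.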
https://en.wikipedia.org/wiki/Uridine_diphosphate_N-acetylglucosamine
UDP-glucuronic acid is a sugar used in the creation of polysaccharides and is an intermediate in the biosynthesis of ascorbic acid (except in primates and guinea pigs). It also participates in the heme degradation process in humans. It is made from UDP-glucose by UDP-glucose 6-dehydrogenase (EC 1.1.1.22) using NAD+ as a cofactor. It is the source of the glucuronosyl group in glucuronosyltransferase reactions. [1] [2]
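The dehydrogenase step described above can be sketched as follows (the stoichiometry of two NAD+ per UDP-glucose, reflecting the two-fold oxidation of carbon 6, is standard biochemistry and is assumed here rather than stated in the text):

$$\text{UDP-glucose} + 2\,\text{NAD}^+ + \text{H}_2\text{O} \longrightarrow \text{UDP-glucuronic acid} + 2\,\text{NADH} + 2\,\text{H}^+$$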
https://en.wikipedia.org/wiki/Uridine_diphosphate_glucuronic_acid
Uridine-5′-triphosphate (UTP) is a pyrimidine nucleoside triphosphate, consisting of the organic base uracil linked to the 1′ carbon of the ribose sugar, and esterified with triphosphoric acid at the 5′ position. Its main role is as a substrate for the synthesis of RNA during transcription. UTP is the precursor for the production of CTP via CTP synthetase. [1] UTP can be biosynthesized from UDP by nucleoside diphosphate kinase, using a phosphate group from ATP: [2] [3] UDP + ATP ⇌ UTP + ADP; [4] UTP and ATP are energetically equivalent. [4] The homologue in DNA is thymidine triphosphate (TTP or dTTP). UTP also has a deoxyribose form (dUTP).

UTP also serves as a source of energy and an activator of substrates in metabolic reactions, much like ATP, but in a more substrate-specific manner. When UTP activates a substrate such as glucose 1-phosphate, UDP-glucose is formed and pyrophosphate is released. [5] UDP-glucose enters the synthesis of glycogen. UTP is used in the metabolism of galactose, where the activated form UDP-galactose is converted to UDP-glucose. UDP-glucuronate is used to conjugate bilirubin to the more water-soluble bilirubin diglucuronide. UTP is also used to activate amino sugars, converting glucosamine 1-phosphate to UDP-glucosamine and N-acetylglucosamine 1-phosphate to UDP-N-acetylglucosamine. [6]

UTP also mediates cellular responses by binding extracellularly to the P2Y receptors of cells. UTP and its derivatives are still being investigated for their applications in human medicine, and there is evidence from various model systems to suggest applications in pathogen defense and injury repair. In mice, UTP has been found to interact with P2Y4 receptors to mediate an enhancement in antibody production. [7] In Schwannoma cells, UTP binds to P2Y receptors in the event of damage; this triggers a downstream signaling cascade that leads to injury repair. [8]
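A minimal scheme collecting the phosphorylation and substrate-activation reactions described above:

$$\text{UDP} + \text{ATP} \rightleftharpoons \text{UTP} + \text{ADP}$$
$$\text{glucose 1-phosphate} + \text{UTP} \longrightarrow \text{UDP-glucose} + \text{PP}_i$$

In the second reaction, the released pyrophosphate (PP_i) is subsequently hydrolyzed in vivo, which is generally what pulls the activation step forward; this last detail is standard biochemistry, not stated in the text above.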
https://en.wikipedia.org/wiki/Uridine_triphosphate
A urinary anti-infective agent, also known as a urinary antiseptic, is a medication that can eliminate the microorganisms causing urinary tract infection (UTI). UTIs can be categorized into two primary types: cystitis, which refers to lower urinary tract or bladder infection, and pyelonephritis, which indicates upper urinary tract or kidney infection. [1] Escherichia coli (E. coli) is the predominant microbial trigger of UTIs, accounting for 75% to 95% of reported cases. Other pathogens such as Proteus mirabilis, Klebsiella pneumoniae, and Staphylococcus saprophyticus can also cause UTIs. [2] [3] The use of antimicrobial therapy to treat UTIs started in the 20th century. Nitrofurantoin, trimethoprim-sulfamethoxazole (TMP/SMX), fosfomycin, and pivmecillinam are currently the first-line agents for empiric therapy of simple cystitis. [4] The choice of empiric antimicrobial therapy for pyelonephritis, on the other hand, depends on the severity of illness, specific host factors, and the presence of resistant bacteria. Ceftriaxone is often considered for parenteral treatment, while oral or parenteral fluoroquinolones, such as levofloxacin and ciprofloxacin, are suitable alternatives for treating pyelonephritis. [5] Antimicrobial therapy should be tailored to the individual, considering factors like the severity of illness, specific host factors, and pathogen resistance in the local community. [1]

Urinary antiseptics are medications that target bacteria in the urinary tract. [6] They can be divided into two groups: bactericidal agents and bacteriostatic agents. These antiseptics help resolve infections, and thereby eliminate UTI symptoms, through their action on microorganisms. [7] [8]

Nitrofurantoin is regarded as the first-line agent for simple cystitis, with an efficacy rate ranging from 88% to 92%. [9] It can also be used as a prophylactic agent to prevent long-term UTIs. [10] This antibacterial medication is effective against both gram-positive and gram-negative bacteria. [11] Nitrofurantoin exhibits its bactericidal activity through various mechanisms, including inhibiting ribosomal translation, causing bacterial DNA damage, and interfering with the citric acid cycle; however, the specific role of each mechanism remains to be further explored. [9] [11] When nitrofurantoin is metabolized, it converts into a reactive intermediate that attacks bacterial ribosomes, inhibiting bacterial protein synthesis. [9] [11] This medication is typically taken orally and has minimal systemic absorption, reducing potential side effects. [12] Common adverse reactions associated with nitrofurantoin include brown urine discoloration, nausea, vomiting, loss of appetite, rash, and peripheral neuropathy. [13]

Fosfomycin is a phosphonic acid bactericidal agent. It is commonly used as a first-line treatment for acute simple cystitis, demonstrating a 91% cure rate. [4] [9] It is administered orally as a single dose; in more complicated UTIs, the dose is repeated every three days to achieve successful eradication. [9] The bactericidal effect of fosfomycin is attributed to its capability to inhibit bacterial cell wall synthesis by inactivating an enzyme called pyruvyl transferase, which is responsible for microbial cell wall synthesis. [9] Fosfomycin acts against gram-positive and gram-negative bacteria. Administration of fosfomycin may lead to side effects such as headache, dizziness, nausea, vomiting, and abdominal cramps. [13]

Beta-lactam antibiotics are often considered a second-line option for treating UTIs due to their lower effectiveness compared to other antibiotics and their potential adverse effects. [14] [15] Commonly used beta-lactam antibiotics for UTIs include cephalosporins and penicillins. By binding to penicillin-binding proteins through their beta-lactam rings, beta-lactam antibiotics disrupt the normal function of these proteins, inhibiting bacterial cell wall synthesis and ultimately resulting in cell death. [16] Cephalosporins are a subclass of the beta-lactam family with broad-spectrum activity against gram-positive and gram-negative bacteria. [12] They are categorized into five generations. [16] First- and third-generation cephalosporins, like cefalexin and ceftriaxone, are more commonly used in clinical practice. [17] Common adverse effects associated with cephalosporins include hypersensitivity, rash, anaphylaxis, and seizures. [12] Penicillins are another widely used subclass that effectively targets various bacteria. [18] However, penicillin is not regarded as a first-line treatment for uncomplicated cystitis because of the high prevalence of penicillin-resistant E. coli strains. [12] Within the penicillin class, pivmecillinam is considered a first-line empiric treatment for acute cystitis due to its wide spectrum of activity against gram-negative bacteria and its specific efficacy in the urinary tract. It has consistently demonstrated a high cure rate of over 85% for UTIs and a low resistance rate among E. coli strains. [4] [19] [20] The amoxicillin-clavulanate combination, in which clavulanate enhances the effectiveness of amoxicillin, is often used as an alternative for cystitis treatment when other options cannot be used. [21]

Fluoroquinolones are a class of antimicrobial agents known for their high efficacy and broad-spectrum activity against aerobic gram-positive and gram-negative bacteria. [12] [22] These potent antibiotics exert their bactericidal effects by selectively inhibiting the activity of type II DNA topoisomerases, which effectively halts the replication of bacterial DNA, leading to bacterial death. [22] Among the fluoroquinolones, ciprofloxacin and levofloxacin are used more frequently for the treatment of UTIs. These agents are well absorbed orally and achieve significant concentrations in urine and various tissues. [12] However, fluoroquinolone administration carries a risk of gastrointestinal symptoms, confusion, hypersensitivity, tendinopathy, and neuropathy. [23] Additionally, the extensive use of fluoroquinolones has contributed to the prevalence of antimicrobial resistance in some areas. As a result, fluoroquinolones are generally reserved for more serious UTIs or for cases with no better urinary anti-infective options. [23]

Sulfonamide is a bacteriostatic agent that competitively inhibits the bacterial enzyme dihydropteroate synthase. By acting as a substrate analog of para-aminobenzoic acid, sulfonamide inhibits folic acid production. [24] TMP/SMX is a combination of two antibacterial agents that work synergistically to combat a wide range of urinary tract pathogens. [25] TMP/SMX is commonly used due to its ability to achieve high concentrations in urinary tract tissues and urine. This antibiotic combination demonstrates notable efficacy in both the treatment and prophylaxis of recurrent urinary tract infections. [12] Common adverse effects include nausea, vomiting, rash, pruritus, and photosensitivity. [26]

Kidney disease can affect drug elimination, absorption, and distribution in the body, leading to altered serum drug concentrations. This can increase the risk of drug toxicity or suboptimal therapeutic effects. As a result, dosage adjustments are necessary for patients who fail to achieve the desired therapeutic serum drug levels. [27] The choice of urinary anti-infective agents for patients with renal dysfunction is generally similar to that for individuals with normal kidney function. However, in cases where the patient's glomerular filtration rate (GFR) decreases to less than 20 mL/min, dosage adjustment is necessary because achieving the desired therapeutic serum drug levels becomes challenging in such patients. [28] Some drugs need to be used with caution in patients with renal dysfunction. The use of nitrofurantoin is contraindicated in patients with an estimated GFR of less than 30 mL/min/1.73 m2, as drug accumulation can lead to increased side effects and impaired recovery of the urinary tract, increasing the risk of treatment failure. [29] The use of TMP/SMX also raises concerns in patients with kidney disease. In patients with creatinine clearance less than 50 mL/min, the urine concentrations of SMX may decrease to subtherapeutic levels. Therefore, in patients with low creatinine clearance, it is recommended to prescribe a reduced dosage of TMP alone. [30]

Pregnant women with UTIs are at a higher risk of experiencing recurrent bacteriuria and developing pyelonephritis compared to non-pregnant individuals. [31] Untreated UTIs during pregnancy can lead to adverse outcomes, including preterm birth and low-birth-weight infants. [32] [33] Antimicrobial treatment should be adjusted for UTIs in pregnant women to avoid potential side effects to the fetus. [34] For acute cystitis and pyelonephritis in pregnant women, empiric antibiotic treatment is often initiated. Commonly used antibiotics for uncomplicated cystitis include amoxicillin-clavulanate and fosfomycin, while parenteral beta-lactams are preferred for acute pyelonephritis. These options are chosen because they are considered safer in pregnancy and have a relatively broad spectrum of activity. Typically, an antimicrobial course of five to seven days is given; this duration is chosen to minimize fetal exposure to antimicrobials while ensuring optimal treatment outcomes. [31] The type of urinary anti-infective agent should be carefully chosen for pregnant women with UTIs due to the potential impact on fetal development. Penicillins, cephalosporins, and fosfomycin are safe options during pregnancy. [35] Nitrofurantoin is typically avoided during the first trimester due to uncertain associations with congenital anomalies. [36] TMP/SMX should also be avoided, as it may be associated with impaired folate metabolism, which increases the risk of neural tube defects. [37] [38] However, when all alternative antibiotics are contraindicated, nitrofurantoin and TMP/SMX may be used as a last resort despite the risk to the fetus. [39] Fluoroquinolones should be avoided during pregnancy, as they are associated with bone and cartilage toxicity in developing fetuses. [40] [41] [42]

Urinary tract infection in pediatric patients is a significant clinical issue, affecting approximately 7% of febrile infants and children. [43] If left untreated, the infection can ascend from the bladder to the kidneys, resulting in acute pyelonephritis, which can lead to hypertension, kidney scarring, and end-stage kidney disease. [44] The choice of urinary anti-infective agents used in pediatric patients and the duration of therapy depend on the type of UTI. It is important to note that the dosage of antibiotics used in children is typically weight-dependent. Generally, oral or parenteral cephalosporins are recommended as the first-line agents for children older than two months. [45] [46] Second-line therapy should be considered for patients who respond poorly to first-line treatment; alternative choices include amoxicillin-clavulanate, nitrofurantoin, TMP/SMX, and ciprofloxacin. [44] For the treatment of simple cystitis in children, a five-day oral course of cephalexin is the preferred choice. For children with suspected pyelonephritis, a ten-day treatment regimen is recommended; in such cases, a third-generation cephalosporin, such as cefdinir, is suggested as an appropriate option. If second-line therapy is initiated in pediatric patients with suspected pyelonephritis, ciprofloxacin should be the preferred option among the four alternatives: nitrofurantoin may not be adequate for treating upper urinary tract infections, while TMP/SMX and amoxicillin-clavulanate should be used with caution due to the risk of kidney scarring in these patients. [44] The choice of urinary anti-infective agents in pediatric patients may also differ from that in adults due to the potential harm some agents can cause to children. For example, the systemic use of fluoroquinolones is not appropriate in pediatric patients due to the potential risk of musculoskeletal toxicity. [47]

The discovery of antimicrobial agents contributed significantly to UTI management during the 20th century. Nitrofurantoin emerged as the first practical and safe urinary antimicrobial agent, but it had a limited spectrum of activity. [48] Subsequently, in the 1970s, beta-lactam antibiotics and TMP/SMX became available for UTI therapy. [48] Antimicrobial resistance to these agents developed due to their widespread and extensive usage, which restricted their clinical efficacy in UTI management. Fluoroquinolones emerged during the 1980s and were recommended as an alternative when resistance to TMP/SMX reaches 10% or higher. [48] The evolving landscape of drug resistance will continue to influence the development and application of antimicrobial agents in UTI therapy. [49]
https://en.wikipedia.org/wiki/Urinary_anti-infective_agent
Specific gravity, in the context of clinical pathology, is a urinalysis parameter commonly used in the evaluation of kidney function and can aid in the diagnosis of various renal diseases. One of the main roles of the kidneys in humans and other mammals is to aid in the clearance of various water-soluble molecules, including toxins, toxicants, and metabolic waste. The body excretes some of these waste molecules via urination, and the role of the kidney is to concentrate the urine, such that waste molecules can be excreted with minimal loss of water and nutrients. The concentration of the excreted molecules determines the urine's specific gravity. In adult humans, normal specific gravity values range from 1.010 to 1.030.

Increases in specific gravity (hypersthenuria, i.e. increased concentration of solutes in the urine) may be associated with dehydration, diarrhea, emesis, excessive sweating, urinary tract/bladder infection, glucosuria, renal artery stenosis, hepatorenal syndrome, decreased blood flow to the kidney (especially as a result of heart failure), and an excess of antidiuretic hormone caused by the syndrome of inappropriate antidiuretic hormone secretion. [1] A specific gravity greater than 1.035 is consistent with frank dehydration. [2] In neonates, normal urine specific gravity is 1.003. Hypovolemic patients usually have a specific gravity >1.015. Decreased specific gravity (hyposthenuria, i.e. decreased concentration of solutes in urine) may be associated with renal failure, pyelonephritis, diabetes insipidus, acute tubular necrosis, interstitial nephritis, and excessive fluid intake (e.g., psychogenic polydipsia). [3] [4] Osmolality is normally used for more detailed analysis, but urine specific gravity (USG) remains popular for its convenience. [5]
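As an illustration of how the reference values quoted above partition specific-gravity readings, here is a minimal sketch; the cut-offs are taken from this article, and the labels are illustrative only:

# Classify a urine specific gravity reading using the adult reference
# range quoted above (1.010-1.030) and the frank-dehydration cut-off (>1.035).
def classify_usg(sg: float) -> str:
    if sg < 1.010:
        return "hyposthenuria (decreased solute concentration)"
    if sg <= 1.030:
        return "within the usual adult range"
    if sg <= 1.035:
        return "hypersthenuria (increased solute concentration)"
    return "hypersthenuria, consistent with frank dehydration"

print(classify_usg(1.003))  # hyposthenuria (though normal for a neonate, per the text)
print(classify_usg(1.020))  # within the usual adult range
print(classify_usg(1.040))  # hypersthenuria, consistent with frank dehydration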
https://en.wikipedia.org/wiki/Urine_specific_gravity
The Urmetazoan is the hypothetical last common ancestor of all animals. The name derives from metazoa, an old biological term for animals. It is universally accepted to have been a multicellular heterotroph, with the novelties of a germline and oogamy, an extracellular matrix (ECM) and basement membrane, cell-cell and cell-ECM adhesions and signaling pathways, collagen IV and fibrillar collagen, different cell types (as well as expanded gene and protein families), spatial regulation and a complex developmental plan, and relegated unicellular stages. [1] All animals are posited to have evolved from a flagellated eukaryote. Their closest known living relatives are the choanoflagellates, collared flagellates whose cell morphology is similar to the choanocyte cells of certain sponges. Molecular studies place animals in a supergroup called the opisthokonts, which also includes the choanoflagellates, fungi, and a few small parasitic protists. The name comes from the posterior location of the flagellum in motile cells, such as most animal spermatozoa, whereas other eukaryotes tend to have anterior flagella instead. Several different hypotheses for the animals' last common ancestor have been suggested.
https://en.wikipedia.org/wiki/Urmetazoan
Uroguanylin is a 16-amino-acid peptide that is secreted by enterochromaffin cells in the duodenum and proximal small intestine. It acts as an agonist of the guanylyl cyclase receptor guanylate cyclase 2C (GC-C) and regulates electrolyte and water transport in intestinal and renal epithelia. By agonizing this guanylyl cyclase receptor, uroguanylin and guanylin cause intestinal secretion of chloride and bicarbonate to dramatically increase; this process is aided by the second messenger cGMP. [1] Its sequence is H-Asn-Asp-Asp-Cys(1)-Glu-Leu-Cys(2)-Val-Asn-Val-Ala-Cys(1)-Thr-Gly-Cys(2)-Leu-OH. In humans, the uroguanylin peptide is encoded by the GUCA2B gene. [2] [3] Uroguanylin may be involved in appetite and perceptions of 'fullness' after eating meals, as suggested by a study in mice. [4]
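The Cys(1)/Cys(2) labels in the sequence above indicate which cysteines pair into the peptide's two disulfide bridges. A small sketch that recovers the pairing from the 16-residue sequence (one-letter amino acid codes; positions are 1-based):

# Uroguanylin sequence from the text, in one-letter codes:
# Asn-Asp-Asp-Cys-Glu-Leu-Cys-Val-Asn-Val-Ala-Cys-Thr-Gly-Cys-Leu
sequence = "NDDCELCVNVACTGCL"

cys_positions = [i + 1 for i, aa in enumerate(sequence) if aa == "C"]
print(cys_positions)  # [4, 7, 12, 15]

# Cys(1)/Cys(2) labels from the text, keyed by residue position:
labels = {4: 1, 7: 2, 12: 1, 15: 2}
bridges = {}
for pos, lab in labels.items():
    bridges.setdefault(lab, []).append(pos)
print(bridges)  # {1: [4, 12], 2: [7, 15]}, i.e. bridges Cys4-Cys12 and Cys7-Cys15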
https://en.wikipedia.org/wiki/Uroguanylin
Uropods, in immunology, refer to the hind part of polarized cells during cell migration, which stabilizes and moves the cell. Polarized leukocytes move using amoeboid cell migration mechanisms, with a small leading edge, a main cell body, and a posterior uropod protrusion. [1] [2] Cytoskeleton contraction and extension, controlled by various polarized signals, helps propel the cell body forward. [1] [3] [4] [5]

Leukocyte polarization is an important requirement for migration, activation, and apoptosis in the adaptive and innate immune systems; most leukocytes, including monocytes, granulocytes, and T and B lymphocytes, migrate to and from primary and secondary lymphoid organs and tissues to initiate immune responses to pathogens. [1] [2] [3] [6] Amoeboid cell migration mechanisms enable rapid movement that requires neither strong adhesion to tissue nor damage to surrounding cells, in contrast to other types of cell migration. [1] [2] The cell is also able to interact with and integrate environmental signals so it can quickly find and follow chemical signals left by other cells or pathogens. [2] [1]

Amoeboid movement generally consists of four main, cyclically repeated stages. After receptors on the cell recognize extracellular signals, cell contents are polarized to create different front and rear environments. Adhesion forces between the substrate and cell are already present in the form of integrin/ICAM binding between cells. The uropod protrusion extends from the cell body due to actin polymerization and actomyosin extension, as cellular signals interact with cell and membrane contents. Actomyosin contraction then pushes the cell forward by squeezing the cell contents in the direction of cell movement, prompting release of adhesion forces between the cell and environment and resulting in an overall change in position towards the extracellular signals. [4] [5] [6] These cyclic steps ensure fast movement towards a specific stimulus, such as pathogenic proteins or other signals.

The uropod protrudes backward from the nucleus and main cell body and contains specific organelles, densely packed adhesion and signaling proteins, and cytoskeletal proteins. [5] [4] Several cell organelles are present in the rear of the cell to aid in quick and efficient movement, including the microtubule-organizing center, the Golgi apparatus, and the endoplasmic reticulum. [5] [2] Mitochondria also localize near the uropod to efficiently deliver ATP to ATP-dependent actomyosin contraction. [5] [2] This redistribution of cell contents towards polarized structures is also important for cell activation, cell communication, and apoptosis, and thus uropod formation plays a crucial part in these functions. [5] [3]

Though research is ongoing, many cell signals and mechanisms are known to play a part in uropod formation and retraction. In leukocytes, polarized RhoA signaling regulates uropod formation and retraction, whereas CDC42 signaling acts in the leading-edge pseudopods. These enzymes, both in the Rho family, interact with other factors such as GEFs, GAPs, myosin II, and Rac proteins to control front and rear cytoskeletal elements and create the cycle of movement important to cell migration. [5] [4] [1] Cyclic GMP and AMP have been shown to affect uropod formation and are generally important for cell polarization and chemotaxis. [5]

Uropod membranes generally have a high density of CD43 and CD44 and adhesion receptors (ICAM-1, ICAM-3, β1 integrins, and ERM adaptor proteins). [5] [2] [1] These receptors mediate cell-matrix and cell-cell interactions during migration and have an anchoring function, which serves to steady the leukocyte and interact with tissue cells. [2] [6] [5] Lipid rafts segregated to the uropod and leading edge are also known to aid actomyosin activity. [5]
https://en.wikipedia.org/wiki/Uropod_(immunology)
Uroporphyrinogens are cyclic tetrapyrroles with four propionic acid groups ("P" groups) and four acetic acid groups ("A" groups). There are four forms, uroporphyrinogens I–IV, which vary based upon the arrangement of the "P" and "A" groups around the ring.
https://en.wikipedia.org/wiki/Uroporphyrinogen
The Ursa Major (lit. Great Bear) at Black Thunder Coal Mine, Wyoming, is the largest dragline excavator currently in use in North America and the third largest ever built. [2] [3] It is a Bucyrus-Erie 2570WS model and cost US$50 million. The Ursa Major was one of five large walking draglines operated at Black Thunder; the next two largest in the dragline fleet were Thor, a B-E 1570W with a 97.5-metre (320 ft) boom and a 69-cubic-metre (2,400 cu ft) bucket, and Walking Stick, a B-E 1300W with a 92-metre (302 ft) boom and a 34-cubic-metre (1,200 cu ft) bucket. [2] The Ursa Major's bucket is 160 cubic yards (120 m3), and it has a 360-foot (110 m) boom. It weighs 14.7 million pounds (6,700 t). [4] [5]

Shortly before the 1999 scrapping of Big Muskie, the largest dragline ever built, construction of another ultraheavy dragline excavator commenced. Although not as large as Big Muskie, the Ursa Major was still a large and substantial excavator. It first began operation around early 2001, when its newly cast 160-cubic-yard (120 m3) bucket was delivered to Black Thunder Mine. To deliver the 165,000-pound (82.5-ton) bucket, Bucyrus had to obtain special permits for an overweight and oversized load before it could be transported. The company also had to check with the power company to make sure the load would not hit any power lines on the way to the mine. [6]
https://en.wikipedia.org/wiki/Ursa_Major_(excavator)
In statistical mechanics, an Ursell function or connected correlation function is a cumulant of a random variable. It can often be obtained by summing over connected Feynman diagrams (the sum over all Feynman diagrams gives the correlation functions). The Ursell function was named after Harold Ursell, who introduced it in 1927.

If X is a random variable, the moments s_n and cumulants (same as the Ursell functions) u_n are functions of X related by the exponential formula

$$\operatorname{E}(\exp(zX)) = \sum_{n} \frac{s_n}{n!} z^n = \exp\left(\sum_{n} \frac{u_n}{n!} z^n\right)$$

(where E is the expectation). The Ursell functions for multivariate random variables are defined analogously to the above, and in the same way as multivariate cumulants. [1] The Ursell functions of a single random variable X are obtained from these by setting X = X_1 = … = X_n. The first few are given by

$$u_1(X_1) = \operatorname{E}(X_1)$$
$$u_2(X_1, X_2) = \operatorname{E}(X_1 X_2) - \operatorname{E}(X_1)\operatorname{E}(X_2)$$
$$u_3(X_1, X_2, X_3) = \operatorname{E}(X_1 X_2 X_3) - \operatorname{E}(X_1)\operatorname{E}(X_2 X_3) - \operatorname{E}(X_2)\operatorname{E}(X_1 X_3) - \operatorname{E}(X_3)\operatorname{E}(X_1 X_2) + 2\operatorname{E}(X_1)\operatorname{E}(X_2)\operatorname{E}(X_3)$$

Percus (1975) showed that the Ursell functions, considered as multilinear functions of several random variables, are uniquely determined up to a constant by the fact that they vanish whenever the variables X_i can be divided into two nonempty independent sets.
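A minimal numerical sketch of the first Ursell functions, estimated from samples via the moment formulas above (numpy-based; function and variable names are illustrative). As in the Percus characterization, the functions vanish when the arguments split into independent sets:

import numpy as np

def u2(x, y):
    # u2(X, Y) = E[XY] - E[X] E[Y]
    return np.mean(x * y) - np.mean(x) * np.mean(y)

def u3(x, y, z):
    # u3 = E[XYZ] - E[X]E[YZ] - E[Y]E[XZ] - E[Z]E[XY] + 2 E[X]E[Y]E[Z]
    return (np.mean(x * y * z)
            - np.mean(x) * np.mean(y * z)
            - np.mean(y) * np.mean(x * z)
            - np.mean(z) * np.mean(x * y)
            + 2 * np.mean(x) * np.mean(y) * np.mean(z))

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = rng.normal(size=100_000)  # independent of x
print(u2(x, x))    # ~1: the variance of x
print(u2(x, y))    # ~0: vanishes on independent variables
print(u3(x, x, y)) # ~0: {x, x} and {y} form two independent sets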
https://en.wikipedia.org/wiki/Ursell_function
In fluid dynamics, the Ursell number indicates the nonlinearity of long surface gravity waves on a fluid layer. This dimensionless parameter is named after Fritz Ursell, who discussed its significance in 1953. [1] The Ursell number is derived from the Stokes wave expansion, a perturbation series for nonlinear periodic waves, in the long-wave limit of shallow water, when the wavelength is much larger than the water depth. Then the Ursell number U is defined as

$$U = \frac{H\,\lambda^2}{h^3} = \frac{H}{h}\left(\frac{\lambda}{h}\right)^2,$$

which is, apart from a constant 3 / (32π²), the ratio of the amplitudes of the second-order to the first-order term in the free surface elevation. [2] The parameters used are: H, the wave height; h, the mean water depth; and λ, the wavelength. So the Ursell parameter U is the relative wave height H/h times the relative wavelength λ/h squared. For long waves (λ ≫ h) with small Ursell number, U ≪ 32π²/3 ≈ 100, [3] linear wave theory is applicable. Otherwise (and most often) a non-linear theory for fairly long waves (λ > 7h) [4] – like the Korteweg–de Vries equation or Boussinesq equations – has to be used. The parameter, with different normalisation, was already introduced by George Gabriel Stokes in his historical paper on surface gravity waves of 1847. [5]
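A minimal sketch computing the Ursell number from the three parameters above and applying the quoted rule of thumb (the numerical values are illustrative only):

import math

def ursell_number(H, wavelength, h):
    # U = H * lambda^2 / h^3, with H the wave height, lambda the wavelength,
    # and h the mean water depth, all in the same units.
    return H * wavelength**2 / h**3

H, lam, h = 0.5, 50.0, 5.0  # metres; a long wave (lam >> h)
U = ursell_number(H, lam, h)
print(U)                        # 10.0
print(U < 32 * math.pi**2 / 3)  # True: linear wave theory is applicable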
https://en.wikipedia.org/wiki/Ursell_number
UrtheCast was a Canadian company that specialized in satellite imaging, data services, and geo-analytics. The company operated two cameras on the International Space Station (ISS) and two satellites in low Earth orbit. [2] UrtheCast also planned to launch two satellite constellations, OptiSAR and UrtheDaily, to provide global coverage and high-resolution imagery of the Earth. [3] However, the company faced financial difficulties and filed for creditor protection in 2020. A new start-up, EarthDaily Analytics, emerged from UrtheCast's insolvency in 2021. [4]

UrtheCast was founded in 2010 by Wade Larson and Scott Larson, with the vision of providing live video streaming of the Earth from space. The company partnered with the Russian Federal Space Agency (Roscosmos) to install two cameras on the ISS: a medium-resolution camera (MRC) and a high-resolution camera (HRC). The MRC could capture objects about 6 meters across or larger, while the HRC could capture objects of 1 meter across. [5] The cameras were launched in 2013 and became operational in 2014. [6] In 2015, UrtheCast acquired Deimos Imaging, a Spain-based Earth observation company, and its two satellites: Deimos-1 and Deimos-2. [7] Deimos-1 had a resolution of 22 meters per pixel and could cover 650,000 square kilometers per day. Deimos-2 had a resolution of 75 centimeters per pixel and could cover 150,000 square kilometers per day. UrtheCast also announced plans to launch a 16-satellite constellation called OptiSAR, which would combine synthetic aperture radar (SAR) and optical sensors to provide all-weather and day-night imaging capabilities. [8] In 2016, UrtheCast announced another satellite constellation project called UrtheDaily, which would consist of eight satellites equipped with multispectral sensors to capture images of the entire Earth's landmass every day at a resolution of 5 meters per pixel. [9] The company contracted Surrey Satellite Technology Ltd (SSTL) to manufacture the satellites. [10] UrtheCast received funding from the Canadian government's Strategic Aerospace & Defense Initiative (SADI) program for the development of OptiSAR. [11] In 2018, UrtheCast acquired Geosys, an agricultural data analytics company, from Land O'Lakes, Inc. Geosys provided crop monitoring and yield forecasting services using satellite imagery and weather data. [12]

In September 2020, UrtheCast filed for creditor protection under the Companies' Creditors Arrangement Act (CCAA) in Canada and sought similar protection in the U.S. under Chapter 15 of the Bankruptcy Code. The company stated that it had been unable to secure sufficient financing or find a buyer for its assets amid the COVID-19 pandemic. [13] In February 2021, UrtheCast announced that it had entered into an asset purchase agreement with a consortium led by Antarctica Capital Management LLC, a U.S.-based private equity firm. The consortium agreed to acquire UrtheCast's Deimos Imaging business and related assets for US$3.2 million. The transaction was approved by the Canadian court overseeing UrtheCast's CCAA proceedings and closed on February 26, 2021. [14] As part of the deal, Antarctica Capital formed a new company called EarthDaily Analytics, which took over UrtheCast's Deimos Imaging business and its UrtheDaily satellite project. EarthDaily Analytics also retained most of UrtheCast's employees and customers. The new company announced that it would focus on providing optical imagery and data products for the agriculture industry and other markets. [15] [16]
https://en.wikipedia.org/wiki/Urthecast
The Urucu Oil Province (Portuguese: Provincia Petroleira de Urucu) is a Brazilian oil and natural gas field located in the municipality of Coari, in the interior of the state of Amazonas. It is the largest proven onshore oil and natural gas reserve in Brazil, according to Petrobras. [1] The Urucu field was discovered on October 12, 1986, by Petrobras, although searches for oil in the Amazon date back to 1917. [2] [3] It is the first commercially viable reserve discovered in the Amazon region. [1] [4]

The Amazon region, and in particular the Solimões Basin, is the largest onshore natural gas basin and the fourth largest oil basin in Brazil in terms of reserves and production. The state of Amazonas, which includes the Solimões Basin and the Amazonas Basin, concentrates 80% of Brazil's proven onshore gas reserves and 12% of proven onshore oil reserves. Today, the Solimões Basin produces an average of 40,000 barrels per day of oil and 11 million cubic meters per day of gas. The oil is of excellent quality and very light. The gas is quite wet, that is, it contains a high proportion of condensate and liquefied petroleum gas (LPG). [5] Despite the logistical difficulty of operating in the Amazon, the cost of extracting oil and natural gas from Urucu is among the lowest in Brazil. According to Petrobras, the oil province is profitable and important for the development of the local economy. [6]

Urucu has a set of pipelines that allow its production to be transported. Since 2009, the Urucu-Manaus pipeline has been operating, connecting the oil province to the capital of Amazonas, with a total length of 663 kilometers. The pipeline is capable of transporting up to 5.5 million cubic meters per day of natural gas from Urucu to the Amazonas capital. In addition, the field accounts for a daily production of approximately 1.2 tons of LPG, a volume capable of supplying seven states in the North region and others in the Northeast. [6] In the beginning, before the gas pipeline existed, it took more than a week to ship production on small ferries down the Urucu River to the city of Coari, on the banks of the Solimões River, and from there, on larger ferries, to the Manaus Refinery (Refinaria de Manaus – Reman). [6]

The municipality of Coari had the second highest GDP in Amazonas in 2018, with R$2 billion and a 2% share. In 2017, Coari was in fourth position among the state's municipalities; it moved to second position in 2018, with the GDP increase driven by a strong boost from the extractive industry. [7]
https://en.wikipedia.org/wiki/Urucu_Oil_Province
The Uruz Project had the goal of breeding back the extinct aurochs (Bos p. primigenius); Uruz is the old Germanic word for aurochs. The Uruz Project was initiated in 2013 by the True Nature Foundation [1] and presented at TEDx DeExtinction, a day-long conference [2] organised by the Long Now Foundation with the support of TED and in partnership with the National Geographic Society, [3] to showcase the prospects of bringing extinct species back to life. The de-extinction movement itself is spearheaded by the Long Now Foundation.

Technically, Bos primigenius is not wholly extinct. The wild subspecies B. p. primigenius, indicus, and africanus are, but the species is still represented by domestic cattle. Most, or all, of the relevant aurochs characteristics, and therefore the underlying DNA, needed to "breed back" an aurochs-like cattle type can be found in B. p. taurus. Domestic cattle originated in the Middle East, and there has also been introgression of European aurochs into domestic cattle in ancient times. [4] The Uruz Project's goal is to collect all relevant data and reunite scattered aurochs characteristics, and thus DNA, in one animal. Ecological restoration projects cannot be complete without bringing back those key elements that help shape and reshape wild landscapes.

The European aurochs (Bos p. primigenius) was a large and long-horned wild bovine herbivore that ranged from the most western tip of Europe to Siberia in present-day Russia. Aurochs have played a major role in human history. They are often depicted in rock art, including the famous, well-conserved cave paintings made by Cro-Magnon people in the Lascaux Caves, estimated to be 17,300 years old. Aurochs and other large animals portrayed in Paleolithic cave art were often hunted for food. Hunting and habitat loss caused by humans, including agricultural land conversion, caused the aurochs to go extinct in 1627, when the last individual, a female, died in Poland's Jaktorów Forest. [5]

The aurochs is one of the keystone species that is missing in Europe. Its grazing and browsing patterns, trampling of the soil, and faeces had a profound impact on the vegetation and landscapes it inhabited. Grazing results in a greater variety of plant species, structures, and ecological niches in a landscape, benefiting both biodiversity and production. [6] Megaherbivores like the aurochs also controlled vegetation development. [7]

The Uruz Project aims to breed an aurochs-like breed of cattle from a limited number of carefully selected primitive cattle breeds with known aurochs characteristics. The project uses Sayaguesa cattle, Maremmana primitiva or Hungarian Grey cattle, Chianina, and Watusi. The genome of the aurochs has been completely reconstructed and serves as the baseline for the breeding-back effort. [4] [8]
https://en.wikipedia.org/wiki/Uruz_Project
In topology, Urysohn's lemma is a lemma that states that a topological space is normal if and only if any two disjoint closed subsets can be separated by a continuous function. [1] Urysohn's lemma is commonly used to construct continuous functions with various properties on normal spaces. It is widely applicable since all metric spaces and all compact Hausdorff spaces are normal. The lemma is generalised by (and usually used in the proof of) the Tietze extension theorem. The lemma is named after the mathematician Pavel Samuilovich Urysohn.

Two subsets A and B of a topological space X are said to be separated by neighbourhoods if there are neighbourhoods U of A and V of B that are disjoint. In particular, A and B are necessarily disjoint. Two subsets A and B are said to be separated by a continuous function if there exists a continuous function f : X → [0, 1] from X into the unit interval [0, 1] such that f(a) = 0 for all a ∈ A and f(b) = 1 for all b ∈ B. Any such function is called a Urysohn function for A and B. In particular, A and B are necessarily disjoint. It follows that if two subsets A and B are separated by a function then so are their closures. It also follows that if two subsets A and B are separated by a function then A and B are separated by neighbourhoods.

A normal space is a topological space in which any two disjoint closed sets can be separated by neighbourhoods. Urysohn's lemma states that a topological space is normal if and only if any two disjoint closed sets can be separated by a continuous function. The sets A and B need not be precisely separated by f, i.e., it is neither necessary nor guaranteed that f(x) ≠ 0 and f(x) ≠ 1 for x outside A and B. A topological space X in which every two disjoint closed subsets A and B are precisely separated by a continuous function is perfectly normal. Urysohn's lemma has led to the formulation of other topological properties such as the 'Tychonoff property' and 'completely Hausdorff spaces'. For example, a corollary of the lemma is that normal T1 spaces are Tychonoff.

A topological space X is normal if and only if, for any two non-empty closed disjoint subsets A and B of X, there exists a continuous map f : X → [0, 1] such that f(A) = {0} and f(B) = {1}. The proof proceeds by repeatedly applying the following alternate characterization of normality. If X is a normal space, Z is an open subset of X, and Y ⊆ Z is closed, then there exists an open U and a closed V such that Y ⊆ U ⊆ V ⊆ Z.

Let A and B be disjoint closed subsets of X. The main idea of the proof is to repeatedly apply this characterization of normality to A and B∁, continuing with the new sets built on every step. The sets we build are indexed by dyadic fractions. For every dyadic fraction r ∈ (0, 1), we construct an open subset U(r) and a closed subset V(r) of X such that: A ⊆ U(r) and V(r) ⊆ B∁ for all r; U(r) ⊆ V(r) for all r; and V(r) ⊆ U(s) whenever r < s. Intuitively, the sets U(r) and V(r) expand outwards in layers from A, so that A = V(0) ⊆ U(r) ⊆ V(r) ⊆ U(s) ⊆ V(s) ⊆ U(1) = B∁ for r < s. This construction proceeds by mathematical induction. For the base step, we define two extra sets U(1) = B∁ and V(0) = A. Now assume that n ≥ 0 and that the sets U(k/2^n) and V(k/2^n) have already been constructed for k ∈ {1, …, 2^n − 1}. Note that this is vacuously satisfied for n = 0. Since X is normal, for any a ∈ {0, 1, …, 2^n − 1}, we can find an open set U((2a + 1)/2^(n+1)) and a closed set V((2a + 1)/2^(n+1)) such that V(a/2^n) ⊆ U((2a + 1)/2^(n+1)) ⊆ V((2a + 1)/2^(n+1)) ⊆ U((a + 1)/2^n). The above three conditions are then verified.

Once we have these sets, we define f(x) = 1 if x ∉ U(r) for every r; otherwise f(x) = inf{r : x ∈ U(r)} for every x ∈ X, where inf denotes the infimum. Using the fact that the dyadic rationals are dense, it is then not too hard to show that f is continuous and has the property f(A) ⊆ {0} and f(B) ⊆ {1}. This step requires the V(r) sets in order to work.

The Mizar project has completely formalised and automatically checked a proof of Urysohn's lemma in the URYSOHN3 file.
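The construction above is for general normal spaces; in the special case of a metric space, a Urysohn function can be written down explicitly as f(x) = d(x, A) / (d(x, A) + d(x, B)), a standard fact not covered in this article. A minimal numerical sketch on the real line, with finite point sets standing in for the closed sets:

def dist_to_set(x, points):
    # Distance from x to a finite set of reals (a stand-in for d(x, S)).
    return min(abs(x - p) for p in points)

def urysohn(x, A, B):
    # f = 0 on A, f = 1 on B, and f is continuous: since A and B are
    # disjoint closed sets, the denominator never vanishes.
    dA, dB = dist_to_set(x, A), dist_to_set(x, B)
    return dA / (dA + dB)

A, B = [0.0], [1.0]         # the disjoint closed sets {0} and {1}
print(urysohn(0.0, A, B))   # 0.0
print(urysohn(1.0, A, B))   # 1.0
print(urysohn(0.25, A, B))  # 0.25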
https://en.wikipedia.org/wiki/Urysohn's_lemma
The Usa Marine Biological Institute (UMBI) (sometimes referred to as MBI-Japan, the Japanese Marine Biological Institute, the Usa Kaiyo Center, or just Usa) is one of the oldest and largest centers for phycology, marine biology research, graduate training, and public service in Japan. It is devoted to scientific research leading to MS and PhD degrees in phycology, marine biology, and related fields. It grants degrees jointly with Kochi University. UMBI is located in the village of Usa cho, Kōchi Prefecture, Japan.

The Usa Marine Biological Station was founded in 1953 as an independent research institute. [1] The Usa Fisheries Laboratory was founded in 1968. [1] In 1978, the two organizations were merged to form the Usa Marine Biological Institute. [1] In parallel, its main publication, Reports of the Usa Marine Biological Station (July 1954–1978), became the Reports of the Usa Marine Biological Institute starting with the March 1979 issue. [2] Under the directorship of Professor Masao Ohno, the institute established a Japan International Cooperation Agency (JICA) training program in marine biology, and since then a large number of foreign researchers have come to the institute to pursue short-term research projects. The current director, Professor Izumi Kinoshita, supervises and coordinates the JICA training program. In 2004, UMBI started a new graduate program, Kuroshio Sciences, jointly with Kochi University, to study the Kuroshio Current from an interdisciplinary perspective. UMBI graduate students are supported by various financial aid schemes, especially the Monbukagakusho MEXT International PhD Program. UMBI operates several crewed research vessels and vehicles, owned by Kochi University or the Japanese Government.

The Usa Marine Biological Institute is renowned for marine phycological research. Emeritus Professor Masao Ohno was the first person in Japan to use an artificial seeding method for the commercial cultivation of green algae. The institute is one of the pioneering research institutes in the world for the study of ulvophycean algae and has state-of-the-art facilities for marine phycological research. Contemporary marine phycology research at UMBI focuses on culture studies to establish life histories of ulvophycean algae (Ulva and Monostroma), tank cultivation of Ulva using deep seawater (conducted jointly with the Deep Seawater Research Center, Muroto, Japan), and the ecology of Ecklonia and Sargassum species in the Pacific Ocean.
https://en.wikipedia.org/wiki/Usa_Marine_Biological_Institute
Usability inspection is the name for a set of methods in which an evaluator inspects a user interface. This is in contrast to usability testing, where the usability of the interface is evaluated by testing it on real users. Usability inspections can generally be used early in the development process by evaluating prototypes or specifications for the system that can't be tested on users. Usability inspection methods are generally considered to be less costly to implement than testing on users. [1] Usability inspection methods include heuristic evaluation, cognitive walkthroughs, and pluralistic walkthroughs.
https://en.wikipedia.org/wiki/Usability_inspection
A usability lab is a place where usability testing is done. It is an environment where users are studied interacting with a system for the sake of evaluating the system's usability. Depending on the kind of system being evaluated, the user sits in front of a personal computer or stands in front of the system's interface, alongside a facilitator who gives the user tasks to perform. Behind a one-way mirror, a number of observers watch the interaction, make notes, and ensure the activity is recorded. Often the testing room and the observation room are not adjacent; in this case, the video and audio are transmitted over a (wireless) network and broadcast via a video monitor or projector and loudspeakers. Usually, sessions are filmed and the software logs interaction details.

Usability is defined by how effectively users can use a product, whether a brochure, application, website, software package, or video game, to achieve their goals. [1] Usability testing is a practice used within the field of user-centered design and user experience that allows designers to interact directly with users about the product and make any necessary modifications to the prototype, whether it is software, a device, or a website. The purpose of the practice is to discover any missed requirements or any feature that was assumed to be intuitive but ended up confusing new users. By testing user needs and how users interact with the product, designers are able to assess the product's capacity to meet its intended purpose. Usability labs help optimize UI designs and workflows, and help teams understand the voice of the customers and what customers really do. [2] Through in-lab sessions at a specified location, designers, stakeholders, and anyone else involved in the project observe how a customer interacts with the current prototype. To understand user needs, engineers must observe people while they are actually using computer systems and collect data on system usability. In-lab usability testing usually has small and specific sample sizes to better obtain qualitative data on the product. The participants cooperate with engineers to reveal how the user interacts with the system being tested through hands-on testing. [3] Through this process, developers are able to identify issues with the product; to aid in fixing any problems, observers pay close attention to how participants interact with it.

User experience is important to customer response in the market. The causes of failed designs and bad design decisions can usually be attributed to a lack of information. [5] A poor user experience can ruin a product launch, drive users away for good, and damage the reputation of a company. [6] Usability tests are both formal and informal attempts to gather data about how users experience interfaces (Angelo [1]), devices, software, sites, and more. Usability tests are involved in a wide range of other fields of product development.

Usability labs usually feature two rooms: one room contains the lab with the system being tested for usability and all the other necessary equipment, such as video and audio recording devices or eye motion trackers. Here, the participant is asked to come in and is given tasks to complete that test specific aspects of the product, though they are sometimes allowed to explore the product by trying out what a certain feature does. In formal labs, there is typically a second room behind a one-way mirror. This is the observation room, which allows stakeholders, designers, developers, and other parties involved in the project to observe sessions and discover that features their team found intuitive can be more complex for users than they needed to be.

Choosing participants for lab testing requires consideration; not just anyone is a suitable participant for an in-lab test. It is vital to recruit participants who are similar to the site's users. Developers and designers are not the users, so internal staff should not be used as participants unless the individual has had no involvement in the design or development of the site or product and represents a target audience. [4] It is also good practice to compensate participants for taking time out of their schedule to take part in a voluntary experiment; however, there are restrictions. For example, federal employees cannot be paid for their time. The number of users to test is also an important consideration when recruiting participants. Usability tests cost money and resources, which are often limited, especially in smaller-scale projects. One effective approach is to use five participants. "Zero users give you zero insights." [7] The moment a single user has been observed in a lab setting, insight into the product is immediately gained: features in the current design can be revisited and redesigned to fix anything that was not helping users with their experience. However, there is a limit to how many users should be considered, because "as you add more and more users, you learn less and less because you will keep seeing the same things again and again."

User research is the process of observing and understanding how people interact with different objects in everyday life. These can range anywhere from websites and software products to hardware and other gadgets.
https://en.wikipedia.org/wiki/Usability_lab
The term use error has recently been introduced to replace the commonly used terms human error and user error . The new term, which has already been adopted by international standards organizations for medical devices (see the section on use errors in health care below for references), suggests that accidents should be attributed to the circumstances, rather than to the human beings who happened to be there. The term "use error" was first used in May 1995 in an MD+DI guest editorial, "The Issue Is 'Use,' Not 'User,' Error", by William Hyman. [ 1 ] Traditionally, human errors are considered a special aspect of human factors . Accordingly, they are attributed to the human operator , or user . When taking this approach, we assume that the system design is perfect, and that the only source of use errors is the human operator. For example, the U.S. Department of Defense (DoD) HFACS [ 2 ] classifies use errors as attributed to the human operator, disregarding improper design and configuration settings, which often result in missing alarms , or in inappropriate alerting . The need for changing the term was due to a common malpractice of the stakeholders (the responsible organizations, the authorities, journalists) in cases of accidents. [ 3 ] Instead of investing in fixing the error-prone design, management attributed the error to the users. The need for the change has been pointed out by accident investigators. A mishap is typically considered to be either a use error or a force majeure. [ 8 ] In 1998, Cook, Woods and Miller presented the concept of hindsight bias , exemplified by celebrated accidents in medicine, to a workgroup on patient safety. [ 10 ] The workgroup pointed at the tendency to attribute accidents in health care to isolated human failures. They provide references to early research about the effect of knowledge of the outcome, which was unavailable beforehand, on later judgement about the processes that led up to that outcome. They explain that in looking back, we tend to oversimplify the situation that the actual practitioners faced. They conclude that focusing on hindsight knowledge prevents us from understanding the richer story: the circumstances of the human error. According to this position, the term use error is formally defined in several international standards , such as IEC 62366, ISO 14155 and ISO 14971 . ISO standards about medical devices and procedures provide examples of use errors attributed to human factors, including slips, lapses and mistakes. Practically, this means that they are attributed to the user, implying the user's accountability. The U.S. Food and Drug Administration glossary of medical devices provides an explanation of this term. [ 11 ] With this interpretation by ISO and the FDA, the term use error is actually synonymous with user error . Another approach, which distinguishes 'use errors' from 'user errors', is taken by IEC 62366, whose Annex A includes an explanation justifying the new term. This explanation complies with "The New View", which Sidney Dekker suggested as an alternative to "The Old View". This interpretation favors investigations intended to understand the situation, rather than blaming the operators. In a 2011 report draft on health IT usability, the U.S. National Institute of Standards and Technology (NIST) defines "use error" in healthcare IT this way: "Use error is a term used very specifically to refer to user interface designs that will engender users to make errors of commission or omission.
It is true that users do make errors, but many errors are due not to user error per se but due to designs that are flawed, e.g., poorly written messaging, misuse of color-coding conventions, omission of information, etc.". [ 12 ] An example of an accident due to a user error is the ecological disaster of 1967 caused by the Torrey Canyon supertanker. The accident was due to a combination of several exceptional events, the result of which was that the supertanker was heading directly for the rocks. At that point, the captain failed to change course because the steering control lever had inadvertently been set to the Control position, which disconnected the rudder from the wheel at the helm. [ 13 ] Examples of the second type are the Three Mile Island accident described above, the NYC blackout following a storm, and the chemical plant disaster in Bhopal, India ( Bhopal Disaster ). The URM Model [ 14 ] characterizes use errors in terms of the user's failure to manage a system deficiency; six categories of use errors are described in a URM document. Erik Hollnagel argues that going from an 'old' view to a 'new' view is not enough. One should go all the way to a 'no' view. This means that the notion of error, whether user error or use error, might be destructive rather than constructive. Instead, he proposes to focus on the performance variability of everyday actions, on the basis that this performance variability is both useful and necessary. In most cases the result is that things go right; in a few cases, things go wrong. But the reason is the same. [ 15 ] Hollnagel expanded on this in his writings about the efficiency–thoroughness trade-off principle [ 16 ] of Resilience Engineering, [ 17 ] and the Resilient Health Care Net. [ 18 ]
https://en.wikipedia.org/wiki/Use_error
Explosive materials are produced in numerous physical forms for use in mining, engineering, or military applications. The different physical forms and fabrication methods are grouped together in several use forms of explosives . Explosives are sometimes used in their pure forms, but most common applications transform or modify them. The most common use forms are described below. [ 1 ] Castings, or castable explosives, are explosive materials or mixtures in which at least one component can be melted at a temperature at which the other components can still be handled safely, and which are normally produced by casting or pouring the molten mixture or material into a form or use container. In modern usage, trinitrotoluene or TNT is the basic meltable explosive used in essentially all castable explosives, together with a number of other ingredients. [ 1 ] Polymer-bonded explosives, also known as plastic-bonded explosives or simply PBX , are a relatively solid and inflexible explosive form containing a powdered explosive material and a polymer (plastic) binder. These are usually carefully mixed, often with a very thin coating of the polymer onto the powder grains of the explosive material, and then hot pressed to form dense solid blocks of PBX material. There are numerous PBX explosives, mostly based on RDX , HMX , or TATB explosive materials. An extensive but by no means complete list of PBX materials is in the main Polymer-bonded explosive article. The major naming systems for PBX use LX numbers, which range from 1 to 17, and PBX system numbers, which start around 9000 and occupy numerous scattered numbers between there and 9700. PBXes are notable for their use in modern nuclear weapons . Modern US and British nuclear warheads nearly all use insensitive PBX types based only on TATB explosive, to increase safety in case of accidents. Technically known as putties , but more commonly called plastic explosives , these mixtures are a thick, flexible, moldable solid material that can be shaped and will retain that shape after forming, much like clay . Putties normally contain mostly RDX explosive, but may include some PETN (Semtex, for example). Rubberized explosives are flat sheets of solid but flexible material, a mixture of a powdered explosive (commonly RDX or PETN ) and a synthetic or natural rubber compound. Rubberized sheet explosives are commonly used for explosive welding and for various other industrial and military applications. Rubberized explosives can be cut to a specific shape, bent around solid surfaces, glued or taped in place, or simply laid on relatively flat surfaces. [ 1 ] Extrudable explosives are extremely viscous liquids, similar in properties to the silicone-based caulking materials used in construction. They are used in similar ways: stored in a container, then extruded out of a nozzle into thin cracks or holes, or along surfaces. Some extrudable explosives can then be hardened using a heat curing process; others remain a viscous fluid permanently. Binary explosives are cap-sensitive (detonatable with a standard #8 blasting cap ) two-part explosive mixtures, shipped separately and combined at the use site. Many of these mixtures are based on ammonium nitrate as an oxidizer plus a volatile fuel, but unlike ANFO (ammonium nitrate fuel oil explosive) these binaries can be detonated by blasting caps.
ANFO requires a high-explosive booster to detonate. Most binary explosives are a slurry after mixing, but some form a fluid with solid components dissolved into liquid ones. The historical but now uncommon Astrolite explosive is also a binary explosive. This category is somewhat unusual in that a single explosives researcher, Gerald Hurst , was responsible for inventing and developing most of the explosive mixtures now in use. [ 2 ] Blasting agents are explosive materials or mixtures which are not detonatable by standard #8 blasting caps . The best known blasting agent is ANFO explosive, a mixture containing primarily ammonium nitrate with a small quantity (typically around 6%) of fuel oil, most commonly diesel fuel . Other fuels and additives are used as well. While ANFO is often made on-site using fertilizer-grade ammonium nitrate, blasting agents can also be purchased in prepackaged form, usually in metal or cardboard cylinders, under a number of brand names. Dynamite is usually sold in the form of a stick roughly eight inches (20 cm) long and one inch (2.5 cm) in diameter, but other sizes also exist. Dynamite is considered a "high explosive", which means it detonates rather than deflagrates. The chief uses of dynamite were historically in construction, mining and demolition; however, newer explosives and techniques have replaced dynamite in many applications. Dynamite is still used, mainly as a bottom charge or in underwater blasting.
https://en.wikipedia.org/wiki/Use_forms_of_explosives
Chemical weapon use in the War in Iraq (2013–2017) by IS has been confirmed by the OPCW [ 1 ] and US defense officials. [ 2 ] The table below lists reported chemical weapons attacks in the Iraqi Civil War . [ N 1 ] After around 35 Kurdish soldiers were injured during fighting against Islamic State militants southwest of Erbil in August 2015, samples were taken by the OPCW in an investigation directed by the Iraqi government . In February 2016, a source at the OPCW confirmed that the samples had tested positive for mustard gas . [ 1 ] [ 23 ]
https://en.wikipedia.org/wiki/Use_of_chemical_weapons_in_the_War_in_Iraq_(2013–2017)
Various governmental agencies involved with environmental protection and with occupational safety and health have promulgated regulations limiting the allowable concentrations of gaseous pollutants in the ambient air or in emissions to the ambient air. Such regulations involve a number of different expressions of concentration. Some express the concentrations as ppmv and some express the concentrations as mg/m³, while others require adjusting or correcting the concentrations to reference conditions of moisture content, oxygen content or carbon dioxide content. This article presents a set of useful conversions and formulas for air dispersion modeling of atmospheric pollutants and for complying with the various regulations as to how to express the concentrations obtained by such modeling. [ 1 ] The conversion equations depend on the temperature at which the conversion is wanted (usually about 20 to 25 degrees Celsius ). At an ambient air pressure of 1 atmosphere (101.325 kPa ), the general equation is: C (mg/m³) = ppmv × MW ÷ (0.08205 × T) and for the reverse conversion: ppmv = C (mg/m³) × (0.08205 × T) ÷ MW where MW is the molecular weight of the pollutant (g/mol), T is the ambient temperature in kelvins, and 0.08205 is the universal gas constant in L·atm/(mol·K). Atmospheric pollutant concentrations expressed as mass per unit volume of atmospheric air (e.g., mg/m³, μg/m³, etc.) at sea level will decrease with increasing altitude because the atmospheric pressure decreases with increasing altitude. The change of atmospheric pressure with altitude can be obtained from this equation: [ 2 ] P_a = P_sl × 0.9877^a where P_sl is the sea-level pressure and a is the altitude in hundreds of meters. Given an atmospheric pollutant concentration at an atmospheric pressure of 1 atmosphere (i.e., at sea level altitude), the concentration at other altitudes can be obtained from this equation: C_a = C_sl × 0.9877^a As an example, given a concentration of 260 mg/m³ at sea level, calculate the equivalent concentration at an altitude of 1,800 meters: C_a = 260 × 0.9877^18 = 208 mg/m³ at 1,800 meters altitude. A normal cubic meter (Nm³) is the metric expression of gas volume at standard conditions and it is usually (but not always) defined as being measured at 0 °C and 1 atmosphere of pressure. A standard cubic foot (scf) is the USA expression of gas volume at standard conditions and it is often (but not always) defined as being measured at 60 °F and 1 atmosphere of pressure. There are other definitions of standard gas conditions used in the USA besides 60 °F and 1 atmosphere. That being understood: 1 Nm³ of any gas (measured at 0 °C and 1 atmosphere of absolute pressure) equals 37.326 scf of that gas (measured at 60 °F and 1 atmosphere of absolute pressure). 1 kmol of any ideal gas equals 22.414 Nm³ of that gas at 0 °C and 1 atmosphere of absolute pressure ... and 1 lbmol of any ideal gas equals 379.482 scf of that gas at 60 °F and 1 atmosphere of absolute pressure. Meteorological data includes wind speeds, which may be expressed as statute miles per hour, knots , or meters per second. Here are the conversion factors for those various expressions of wind speed: 1 m/s = 2.237 statute mile/h = 1.944 knots; 1 knot = 1.151 statute mile/h = 0.514 m/s; 1 statute mile/h = 0.869 knots = 0.447 m/s. Many environmental protection agencies have issued regulations that limit the concentration of pollutants in gaseous emissions and define the reference conditions applicable to those concentration limits. For example, such a regulation might limit the concentration of NOx to 55 ppmv in a dry combustion exhaust gas corrected to 3 volume percent O₂. As another example, a regulation might limit the concentration of particulate matter to 0.1 grain per standard cubic foot (i.e., scf) of dry exhaust gas corrected to 12 volume percent CO₂.
Environmental agencies in the USA often denote a standard cubic foot of dry gas as "dscf" or as "scfd". Likewise, a standard cubic meter of dry gas is often denoted as "dscm" or "scmd" (again, by environmental agencies in the USA). If a gaseous emission sample is analyzed and found to contain water vapor and a pollutant concentration of, say, 40 ppmv, then 40 ppmv should be designated as the "wet basis" pollutant concentration. The following equation can be used to correct the measured "wet basis" concentration to a " dry basis " concentration: [ 3 ] C_dry = C_wet ÷ (1 − w) where w is the fraction, by volume, of water vapor in the emitted gas. Thus, a wet basis concentration of 40 ppmv in a gas having 10 volume percent water vapor would have a dry basis concentration = 40 ÷ ( 1 − 0.10 ) = 44.44 ppmv. The following equation can be used to correct a measured pollutant concentration in an emitted gas (containing a measured O₂ content) to an equivalent pollutant concentration in an emitted gas containing a specified reference amount of O₂: [ 4 ] C_r = C_m × ( 20.9 − reference volume % O₂ ) ÷ ( 20.9 − measured volume % O₂ ) Thus, a measured NOx concentration of 45 ppmv (dry basis) in a gas having 5 volume % O₂ is 45 × ( 20.9 − 3 ) ÷ ( 20.9 − 5 ) = 50.7 ppmv (dry basis) of NOx when corrected to a gas having a specified reference O₂ content of 3 volume %. The following equation can be used to correct a measured pollutant concentration in an emitted gas (containing a measured CO₂ content) to an equivalent pollutant concentration in an emitted gas containing a specified reference amount of CO₂: [ 5 ] C_r = C_m × ( reference volume % CO₂ ÷ measured volume % CO₂ ) Thus, a measured particulates concentration of 0.1 grain per dscf in a gas that has 8 volume % CO₂ is 0.1 × ( 12 ÷ 8 ) = 0.15 grain per dscf when corrected to a gas having a specified reference CO₂ content of 12 volume %.
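The conversions and corrections above translate directly into code. The following is a minimal Python sketch reproducing the worked examples in the text; the function names and the default temperature are this sketch's own choices, and pressure is fixed at 1 atmosphere:

def ppmv_to_mg_per_m3(ppmv, mol_weight, temp_k=298.15):
    # ppmv -> mg/m3 at 1 atm, using the ideal-gas factor 0.08205*T (L/mol).
    return ppmv * mol_weight / (0.08205 * temp_k)

def mg_per_m3_to_ppmv(mg_m3, mol_weight, temp_k=298.15):
    # Reverse conversion at 1 atm.
    return mg_m3 * 0.08205 * temp_k / mol_weight

def concentration_at_altitude(c_sea_level, altitude_m):
    # Scale a sea-level mass concentration by the pressure ratio 0.9877^(h/100).
    return c_sea_level * 0.9877 ** (altitude_m / 100.0)

def wet_to_dry_basis(c_wet, water_vol_fraction):
    # Correct a "wet basis" concentration to "dry basis".
    return c_wet / (1.0 - water_vol_fraction)

def correct_to_reference_o2(c_measured, measured_o2_pct, reference_o2_pct):
    # Correct a dry-basis concentration to a reference O2 content.
    return c_measured * (20.9 - reference_o2_pct) / (20.9 - measured_o2_pct)

def correct_to_reference_co2(c_measured, measured_co2_pct, reference_co2_pct):
    # Correct a dry-basis concentration to a reference CO2 content.
    return c_measured * reference_co2_pct / measured_co2_pct

print(round(concentration_at_altitude(260, 1800)))     # -> 208 (mg/m3 at 1,800 m)
print(round(wet_to_dry_basis(40, 0.10), 2))            # -> 44.44 (ppmv, dry basis)
print(round(correct_to_reference_o2(45, 5, 3), 1))     # -> 50.7 (ppmv NOx at 3% O2)
print(round(correct_to_reference_co2(0.1, 8, 12), 2))  # -> 0.15 (grain/dscf at 12% CO2)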
https://en.wikipedia.org/wiki/Useful_conversions_and_formulas_for_air_dispersion_modeling
A useless machine or useless box is a device whose only function is to turn itself off. The best-known useless machines are those inspired by Marvin Minsky 's design, in which the device's sole function is to switch itself off by operating its own "off" switch. Such machines were popularized commercially in the 1960s, sold as an amusing engineering hack , or as a joke. More elaborate devices and some novelty toys , which have an obvious entertainment function, have been based on these simple useless machines. The Italian artist Bruno Munari began building "useless machines" ( macchine inutili ) in the 1930s. He was a "third generation" Futurist and did not share the first generation's boundless enthusiasm for technology but sought to counter the threats of a world under machine rule by building machines that were artistic and unproductive. [ 1 ] The version of the useless machine that became famous in information theory (basically a box with a simple switch which, when turned "on", causes a hand or lever to appear from inside the box that switches the machine "off" before disappearing inside the box again [ 2 ] ) appears to have been invented by MIT professor and artificial intelligence pioneer Marvin Minsky , while he was a graduate student at Bell Labs in 1952. [ 3 ] Minsky dubbed his invention the "ultimate machine", but this nomenclature did not catch on. [ 3 ] The device has also been called the "Leave Me Alone Box". [ 4 ] Minsky's mentor at Bell Labs, information theory pioneer Claude Shannon (who later became an MIT professor himself), made his own versions of the machine. He kept one on his desk, where science fiction author Arthur C. Clarke saw it. Clarke later wrote, "There is something unspeakably sinister about a machine that does nothing—absolutely nothing—except switch itself off", and he was fascinated by the concept. [ 3 ] Minsky also invented a "gravity machine" that would ring a bell if the gravitational constant were to change, a theoretical possibility that is not expected to occur in the foreseeable future. [ 3 ] In the 1960s, a novelty toy maker called "Captain Co." sold a "Monster Inside the Black Box", featuring a mechanical hand that emerged from a featureless plastic black box and flipped a toggle switch , turning itself off. This version may have been inspired in part by " Thing ", the disembodied hand featured in the television sitcom The Addams Family . [ 3 ] Other versions have been produced. [ 5 ] In their conceptually purest form, these machines do nothing except switch themselves off. It is claimed that Don Poynter, who graduated from the University of Cincinnati in 1949 and founded Poynter Products, Inc. , first produced and sold the "Little Black Box", which simply switched itself off. He then added the coin snatching feature, dubbed his invention "The Thing", arranged licensing with the producers of the television show, The Addams Family , and later sold "Uncle Fester's Mystery Light Bulb" as another show spinoff product . [ 6 ] [ 7 ] Robert J. Whiteman, owner and president of Liberty Library Corporation , also claims credit for developing "The Thing". [ 8 ] [ 9 ] (Both companies were later to be co-defendants in landmark litigation initiated by Theodor Geisel ("Dr. Seuss") over copyright issues related to figurines .) [ 10 ] [ 6 ] Both the plain black box and the bank version were widely sold by Spencer Gifts , and appeared in its mail-order catalogs through the 1960s and early 1970s. 
As of 2015, a version of the coin snatching black box is being sold as the "Black Box Money Trap Bank" or "Black Box Bank". [ citation needed ] Do-it-yourself versions of the useless machine (often modernized with microprocessor controls) have been featured in a number of web videos [ 11 ] and inspired more complex machines that are able to move or which use more than one switch. [ 12 ] As of 2015, there are several completed or kit form devices being offered for sale. [ 13 ] In 2009, the artist David Moises exhibited his reconstruction of The Ultimate Machine aka Shannon's Hand , and explained the interactions of Claude Shannon, Marvin Minsky, and Arthur C. Clarke regarding the device. [ 14 ] Episode 3 of the third season of the FX show Fargo , "The Law of Non-Contradiction", features a useless machine [ 15 ] (and, in a story within the story , an android named MNSKY after Marvin Minsky). [ 16 ]
https://en.wikipedia.org/wiki/Useless_machine
User-centered design ( UCD ) or user-driven development ( UDD ) is a framework of processes in which usability goals, user characteristics, environment , tasks and workflow of a product , service or brand are given extensive attention at each stage of the design process . This attention includes testing, which is conducted during each stage of design and development, from the envisioned requirements , through pre-production models, to post production. [ 1 ] [ 2 ] Testing [ 3 ] is beneficial as it is often difficult for the designers of a product to understand the experiences of first-time users and each user's learning curve . UCD is based on the understanding of a user, their demands, priorities and experiences, and can lead to increased product usefulness and usability. [ 4 ] UCD applies cognitive science principles to create intuitive, efficient products by understanding users' mental processes, behaviors, and needs. UCD differs from other product design philosophies in that it tries to optimize the product around how users engage with the product, so that users are not forced to change their behavior and expectations to accommodate the product. The users are the focus, followed by the product's context, objectives and operating environment, and then the granular details of task development, organization, and flow. [ 2 ] [ 3 ] The term user-centered design (UCD) was coined by Rob Kling in 1977 [ 5 ] and later adopted in Donald A. Norman's research laboratory at the University of California, San Diego . The concept became popular as a result of Norman's 1986 book User-Centered System Design: New Perspectives on Human-Computer Interaction [ 6 ] and gained further attention and acceptance through Norman's 1988 book The Design of Everyday Things , in which Norman describes the psychology behind what he deems 'good' and 'bad' design through examples. He stresses the importance of design in our everyday lives and the consequences of errors caused by bad designs. Norman describes principles for building well-designed products; his recommendations are based on the user's needs, leaving aside what he considers secondary issues like aesthetics. In a later book, Emotional Design , [ 7 ] : p.5 onwards Norman returns to some of his earlier ideas to elaborate on what he had come to find overly reductive. The UCD process considers user requirements from the beginning and throughout the product cycle. Requirements are noted and refined through investigative methods including ethnographic study, contextual inquiry , prototype testing, usability testing and other methods. Generative methods may also be used, including card sorting , affinity diagramming and participatory design sessions. In addition, user requirements can be inferred by careful analysis of usable products similar to the product being designed. UCD takes inspiration from several related design models, and a number of principles help in ensuring that a design is user-centered. [ 11 ] The goal of UCD is to make products with a high degree of usability (i.e., convenience of use, manageability, effectiveness, and meeting the user's requirements). The UCD process proceeds through a sequence of general phases, [ 13 ] [ 14 ] which are repeated to further refine the product. These phases are general approaches; factors such as design goals, the team and its timeline, and the environment in which the product is developed determine the appropriate phases for a project and their order.
Practical models include the waterfall model , agile model or any other software engineering practice. There are a number of tools that are used in the analysis of UCD, mainly: personas, scenarios, and essential use cases. During the UCD process, the design team may create a persona , an archetype representing a product user which helps guide decisions about product features, navigation, interactions, and aesthetics. In most cases, personas are synthesized from a series of ethnographic interviews with real people, then captured in one- or two-page descriptions that include behavior patterns, goals, skills, attitudes, and environment, and possibly fictional personal details to give the persona more character. [ 15 ]
https://en.wikipedia.org/wiki/User-centered_design
halcyon.exe is a computer art installation created in 2022 by Mark Fingerhut. It is described as a "software poem" that uses code to coordinate a computer's audio and video with realtime physical effects from lights, fans, rumbling vibrations, and water misters. [ 1 ] [ 2 ] It has been exhibited in New York City and Chicago. [ 3 ]
https://en.wikipedia.org/wiki/User:Kushie0/sandbox
The Deep Synoptic Array , or DSA-2000, is a large radio telescope currently under construction in Nevada, USA . Its main goal is a sky survey : it will act as a radio camera, producing images of the entire sky visible from its site. The completed array, spread over an area of 19 × 15 km, will contain 2000 steerable 5-meter parabolic antennas covering the 0.7 – 2 GHz frequency range. It is financed by Schmidt Sciences and expected to be operational in 2026. [ 1 ] The DSA-2000 incorporates two main technical advances, both related to its architecture of a large number of small antennas. The first is that having a large number of randomly distributed antennas makes it much easier to convert the radio signals into images. This strategy had never been practical before, however, since antennas sensitive enough for radio astronomy historically required cooling to very low temperatures, which made each antenna too expensive for such a large array to be built. The second advance was therefore a receiver, using modern semiconductor technology, that can achieve the needed sensitivity without cooling. Traditional radio telescope arrays have had a relatively small number of relatively large antennas (the VLA, for example, has 27 dishes of 25 meters diameter). This results in a poor point spread function , which requires considerable post-processing to turn into useful images. In particular, additional non-linear constraints (such as positivity) must be assumed, both vastly complicating the aperture synthesis calculations and making them dependent on the particular assumptions used. In turn, the need for complex processing imposes huge data storage and transport requirements, as the raw data (or the visibilities, the correlations between pairs of antennas) need to be saved and delivered to the end user for later post-processing. The DSA-2000, in comparison, will have near-complete sampling of the uv-plane. This gives a native point spread function which is sufficiently good that much less complex algorithms can be used to create images in real time, acting as a "radio camera". Traditional radio telescope receivers have required cooling (often to cryogenic temperatures) to achieve low enough noise to be useful for astronomical observations. This typically resulted in a cost of at least $100,000 per receiver, making arrays with a large number of antennas impractical. However, recent developments in indium phosphide technology have resulted in transistors with a low enough noise figure at room temperature [ 2 ] to remove the need for cooling. [ 3 ] Although the main goal is a sky survey, the DSA-2000 will pursue other projects as well.
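The effect of antenna count on the point spread function can be illustrated with a toy simulation. The following Python/numpy sketch is not the DSA-2000 imaging pipeline; the snapshot-only geometry, the uniform random layout, and the gridding resolution are all simplifying assumptions made purely for illustration:

import numpy as np

def peak_sidelobe(n_ant, extent_m=15000.0, grid=256, seed=0):
    # Scatter n_ant antennas at random over the array footprint.
    rng = np.random.default_rng(seed)
    xy = rng.uniform(-extent_m / 2, extent_m / 2, size=(n_ant, 2))
    # Every antenna pair contributes one baseline; the set of baselines
    # samples the uv-plane.
    uv = (xy[:, None, :] - xy[None, :, :]).reshape(-1, 2)
    # Grid the uv samples, then Fourier transform: the result is the
    # "dirty beam", i.e. the array's point spread function.
    cov, _, _ = np.histogram2d(uv[:, 0], uv[:, 1], bins=grid,
                               range=[[-extent_m, extent_m]] * 2)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(cov)))
    psf /= psf.max()
    centre = grid // 2
    psf[centre, centre] = 0.0  # crudely mask the main lobe
    return psf.max()           # highest remaining sidelobe

for n in (27, 200, 2000):
    print(f"{n:5d} antennas: peak sidelobe ~ {peak_sidelobe(n):.3f}")

Denser, more random uv coverage pushes the sidelobes down, which is why a 2000-element array can act as a simple "radio camera" while a 27-element array needs heavy nonlinear deconvolution.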
https://en.wikipedia.org/wiki/User:LouScheffer/sandbox
In physics, binding energy (also separation energy, or, in chemistry, ionisation energy) is the smallest amount of energy required to remove a particle from a system of particles, or to disassemble a system of particles into its individual parts. [ 1 ] It is this energy that is released from a bound physical system where a change in composition occurs, such as combustion in chemistry and the triple-alpha process in the solar core . Commonly applied in condensed matter, atomic, nuclear, and particle physics, as well as chemistry and astronomy, the binding energy of a system, and its relation to similar systems, shapes the energetics of any reaction or process that transforms between initial and final states of different compositions. Differences in binding energy between the initial and final states will influence whether the reaction is exo- or endothermic, and may impose a threshold condition that must be satisfied for the reaction to proceed, or expose a preference for a particular reaction product when more than one is available. As a consequence of mass-energy equivalence (Einstein's famous 'E = mc²' equation), mass is freely convertible into energy, and energy into mass. After a reaction the binding energy is radiated from the system, and therefore a portion of the unbound particle masses must also be lost in order to provide this energy. The mass lost is fractionally small (and often ignored entirely) for most systems larger than an atomic nucleus, but highly significant for sub-atomic physics. Consider the formation of a simple two-particle system interacting through some attractive force, such that a collision is inevitable. In a lossless system, the particles must either collide elastically or pass through each other with no effect. In either case, the kinetic energy of the encounter is not dissipated: the particles either fly apart again or settle into a perpetual oscillation about a central point, and in neither case do they bind. In order for a bound state to form, the incoming kinetic energy must be dissipated by a resistive force . A successful bound state is formed when the two particles are slowed enough to remain in close proximity to one another, becoming increasingly difficult to tell apart, until eventually acting as a single object. The work done by this dissipating force becomes the binding energy, which is stored as internal energy and eventually escapes the system as heat or light. [ citation needed ] Considering the reverse process — that of unbinding the system — it is clear that an equivalent amount of energy needs to be "put back" in some form, to replace the energy lost. This energy is the binding energy. Stable states have positive binding energies, while unstable states may have negative values. The magnitude of the binding energy is strongly influenced by the strength of the dissipative force, which is determined by the fundamental force responsible for the bound state. Approximate relative strengths of the four fundamental forces are given in table x. States bound via the strong interaction — notably the atomic nucleus — therefore have the highest binding energies for their size, while those formed gravitationally are bound only weakly.
It is thus much harder to remove a particle from a system bound via the strong interaction than from one bound gravitationally, all other things being equal. A system is considered 'unbound' when the potential energy of the attractive force is zero. For forces with infinite range (the electromagnetic and gravitational interactions), the separation energy is the energy required to separate the two particles to infinity, and is therefore equivalent to the kinetic energy of an object at escape velocity. It should also be noted that while all states can be assigned an associated binding energy, not all processes transitioning between those states are possible. Selection, conservation, or transition rules may prevent a given reaction from taking place for reasons other than energetics. A consequence of Einstein's postulates of special relativity, the principle of mass-energy equivalence describes mass simply as an additional form of energy, contributing part of the total energy content of the system and freely transferable into other forms.[cite Trägheit eines Körpers] All massive objects contain this intrinsic energy, and massless objects such as the photon may spontaneously generate mass from their own energy content via pair production . Immediately following the formation of a bound state, the binding energy remains stored in the system as potential energy. This energy, however, is readily lost as heat or light, and consequently, once sufficient time has passed, the bound state will be found to be less massive than the initial constituents as free particles — via the principle of mass-energy equivalence a portion of the initial mass has been converted to heat or light, escaping the system. Indeed, the energy lost by the system will reappear as additional mass in any system which absorbs the radiated energy, [ 2 ] the magnitude being predicted, in the rest frame, by the famous equation E = mc². The total energy change, Q (and therefore termed the 'Q-value'), is thus the difference in mass-energy between the initial and final states: Q = Δ m c 2 {\displaystyle Q=\Delta mc^{2}} where Δ m {\displaystyle \Delta m} is the missing mass, termed the 'mass defect' (or equivalently, 'mass deficit'): [cite Krane] Δ m = M i − M f {\displaystyle \Delta m=M_{i}-M_{f}} The higher the binding energy, the larger the mass defect, and the tighter the newly-formed state is bound. Positive values imply that energy is released during the reaction, and such processes may occur spontaneously, while negative values require a threshold of at least |Q| units of input energy, usually in the form of initial kinetic energy, for the reaction to proceed. Mass-energy equivalence applies equally to any change in binding energy. However, due to the numerically large c-squared conversion factor, even a large change in energy manifests as only a small change in mass (roughly 931.5 MeV per atomic mass unit; 1 joule is equivalent to about 10⁻¹⁷ kg). Relative to the total mass of the system, the change in mass is often negligible in contexts outside the subatomic (where systems with low starting masses are tightly bound by the strong interaction). The small, tightly bound nucleus of helium-4 is <percentage> lighter than that predicted by naïvely adding the masses of the constituent protons, neutrons, and electrons, [ citation needed ] while the electron cloud surrounding it is only [figure] lighter. At larger length scales it is only by considering the bulk effects of many such reactions that any effect can be noticed at all.
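As a hedged numerical illustration of the mass-defect relation above (a sketch only: the rounded reference masses, the conversion constant, and the variable names below are this example's own assumptions, not figures from the text):

# Binding energy of helium-4 from its mass defect.
# Masses in unified atomic mass units (u); standard rounded reference values.
U_TO_MEV = 931.494                      # energy equivalent of 1 u, in MeV
m_p, m_n, m_e = 1.007276, 1.008665, 0.000549
m_he4_atom = 4.002602                   # atomic mass, includes 2 electrons

free = 2 * m_p + 2 * m_n + 2 * m_e      # mass of the unbound constituents
delta_m = free - m_he4_atom             # mass defect, delta_m = M_i - M_f
print(f"mass defect    = {delta_m:.6f} u ({delta_m / free:.2%} of the free mass)")
print(f"binding energy = {delta_m * U_TO_MEV:.1f} MeV")     # ~28.3 MeV
print(f"per nucleon    = {delta_m * U_TO_MEV / 4:.2f} MeV")  # ~7.07 MeV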
While any energy released or absorbed by such systems, for example during a chemical reaction, is still ultimately sourced from the binding energy of the chemical bonds, the mass defect is too small to be measured with standard equipment. Instead, standard calorimetric techniques are used to measure equivalent empirical data, applicable to the bulk change of many such reactions. It is the difference in binding energy between related systems that is of physical importance. Should any process exist providing a pathway for one bound system to evolve or react into another, the relative difference in binding energies contributes to the energy change(s) involved in that process, and may impose energetic restrictions affecting the yield of products. By careful choice of what is considered a 'particle', the concept of binding energy is applicable to any system where bound states are formed or dissolved. Being independent of the length- or time-scale, requiring no knowledge of the underlying process responsible, and given the prevalence of the formation or destruction of bound states, it is thus a foundational concept with wide application that underpins many disciplines of research. Principally, it is employed in the understanding of nuclear, particle, and chemical reactions, where the choice of particle is, in each case, individual nucleons within an atomic nucleus, fundamental subatomic particles, electrons within an atom, or atoms within a molecule or lattice, respectively. An atom — consisting of one or more electrons bound by the electromagnetic interaction to a large central nucleus (with the nucleus acting as a single entity) — is a quantised system where each of the electrons must sit within one of many well-defined energy levels. Each of these energy levels is a bound state, and is thus associated with a definite binding energy. The process of removing an electron from this bound state — ionisation — therefore requires the input of energy equivalent to the binding energy of the energy level it is removed from, after which the electron is no longer bound and instead may take any continuous energy value as a 'free' electron. In chemistry and atomic physics, this binding energy is instead termed "ionisation energy" when referring specifically to the energy required to remove the least tightly bound electron. [ 3 ] "Electron binding energy" may also be used more generically to refer to the energy required to remove an electron from a specific, named atomic level, rather than the least bound level, which may occur in higher-energy processes such as X-ray absorption. The closely related (but distinct) term 'work function' may also be used in certain contexts. [ which? ] First ionisation energies are typically of the order of a few electronvolts, corresponding to photons in the visible or ultraviolet bands. [ citation needed ] Binding energies of inner electrons may range up to hundreds of keV. However, even at the top end of this range, the effects of mass-energy equivalence are minimal. As a basic, easily measured atomic quantity, any prospective atomic theory will be expected to reproduce the measured ionisation energies accurately. While general trends are explainable via a basic model of coulombic attraction and repulsion between a central nucleus and orbiting electron cloud, extending these models to derive predictions of specific binding energies fails to reproduce the observed data.
[ citation needed ] Indeed, this failure of several classical and semi-classical models of the atom (notably the Bohr model ) to accurately predict observed ionisation energies is one of many pieces of evidence that led to the rejection of classical theories of the atom in favour of modern quantum mechanics in the early 1920s. [ citation needed ] While rarely contextualised explicitly in terms of the binding energy, [ who? ] the difference in energy between atomic levels is nevertheless implicitly the difference in binding energy between the initial and final levels. In transitioning to the new state the electron must raise or lower its own energy by absorbing or emitting a single photon with the precise energy difference between the two states. As there are a limited number of available states and transitions, this leaves few, highly specific photon energies the electron may emit or absorb, with each chemical species having a unique arrangement of energy levels and allowable transitions. The emitted spectral lines thus contain information on the energy-level structure of the species examined, the study of which is the domain of spectroscopy. In the molecular context, binding energy may refer to the energy required to remove one atom from a bound molecule (the bond dissociation energy, or bond enthalpy) or to the energy required to disassemble the whole molecule into individual atoms (the negative of the energy of formation). Bond energies are typically in the range of a few eV per bond. The bond-dissociation energy of a carbon–carbon bond , for example, is about 3.6 eV. [ citation needed ] An example is that of calculating the net energy output from the complete combustion of one mole of methane, the reaction equation being CH₄ + 2 O₂ → CO₂ + 2 H₂O, in which four C−H bonds and two O=O double bonds are first broken, separating each hydrogen 'particle' from its bound state and thus requiring an initial input of energy to initiate the reaction. This is followed by the formation of two C=O and four O−H bonds, forming bound states out of discrete particles and thus releasing energy. The energy released in forming the new bonds exceeds the energy required to break the old ones, so the net effect of the reaction is a release of energy. Hence the reaction is exothermic — in this case visibly so, producing an obvious flame. In the nuclear context, both the terms 'binding energy' and 'separation energy' are used interchangeably to refer to the energy required to remove a single particle from a multiple-particle system (atomic nucleus). 'Binding energy' alone is used for disassembly of the complete system, while 'separation energy' is preferred where the value for different particles in a composite system differs — for example, the proton separation energy of an atomic nucleus is usually much lower than the neutron separation energy, due to the effects of electrostatic repulsion. At this length scale the effects of the strong interaction dominate, and states may be very tightly bound, with typical separation energies in the MeV range. [ citation needed ] Applying mass-energy equivalence, mass defects must also be commensurately large, and the bound state will, once the system is allowed to relax back to the ground state, have lost significant mass compared to the unbound state. Mass-energy equivalence thus provides a mechanism by which nuclear reactions may burn mass into usable energy, or vice versa.
Reactions with a positive Q-value, such as the spontaneous fission of uranium or the fusion of hydrogen into helium-4, will release Q units of energy, while those with a negative Q-value, such as many high-energy reactions studied in particle accelerators , require an input of at least |Q| units of energy before initiating, provided by the kinetic energy of the beamline. Total binding energy increases approximately linearly with increasing number of particles in the system (nucleon number, A); most nuclei are bound by approximately 8 MeV per nucleon. By plotting the binding energy per nucleon of the most common isotopes against their nucleon numbers, the nuclear binding curve (figure x) is obtained. On this scale, the 'up' direction corresponds to a more tightly bound nucleus — more energy is required to separate a single particle from the bound state, and more energy is released by the binding process. The difference in binding energy between nuclides provides the mechanism by which radioactive decay and nuclear fusion are able to generate energy. For the lightest elements (small A), there is releasable binding energy available by moving up the graph, which, with few exceptions (principally around magic or doubly magic nuclei), can only be done by increasing the nucleon number. Therefore lighter elements, such as those in a stellar core, will preferentially undergo fusion to heavier elements, releasing the change in binding energy as heat or light, which in turn may sustain further fusion. This quickly reaches a peak in the "iron group" centred around A = 60, where increasing the nucleon number no longer delivers additional energy. As the size of the nucleus grows it begins to exceed the range of the short-range (on the order of 10⁻¹⁵ m) strong nuclear force responsible for nuclear binding, whose influence drops quickly to zero. Additional nucleons beyond the critical size will be bound only to their nearest neighbours and not to the nucleus as a whole. In contrast, the repulsive electromagnetic force between protons has infinite range, and all protons in the nucleus contribute. The attractive forces saturate, while the repulsive forces grow with increasing nuclear charge. Thus, beyond A = 60, it becomes easier to remove a particle from the nucleus of heavier elements. These elements have releasable binding energy available by decreasing nucleon number, and so will preferentially fission into lighter nuclei. It is this energy that powers nuclear power plants, as well as the natural deposits of uranium in the Earth's interior that ultimately drive the complex magma flows responsible for tectonic drift, the Earth's magnetic field, and volcanic eruptions.[cite Sci-American] The nuclear binding curve, based on experimental data, may be approximated by the semi-empirical mass formula, which treats the nucleus as if it were a drop of liquid, similar to a raindrop, and applies correction factors to emulate simple nuclear behaviour. [ citation needed ] Despite the nucleus categorically not holding the properties of rain [ citation needed ] or even having an identifiable surface or definite position as such, this approach works well as a first approximation.
[ citation needed ] In older texts, and even certain recent texts, it is common to see iron-56 listed as the "most tightly bound nucleus" (highest binding energy per nucleon: the peak of the nuclear binding curve).[cite Fewell] This is a common error: iron-56 is in fact the third most tightly bound nuclide, after nickel-62 and iron-58.[cite Wapstra & Bos 1977] But while nickel-62 is the most tightly bound nuclide (highest binding energy per nucleon), iron-56 holds the lowest mass per nucleon, and so has a lower energy of formation.[cite Fewell] Thus energy can be generated via the burning of nickel-62 into iron-56, converting neutrons into protons. Importantly, however, this additional energy comes not from the binding energy, but from the mass energy provided by the conversion of neutrons into protons (not from the conversion of mass energy into binding energy). It is the mass defect, the difference between the free-state and bound masses, that contributes to the binding energy, not the mass itself. It remains easier to remove a nucleon from iron-56 than from nickel-62.
https://en.wikipedia.org/wiki/User:SharkApologist/sandbox/Binding_energy
In probability theory , the coupon collector's problem refers to the mathematical analysis of "collect all coupons and win" contests. It asks the following question: if each box of a given product (e.g., breakfast cereals) contains a coupon, and there are n different types of coupons, what is the probability that more than t boxes need to be bought to collect all n coupons? An alternative statement is: given n coupons, how many coupons do you expect you need to draw with replacement before having drawn each coupon at least once? The mathematical analysis of the problem reveals that the expected number of trials needed grows as Θ ( n log ⁡ ( n ) ) {\displaystyle \Theta (n\log(n))} . [ a ] For example, when n = 50 it takes about 225 [ b ] trials on average to collect all 50 coupons. By definition of Stirling numbers of the second kind , the probability that exactly T draws are needed is S ( T − 1 , n − 1 ) n ! n T {\displaystyle {\frac {S(T-1,n-1)n!}{n^{T}}}} By manipulating the generating function of the Stirling numbers, we can explicitly calculate all moments of T : f k ( x ) := ∑ T S ( T , k ) x T = ∏ r = 1 k x 1 − r x {\displaystyle f_{k}(x):=\sum _{T}S(T,k)x^{T}=\prod _{r=1}^{k}{\frac {x}{1-rx}}} In general, the k -th moment is ( n − 1 ) ! ( ( D x x ) k f n − 1 ( x ) ) | x = 1 / n {\displaystyle (n-1)!((D_{x}x)^{k}f_{n-1}(x)){\Big |}_{x=1/n}} , where D x {\displaystyle D_{x}} is the derivative operator d / d x {\displaystyle d/dx} . For example, the 0th moment is ∑ T S ( T − 1 , n − 1 ) n ! n T = ( n − 1 ) ! f n − 1 ( 1 / n ) = ( n − 1 ) ! × ∏ r = 1 n − 1 1 / n 1 − r / n = 1 {\displaystyle \sum _{T}{\frac {S(T-1,n-1)n!}{n^{T}}}=(n-1)!f_{n-1}(1/n)=(n-1)!\times \prod _{r=1}^{n-1}{\frac {1/n}{1-r/n}}=1} and the 1st moment is ( n − 1 ) ! ( D x x f n − 1 ( x ) ) | x = 1 / n {\displaystyle (n-1)!(D_{x}xf_{n-1}(x)){\Big |}_{x=1/n}} , which can be explicitly evaluated to n H n {\displaystyle nH_{n}} , etc. Let time T be the number of draws needed to collect all n coupons, and let t i be the time to collect the i -th coupon after i − 1 coupons have been collected. Then T = t 1 + ⋯ + t n {\displaystyle T=t_{1}+\cdots +t_{n}} . Think of T and t i as random variables . Observe that the probability of collecting a new coupon is p i = n − ( i − 1 ) n = n − i + 1 n {\displaystyle p_{i}={\frac {n-(i-1)}{n}}={\frac {n-i+1}{n}}} . Therefore, t i {\displaystyle t_{i}} has geometric distribution with expectation 1 p i = n n − i + 1 {\displaystyle {\frac {1}{p_{i}}}={\frac {n}{n-i+1}}} . By the linearity of expectations we have: {\displaystyle \operatorname {E} (T)=\operatorname {E} (t_{1})+\cdots +\operatorname {E} (t_{n})={\frac {n}{n}}+{\frac {n}{n-1}}+\cdots +{\frac {n}{1}}=n\cdot H_{n}} Here H n is the n -th harmonic number . Using the asymptotics of the harmonic numbers, we obtain: {\displaystyle \operatorname {E} (T)=n\cdot H_{n}=n\log n+\gamma n+{\frac {1}{2}}+O(1/n)} where γ ≈ 0.5772156649 {\displaystyle \gamma \approx 0.5772156649} is the Euler–Mascheroni constant . Using the Markov inequality to bound the desired probability: {\displaystyle \operatorname {P} (T\geq c\,nH_{n})\leq {\frac {1}{c}}} The above can be modified slightly to handle the case when we've already collected some of the coupons. Let k be the number of coupons already collected, then: {\displaystyle \operatorname {E} (T_{k})=n\cdot H_{n-k}} And when k = 0 {\displaystyle k=0} then we get the original result. Using the independence of random variables t i , we obtain: {\displaystyle \operatorname {Var} (T)=\sum _{i=1}^{n}{\frac {1-p_{i}}{p_{i}^{2}}}\leq {\frac {n^{2}\pi ^{2}}{6}}} since π 2 6 = 1 1 2 + 1 2 2 + ⋯ + 1 n 2 + ⋯ {\displaystyle {\frac {\pi ^{2}}{6}}={\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+\cdots +{\frac {1}{n^{2}}}+\cdots } (see Basel problem ). Bound the desired probability using the Chebyshev inequality : {\displaystyle \operatorname {P} (|T-nH_{n}|\geq cn)\leq {\frac {\pi ^{2}}{6c^{2}}}} A stronger estimate for the upper tail can be obtained as follows.
Let Z i r {\displaystyle {Z}_{i}^{r}} denote the event that the i {\displaystyle i} -th coupon was not picked in the first r {\displaystyle r} trials. Then {\displaystyle P\left[{Z}_{i}^{r}\right]=\left(1-{\frac {1}{n}}\right)^{r}\leq e^{-r/n}} Thus, for r = β n log ⁡ n {\displaystyle r=\beta n\log n} , we have P [ Z i r ] ≤ e ( − β n log ⁡ n ) / n = n − β {\displaystyle P\left[{Z}_{i}^{r}\right]\leq e^{(-\beta n\log n)/n}=n^{-\beta }} . Via a union bound over the n {\displaystyle n} coupons, we obtain {\displaystyle P\left[T>\beta n\log n\right]\leq n\cdot P\left[{Z}_{1}^{r}\right]\leq n\cdot n^{-\beta }=n^{1-\beta }} The Coupon Collector model is widely used in stochastic process scenarios that require covering all categories.
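The expectation n·H_n derived above is easy to check numerically. The following is a minimal Python sketch (the function names are this example's own), comparing the exact formula against a Monte Carlo simulation:

import random

def expected_draws(n):
    # Exact expectation E[T] = n * H_n, from the harmonic-number derivation above.
    return n * sum(1.0 / k for k in range(1, n + 1))

def simulate_draws(n, trials=10_000, seed=1):
    # Monte Carlo estimate of E[T]: draw coupons uniformly with replacement
    # until all n types have been seen, and average over many runs.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        seen = set()
        draws = 0
        while len(seen) < n:
            seen.add(rng.randrange(n))
            draws += 1
        total += draws
    return total / trials

n = 50
print(expected_draws(n))   # 224.96..., the "about 225" quoted above
print(simulate_draws(n))   # simulated mean, close to the exact value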
https://en.wikipedia.org/wiki/User:Wenzexu/sandbox
Analytical Instrumentation in the Pharmaceutical Industry: Analytical instrumentation refers to the tools and techniques used to measure, monitor, and analyze various physical and chemical properties of substances. In the pharmaceutical industry, these instruments are crucial for ensuring the safety, efficacy, and quality of pharmaceutical products throughout their development and manufacturing processes. Role in Pharmaceutical Research and Production: In pharmaceutical research and production, analytical instrumentation is indispensable for various processes, including drug development, formulation, stability testing, and quality control. These instruments provide accurate data, enabling scientists and manufacturers to make informed decisions about the composition, purity, and strength of drugs. They are essential for ensuring compliance with regulatory standards and meeting the stringent quality assurance requirements of pharmaceutical products. Types of Analytical Instruments: Common types of analytical instrumentation in the pharmaceutical industry include High-Pressure/Performance Liquid Chromatography (HPLC): HPLC is widely used for separating, identifying, and quantifying components in pharmaceutical formulations. It provides high sensitivity and precision for complex mixtures. Gas Chromatography (GC): GC is utilized for separating volatile compounds. It is commonly used in testing the purity and composition of active pharmaceutical ingredients (APIs) and excipients. Mass Spectrometry (MS): MS is employed for identifying molecular structures and measuring the mass-to-charge ratio of ions. It is often used in conjunction with chromatography (LC-MS) for comprehensive analysis. Ion Chromatography (IC): This technique is used to measure ions in pharmaceutical samples, particularly for detecting trace amounts of contaminants or ensuring the ionic balance of drug formulations. Spectroscopy Techniques: Various spectroscopy methods, such as Near-Infrared (NIR) and Raman spectroscopy, are used for non-destructive analysis and real-time monitoring of drug formulations. Advancements and Innovations: Advancements in analytical instrumentation have significantly enhanced the capabilities of pharmaceutical testing and quality control. Modern instruments now integrate with advanced software for enhanced sensitivity, accuracy, and speed. For instance, techniques such as Liquid Chromatography-Mass Spectrometry (LC-MS) and LC-MS/MS have advanced the analysis of complex biological samples, allowing for better detection of biomarkers and drug metabolites. Moreover, automation and real-time monitoring are becoming more prevalent, streamlining workflows and increasing the efficiency of pharmaceutical analysis. These innovations are crucial for addressing the growing demand for high-quality medicines. Sustainability and Green Analytical Methods: The pharmaceutical industry is increasingly focusing on sustainability, with an emphasis on reducing the environmental impact of pharmaceutical manufacturing and testing processes. Analytical instrumentation is evolving to support these efforts, with a focus on greener methodologies, such as minimizing the use of solvents and reducing energy consumption. Integration of Artificial Intelligence (AI) and Machine Learning (ML): The integration of artificial intelligence (AI) and machine learning (ML) in analytical instrumentation is opening new avenues for predictive analytics and more sophisticated data analysis. 
These technologies allow for better interpretation of complex datasets, enabling quicker decision-making and improving the overall efficiency and accuracy of drug development and testing processes. Conclusion: In conclusion, analytical instrumentation is a cornerstone of the pharmaceutical industry, providing essential tools for drug development, manufacturing, and regulatory compliance. With continuous advancements, these instruments are playing an increasingly important role in driving innovation, ensuring high-quality pharmaceutical products, and contributing to sustainability goals in the industry.
https://en.wikipedia.org/wiki/User:Younagesh
Analytical instrumentation in the pharmaceutical industry refers to the use of various scientific instruments and techniques to monitor, measure, and control physical and chemical properties of pharmaceutical substances. These instruments are essential in drug discovery, development, manufacturing, and quality assurance processes. Analytical instruments are used throughout the pharmaceutical lifecycle, from early-stage research to final product quality control. Key parameters such as temperature, pressure, pH, and flow rate are routinely monitored. These measurements support both qualitative and quantitative analyses of pharmaceutical compounds, aiding in the assessment of drug purity, potency, stability, and content uniformity. Several analytical techniques are widely used in the pharmaceutical sector, and recent innovations continue to expand their capabilities. Analytical instrumentation plays a central role in compliance with global regulatory standards such as Good Manufacturing Practices (GMP) and Good Laboratory Practices (GLP) . Regulatory agencies, including the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA) , require validated methods and calibrated equipment to ensure data integrity and reproducibility. [ 4 ] These instruments are used across a wide range of such regulated settings.
https://en.wikipedia.org/wiki/User:Younagesh/sandbox
UserBenchmark is a computer benchmarking website that provides users with performance scores for various hardware components. [ 1 ] It offers user-submitted reviews and dedicated tools to evaluate and compare the performance of individual components based on system tests. It is known for controversy over its hardware ranking charts, which critics say unfairly favour Intel and Nvidia hardware and penalize AMD hardware. [ 2 ] UserBenchmark offers a benchmarking program that runs on the user's PC and then allows the user to upload the results to the website. The website provides performance comparisons for CPUs , GPUs , SSDs , HDDs , RAM , and USB drives . [ 3 ] It works on a similar concept to 3DMark , another system benchmarking tool. [ 3 ] Because UserBenchmark allows users to upload their hardware score results to the website, it has become a source of leaks about unreleased hardware. For example, Intel engineering samples have appeared under the designation "Intel 0000" and can be differentiated based on their configurations of CPU cores and threads. [ 4 ] Leaks have also been discovered for AMD [ 5 ] and Nvidia [ 6 ] hardware. In 2024, UserBenchmark introduced a $10-per-year fee to allow usage of the program during periods of high use. Non-subscribers may still be able to make use of free open testing slots; to do so, they must first finish "a 3D captcha minigame" with the objective of shooting 13 ships to the ground. [ 3 ] In July 2019, UserBenchmark updated how it calculates the effective speed index [ 7 ] in its website's CPU hardware rankings, drastically changing the ranking positions of CPUs in a way that penalized AMD processors. [ 8 ] This resulted in backlash on social media, with some hardware enthusiast boards banning links to the UserBenchmark website. [ 9 ] UserBenchmark has been accused of bias against AMD , [ 10 ] notably facing backlash over its review [ 11 ] of the high-end $479 MSRP [ 12 ] Ryzen 7 9800X3D , [ 13 ] [ 14 ] in which it claimed that spending more on a gaming CPU is "pointless" and suggested cheaper alternatives. [ 15 ]
https://en.wikipedia.org/wiki/UserBenchmark
UserLAnd Technologies is a free and open-source compatibility layer mobile app that allows Linux distributions , computer programs , computer games and numerical computing programs to run on mobile devices without requiring a root account . UserLAnd also provides a program library of popular free and open-source Linux-based programs, to which additional programs and different versions of programs can be added. The name "UserLAnd" is a reference to the concept of userland in modern computer operating systems . Unlike other Linux compatibility layer mobile apps , UserLAnd does not require a root account . [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] Because UserLAnd works without privileged root access, a modification known as " rooting ," it avoids the risk of " bricking "—rendering the mobile device non-functional—and of voiding the device's warranty. [ 4 ] Furthermore, the rooting required by programs other than UserLAnd has proven a formidable challenge for inexperienced Linux users. [ 6 ] A prior application, GNURoot Debian, similarly attempted to run Linux programs on mobile devices, but it has ceased to be maintained and, therefore, is no longer operational. [ 6 ] UserLAnd allows those with a mobile device to run Linux programs, many of which aren't available as mobile apps. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 8 ] [ 10 ] Even for those Linux applications, e.g. Firefox , which have mobile versions available, people often find that the experience of the mobile version pales in comparison with the desktop version. [ 11 ] UserLAnd allows its users to recreate that desktop experience on their mobile device. UserLAnd currently operates only on Android mobile devices and is available for download on Google Play and F-Droid . [ 12 ] [ 13 ] To use UserLAnd, one must first download the application – typically from F-Droid or the Google Play Store – and then install it. [ 4 ] [ 5 ] [ 6 ] [ 11 ] Once it is installed, the user selects an app to open. [ 4 ] [ 5 ] [ 6 ] [ 11 ] When a program is selected, the user is prompted to enter login information and select a connection type. [ 4 ] [ 5 ] [ 6 ] [ 11 ] Following this, the user gains access to the selected program. [ 4 ] [ 5 ] [ 6 ] [ 11 ] UserLAnd is pre-loaded with the distributions Alpine , Arch , Debian , Kali , and Ubuntu ; the web browser Firefox ; the desktop environments LXDE and Xfce ; the development tools Git and IDLE ; the text-based games Colossal Cave Adventure and Zork ; the numerical computing programs gnuplot , GNU Octave and R ; the office suite LibreOffice ; and the graphics editors GIMP and Inkscape . Further Linux programs and different versions of programs may be added to this program library . A review on Slant.co listed as UserLAnd's pros its support for VNC X sessions, the absence of any need for "rooting", its easy setup , and the fact that it is free and open-source ; and as its cons its lack of support for Lollipop and its difficulty of use for non-technical users. [ 14 ] In contrast, OS Journal found that not needing to "root" the device made UserLAnd considerably easier to use than other Linux compatibility layer applications, a view shared by SlashGear's review of UserLAnd. [ 6 ] [ 8 ] OS Journal went on to state that with UserLAnd one could do "almost anything" and that "you're (only) limited by your insanity" with respect to what you can do with the application.
[ 6 ] Linux Journal stated that "UserLAnd offers a quick and easy way to run an entire Linux distribution, or even just a Linux application or game, from your pocket." [ 3 ] SlashGear stated that UserLAnd is "absolutely super simple to use and requires little to no technical knowledge to get off the ground running." [ 8 ]
https://en.wikipedia.org/wiki/UserLAnd_Technologies
A user is a person who uses a computer or network service . A user often has a user account and is identified to the system by a username (or user name ). [ a ] Some software products provide services to other systems and have no direct end users . End users are the ultimate human users (also referred to as operators ) of a software product. The end user stands in contrast to users who support or maintain the product, such as sysops , database administrators and computer technicians . The term is used to abstract and distinguish those who only use the software from the developers of the system, who enhance the software for end users. [ 1 ] In user-centered design , it also distinguishes the software operator from the client who pays for its development and other stakeholders who may not directly use the software, but help establish its requirements . [ 2 ] [ 3 ] This abstraction is primarily useful in designing the user interface , and refers to a relevant subset of characteristics that most expected users would have in common. In user-centered design, personas are created to represent the types of users. It is sometimes specified for each persona which types of user interfaces it is comfortable with (due to previous experience or the interface's inherent simplicity), and what technical expertise and degree of knowledge it has in specific fields or disciplines . When few constraints are imposed on the end-user category, especially when designing programs for use by the general public, it is common practice to expect minimal technical expertise or previous training in end users. [ 4 ] The end-user development discipline blurs the typical distinction between users and developers. It designates activities or techniques in which people who are not professional developers create automated behavior and complex data objects without significant knowledge of a programming language. Systems whose actor is another system or a software agent have no direct end users. A user's account allows a user to authenticate to a system and potentially to receive authorization to access resources provided by or connected to that system; however, authentication does not imply authorization. To log into an account, a user is typically required to authenticate themselves with a password or other credentials for the purposes of accounting , security , logging, and resource management . Once the user has logged on, the operating system will often use an identifier such as an integer to refer to them, rather than their username, through a process known as identity correlation . In Unix systems, the username is correlated with a user identifier or user ID . Computer systems are of one of two types based on what kind of users they have: single-user systems and multi-user systems. Each user account on a multi-user system typically has a home directory , in which to store files pertaining exclusively to that user's activities, which is protected from access by other users (though a system administrator may have access). User accounts often contain a public user profile , which contains basic information provided by the account's owner. The files stored in the home directory (and all other directories in the system) have file system permissions which are inspected by the operating system to determine which users are granted access to read or execute a file, or to store a new file in that directory.
While systems expect most user accounts to be used by only a single person, many systems have a special account intended to allow anyone to use the system, such as the username "anonymous" for anonymous FTP and the username "guest" for a guest account. On Unix systems, local user accounts are stored in the file /etc/passwd , while user passwords may be stored in hashed form in /etc/shadow . [ 5 ] On Microsoft Windows , user passwords can be managed within the Credential Manager program. [ 6 ] [ better source needed ] The passwords are located in the Windows profile directory. [ 7 ] Various computer operating systems and applications expect or enforce different rules for the format of usernames; in Microsoft Windows environments, for example, several username formats are in use. [ 8 ] Some usability professionals have expressed their dislike of the term "user" and have proposed changing it. [ 9 ] Don Norman stated that "One of the horrible words we use is 'users'. I am on a crusade to get rid of the word 'users'. I would prefer to call them 'people'." [ 10 ] The term "user" may imply lack of the technical expertise required to fully understand how computer systems and software products work. [ 11 ] Power users use advanced features of programs, though they are not necessarily capable of computer programming and system administration . [ 12 ] [ 13 ]
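As a concrete illustration of the username-to-UID correlation described above, the following minimal sketch uses Python's standard pwd module (Unix-only) to look up a local account record of the kind stored in /etc/passwd; the username "alice" is a hypothetical placeholder.

```python
import pwd  # standard-library interface to the Unix user database

USERNAME = "alice"  # hypothetical account name

try:
    entry = pwd.getpwnam(USERNAME)  # one record, as stored in /etc/passwd
except KeyError:
    raise SystemExit(f"no local account named {USERNAME!r}")

print(entry.pw_uid)     # the integer user ID the OS uses internally
print(entry.pw_dir)     # the account's home directory
print(entry.pw_shell)   # the login shell
print(entry.pw_passwd)  # usually "x": the hash itself lives in /etc/shadow
```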
https://en.wikipedia.org/wiki/User_(computing)
The User State Migration Tool ( USMT ) is a command line utility program developed by Microsoft that allows users comfortable with scripting languages to transfer files and settings between Windows PCs. This task is also performed by Windows Easy Transfer , which was designed for general users but was discontinued with the release of Windows 10, [ 1 ] for which Microsoft instead partnered with Laplink. [ 2 ] Starting with Windows 8, many settings and data are synchronized through cloud services via a Microsoft Account and OneDrive . USMT allows a high-volume, automated deployment of files and settings, [ 3 ] and is also useful in migrating user settings and files during OS upgrades. Because USMT is complex and has a command line interface, there have been several attempts to make its functionality more accessible by creating GUI wrappers for it. 32-bit to 64-bit migrations are supported, but 64-bit to 32-bit migrations are not. USMT 4 is included in the Windows Automated Installation Kit . USMT 5 is included in the Windows Assessment and Deployment Kit (ADK). [ 4 ] Versions of the USMT are included in the Windows ADKs for Windows 10, versions 1511 and 1607. [ 5 ] USMT consists of two separate programs: ScanState.exe scans the source PC for the data and settings and stores them in a .MIG file, and LoadState.exe migrates the data and settings from the .MIG file onto the target PC. What to transfer is specified through command-line switches that reference the configuration XML files MigApp.xml, MigSys.xml and MigUser.xml, plus an optional Config.xml; which users (and their data) to transfer is controlled by other switches. An example of a "load data onto PC" command could look like the sketch below (conceptually one line – newlines and indents added for readability). The "ScanState" command is similar in complexity, and both commands require strict adherence to syntax. USMT transfers a broad set of user files and settings. [ 10 ] Because of the complexity of USMT command-line input, there have been third-party attempts to create GUI front-ends for it, including (but not limited to) Workstation Migration Assistant and USMT XML Builder. Both are now out of date, but up-to-date commercial GUIs for USMT exist.
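The LoadState invocation referenced above might look like the following. This is a hypothetical sketch: the server, share, and store names are invented, but the switches are documented USMT options (/i: includes a rules file, /v: sets verbosity, /c continues on non-fatal errors, /lac and /lae create and enable migrated local accounts).

```
loadstate.exe \\fileserver\migration\userstore ^
    /i:migapp.xml /i:miguser.xml ^
    /v:13 /c /lac /lae
```

A matching ScanState command would populate the same store, e.g. `scanstate.exe \\fileserver\migration\userstore /i:migapp.xml /i:miguser.xml /o /c /v:13`, where the documented /o switch permits overwriting an existing store.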
https://en.wikipedia.org/wiki/User_State_Migration_Tool
User advocacy is a user experience design principle concerned with representing user perspectives in product design. One definition states that user advocacy is the practice of using designated spokespeople to facilitate interaction between users and the designers of the products they use. Another more broadly defines user advocacy as the practice of advocating for the user, regardless of whether one is a user, designer, developer, researcher, or manager. User advocates typically suspend their own personal or functional point of view and attempt to see the product through the eyes, and the experience, of its user. The ability to take on the user's point of view, without personal judgement or bias, allows the advocate to see things as the user might see them, enabling them to make observations and recommendations that improve the user experience. Similarly, some user advocates take a neutral, scientific point of view, observing and collecting data from users that suggest how the product or user experience could be changed or improved in a way that users would prefer or benefit from. User advocates may be scientists or engineers who use the scientific method to make improvements that result in increased ease of use, time savings, improved levels of user satisfaction, or other user-centered metrics. An advocate is a person who argues for or supports a cause or policy. A user advocate could be a person , as in a research study; a persona , a fictional character that represents a typical customer or segment of the customer base; or a community , such as the participants of a public discussion board. The idea of user advocates originated in large-scale software development projects. In such teams, there is a consensus that the roles of the product designer (or systems designer ) and the user experience (UX) analyst can no longer be effectively performed by the same individuals, due to inherent conflicts of interest. An example of such a conflict of interest would be a designer having to defend their own design decision about a product improvement against an alternative decision that could lead to a better user experience but would negate the designer's original decision about how to improve the product. On such large-scale projects, designer interaction with actual users is often viewed as expensive and inefficient; an alternative view suggests that some designers and some companies simply do not value interaction with actual users. Some experts have suggested that designers and developers may even have contempt for actual users. In large-scale projects there is often a practical necessity for the division of labor, but that should not be used to justify a lack of user involvement at each step of the software development life cycle. The person responsible for designing a product may be far removed from its development, and even further removed from traditional user experience studies, which analyze how users actually interact with a product versus how it was designed to be used. The degrees of separation inherent in these large-scale projects can create a disconnect, in which ambitious designers risk creating ineffective products that they prefer to design instead of products that users want and need. Care must be taken to involve the user at each step of the design cycle, lest the needs of the user be ignored and a sub-standard product design result.
The idea of a consultant with expertise working with a client group has theoretical origins in models of process consulting, [ 1 ] which focus on developing close relationships to work out joint solutions. One practice of user advocacy asks designers to define their users collectively, as one person or persona, and to attach the common attributes and characteristics of their typical users, taking into consideration the many types of use-case scenarios users typically encounter, to aid in anticipating the needs and expectations of a user. From this persona and its associated traits, tendencies and use-case scenarios, functional requirements and user expectations can be derived, providing refined specifications for product improvements to be developed. Such a practice essentially channels product designers' access to users by representing user needs in the form of a persona or fictional character. Technical communicators act as user advocates in the design of information products. [ 2 ] They mediate between subject-matter experts and users through user personas, usability testing , and user-centered design , advocating for users against three standards for product design. Risk communication is a user advocacy subfield centered on communicating and mitigating risks in products and environments. User advocacy also helps make the effects of design decisions easier to measure, because the traits and characteristics of user personas often consist of crowdsourced suggestions from actual users. Suggestions for improvement are generalized and prioritized according to frequency, severity, or alignment with corporate initiatives. As a result, design decisions become less about the designer and more about fulfilling the needs of users, as the suggestions for improvement are typically provided directly by the users themselves. [ 3 ]
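As a loose illustration of how a persona's attributes can be captured as structured data from which requirements are later derived, here is a minimal sketch in Python; all field names and values are hypothetical, not drawn from any particular methodology.

```python
from dataclasses import dataclass, field

# A minimal persona record of the kind used to channel user needs
# into design requirements. Fields are illustrative assumptions.
@dataclass
class Persona:
    name: str
    goals: list[str] = field(default_factory=list)
    frustrations: list[str] = field(default_factory=list)
    use_cases: list[str] = field(default_factory=list)

commuter = Persona(
    name="Casual commuter",
    goals=["check the timetable quickly"],
    frustrations=["too many taps to reach the timetable"],
    use_cases=["one-handed use on a moving train"],
)

# Each frustration suggests a candidate requirement for designers.
for issue in commuter.frustrations:
    print(f"Requirement candidate: address '{issue}'")
```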
https://en.wikipedia.org/wiki/User_advocacy
User behavior analytics ( UBA ) or user and entity behavior analytics ( UEBA ) [ 1 ] is the concept of analyzing the behavior of users, subjects, visitors, etc. for a specific purpose. [ 2 ] It allows cybersecurity tools to build a profile of each individual's normal activity by looking at patterns of human behavior , and then to highlight deviations from that profile (anomalies) that may indicate a potential compromise. [ 3 ] [ 4 ] [ 5 ] The reason for using UBA, according to Johna Till Johnson of Nemertes Research, is that " security systems provide so much information that it is tough to uncover information that truly indicates a potential for a real attack. Analytics tools help make sense of the vast amount of data that SIEM , IDS /IPS, system logs , and other tools gather. UBA tools use a specialized type of security analytics that focuses on the behavior of systems and the people using them. UBA technology first evolved in the field of marketing, to help companies understand and predict consumer- buying patterns . But as it turns out, UBA can be extraordinarily useful in the security context too." [ 6 ] The "E" in UEBA extends the analysis to activities of entities that are not necessarily directly tied to a user's specific actions but that can still correlate with a vulnerability, reconnaissance, intrusion, breach or exploit occurrence. [ 2 ] The term "UEBA" was coined by Gartner in 2015. UEBA tracks the activity of devices, applications, servers and data. UEBA systems produce more data and provide more complex reporting options than UBA systems. [ 1 ] UEBA tools differ from endpoint detection and response (EDR) capabilities in that UEBA's analytics focus on user behavior, whereas EDR's analytics focus on the endpoint . [ 3 ] Cybersecurity solutions such as EDR and XDR typically prioritize detection of and response to external threats once an incident has occurred; UEBA and insider risk management (IRM) solutions instead seek to prevent potential risks internally by analyzing employee behavior.
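To make the profiling idea concrete, here is a minimal sketch of baseline-and-deviation scoring in Python. The usernames, counts, and 3-sigma threshold are hypothetical, and production UEBA systems use far richer models than a single z-score; this only shows the "profile, then flag deviations" pattern described above.

```python
from statistics import mean, stdev

# Hypothetical per-user history: logins per day over the past week.
history = {
    "alice": [12, 15, 11, 14, 13, 12, 16],
    "bob":   [3, 4, 2, 3, 5, 4, 3],
}

def anomaly_score(user: str, todays_logins: int) -> float:
    """How many standard deviations today's count sits from the
    user's own historical baseline (a simple z-score)."""
    obs = history[user]
    mu, sigma = mean(obs), stdev(obs)
    return abs(todays_logins - mu) / sigma if sigma else 0.0

# Flag behaviour more than 3 sigma from the per-user norm.
for user, today in [("alice", 14), ("bob", 40)]:
    score = anomaly_score(user, today)
    if score > 3:
        print(f"ALERT: {user} is {score:.1f} sigma from baseline")
```

The point of the per-user baseline is that 40 logins is alarming for "bob" but would be unremarkable for an account that routinely logs in that often.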
https://en.wikipedia.org/wiki/User_behavior_analytics
User innovation refers to innovation by intermediate users (e.g. user firms ) or consumer users (individual end-users or user communities ), rather than by suppliers (producers or manufacturers ). [ 1 ] This concept is closely aligned to co-design and co-creation , and has been shown to result in more innovative solutions than traditional consultation methodologies. [ 2 ] Eric von Hippel [ 3 ] and others [ 4 ] [ 5 ] [ 6 ] observed that many products and services are actually developed, or at least refined, by users, at the site of implementation and use. These ideas are then moved back into the supply network. This is because products are developed to meet the widest possible need; when individual users face problems that the majority of consumers do not, they have no choice but to develop their own modifications to existing products, or entirely new products, to solve their issues. Often, user innovators will share their ideas with manufacturers in hopes of having them produce the product, a process called free revealing. However, user innovators also found their own firms to commercialize their innovations and generate new markets, a process called "consumer-led market emergence." For example, research on how users innovated in multiple boardsports shows that some users capitalized on their innovations, founding firms in sports that became global markets. [ 7 ] Based on research on the evolution of Internet technologies and open source software, Ilkka Tuomi ( Tuomi 2002 ) further highlighted the point that users are fundamentally social. User innovation, therefore, is also socially and socio-technically distributed innovation. According to Tuomi, [ 8 ] key uses are often unintended uses invented by user communities that reinterpret and reinvent the meaning of emerging technological opportunities. The existence of user innovation, for example by users of industrial robots rather than the manufacturers of robots ( Fleck 1988 ), is a core part of the argument against the Linear Innovation Model , i.e. the idea that innovation comes from research and development, is then marketed, and 'diffuses' to end-users. Instead, innovation is a non-linear process involving innovations at all stages. [ 9 ] In 1986 Eric von Hippel introduced the lead user method, which can be used to systematically learn about user innovation in order to apply it in new product development . In 2007 another specific type of user innovator, the creative consumer, was introduced: consumers who adapt, modify, or transform a proprietary offering, as opposed to creating completely new products. [ 10 ] User innovation has a number of degrees: innovation of use, [ 11 ] innovation in services, innovation in the configuration of technologies, and finally the innovation of novel technologies themselves. While most user innovation is concentrated in the use and configuration of existing products and technologies, and is a normal part of long-term innovation, new technologies that are easier for end-users to change and innovate with, together with new channels of communication, are making it much easier for user innovation to occur and have an impact. Recent research has focused on Web-based forums that facilitate user (or customer) innovation. Referred to as virtual customer environments , these forums help companies partner with their customers in various phases of product development as well as in other value creation activities.
For example, Threadless , a T-shirt manufacturing company, relies on the contribution of online community members in the design process. The community includes a group of volunteer designers who submit designs and vote on the designs of others. In addition to free exposure, designers are provided monetary incentives including a $2,500 base award as well as a percentage of T-shirt sales. These incentives allow Threadless to encourage continual user contribution. [ 12 ]
https://en.wikipedia.org/wiki/User_innovation
In the industrial design field of human–computer interaction , a user interface ( UI ) is the space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, while the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems , hand tools , heavy machinery operator controls and process controls. The design considerations applicable when creating user interfaces are related to, or involve, such disciplines as ergonomics and psychology . Generally, the goal of user interface design is to produce a user interface that makes it easy, efficient, and enjoyable (user-friendly) to operate a machine in the way which produces the desired result (i.e. maximum usability ). This generally means that the operator needs to provide minimal input to achieve the desired output, and also that the machine minimizes undesired outputs to the user. User interfaces are composed of one or more layers, including a human–machine interface ( HMI ) that typically interfaces machines with physical input hardware (such as keyboards, mice, or game pads) and output hardware (such as computer monitors , speakers, and printers ). A device that implements an HMI is called a human interface device (HID). User interfaces that dispense with the physical movement of body parts as an intermediary step between the brain and the machine use no input or output devices except electrodes alone; they are called brain–computer interfaces (BCIs) or brain–machine interfaces (BMIs). Other terms for human–machine interfaces are man–machine interface ( MMI ) and, when the machine in question is a computer, human–computer interface . Additional UI layers may interact with one or more human senses, including: tactile UI ( touch ), visual UI ( sight ), auditory UI ( sound ), olfactory UI ( smell ), equilibria UI ( balance ), and gustatory UI ( taste ). Composite user interfaces ( CUIs ) are UIs that interact with two or more senses. The most common CUI is a graphical user interface (GUI), which is composed of a tactile UI and a visual UI capable of displaying graphics . When sound is added to a GUI, it becomes a multimedia user interface (MUI). There are three broad categories of CUI: standard , virtual and augmented . Standard CUIs use standard human interface devices like keyboards, mice, and computer monitors. When the CUI blocks out the real world to create a virtual reality , the CUI is virtual and uses a virtual reality interface . When the CUI does not block out the real world and creates augmented reality , the CUI is augmented and uses an augmented reality interface . When a UI interacts with all human senses, it is called a qualia interface, named after the theory of qualia . [ citation needed ] CUIs may also be classified by how many senses they interact with, as either an X-sense virtual reality interface or an X-sense augmented reality interface, where X is the number of senses interfaced with. For example, a Smell-O-Vision is a 3-sense (3S) standard CUI with visual display, sound and smells; when a virtual reality interface interfaces with smells and touch, it is said to be a 4-sense (4S) virtual reality interface; and when an augmented reality interface interfaces with smells and touch, it is said to be a 4-sense (4S) augmented reality interface.
The user interface or human–machine interface is the part of the machine that handles the human–machine interaction. Membrane switches, rubber keypads and touchscreens are examples of the physical part of the human–machine interface which we can see and touch. [ 1 ] In complex systems, the human–machine interface is typically computerized; the term human–computer interface refers to this kind of system. In the context of computing, the term typically extends as well to the software dedicated to controlling the physical elements used for human–computer interaction . The engineering of human–machine interfaces is enhanced by considering ergonomics ( human factors ). The corresponding disciplines are human factors engineering (HFE) and usability engineering (UE), which is part of systems engineering . Tools used for incorporating human factors in interface design are developed based on knowledge of computer science , such as computer graphics , operating systems , and programming languages . The expression graphical user interface is now commonly used for the human–machine interface on computers, as nearly all of them now use graphics. [ citation needed ] Multimodal interfaces allow users to interact using more than one modality of user input. [ 2 ] There is a difference between a user interface and an operator interface or a human–machine interface (HMI). In science fiction , HMI is sometimes used to refer to what is better described as a direct neural interface . However, this latter usage is seeing increasing application in the real-life use of (medical) prostheses —the artificial extensions that replace missing body parts (e.g., cochlear implants ). [ 7 ] [ 8 ] In some circumstances, computers might observe the user and react according to their actions without specific commands. A means of tracking parts of the body is required, and sensors noting the position of the head, direction of gaze and so on have been used experimentally. This is particularly relevant to immersive interfaces . [ 9 ] [ 10 ] The history of user interfaces can be divided into phases according to the dominant type of user interface: the batch era, the command-line era, and the graphical-interface era. In the batch era, computing power was extremely scarce and expensive. User interfaces were rudimentary. Users had to accommodate computers rather than the other way around; user interfaces were considered overhead, and software was designed to keep the processor at maximum utilization with as little overhead as possible. The input side of the user interfaces for batch machines was mainly punched cards or equivalent media like paper tape . The output side added line printers to these media. With the limited exception of the system operator's console , human beings did not interact with batch machines in real time at all. Submitting a job to a batch machine involved first preparing a deck of punched cards that described a program and its dataset. The program cards were not punched on the computer itself but on keypunches , specialized, typewriter-like machines that were notoriously bulky, unforgiving, and prone to mechanical failure. The software interface was similarly unforgiving, with very strict syntaxes designed to be parsed by the smallest possible compilers and interpreters. Once the cards were punched, one would drop them in a job queue and wait. Eventually, operators would feed the deck to the computer, perhaps mounting magnetic tapes to supply another dataset or helper software.
The job would generate a printout, containing final results or an abort notice with an attached error log. Successful runs might also write a result on magnetic tape or generate some data cards to be used in a later computation. The turnaround time for a single job often spanned entire days. If one was very lucky, it might be hours; there was no real-time response. But there were worse fates than the card queue; some computers required an even more tedious and error-prone process of toggling in programs in binary code using console switches. The very earliest machines had to be partly rewired to incorporate program logic into themselves, using devices known as plugboards . Early batch systems gave the currently running job the entire computer; program decks and tapes had to include what we would now think of as operating system code to talk to I/O devices and do whatever other housekeeping was needed. Midway through the batch period, after 1957, various groups began to experiment with so-called " load-and-go " systems. These used a monitor program which was always resident on the computer. Programs could call the monitor for services. Another function of the monitor was to do better error checking on submitted jobs, catching errors earlier and more intelligently and generating more useful feedback to the users. Thus, monitors represented the first step towards both operating systems and explicitly designed user interfaces. Command-line interfaces ( CLIs ) evolved from batch monitors connected to the system console. Their interaction model was a series of request-response transactions, with requests expressed as textual commands in a specialized vocabulary. Latency was far lower than for batch systems, dropping from days or hours to seconds. Accordingly, command-line systems allowed the user to change their mind about later stages of the transaction in response to real-time or near-real-time feedback on earlier results. Software could be exploratory and interactive in ways not possible before. But these interfaces still placed a relatively heavy mnemonic load on the user, requiring a serious investment of effort and learning time to master. [ 11 ] The earliest command-line systems combined teleprinters with computers, adapting a mature technology that had proven effective for mediating the transfer of information over wires between human beings. Teleprinters had originally been invented as devices for automatic telegraph transmission and reception; they had a history going back to 1902 and had already become well-established in newsrooms and elsewhere by 1920. In reusing them, economy was certainly a consideration, but psychology and the rule of least surprise mattered as well; teleprinters provided a point of interface with the system that was familiar to many engineers and users. The widespread adoption of video-display terminals (VDTs) in the mid-1970s ushered in the second phase of command-line systems. These cut latency further, because characters could be thrown on the phosphor dots of a screen more quickly than a printer head or carriage can move. They helped quell conservative resistance to interactive programming by cutting ink and paper consumables out of the cost picture, and were to the first TV generation of the late 1950s and 60s even more iconic and comfortable than teleprinters had been to the computer pioneers of the 1940s. 
Just as importantly, the existence of an accessible screen—a two-dimensional display of text that could be rapidly and reversibly modified—made it economical for software designers to deploy interfaces that could be described as visual rather than textual. The pioneering applications of this kind were computer games and text editors; close descendants of some of the earliest specimens, such as rogue (6) and vi (1), are still a live part of Unix tradition. In 1985, with the beginning of Microsoft Windows and other graphical user interfaces , IBM created what is called the Systems Application Architecture (SAA) standard, which includes the Common User Access (CUA) derivative. CUA successfully created what we know and use today in Windows, and most of the more recent DOS or Windows console applications use that standard as well. It defined that a pull-down menu system should be at the top of the screen and a status bar at the bottom, and that shortcut keys should stay the same for all common functionality (F2 to Open, for example, would work in all applications that followed the SAA standard). This greatly helped the speed at which users could learn an application, so it caught on quickly and became an industry standard. [ 12 ] Primary methods used in interface design include prototyping and simulation. Typical human–machine interface design consists of the stages of interaction specification, interface software specification, and prototyping. In broad terms, interfaces generally regarded as user-friendly, efficient, or intuitive are typified by one or more recurring qualities. The principle of least astonishment (POLA) is a general principle in the design of all kinds of interfaces. It is based on the idea that human beings can only pay full attention to one thing at one time, [ 20 ] leading to the conclusion that novelty should be minimized. If an interface is used persistently, the user will unavoidably develop habits for using the interface. The designer's role can thus be characterized as ensuring the user forms good habits. If the designer is experienced with other interfaces, they will similarly develop habits, and often make unconscious assumptions regarding how the user will interact with the interface. [ 20 ] [ 21 ] Peter Morville designed the User Experience Honeycomb framework in 2004 while leading user experience design work. The framework was created to guide user interface design and acted as a guideline for many web development students for a decade. [ 23 ]
https://en.wikipedia.org/wiki/User_interface
Plasticity is the capacity of a user interface to withstand variations of both the system's physical characteristics and the environment while preserving usability . [ 1 ] A so-called " responsive web site " is an instance of a plastic user interface.
https://en.wikipedia.org/wiki/User_interface_plasticity
User research focuses on understanding user behaviors, needs and motivations through interviews, surveys, usability evaluations and other feedback methodologies. [ 1 ] It is used to understand how people interact with products and to evaluate whether design solutions meet their needs. [ 2 ] This field of research aims at improving the user experience (UX) of products, services, or processes [ 3 ] by incorporating experimental and observational research [ 4 ] methods to guide the design, development, and refinement of a product. User research is used to improve a multitude of products, including websites, mobile phones, medical devices, banking and government services. It is an iterative process that can be used at any time during product development and is a core part of user-centered design . [ 5 ] Data from users can be used to identify a problem for which solutions may be proposed. From these proposals, design solutions are prototyped and then tested with the target user group, even before the product is launched in the market. This process is repeated as many times as necessary. [ 6 ] After the product is launched, user research can also be used to understand how to improve it or to create a new solution. User research also helps to uncover problems users face when they interact with a product and to turn them into actionable insights. User research is beneficial in all stages of product development, from ideation to market release. [ 7 ] Mike Kuniavsky further notes that it is "the process of understanding the impact of design on an audience." The types of user research one can or should perform depend on the type of site, system or app being developed, the timeline, and the environment. [ 1 ] Professionals who practice user research often use the job title 'user researcher'. User researchers are becoming increasingly common, especially in the digital and service industries and even in government. [ 8 ] User researchers often work alongside designers, engineers, and programmers in all stages of product development. In the field of design, research is typically approached with an empathetic perspective in order to humanize data collected about people; this can also be referred to as a human-centred approach to problem-solving. User research aims to uncover the barriers or frustrations users face as they interact with products, services, or systems. A distinctive facet of user research is the branch of user experience (UX) research, which focuses on the feelings, thoughts, and situations users go through as they interact with products, services, and systems. Many businesses focus on creating enjoyable experiences for their users; however, not including users in the development process can result in failed products. [ 9 ] Involving users in the development process helps design better products, adapt products to changes in behaviors and needs, and design the right products and desirable experiences for the users. [ 9 ] User research helps businesses and organizations improve their products and services by helping them better understand their users. [ 9 ] There are various benefits to conducting user research beyond designing better products and services. Understanding what people want before releasing products in the market helps save money. [ 9 ] Additionally, user research helps to gather data that can help stakeholders make decisions based on evidence rather than opinions. User research is interrelated with the field of design.
In many cases, someone working in the field can take on both roles of researcher and designer. Alternatively, these roles may be separated, and teams of designers and researchers must collaborate throughout their projects. [ 10 ] User research is commonly applied across a wide range of industries and product domains. Research can be pure or applied; user research utilizes applied research to make better products. There are many ways of classifying research; Erika Hall, in her book Just Enough Research, mentions four ways of classifying user research. [ 5 ] Generative research, or exploratory research, is done to understand and define the problems to solve for users in the first place. It can be used during the initial stages of product development to create new solutions, or it can be applied to an existing product to identify improvements and enhancements. Interviews , observational studies , secondary research , etc., are some of the common methods used during this phase. These methods are used to answer broad and open questions, where the aim is to identify problems users might be experiencing. Usually, the data collected through generative research must be synthesized in order to formulate the problems to be solved, for whom, and why it is important. [ 11 ] Descriptive research, or explanatory research, helps to define the characteristics of the problem and populations previously identified. It is used to understand the context of the problem and the context in which users have the problem. The methods in this phase can be very similar to those used in the generative research phase; however, this phase helps to identify the best way to solve a problem, as opposed to which problem to solve. [ 5 ] During this phase, experts in the problem area are consulted to fill knowledge gaps that will be required to create a solution. This phase is required to avoid making assumptions about the problem or people that might otherwise result in a biased solution. The aim of this phase is to get a good understanding of the problem in order to arrive at the right solution ideas. Evaluative research is used to test the solution ideas to ensure they work and solve the problems identified. Ideas are usually tested by representatives of the target population. This is an iterative process and can be done on prototype versions of the solution. [ 11 ] The commonly used method in this phase is usability testing , which focuses on measuring whether the solution addresses the intended problem. [ 5 ] Users can also be asked to provide their subjective opinion about the solution, or they can be given a set of tasks to observe whether the solution is intuitive and easy to use. In short, evaluative research assesses whether the solution fits the problem and whether the right problems were addressed. [ 12 ] [ 11 ] Causal research typically answers why something is happening. Once the solution is up and running, one can observe how people are using it in real time and understand why it is or is not used the way the solution was envisioned. One of the common methods used in this phase is A/B testing , illustrated in the sketch below. [ 5 ] [ 13 ] The user research process follows a traditional iterative design approach that is common to user-centered design and design thinking . [ 14 ] User research can be applied anywhere in the design cycle. Typically, software projects start conducting user research at the requirement-gathering stage to involve users right from the start of the project.
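As a loose illustration of the causal-research phase mentioned above, the following Python sketch evaluates a hypothetical A/B test with a standard two-proportion z-test. All counts are invented, and real studies also require sample-size planning and pre-registered hypotheses; this only shows the arithmetic of comparing two variants.

```python
import math

# Hypothetical A/B test: did variant B improve task completion?
n_a, conv_a = 1000, 120   # users and completions, variant A
n_b, conv_b = 1000, 150   # users and completions, variant B

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0

# Two-proportion z-test statistic.
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
print(f"z = {z:.2f}")  # |z| > 1.96 -> significant at the 5% level
```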
There are various design models that can be used in an organization. A wide range of research methods is used in the field of user research. The Nielsen Norman Group has provided a framework to better understand when to use which method; it is helpful to view methods along a three-dimensional framework whose axes are attitudinal versus behavioral, qualitative versus quantitative, and the context of use (from natural use to scripted tasks, moderated or unmoderated). [ 15 ] User research deliverables help summarize research and make insights digestible to the audience. There are multiple formats for presenting research deliverables; regardless of the format, the deliverable has to be engaging, actionable, and catered to the audience. [ 26 ] In 2018, a group of like-minded professionals in the user research industry called the ResearchOps Community defined a new practice called ResearchOps to operationalize user research practice in companies. [ 27 ] ResearchOps is similar to DevOps , DesignOps and SalesOps, where the goal is to support practitioners by removing some operational tasks from their daily work. [ 28 ] The goal of ResearchOps is to enable researchers to be efficient in their roles by saving the time taken for data collection and for processing data for analysis. ResearchOps aims to support researchers in all facets of user research: planning, conducting, analyzing, and maintaining user research data. [ 29 ] The ResearchOps Community defines it as the people, mechanisms, and strategies that set user research in motion – providing the roles, tools and processes needed to support researchers in delivering and scaling the impact of the craft across an organization. [ 27 ] ResearchOps focuses on standardizing research methods across the organization; providing support documentation like scripts, templates, and consent forms to ensure quick application of research; managing participants and recruitment in studies; providing governance; having oversight of research ethics; and ensuring research insights are accessible to the organization. [ 28 ] [ 27 ] In private companies there are no clear regulations or ethics-committee approvals for user research, unlike in academic research. [ 30 ] [ 31 ] In 2014, Facebook conducted an emotional-contagion experiment in which it manipulated the news feeds of 689,000 users by showing either more positive or more negative content than average. [ 32 ] [ 33 ] [ 34 ] The experiment lasted for a week, and Facebook found that users who were shown positive posts posted more positive content, while users who were shown negative posts posted sadder content than previously. [ 30 ] This study was criticized because the users were not presented with informed consent and were unaware that they were part of the experiment. [ 34 ] However, the study appeared to be legal under Facebook's terms and conditions, because Facebook's users relinquish the use of their data for data analysis, testing and research. [ 33 ] The criticism was mainly due to the manipulative nature of the study, the harm caused to the participants who were shown negative content, and the lack of explicit informed consent. [ 32 ] Since then, Facebook has had an Institutional review board (IRB); however, not all studies undergo ethics approval. [ 35 ] User researchers often gather and analyze data from their users; however, such activity does not fall under the legal definition of research according to the U.S. Department of Health and Human Services ' requirements for the Common Rule (46.102.l).
[ 36 ] According to this definition, research is a " systematic investigation, including research development, testing, and evaluation, designed to develop or contribute to generalizable knowledge ". [ 36 ] Most user research studies do not contribute to generalizable knowledge; companies instead use the data to improve their own products and offerings. [ 31 ] Design research organizations like IDEO have compiled a guidebook for conducting ethical design research. [ 37 ] Its principles are respect for users, responsibility to protect people's interests, and honesty in truthful and timely communication. [ 37 ] However, no official framework or process exists for the ethical approval of user research in companies. [ 38 ]
https://en.wikipedia.org/wiki/User_research
Radioactive sources are used for logging formation parameters. Radioactive tracers, along with the other substances in hydraulic-fracturing fluid, are sometimes used to determine the injection profile and the location of fractures created by hydraulic fracturing. [ 1 ] Sealed radioactive sources are routinely used in formation evaluation of both hydraulically fractured and non-fracked wells. The sources are lowered into the borehole as part of the well logging tools and are removed from the borehole before any hydraulic fracturing takes place. Measurement of formation density is made using a sealed caesium-137 source, which bombards the formation with high-energy gamma rays . The attenuation of these gamma rays gives an accurate measure of formation density; this has been a standard oilfield tool since 1965. Another source is the americium–beryllium (Am–Be) neutron source, used in evaluating the porosity of the formation and in use since 1950. In a drilling context, these sources are used by trained personnel, and the radiation exposure of those personnel is monitored. Usage is governed by International Atomic Energy Agency (IAEA) guidelines, European Union protocols, and licensing by the Environment Agency in the UK. Licenses are required for access, transport, and use of radioactive sources. These sources are very large, and the potential for their use in a 'dirty bomb' means security issues are treated as important. There is no risk to the public or to water supplies under normal usage. The sources are transported to a well site in shielded containers, so exposure to the public is very low, much lower than the background radiation dose in one day. The oil and gas industry in general uses unsealed radioactive solids (powder and granular forms), liquids and gases to investigate or trace the movement of materials. The most common use of these radiotracers is at the well head for the measurement of flow rate for various purposes. A 1995 study found that radioactive tracers were used in over 15% of stimulated oil and gas wells. [ 2 ] Use of these radioactive tracers is strictly controlled. It is recommended that the radiotracer be chosen to have readily detectable radiation, appropriate chemical properties, and a half-life and toxicity level that will minimize initial and residual contamination. [ 3 ] Operators must ensure that licensed material is used, transported, stored, and disposed of in such a way that members of the public will not receive more than 1 mSv (100 mrem) in one year, and that the dose in any unrestricted area will not exceed 0.02 mSv (2 mrem) in any one hour. They are required to secure stored licensed material from access, removal, or use by unauthorized personnel, and to control and maintain constant surveillance of licensed material when it is in use rather than in storage. [ 4 ] Federal and state nuclear regulatory agencies keep records of the radionuclides used. [ 4 ] As of 2003 the isotopes antimony-124 , argon-41 , cobalt-60 , iodine-131 , iridium-192 , lanthanum-140 , manganese-56 , scandium-46 , sodium-24 , silver-110m , technetium-99m , and xenon-133 were most commonly used by the oil and gas industry because they are easily identified and measured. [ 3 ] [ 5 ] Bromine-82 , carbon-14 , hydrogen-3 and iodine-125 are also used.
[ 3 ] [ 4 ] Examples of the amounts used are documented by the regulators. [ 4 ] In hydraulic fracturing, plastic pellets coated with silver-110m, or sand labelled with iridium-192, may be added to the proppant when it is required to evaluate whether a fracturing process has penetrated rocks in the pay zone. [ 4 ] Some radioactivity may be brought to the surface at the well head during testing to determine the injection profile and the location of fractures. Typically this uses very small (50 kBq) cobalt-60 sources, and dilution factors are such that the activity concentrations will be very low in the topside plant and equipment. [ 3 ] The NRC and approved state agencies regulate the use of injected radionuclides in hydraulic fracturing in the United States . [ 4 ] The US EPA sets radioactivity standards for drinking water. [ 6 ] Federal and state regulators do not require sewage treatment plants that accept gas-well wastewater to test for radioactivity. In Pennsylvania, where the hydraulic-fracturing drilling boom began in 2008, most drinking-water intake plants downstream of those sewage treatment plants have not tested for radioactivity since before 2006. [ 7 ] The EPA has asked the Pennsylvania Department of Environmental Protection to require community water systems in certain locations, and centralized wastewater treatment facilities, to conduct testing for radionuclides. [ 7 ] [ 8 ] [ 9 ]
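As a rough illustration of the density measurement described earlier in this entry, gamma attenuation can be modeled with the Beer–Lambert law. The sketch below is a simplified, illustrative calculation only: real density logging tools are calibrated empirically and account for Compton-scattering geometry, and all numerical values here are assumed for the example.

```python
import math

# Simplified Beer-Lambert attenuation: I = I0 * exp(-mu_m * rho * x)
# mu_m: mass attenuation coefficient (cm^2/g), rho: density (g/cm^3),
# x: path length through the formation (cm). Values are illustrative.
I0 = 1.0e6    # source-side count rate (assumed)
mu_m = 0.077  # rough mass attenuation coefficient for Cs-137 gammas in rock
x = 30.0      # assumed source-detector spacing

def density_from_counts(I: float) -> float:
    """Invert the attenuation law to estimate formation density."""
    return -math.log(I / I0) / (mu_m * x)

print(f"{density_from_counts(2.0e3):.2f} g/cm^3")  # ~2.69 for this geometry
```

The key idea is simply that fewer gamma counts at the detector imply a denser formation between source and detector.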
https://en.wikipedia.org/wiki/Uses_of_radioactivity_in_oil_and_gas_wells
Useware is a term introduced in 1998 to encompass all hardware and software components of a technical system designed for interactive use. It focuses on technological design in relation to human abilities and needs. A promising method [ 1 ] of designing technical products is to understand human abilities and limitations and to tailor the technology to them. Today, useware has development needs of its own, sometimes greater than those in classical development fields. [ 2 ] Usability is therefore increasingly recognized as a value-adding factor; often, the useware of machines with similar or equal technical functions is the only characteristic that sets them apart. [ 3 ] Similar to software engineering , useware engineering implies the standardized production of useware by engineers and the associated processes. The aim of useware engineering is to develop interfaces that are easy to understand and efficient to use, tailored to human work tasks, and that represent machine functionality without overemphasizing it. Systematic useware engineering therefore aims to guarantee high usability based on the actual tasks of the users; however, it requires an approach comprising the active and iterative participation of different groups of people. The professional associations GfA (Gesellschaft für Arbeitswissenschaft), GI ( Gesellschaft für Informatik ), VDE-ITG (the Information Technology Society in the VDE), and VDI/VDE GMA (the Society for Measurement and Automatic Control in the VDI/VDE) agreed in 1998 on defining useware as a new term. The term "useware" was intentionally selected in linguistic analogy to hardware and software. Consequently, useware engineering developed in a way similar to the development of engineering processes, reinforcing the principal demand for the structured development of user-centered user interfaces , as advocated by Ben Shneiderman . [ 4 ] After many years of function-oriented development, human abilities and needs are thus brought into focus. The useware development process involves the following steps: analysis, structural design, design, realization, and evaluation. These steps should not be considered in isolation but rather as overlapping stages. Maintaining continuity throughout the process and employing appropriate tools, such as those based on the Extensible Markup Language (XML), helps prevent information loss and media breaks. Understanding that humans have varied learning, thinking, and working styles is crucial when creating a user interface. The first step is to analyze users, their tasks, and their work settings to determine what they really need. This analysis is key to designing an interface focused on both the user and the task, treating humans and machines as partners in interaction. Techniques like structured interviews, observations, and card sorting help build a full picture of users and their behaviour, which is essential for fully grasping their tasks, user groups, and work environments. Engaging multiple experts, such as engineers , computer scientists , and psychologists , is crucial, particularly in the analysis phase, to generate task models for documentation and interface design, which inherently include a functional model of the process and/or machine.
[ 5 ] The results of the analysis phase inform the structuring phase, in which a platform-independent abstract use model [ 6 ] is developed from this information. This use model serves as the foundation for the future user interface, providing a formal representation of use contexts, tasks, and the information required for the machine's functionality. Modeled using the Useware Markup Language (useML) within a model-based development environment , the use model defines the basic structure of the interface. [ 7 ] During the structuring phase, a hardware platform for the useware must be chosen in parallel. This selection considers both the environmental demands of machine usage, such as pollution, noise, and vibration, and the users' requirements, including display size and optimal interaction devices; economic factors also play a role. For extensively networked models or those comprising numerous elements, adequate display size is essential for visualizing information structures. These considerations are influenced by user groups and usage contexts. [ 8 ] During prototyping, developers need to choose a development tool . If the selected environment allows imports, the developed use model can be brought in, facilitating the creation of the user interface; this typically involves refining dynamic components and dialogue design. There is often a disconnect between the structuring and fine-design phases, and the current array of development tools offers a broad range of notations. Developers must represent the useware through prototypes, such as paper prototypes or Microsoft PowerPoint prototypes. Continuous evaluation throughout the development process enables the early detection of product issues, thereby reducing development costs. [ 9 ] It is crucial to assess not only design aspects but also structural elements, like navigational concepts, during evaluation: research indicates that 60% of all usage errors stem from structural deficiencies rather than poor design. Consequently, the evaluation phase must be viewed as a cross-sectional task spanning the entire development process, and integrating users into product development is paramount.
https://en.wikipedia.org/wiki/Useware
In analytic philosophy , [ 1 ] a fundamental distinction is made between the use of a term and the mere mention of it. [ 2 ] [ 3 ] Many philosophical works have been "vitiated by a failure to distinguish use and mention." [ 2 ] The distinction can sometimes be pedantic, especially in simple cases where it is obvious. [ 2 ] [ 4 ] The distinction between use and mention can be illustrated with the word "cheese": [ 2 ] [ 3 ] compare "Cheese is derived from milk" with " 'Cheese' is derived from the Old English word ċēse ". The first sentence is a statement about the substance called "cheese": it uses the word "cheese" to refer to that substance. The second is a statement about the word "cheese" as a signifier : it mentions the word without using it to refer to anything other than itself. In written language, mentioned words or phrases often appear between single or double quotation marks or in italics . In philosophy, single quotation marks are typically used, while in other fields (such as linguistics) italics are more common. [ 5 ] Some style authorities, such as Strunk and White , emphasize that mentioned words or phrases should be visually distinct, while used words or phrases carry no typographic markings. [ 6 ] The phenomenon of a term having different references in various contexts was referred to as suppositio (substitution) by medieval logicians. [ 7 ] A substitution describes how a term is substituted in a sentence based on its referent; a noun, for example, can be taken in different ways depending on what it stands for. The use–mention distinction is particularly significant in analytic philosophy . [ 8 ] Confusing use with mention can lead to misleading or incorrect statements, such as category errors . Self-referential statements also engage the use–mention distinction and are often central to logical paradoxes, such as Quine's paradox . In mathematics, the concept appears in Gödel's incompleteness theorem , where the diagonal lemma plays a crucial role. Stanisław Leśniewski employed the distinction extensively, noting the fallacies that can result from confusing it in Russell and Whitehead 's Principia Mathematica . [ 9 ] Donald Davidson argued that quotation cannot always be treated as mere mention, giving examples where quotations carry both use and mention functions. [ 10 ] Douglas Hofstadter explains the distinction as follows: [ 11 ] when a word is used to refer to something, it is being used ; when a word is quoted , so that the focus is on its surface aspects, such as typography or phonetics, it is being mentioned . Issues arise when a mention is itself mentioned: notating this with italics or repeated quotation marks can lead to ambiguity. [ 12 ] Some analytic philosophers have said the distinction "may seem rather pedantic". [ 2 ] In a 1977 response to analytic philosopher John Searle , Jacques Derrida described the distinction as "rather laborious and problematical". [ 4 ]
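The distinction has a rough analogue in programming, where an identifier is used to refer to its value while a quoted string merely mentions a word. A minimal Python illustration (the variable name and its value are, of course, invented for the example):

    cheese = "a dairy product made from milk"

    # Use: the identifier refers to its value.
    print(cheese)          # -> a dairy product made from milk

    # Mention: the quoted string refers to the word itself.
    print("cheese")        # -> cheese
    print(len("cheese"))   # -> 6 (a fact about the word, not the substance)

As in natural language, confusing the two (for example, asking for the length of the substance rather than the word) produces a category error.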
https://en.wikipedia.org/wiki/Use–mention_distinction
Usha Ranjan Ghatak (26 February 1931 – 18 June 2005) was an Indian synthetic organic chemist , stereochemist and the director of the Indian Association for the Cultivation of Science (IACS). [ 1 ] He was known for his contributions to developing novel protocols for the stereoselective synthesis of diterpenoids . [ 2 ] He was an elected fellow of the Indian Academy of Sciences [ 3 ] and the Indian National Science Academy . [ 4 ] The Council of Scientific and Industrial Research , the apex agency of the Government of India for scientific research, awarded him the Shanti Swarup Bhatnagar Prize for Science and Technology , one of the highest Indian science awards, in 1974, for his contributions to chemical sciences. [ 5 ] U. R. Ghatak was born on 26 February 1931 at Brahmanbaria , a town of historic importance in the undivided Bengal of British India (presently in Bangladesh), to Hem Ranjan Ghatak and Soudamini Devi, as one of their seven children. [ 1 ] He was schooled locally and, after passing the matriculation examination in 1947, completed his intermediate studies in Agartala in 1949. He took his graduate degree (BSc hons) in chemistry at Asutosh College and secured his master's degree from Rajabazar Science College in 1953, winning the Motilal Mullick Medal and the University Gold Medal for standing first in the examination. Subsequently, he enrolled for doctoral studies at the Indian Association for the Cultivation of Science (IACS) under the guidance of P. C. Dutta, a synthetic organic chemist, and obtained a PhD from Rajabazar Science College , Calcutta University in 1957. [ 4 ] He stayed at IACS for two more years before moving to the US for post-doctoral studies at three centres, viz. the University of Maine , the University of California, Berkeley and St. John's University . [ 1 ] He returned to IACS in 1963 to resume his career there and worked at the institute until his official retirement from service in 1996; in between, he served as the head of the department of organic chemistry (1977–89) and as the director (1989–96). [ 6 ] Later, he was associated with the Indian Institute of Chemical Biology as an INSA Senior Scientist. [ 4 ] Ghatak was married to Anindita and the couple lived in Kolkata. It was there that he died, succumbing to a massive heart attack, on 18 June 2005, at the age of 74, survived by his wife. [ 1 ] Ghatak's contributions were primarily in stereochemically controlled organic synthesis, and he was known for developing methodologies for the synthesis of polycarbocyclic diterpenoids and bridged-ring compounds. [ 7 ] His work on the four possible racemates of deoxypodocarpic acid, deisopropyl dehydroabietic acid and the corresponding 5-epimers reportedly clarified some of the stereochemical uncertainties that had existed until then. [ 1 ] He demonstrated the total synthesis of compounds related to gibberellins , a group of growth-regulating plant hormones. [ 8 ] The regio- and stereo-specific intramolecular alkylation rearrangements through diazoketones, as well as the new annulation reactions involving cationic and radical processes he developed, widened the understanding of free-radical cyclization chemistry. [ 4 ] Ghatak documented his researches in a book, A Century, 1876-1976 , [ 9 ] and in a number of articles published in peer-reviewed journals; [ 10 ] [ note 1 ] ResearchGate , an online article repository, has listed 148 of them. [ 11 ] He mentored several doctoral scholars in their researches and his works have been cited by several authors.
[ note 2 ] He was associated with journals such as the Indian Journal of Chemistry (Sec B), the Proceedings of the Indian Academy of Sciences (Chem Sci) and the Proceedings of the Indian National Science Academy as a member of their editorial boards, and served as a member of the Indian National Science Academy Council from 1994 to 1996. [ 4 ] The Council of Scientific and Industrial Research awarded Ghatak the Shanti Swarup Bhatnagar Prize , one of the highest Indian science awards, in 1974. [ 12 ] The Indian Academy of Sciences elected him a fellow in 1976 [ 3 ] and he became a fellow of the Indian National Science Academy in 1980. [ 4 ] The Chemical Research Society of India awarded him its Lifetime Achievement Award in 2003. [ 13 ] Among the several award orations he delivered were the Professor K. Venkataraman Endowment Lecture (1982), the Acharya P. C. Ray Memorial Lecture of the Indian Chemical Society (1985), the Professor N. V. Subba Rao Memorial Lecture (1986), the Prof. R. C. Shah Memorial Lecture of the Indian Science Congress Association (1986), the T. R. Sheshadri Memorial Lecture of Delhi University (1987), the Baba Kartar Singh Memorial Lecture of Panjab University (1990) and the S. Swaminathan Sixtieth Birthday Commemoration Lecture of the Indian National Science Academy (1994). [ 7 ] He was also an associate member of the Royal Society of Chemistry and the Chemical Society of London. [ 4 ] The Indian Association for the Cultivation of Science has instituted an annual oration, the Professor U. R. Ghatak Endowment Lecture, in his honor. [ 14 ]
https://en.wikipedia.org/wiki/Usha_Ranjan_Ghatak
In mathematics, particularly in the study of functions of several complex variables , Ushiki's theorem , named after S. Ushiki, states that certain well-behaved functions cannot have certain kinds of well-behaved invariant manifolds: a biholomorphic mapping F : ℂⁿ → ℂⁿ cannot have a 1-dimensional compact smooth invariant manifold . In particular, such a map cannot have a homoclinic connection or heteroclinic connection . Invariant manifolds typically appear as solutions of certain asymptotic problems in dynamical systems . The most common is the stable manifold or its kin, the unstable manifold. Ushiki's theorem was published in 1980. [ 1 ] The theorem appeared in print again several years later in a Russian journal, by an author apparently unaware of Ushiki's work. As a consequence, the standard map cannot have a homoclinic or heteroclinic connection. The practical consequence is that one cannot show the existence of a Smale's horseshoe in this system by a perturbation method starting from a homoclinic or heteroclinic connection. Nevertheless, Smale's horseshoe can be shown to exist in the standard map for many parameter values, based on crude but rigorous numerical calculations.
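Stated in display form (this is only a restatement of the claim above, not an additional result):

    \textbf{Theorem (Ushiki, 1980).}\quad
    \text{If } F : \mathbb{C}^n \to \mathbb{C}^n \text{ is biholomorphic, then } F
    \text{ admits no one-dimensional compact smooth invariant manifold;}
    \text{ in particular, } F \text{ has no homoclinic or heteroclinic connection.}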
https://en.wikipedia.org/wiki/Ushiki's_theorem
Using the Borsuk–Ulam Theorem: Lectures on Topological Methods in Combinatorics and Geometry is a graduate-level mathematics textbook in topological combinatorics . It describes the use of results in topology , and in particular the Borsuk–Ulam theorem , to prove theorems in combinatorics and discrete geometry . It was written by the Czech mathematician Jiří Matoušek , and published in 2003 by Springer-Verlag in their Universitext series ( ISBN 978-3-540-00362-5 ). [ 1 ] [ 2 ] The topic of the book is part of a relatively new field of mathematics crossing between topology and combinatorics, now called topological combinatorics . [ 2 ] [ 3 ] The starting point of the field, [ 3 ] and one of the central inspirations for the book, was a proof that László Lovász published in 1978 of a 1955 conjecture by Martin Kneser , according to which the Kneser graphs KG_{2n+k,n} have no graph coloring with k + 1 colors. Lovász used the Borsuk–Ulam theorem in his proof, and Matoušek gathers many related results, published subsequently, to show that this connection between topology and combinatorics is not just a proof trick but an area. [ 4 ] The book has six chapters. After two chapters reviewing the basic notions of algebraic topology and proving the Borsuk–Ulam theorem , the applications to combinatorics and geometry begin in the third chapter, with topics including the ham sandwich theorem , the necklace splitting problem , Gale's lemma on points in hemispheres, and several results on colorings of Kneser graphs . [ 1 ] [ 2 ] After another chapter on more advanced topics in equivariant topology , two more chapters of applications follow, separated according to whether the equivariance is modulo two or uses a more complicated group action . [ 5 ] Topics in these chapters include the van Kampen–Flores theorem on the embeddability of skeletons of simplices into lower-dimensional Euclidean spaces , and topological and multicolored variants of Radon's theorem and Tverberg's theorem on partitions into subsets with intersecting convex hulls. [ 1 ] [ 2 ] The book is written at a graduate level and has exercises, making it suitable as a graduate textbook. Some knowledge of topology is helpful for readers but not necessary. Reviewer Mihaela Poplicher writes that it is not easy to read, but is "very well written, very interesting, and very informative". [ 2 ] Reviewer Imre Bárány adds that "The book is well written, and the style is lucid and pleasant, with plenty of illustrative examples." Matoušek intended this material to become part of a broader textbook on topological combinatorics, to be written jointly by him, Anders Björner , and Günter M. Ziegler . [ 2 ] [ 5 ] However, this was not completed before Matoušek's untimely death in 2015. [ 6 ]
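For reference, the two results that frame the book can be stated compactly (these are the standard statements, not quotations from the book). The Borsuk–Ulam theorem says that every continuous map from the n-sphere to n-dimensional Euclidean space identifies some pair of antipodal points, and the Lovász–Kneser theorem gives the exact chromatic number behind Kneser's conjecture:

    \text{(Borsuk–Ulam)}\qquad \forall f \in C(S^n, \mathbb{R}^n)\ \exists x \in S^n : f(x) = f(-x)

    \text{(Lovász–Kneser)}\qquad \chi(KG_{2n+k,n}) = k + 2

In particular, no proper coloring of KG_{2n+k,n} with k + 1 colors exists, which is the form in which the conjecture is quoted above.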
https://en.wikipedia.org/wiki/Using_the_Borsuk–Ulam_Theorem
An Ussing chamber is an apparatus for measuring epithelial membrane properties. It can detect and quantify transport and barrier functions of living tissue. The Ussing chamber was invented by the Danish zoologist and physiologist Hans Henriksen Ussing in 1946. [ 1 ] The technique is used to measure the short-circuit current as an indicator of net ion transport taking place across an epithelium . Ussing chambers are used to measure ion transport in native tissue , such as gut mucosa , and in monolayers of cells grown on permeable supports. The Ussing chamber provides a system to measure the transport of ions, nutrients, and drugs across various epithelial tissues [ 2 ] (although it can generate false-negative results for lipophilic substances [ 3 ] ). It consists of two halves separated by the epithelium (a sheet of mucosa or a monolayer of epithelial cells grown on a permeable support). Epithelia are polar in nature, i.e., they have an apical or mucosal side and a basolateral or serosal side. An Ussing chamber can isolate the apical side from the basolateral side. The two half-chambers are filled with equal amounts of symmetrical Ringer solution to remove chemical, mechanical or electrical driving forces. Ion transport takes place across any epithelium and may be in either direction. Ion transport produces a potential difference (voltage) across the epithelium. The voltage is measured using two voltage electrodes placed near the tissue; this voltage is then cancelled out by injecting current through two other current electrodes placed away from the epithelium. This short-circuit current (Isc) is the measure of net ion transport. Ussing chambers thus make epithelial ion transport measurable, since the voltage resulting from this ion transport is easy to measure accurately. The epithelium pumps ions from one side to the other, and the ions leak back through so-called tight junctions situated between the epithelial cells. To measure the ion transport, an external current is applied. Simply cancelling the voltage underestimates the true value; this is corrected by establishing the short circuit at the voltage-measuring electrodes, and the resistance between the external voltage electrodes must be taken into account. Otherwise the Isc underestimates the ion transport, and the error can be as much as 10-fold; the type of chamber originally suggested by Ussing produces large errors. This error is often estimated by measuring the voltage without tissue present, which leads to uncertain values. Better methods involve applying alternating current in the form of sinusoidal currents at several frequencies, square-wave pulses, sharp impulses or random noise. [ 1 ] Several main types of Ussing chamber systems exist. [ 4 ] [ 5 ]
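The correction discussed above is, at bottom, Ohm's law. The sketch below shows one common way to estimate the equivalent short-circuit current from the open-circuit voltage and the tissue resistance obtained from a known current pulse, after subtracting the series resistance of the bathing solution. The variable names and the sample numbers are illustrative assumptions, not values from the literature.

    def equivalent_short_circuit_current(v_oc_mV, delta_v_mV, delta_i_uA, r_solution_ohm):
        """Estimate I_sc (in uA) from the open-circuit voltage and a current pulse.

        v_oc_mV: open-circuit transepithelial voltage (mV)
        delta_v_mV / delta_i_uA: voltage deflection caused by a known current pulse
        r_solution_ohm: series (fluid) resistance measured without tissue (ohm)
        """
        r_total = (delta_v_mV / 1e3) / (delta_i_uA / 1e6)   # ohm, by Ohm's law
        r_tissue = r_total - r_solution_ohm                  # subtract fluid resistance
        if r_tissue <= 0:
            raise ValueError("series resistance exceeds total resistance")
        return (v_oc_mV / 1e3) / r_tissue * 1e6              # uA

    # Illustrative numbers only:
    print(equivalent_short_circuit_current(v_oc_mV=12.0, delta_v_mV=3.0,
                                           delta_i_uA=10.0, r_solution_ohm=50.0))

With these invented inputs the total resistance is 300 ohm, the tissue contributes 250 ohm, and the estimated equivalent short-circuit current is 48 uA; ignoring the 50 ohm fluid resistance would underestimate the tissue's true transport, as the article notes.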
https://en.wikipedia.org/wiki/Ussing_chamber
Utako Okamoto ( 岡本歌子 , Okamoto Utako , 1 April 1918 – 21 April 2016) was a Japanese medical doctor working as a medical scientist who discovered tranexamic acid in the 1950s in her quest to find a drug that would treat bleeding after childbirth ( post-partum haemorrhage ). After publishing her results in 1962 she became a chair at Kobe Gakuin University , where she worked from 1966 until her retirement in 1990. Okamoto's career was hampered by a very male-dominated environment. During her lifetime she was unable to persuade obstetricians at Kobe to trial the antifibrinolytic agent, which was added to the WHO list of essential medicines in 2009. She lived to see the 2010 start of a study of tranexamic acid in some 20,000 women with post-partum haemorrhage, but died before its completion in 2016 and the 2017 publication of the fatality-preventing results she had predicted. Okamoto began studying dentistry in 1936 but very soon switched to medicine, enrolling at the Tokyo Women's Medical University and graduating in December 1941. [ 1 ] In January 1942, Okamoto started out as a research assistant at Tokyo Women's Medical University researching the cerebellum [ 1 ] under a neurophysiologist who "created many more opportunities for [women] than were otherwise available at the time." [ 2 ] After World War II and the Second Sino-Japanese War ended in 1945, she moved to Keio University in Shinanomachi in Tokyo. As resources were scarce, she and her husband Shosuke Okamoto changed to research on blood: "If there was not enough we could simply use our own". They hoped to find a treatment for post-partum haemorrhage , a potent drug to stop bleeding after childbirth. They began by studying epsilon-amino-caproic acid (EACA). They then studied a related chemical, 1-(aminomethyl)-cyclohexane-4-carboxylic acid (AMCHA), also known as tranexamic acid . The Okamotos found it was 27 times as powerful, and thus a promising hemostatic agent, and published their findings in the Keio Journal of Medicine in 1962. [ 1 ] In 1966, Okamoto was granted a chair at Kobe Gakuin University . In 1980, she founded a local Committee for Projects on Thrombosis and Haemostasis with Shosuke, who also worked at Kobe. She retired from the university in 1990. After her husband died in 2004, she led the committee until 2014. She could never persuade obstetricians to trial the drug in post-partum haemorrhage. [ 1 ] Tranexamic acid's value remained unappreciated for years, and it was not until 2009 that it was included on the WHO list of essential medicines , to be used during cardiac surgery. [ 3 ] In 2010, a large randomised controlled trial in trauma patients showed its remarkable benefit if given within 3 hours of injury. [ 4 ] Also in 2010, the WOMAN (World Maternal Antifibrinolytic) trial began, a randomised, double-blind, placebo-controlled study of tranexamic acid in 20,060 women with post-partum haemorrhage. Enrollment was completed in 2016, [ 5 ] and in April 2017 the results were published, showing that tranexamic acid reduced deaths among the 10,036 treated women versus the 9,985 on placebo, with no adverse effects. [ 6 ] In male-dominated Japan, Okamoto had to fight against sexism . She had a supervisor sympathetic to women in science during the early stages of her career. [ 1 ] However, she and a coworker were once asked to leave a pediatric conference because the event was not for "women and children" (onna kodomo), [ 1 ] a term she said in a 2012 interview she had never heard before.
[ 7 ] : 4:00 [ 8 ] After she had presented her research for the first time, the male audience members ridiculed her by asking if she was going to dance for them. [ 2 ] [ 7 ] : 5:17-5:38 In the video interview, Okamoto said: "Men are always aware of the fundamental differences between men and women, and so cannot help but think of themselves as superior. So I used that to my advantage by stroking their egos. [...] Until [I had a child] I could compensate for the disadvantages of being a woman by working longer hours—10 hours per day instead of the 8 that the men worked." [ 1 ] At Keio University, she could not find day care for her daughter and brought her to the laboratory, "[hoping] that she would behave herself". [ 1 ] [ 2 ] She carried her on her back as an infant while working in the lab. [ 7 ] : 5:48 Utako Okamoto was married to Shosuke Okamoto and at her death was survived by one daughter, Kumi Nakamura. [ 1 ] She had one miscarriage , which she said was not related to overworking but to "coming home late from work". [ 7 ] : 7:30 Ian Roberts, Professor of Epidemiology and Public Health at the London School of Hygiene & Tropical Medicine , who had been coordinating the 2010 trauma trial, visited Okamoto, then about 92, in Japan. He said that he "found a fascinating character, really lively and vigorous and still very much engaged with research, meeting with researchers, and reading journal articles". [ 1 ]
https://en.wikipedia.org/wiki/Utako_Okamoto
The uterine microbiome refers to the community of commensal , nonpathogenic microorganisms—including bacteria , viruses , and yeasts /fungi—present in a healthy uterus , as well as in the amniotic fluid and endometrium . These microorganisms coexist in a specific environment within the uterus, playing a vital role in maintaining reproductive health. [ 1 ] In the past, the uterus was believed to be a sterile environment, free of any microbial life . Recent advancements in microbiological research, particularly the improvement of 16S rRNA gene sequencing techniques, have challenged this long-held belief by making it possible to detect bacteria and other microorganisms present in very low numbers. [ 2 ] Because this procedure allows the detection of bacteria that cannot be cultured outside the body, studies of the microbiota present in the uterus are expected to increase. [ 3 ] The uterine cavity had traditionally been considered sterile, though potentially susceptible to being affected by vaginal bacteria ; this idea has been disproved. Moreover, it has been shown that endometrial and vaginal microbiota can differ in structure and composition in some women. The microbiome of the innermost layer of the uterus, the endometrium , may influence its capacity to allow an embryo to implant. The presence of more than 10% non- Lactobacillus bacteria in the endometrium is correlated with negative impacts on reproductive function and should be considered an emerging cause of implantation failure and pregnancy loss. [ 4 ] Bacteria, viruses and one genus of yeasts are a normal part of the uterus before and during pregnancy . [ 5 ] The uterus has been found to possess its own characteristic microbiome, consisting primarily of Lactobacillus species present in far smaller numbers, which nevertheless differs significantly from the vaginal microbiome . [ 6 ] In addition, the immune system is able to differentiate between the bacteria normally found in the uterus and those that are pathogenic. Hormonal changes also affect the microbiota of the uterus. [ 7 ] A number of organisms have been identified as commensals in the healthy uterus, some of which also have the potential to grow to the point of causing disease. Other taxa can be present without causing disease or an immune response, although their presence is associated with negative birth outcomes. [ 5 ] [ 7 ] Prophylactic antibiotics have been injected into the uterus to treat infertility, before the transfer of embryos, with the intent of improving implantation rates; however, no association exists between successful implantation and antibiotic treatment. [ 12 ] Infertility treatments often progress to the point where a microbiological analysis of the uterine microbiota is performed. Preterm birth is associated with certain species of bacteria that are not normally part of the healthy uterine microbiome. [ 5 ] The uterine microbiome appears to be altered in patients who experience endometrial cancer , endometriosis , chronic endometritis, and related gynecological pathologies, suggesting the clinical relevance of the uterine microbiome's composition. [ 13 ] Next-generation sequencing has revealed certain bacterial taxa , such as Alteromonas , to be present in patients with gynecological conditions. [ 14 ] Clinically speaking, there is no universal protocol on how to treat uterine dysbiosis ; however, the use of antibiotics has been widespread.
In the context of infertility, researchers have studied the effects of a treatment plan of antibiotics in conjunction with prebiotics and probiotics to increase Lactobacillus colonization in the endometrium. While a Lactobacillus -dominated endometrium was correlated with increased pregnancy rates, the data were not statistically significant. [ 15 ] Antibiotics have also been used to treat chronic endometritis and endometriosis. [ 13 ] A link between the oral microbiome and the uterine microbiome has also been uncovered: Fusobacterium nucleatum , a Gram-negative bacterium commensal to the oral microbiome, is associated with periodontal disease and has been linked with a wide variety of health outcomes, including unfavorable pregnancy outcomes. [ 16 ] [ 17 ] The immune response becomes more pronounced when bacteria that are not commensal are found. [ 5 ] Investigations into reproduction-associated microbiomes began around 1885 with Theodor Escherich , who wrote that meconium from the newborn was free of bacteria. There was a general consensus at the time, persisting until recently, that the uterus was sterile; this was referred to as the sterile womb paradigm. Other investigations used sterile diapers for meconium collection, and no bacteria could be cultured from the samples. Later studies detected bacteria in numbers directly proportional to the time between birth and the passage of meconium. [ 1 ] Investigations into the role of the uterine microbiome in the development of the infant microbiome are ongoing. [ 1 ] In recent years, the number of articles and review publications discussing the uterine microbiome has grown; based on a Web of Science analysis, the highest number of documents published on the topic was in 2023, with a total of 23 papers. The Daunert Lab , based at the University of Miami 's Sylvester Comprehensive Cancer Center, focuses on the role of the microbiome in endometrial cancer and the role the uterine microbiome plays in the success of an IVF cycle . Similarly, Maria Walther-Antonio's lab at the Mayo Clinic focuses on the microbiome's role in endometrial cancer; notably, Walther-Antonio has confirmed that Porphyromonas somerae is able to invade endometrial cells , indicating that this microbe may contribute to the pathogenesis of endometrial cancer. [ 18 ] The Carlos Simon Foundation , based in Valencia , Spain , is a women's health research organization founded by the reproductive endocrinologist Carlos Simon, MD PhD . A research team led by Inmaculada Moreno at the Carlos Simon Foundation studies the role of the endometrial microbiome in human reproduction. When research on the uterine microbiome was scarce, Moreno and her team analyzed the endometrial microbiota and discovered a correlation between certain endometrial microbiota compositions and the outcome of implantation success or failure. [ 4 ] Six years later, they followed up with a paper revealing that specific pathogenic bacteria and the depletion of Lactobacillus spp. in the endometrium correlated with impaired fertility. [ 11 ]
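The 10% threshold mentioned earlier suggests a simple classification of sequencing output. The sketch below applies that cut-off to relative abundances such as those produced by a 16S rRNA profile; the taxa and numbers are invented for illustration, and real analyses involve far more careful normalization and quality control.

    def classify_endometrial_profile(abundances, threshold=0.10):
        """Label a sample by the share of non-Lactobacillus reads.

        abundances: dict mapping genus name -> relative abundance (sums to ~1.0)
        threshold: maximum non-Lactobacillus fraction for an "LD" profile
        """
        non_lacto = sum(v for genus, v in abundances.items()
                        if genus != "Lactobacillus")
        # "LD" = Lactobacillus-dominated; "NLD" = non-Lactobacillus-dominated
        return "LD" if non_lacto <= threshold else "NLD"

    # Invented example profile:
    sample = {"Lactobacillus": 0.86, "Gardnerella": 0.09, "Streptococcus": 0.05}
    print(classify_endometrial_profile(sample))  # -> NLD (14% non-Lactobacillus)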
https://en.wikipedia.org/wiki/Uterine_microbiome
Uteroglobin , or blastokinin , also known as secretoglobin family 1A member 1 (SCGB1A1), is a protein that in humans is encoded by the SCGB1A1 gene . [ 5 ] Identifiers: Entrez 7356 (human) / 22287 (mouse); Ensembl ENSG00000149021 / ENSMUSG00000024653; UniProt P11684 / Q06318; RefSeq mRNA NM_003357 / NM_011681; RefSeq protein NP_003348 / NP_035811. SCGB1A1 is the founding member of the secretoglobin family of small, secreted, disulfide-bridged dimeric proteins found only in mammals. [ 6 ] This antiparallel, disulfide-linked homodimeric protein is multifunctional and is found in various tissues under various names, such as: uteroglobin (UG, UGB), uteroglobin-like antigen (UGL), blastokinin, club-cell secretory protein (CCSP), Clara-cell 16 kD protein (17 kD in rat/mouse), club-cell-specific 10 kD protein (CC10), human protein 1, urine protein 1 (UP-1), polychlorinated biphenyl-binding protein (PCB-BP), human club cell phospholipid-binding protein (hCCPBP), and secretoglobin 1A member 1 (SCGB1A1). [ 7 ] The protein is specifically expressed in club cells in the lungs. [ 8 ] Its precise physiological role is not yet known, and several putative functions have been proposed. This article on a gene on human chromosome 11 is a stub . You can help Wikipedia by expanding it . This biochemistry article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Uteroglobin
The Uthapuram caste wall , known by various names such as the wall of shame and the wall of untouchability, is a 12 ft high and 600-metre-long wall built by dominant-caste villagers, reportedly to segregate the Dalit population in the village of Uthapuram in Tamil Nadu . The village witnessed violence between Dalits and the dominant castes in 1948, 1964 and 1989 and was also known for its caste-based discrimination. Protests campaigning to demolish the wall started in 2008, led mostly by the Communist Party of India (Marxist) and left-wing organizations. Later, a small portion of the wall was demolished by the government to allow the Dalits access to the main road. Many dominant-caste villagers left the village and moved 3 km away with their belongings, reportedly as a protest against the demolition of the wall. In October 2008, 70 houses belonging to Dalits were attacked, reportedly in retaliation for the demolition, and a Dalit man was shot dead by the police. Tensions continued until 2015, when, during a clash between the communities, several vehicles were set on fire and many people were hospitalized. The village of Uthapuram in the Madurai district has two major castes, the dominant-caste Vellala Pillaimar and the Dalit Pallar. The village was known for its caste tensions, and there were violent conflicts between the castes in 1948, 1964 and 1989. [ 1 ] [ 2 ] The dominant-caste villagers reportedly blocked attempts by the Dalits to build a bus stop and raised the elevation of a parapet close to the bus stop to discourage the Dalits from sitting in front of them. The tea shops managed by caste Hindus are not visited by the Dalits. The Dalits are not permitted to enter dominant-caste streets, are refused space in the community halls and the village squares, and were also denied entry to burial sites . [ 3 ] [ 4 ] The wall, 600 metres long and 12 ft high, variously described as a caste wall, a wall of shame, a wall of bias and a wall of untouchability, was built by caste Hindus in 1989 after caste violence in the village. The wall passes through areas intended for common use by members of all the castes. It also barred Dalits from directly entering the main road; [ 5 ] [ 6 ] [ 1 ] the Dalits had to use a circuitous path and walk considerably further to reach it. [ 7 ] [ 8 ] The fourth conflict began in 2008, after a gap of 20 years, and continued in numerous forms for another five years. It began in April 2008 when the caste Hindus used iron rods to electrify the 600-metre wall to prevent the Dalits from entering the dominant-caste areas at night. [ 3 ] Initially, the Dalits were hesitant to contest this, but the Tamil Nadu Untouchability Eradication Front (TNUEF), the Communist Party of India (Marxist) (CPM), the Communist Party of India (CPI) and the All India Democratic Women's Association (AIDWA) vigorously opposed this action by the dominant-caste villagers. [ 9 ] A member of the TNUEF alleged that two cows were electrocuted by the electrified wall. [ 10 ] Following the state-wide protests of the progressive organisations, the electricity minister of Tamil Nadu called for the removal of the power line. The CPI(M), along with local Dalits, started a campaign for the demolition of the caste wall. The Dalits staged a demonstration in front of the taluk office calling for the wall to be pulled down. The CPI(M)'s general secretary, N.
Varadarajan said that his party cadre would demolish the wall on their own if the government did not act. [ 1 ] [ 11 ] [ 4 ] On 6 May, the district administration got involved and demolished a 15-foot portion of the wall to allow the Dalits to pass, in the presence of a few hundred policemen and under the supervision of district officials. In an act of protest, some caste Hindus returned their ration cards to the Tehsildar . [ 7 ] About 600 dominant-caste members left the village during the demolition and moved with their livestock to Thalaiyoothu, a place 3 km from the village, declaring that they would not return. [ 1 ] [ 12 ] [ 13 ] [ 14 ] The situation became tense again when the dominant-caste villagers who had left the village did not respond to a request from the District Collector to return soon so that everyone in the village could live in peace. When district officials met with them, they made several demands, including a patta for a temple where they had been worshipping for more than 400 years, a permanent police outpost in the village, and new housing for people whose residences, they claimed, had been destroyed by Dalit anti-socials during the riots of 1989. [ 1 ] At Thalaiyoothu on May 12, the leader of the village's dominant-caste group told Frontline that his people had left the village more out of panic than as a mark of rebellion; after the wall was taken down, he said, they felt insecure. He claimed that the Dalits now live better, with most of them having government jobs or owning land. He also claimed that the Dalits were on a buying spree and that the dominant-caste members feared they might be forced to sell their property to Dalits, and that the wall had been built to protect the dominant-caste villagers. However, this version is not accepted by the village's Dalits, who assert that they were on the receiving end of the hostility, not the other way around. [ 1 ] On 1 October 2008, more than 70 Dalit houses were attacked in response to the demolition of the wall, and on 4 November 2008 a Dalit youth was shot dead by the police as a result of the tensions. [ 15 ] [ 16 ] On 10 November 2011, several Dalits entered a temple controlled by the dominant caste under police protection. Although several dominant-caste members welcomed them with folded arms, there were women crying in the streets opposing their entry. [ 17 ] [ 18 ] In 2012, the Dalits were not allowed to participate in the temple's consecration ceremony, and in 2013 the Dalits did not attend the temple festivals. In April 2014, the dominant-caste villagers locked the temple and left the village in opposition to a High Court order allowing the Dalits temple entry. [ 16 ] In October 2015, the Dalits and the dominant-caste villagers clashed during a temple festival, in a dispute that started over the placing of a garland on a tree. Six motorbikes were set ablaze and the tehsildar's vehicle was also damaged. The police filed cases against 70 people belonging to both castes and arrested 21. Several people injured in the clashes were hospitalized. [ 19 ] [ 20 ]
https://en.wikipedia.org/wiki/Uthapuram_caste_wall
Utilitarian design is an art concept that argues that products should be designed on the basis of utility (as opposed to the "contemplated pleasure" of beauty ). For example, an object intended for a narrow and practical purpose does not need to be aesthetically pleasing, but it must be effective for its task [ 2 ] and inexpensive: a steel power pylon carries electric wires just as well as a marble column would, and at a much lower cost. [ 3 ] While an artefact designed with complete disregard for appearance ( purely or strictly utilitarian design ) can be imagined, David Pye argues that such objects do not exist, as human nature makes it impossible to design anything without even the slightest consideration of its appearance. [ 4 ] As far back as the Paleolithic Age , stone tools were sometimes manufactured to a better quality than the task required. According to Pye, in practice the "purely utilitarian" objects are the ones made to fit the purpose at the lowest possible cost, from scaffolding to an oil refinery . In many cases making things more pleasing to the eye incurs no extra cost, and the techniques that result in a better appearance are chosen in these cases. For example, the proper application of plaster to brick walls fulfills both functional (stopping the drafts) and aesthetic (smooth surface) goals. [ 5 ] There is no clear boundary between the result of utilitarian design and an object of art, with a classic example provided by cars : an automobile is simultaneously a very utilitarian means of transportation and a highly personalized extension of ego . [ 6 ] Since innovations in utility and appearance are covered by two different mechanisms of intellectual property protection ( patents for functionality, copyrights and trademarks for aesthetics), issues of utilitarian design are of great interest to courts and legal scholars. [ 7 ] [ 8 ] The concept of utilitarian design is strongly associated with the Bauhaus school, which championed it in the early 20th century. [ 9 ] The rise of modernism in the late 19th and early 20th centuries caused utilitarian design, based on utility and economy, to be declared beautiful through a new aesthetic doctrine, functionalism . The initial stance of the functionalists was uncompromising: a design using extravagant materials or ornamental elements cannot be beautiful; Adolf Loos titled his 1908 essay " Ornament and Crime ". While this idealistic position softened with time, the " form follows function " idea remains highly influential, especially in architecture . [ 10 ] Charles and Ray Eames stated that, when it comes to furniture, utility is more durable than appearance: "what works good is better than what looks good, the looks good can change, but what works, works". [ 11 ] The functionalism of furniture has been pervasive since the advent of the International Style and is especially noticeable in Scandinavian Modern . [ 12 ] In the United States , a "utilitarian article" (defined by 17 U.S.C. § 101 as an article of manufacture with an "intrinsic utilitarian function") may, in addition to patents, be protected by copyright under the Copyright Act of 1976 if it possesses pictorial, graphic, or sculptural (PGS) features. [ 8 ] [ 13 ] For copyright law to apply to the PGS features, it must be possible to separate them from the purely utilitarian design.
[ 14 ] [ 15 ] The US courts hold the position that trademark protection is only possible for features that are not "functional" and are therefore "dispensable", like an identifying name. Granting trademark protection to functional features, "essential to the use or purpose of the article" or "[affecting] the cost or quality of the article", would effectively grant a patent of unlimited duration and thus create a monopoly. This antitrust stance, the so-called " functionality doctrine ", has been especially pronounced since 1995 (the US Supreme Court decision in Qualitex Co. v. Jacobson Products Co. ). [ 16 ] In the EU, the legal treatment of designs was harmonized in 1998 via the Directive on the legal protection of designs, 98/71/EC. Similarly to the US, details of appearance that are dictated by utility are excluded from protection. [ 17 ]
https://en.wikipedia.org/wiki/Utilitarian_design
The Utilite is a small, fanless nettop computer manufactured by the Israeli company CompuLab. [ 1 ] It was announced in July 2013 and is based upon the Freescale i.MX6 SoC . It is available in Utilite Value, Utilite Standard and Utilite Pro models. [ 2 ] The Utilite is delivered with a pre-installed operating system, and several other operating systems are available, including three Linux-based systems specialized in media playback. Both the bootloader (U-Boot) and the kernel are open source and can be found on Gitorious and GitHub. This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Utilite
The Utility Radio or Wartime Civilian Receiver was a domestic valve radio receiver , manufactured in Great Britain during World War II from July 1944. It was designed by G.D. Reynolds of Murphy Radio . Both AC and battery-operated versions were made. [ 1 ] [ 2 ] [ 3 ] When war broke out in 1939, British radio manufacturers devoted their resources to producing the range of military radio equipment required for the armed forces. This resulted in a shortage of consumer radio sets and spare parts, particularly valves, as all production was for the services. The war also caused a shortage of radio repairmen, as virtually all of them were needed in the services to maintain vital radio and radar equipment. This meant it was very difficult for the average citizen to get a radio repaired, and with very few new sets available, there was a pressing need to overcome the problem. [ 4 ] The government solved this by arranging for over forty radio manufacturers to produce sets to a standard design with as few components as possible, consistent with the ability to source them. Earlier, the government had introduced the " Utility " brand to ensure that all clothing, which was rationed , was produced to a reasonable quality standard, as a lot of shoddy goods had appeared on the market before its introduction; the brand was therefore adopted for this wartime radio. [ 3 ] [ 1 ] To simplify the design, the Utility Set had limited reception on medium wave and lacked a longwave band; the tuning scale listed only BBC stations. After the war a version with longwave was made available, and modification kits to retrofit existing sets were marketed. [ 5 ] About 175,000 sets were sold, at a price of £12 3s 4d each (equivalent to £690 in 2023). [ 4 ] The set is sometimes characterized as the British equivalent of the German Volksempfänger ("People's Receiver"); however, there were dissimilarities. The Volksempfänger were radio sets designed to be inexpensive enough for any German citizen to purchase, but higher-quality consumer radios were always available to Germans who could afford to pay higher prices. By contrast, the Utility Set was the only consumer radio receiver available for purchase on the British market for much of the latter part of the war. [ 6 ] Starting in June 1942, the manufacture of consumer radio receivers in the United States also ceased due to military production needs. [ 7 ] [ 8 ] The sets used a four-valve superhet circuit with an audio output of 4 watts at 10% total harmonic distortion ; they performed as well as many pre-war sets. The valve complement consisted of a triode-hexode frequency mixer , a variable-μ RF pentode IF amplifier and a high-slope output pentode. A "Westector" solid-state copper-oxide diode was used for demodulation , which saved one valve and allowed the use of an available type of pentode for the audio stage. [ 1 ] The HT line was derived from a full-wave rectifier . All valves were on International Octal sockets apart from the rectifier, which was on a British 4-pin base. There were minor variations between set makers; for instance, Philips used IF transformers with adjustable ferrite cores (so-called slug tuning) rather than the conventional trimmer capacitors. [ 3 ] More than 40 manufacturers, such as Pye Ltd. and Marconiphone , made Utility radios. The manufacturer of a particular set was not readily apparent to the general public, although each manufacturer stamped a code letter on the radio to identify themselves to dealers.
UK makers often used different designations for the same valve (tube), and octal valves might be of US origin. All valves in the Utility radio used standard designations prefixed by BVA (for British Valve Association ). They were produced by valve makers such as Mullard , MOV , Cossor , Mazda and Brimar. Dealers, knowing the maker of a set and which valve manufacturer that maker used, could easily deduce which pre-war types these were and make warranty claims on the manufacturer. [ 5 ]
https://en.wikipedia.org/wiki/Utility_Radio
Utility computing , or computer utility , is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed, and charges for specific usage rather than a flat rate. Like other types of on-demand computing (such as grid computing), the utility model seeks to maximize the efficient use of resources and/or minimize the associated costs. Utility computing is the packaging of system resources , such as computation, storage and services, as a metered service. This model has the advantage of a low or no initial cost to acquire computer resources; instead, resources are essentially rented. This repackaging of computing services became the foundation of the shift to " on demand " computing, software as a service and cloud computing models that further propagated the idea of computing, application and network as a service. There was some initial skepticism about such a significant shift. [ 1 ] However, the new model of computing caught on and eventually became mainstream. IBM, HP and Microsoft were early leaders in the new field of utility computing, with their business units and researchers working on the architecture, payment and development challenges of the new computing model. Google, Amazon and others started to take the lead in 2008, as they established their own utility services for computing, storage and applications. Utility computing can support grid computing, which is characterized by very large computations or sudden peaks in demand that are supported via a large number of computers. "Utility computing" has usually envisioned some form of virtualization , so that the amount of storage or computing power available is considerably larger than that of a single time-sharing computer. Multiple servers are used on the "back end" to make this possible. These might be a dedicated computer cluster specifically built for the purpose of being rented out, or even an under-utilized supercomputer . The technique of running a single calculation on multiple computers is known as distributed computing . The term " grid computing " is often used to describe a particular form of distributed computing in which the supporting nodes are geographically distributed or cross administrative domains . To provide utility computing services, a company can "bundle" the resources of members of the public for sale; the contributors might be paid with a portion of the revenue from clients. One model, common among volunteer computing applications, is for a central server to dispense tasks to participating nodes, at the behest of approved end-users (in the commercial case, the paying customers). Another model, sometimes called the virtual organization (VO), [ citation needed ] is more decentralized, with organizations buying and selling computing resources as needed or as they go idle. The definition of "utility computing" is sometimes extended to specialized tasks, such as web services . Utility computing simply means "pay and use", with regard to computing power. Utility computing is not a new concept, but rather has quite a long history. Among the earliest references is: If computers of the kind I have advocated become the computers of the future, then computing may someday be organized as a public utility just as the telephone system is a public utility... The computer utility could become the basis of a new and important industry.
IBM and other mainframe providers conducted this kind of business in the following two decades, often referred to as time-sharing, offering computing power and database storage to banks and other large organizations from their worldwide data centers. To facilitate this business model, mainframe operating systems evolved to include process control facilities, security, and user metering. The advent of minicomputers changed this business model by making computers affordable to almost all companies. As Intel and AMD increased the power of PC-architecture servers with each new generation of processor, data centers became filled with thousands of servers. In the late 1990s utility computing resurfaced. InsynQ, Inc. launched on-demand applications and desktop hosting services in 1997 using HP equipment. In 1998, HP set up the Utility Computing Division in Mountain View, California, assigning former Bell Labs computer scientists to begin work on a computing power plant, incorporating multiple utilities to form a software stack. Services such as "IP billing-on-tap" were marketed. HP introduced the Utility Data Center in 2001. Sun announced the Sun Cloud service to consumers in 2000. In December 2005, Alexa launched the Alexa Web Search Platform, a Web search building tool for which the underlying power is utility computing; Alexa charges users for storage, utilization, etc. There is space in the market for specific industries and applications as well as other niche applications powered by utility computing. For example, PolyServe Inc. offers a clustered file system based on commodity server and storage hardware that creates highly available utility computing environments for mission-critical applications, including Oracle and Microsoft SQL Server databases, as well as workload-optimized solutions specifically tuned for bulk storage, high-performance computing, vertical industries such as financial services, seismic processing, and content serving. The Database Utility and File Serving Utility enable IT organizations to independently add servers or storage as needed, retask workloads to different hardware, and maintain the environment without disruption. In spring 2006, 3tera announced its AppLogic service, and later that summer Amazon launched Amazon EC2 (Elastic Compute Cloud). These services allow the operation of general-purpose computing applications. Both are based on Xen virtualization software, and the most commonly used operating system on the virtual computers is Linux, though Windows and Solaris are supported. Common uses include web applications, SaaS, image rendering and processing, and general-purpose business applications. Reference: Decision Support and Business Intelligence, 8th edition, page 680, ISBN 0-13-198660-0.
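A minimal sketch of the pay-per-use idea at the heart of the model: resource consumption is metered per customer and billed at unit rates rather than as a flat fee. The resource names and unit prices below are arbitrary illustrations, not any real provider's tariff.

    from collections import defaultdict

    # Illustrative unit prices (invented for the example):
    RATES = {"cpu_hours": 0.05, "gb_storage_month": 0.02, "gb_transfer": 0.01}

    class UsageMeter:
        """Accumulate metered resource usage and compute a usage-based bill."""
        def __init__(self):
            self.usage = defaultdict(float)

        def record(self, resource: str, amount: float) -> None:
            self.usage[resource] += amount

        def bill(self) -> float:
            return sum(RATES[r] * amt for r, amt in self.usage.items())

    meter = UsageMeter()
    meter.record("cpu_hours", 120)        # e.g. a batch job
    meter.record("gb_storage_month", 500)
    meter.record("gb_transfer", 80)
    print(f"${meter.bill():.2f}")         # -> $16.80

The same metering idea, refined with time-of-day pricing, quotas, and per-tenant accounting, is what the mainframe "user metering" facilities mentioned above provided, and what cloud billing systems provide today.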
https://en.wikipedia.org/wiki/Utility_computing
A utility cut is a cut and excavation into an existing road surface to install or repair subterranean public utility conduits and equipment. After the utility is installed or repaired, the road needs to be restored, which results in patches on the road surface. Because the backfill material settles at a different rate from the original pavement, the road surface condition may deteriorate after the restoration, requiring ongoing maintenance and repairs. [ 1 ] Some municipalities require contractors to install utility repair tags to identify the parties responsible for deteriorated patches. [ 2 ] This road-related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Utility_cut
Utility fog (also referred to as foglets ) is a hypothetical collection of tiny nanobots that can replicate a physical structure. [ 1 ] [ 2 ] [ 3 ] [ 4 ] As such, it is a form of self-reconfiguring modular robotics . The term was coined by John Storrs Hall in 1989. [ 5 ] Hall thought of it as a nanotechnological replacement for car seatbelts . The robots would be microscopic , with extending arms reaching in several different directions, and could perform three-dimensional lattice reconfiguration. Grabbers at the ends of the arms would allow the robots (or foglets) to mechanically link to one another and share both information and energy, enabling them to act as a continuous substance with mechanical and optical properties that could be varied over a wide range. Each foglet would have substantial computing power and would be able to communicate with its neighbors. In the original application as a replacement for seatbelts, the swarm of robots would be widely spread out and the arms loose, allowing air flow between them. In the event of a collision, the arms would lock into their current position, as if the air around the passengers had abruptly frozen solid. The result would be to spread any impact over the entire surface of the passenger's body. While the foglets would be micro-scale, their construction would require full molecular nanotechnology . Hall suggests that each bot may be in the shape of a dodecahedron with twelve arms extending outwards, each arm having four degrees of freedom . The foglets' bodies would be made of aluminum oxide rather than combustible diamond to avoid creating a fuel-air explosive . [ 4 ] Hall and his correspondents soon realized that utility fog could be manufactured en masse to occupy the entire atmosphere of a planet and replace any physical instrumentality necessary to human life. By exerting concerted force, foglets could carry an object or human from location to location. Virtual buildings could be constructed and dismantled within moments, enabling the replacement of existing cities and roads with farms and gardens. While molecular nanotechnology might also replace the need for biological bodies, utility fog would remain a useful peripheral with which to perform physical engineering and maintenance tasks. Thus, utility fog also came to be known as "the machine of the future". [ 6 ]
https://en.wikipedia.org/wiki/Utility_fog
In economics , a utility representation theorem shows that, under certain conditions, a preference ordering can be represented by a real-valued utility function , such that option A is preferred to option B if and only if the utility of A is larger than that of B. The most famous example of a utility representation theorem is the Von Neumann–Morgenstern utility theorem , which shows that any rational agent has a utility function that measures their preferences over lotteries . Suppose a person is asked questions of the form "Do you prefer A or B?" (where A and B can be options, actions to take, states of the world, consumption bundles, etc.). If the agent prefers A to B, we write A ≻ B. The set of all such preference-pairs forms the person's preference relation . Instead of recording the person's preferences between every pair of options, it would be much more convenient to have a single utility function - a function u that assigns a real number to each option, such that u(A) > u(B) if and only if A ≻ B. Not every preference relation has a utility-function representation. For example, if the relation is not transitive (the agent prefers A to B, B to C, and C to A), then it has no utility representation, since any such utility function would have to satisfy u(A) > u(B) > u(C) > u(A), which is impossible. A utility representation theorem gives conditions on a preference relation that are sufficient for the existence of a utility representation. Often, one would like the representing function u to satisfy additional conditions, such as continuity; this requires additional conditions on the preference relation. The set of options is a topological space denoted by X . In some cases we assume that X is also a metric space ; in particular, X can be a subset of a Euclidean space R^m , such that each coordinate in {1,..., m } represents a commodity, and each m -vector in X represents a possible consumption bundle. A preference relation is a subset of X × X , denoted by either ≻ (a strict relation) or ⪰ (a weak relation). Given a weak preference relation ⪰, one can define its "strict part" ≻ and "indifference part" ≃ as follows: A ≻ B if A ⪰ B and not B ⪰ A; and A ≃ B if both A ⪰ B and B ⪰ A. Given a strict preference relation ≻, one can define its "weak part" ⪰ and "indifference part" ≃ as follows: A ⪰ B if not B ≻ A; and A ≃ B if neither A ≻ B nor B ≻ A. For every option A ∈ X, we define the contour sets at A : the upper contour set is the set of options weakly preferred to A, {B : B ⪰ A}, and the lower contour set is the set of options to which A is weakly preferred, {B : A ⪰ B}. A relation whose upper (lower) contour sets are all closed is said to be continuous from above (below). Sometimes these continuity notions are called semicontinuous , and ⪰ is called continuous if it is a closed subset of X × X . [ 1 ] A preference relation is called countable if the set of options X is countable, and separable if there exists a countable subset Z ⊆ X such that, whenever A ≻ B, there is some z ∈ Z with A ⪰ z ⪰ B. As an example, the strict order ">" on the real numbers is separable (one can take Z to be the rational numbers), but not countable. A utility function is a function u : X → R. Debreu [ 2 ] [ 3 ] proved the existence of a continuous utility representation for a weak preference relation ⪰ that is complete and transitive and has closed upper and lower contour sets, provided the space X is second-countable (or connected and separable). Jaffray gives an elementary proof of the existence of a continuous utility function. [ 5 ] Preferences are called incomplete when some options are incomparable, that is, neither A ⪰ B nor B ⪰ A holds. This case is denoted by A ⋈ B.
Since real numbers are always comparable, it is impossible to have a representing function u with u(A) ≥ u(B) ⟺ A ⪰ B. There are several ways to cope with this issue. Peleg defined a utility-function representation of a strict partial order ≻ as a function u : X → R such that A ≻ B ⟹ u(A) > u(B), that is, only one direction of the implication must hold. [ 6 ] Peleg proved the existence of a one-dimensional continuous utility representation of a strict preference relation ≻ satisfying certain conditions, the second of which is that ≻ is separable. If we are given a weak preference relation ⪰, we can apply Peleg's theorem by defining a strict preference relation: A ≻ B if and only if A ⪰ B and not B ⪰ A. [ 6 ] Peleg also gave three further conditions that together imply the separability of ≻. A similar approach was taken by Richter. [ 7 ] Therefore, this one-directional representation is also called a Richter-Peleg utility representation. [ 8 ] Jaffray defines a utility-function representation of a strict partial order ≻ as a function u : X → R such that both A ≻ B ⟹ u(A) > u(B) and A ≈ B ⟹ u(A) = u(B), where the relation A ≈ B is defined by: for all C, A ≻ C ⟺ B ≻ C and C ≻ A ⟺ C ≻ B (that is, the lower and upper contour sets of A and B are identical). [ 9 ] He proved that, for every partially ordered space (X, ≻) that is perfectly separable, there exists a utility function that is upper- semicontinuous in any topology stronger than the upper order topology . [ 9 ] : Sec.4 An analogous statement gives the existence of a utility function that is lower-semicontinuous in any topology stronger than the lower order topology. Sondermann defines a utility-function representation similarly to Jaffray, and gives conditions for the existence of a utility-function representation on a probability space that is upper semicontinuous or lower semicontinuous in the order topology. [ 10 ] Herden defines a utility-function representation of a weak preorder ⪰ as an isotone function u : (X, ⪰) → (R, ≥) such that A ≻ B ⟹ u(A) > u(B). Herden [ 11 ] : Thm.4.1 proved that a weak preorder ⪰ on X has a continuous utility function if and only if there exists a countable family E of separable systems on X such that, for all pairs A ≻ B, there is a separable system F in E such that B is contained in all sets in F and A is not contained in any set in F. He shows that this theorem implies Peleg's representation theorem. In a follow-up paper [ 12 ] he clarifies the relation between this theorem and classical utility representation theorems on complete orders. A multi-utility representation (MUR) of a relation ⪰ is a set U of utility functions such that A ⪰ B ⟺ ∀ u ∈ U : u(A) ≥ u(B).
In other words, A is weakly preferred to B if and only if all utility functions in the set U unanimously hold this preference. The concept was introduced by Efe Ok. [ 13 ] Every preorder (reflexive and transitive relation) has a trivial MUR. [ 1 ] : Prop.1 Moreover, every preorder with closed upper contour sets has an upper-semicontinuous MUR, and every preorder with closed lower contour sets has a lower-semicontinuous MUR. [ 1 ] : Prop.2 However, not every preorder with closed upper and lower contour sets has a continuous MUR. [ 1 ] : Exm.1 Ok and Evren present several conditions for the existence of a continuous MUR. All the representations guaranteed by the above theorems might contain infinitely many utilities, even uncountably many. In practice, it is often important to have a finite MUR: a MUR with finitely many utilities. Evren and Ok prove that a finite MUR in which all utilities are upper [lower] semicontinuous exists for any weak preference relation ⪰ satisfying certain conditions. [ 1 ] : Thm 3 Note that the guaranteed functions are semicontinuous, but not necessarily continuous, even if all upper and lower contour sets are closed. [ 13 ] : Exm.2 Evren and Ok remark that "there does not seem to be a natural way of deriving a continuous finite multi-utility representation theorem, at least, not by using the methods adopted in this paper".
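The Pareto (coordinate-wise) ordering on consumption bundles is a standard example of an incomplete preference with a simple finite multi-utility representation: taking U to be the set of coordinate projections, A ⪰ B holds exactly when every coordinate of A is at least the corresponding coordinate of B. The sketch below is purely illustrative; the function names are invented for the example and do not come from the cited papers.

```python
# Minimal sketch: a finite multi-utility representation (MUR) of the
# Pareto ordering on R^m, with the coordinate projections as utilities.
# All names here are illustrative, not from the literature.

def coordinate_utilities(m):
    """Return the m projection functions u_i(x) = x[i]."""
    return [lambda x, i=i: x[i] for i in range(m)]

def weakly_preferred(a, b, utilities):
    """A is weakly preferred to B in the MUR sense: every utility agrees."""
    return all(u(a) >= u(b) for u in utilities)

def incomparable(a, b, utilities):
    """Neither A over B nor B over A: the options are incomparable."""
    return (not weakly_preferred(a, b, utilities)
            and not weakly_preferred(b, a, utilities))

U = coordinate_utilities(2)
print(weakly_preferred((3, 4), (2, 4), U))  # True: dominates in both coordinates
print(incomparable((3, 1), (1, 3), U))      # True: a trade-off, so no ranking
```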
https://en.wikipedia.org/wiki/Utility_representation_theorem
A utility vault is an underground room providing access to subterranean public utility equipment, such as valves for water or natural gas pipes, or switchgear for electrical or telecommunications equipment. A vault is often accessible directly from a street, sidewalk or other outdoor space, which distinguishes it from the basement of a building. [ 1 ] [ 2 ] Utility vaults are commonly constructed from reinforced concrete boxes, poured concrete or brick. Small ones are usually entered through a manhole or grate on the top side and closed up by a manhole cover. Such vaults are considered confined spaces and can be hazardous to enter. Large utility vaults are similar to mechanical or electrical rooms in design and content. This article related to a type of room in a building is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Utility_vault
Uwe Marx (born 26 June 1964) is a German physician and biotechnologist, and one of the leading researchers in the fields of organ-on-a-chip technology and antibody production. He qualified as a medical doctor with a specialization in biochemistry in 1988 and received his MD in 1991 at Charité – Universitätsmedizin Berlin (a joint medical faculty of the Humboldt University and Free University Berlin) with a thesis on human monoclonal antibodies. Marx headed the Department of Immunobiotechnology at the Institute of Medical Immunology, Charité – Universitätsmedizin Berlin, from 1991 to 1995, and the Department of Medical Biotechnology at the Institute of Immunology and Transfusion Medicine, University of Leipzig, from 1995 to 2000. Between 2010 and 2019 he headed the GO-Bio Multi-Organ Chip Program, “Multi-organ-bioreactors for the predictive substance testing in chip format,” supported by the German Federal Ministry of Education and Research (BMBF), at the Institute of Biotechnology, Technische Universität Berlin. This led to the establishment of a multi-organ chip technology capable of maintaining more than 15 miniature human organ equivalents, such as liver, brain, skin, intestine and pancreatic islets, at a homeostatic steady state over periods of at least four weeks. He was appointed an Honorary Professor of Medical Biotechnology at the Technische Universität Berlin in 2022. Marx is a physician, biotechnologist, serial entrepreneur, author, and inventor of some 30 patent families, [ 1 ] which have led to more than 140 granted patents. He has published more than 130 peer-reviewed scientific papers and is a co-founder of the German biotech companies VITA 34 AG, ProBioGen AG and TissUse GmbH. He has extensive experience in human biology and medicine, and is currently one of the leading researchers in the field of microphysiological systems (MPS) technology, pioneering the field of human multi-organ chips. He was an early innovator in employing non-animal models to generate more predictive human data, with a vision of defining human risk better and making safe drugs available to patients faster. When he began his career in the early 1990s, rodents and higher species were used, to a large extent, to test the effects of pharmaceutical products, cosmetic compounds and other chemicals. He recognized early that the results of animal testing do not guarantee the same responses in humans, so improving relevance to humans, and potentially replacing animal testing with a more accurate and controllable approach, was an urgent need. In 1989, while studying for his MD, he planned to recreate human immune organs [ 2 ] and mimic organ functions and interactions outside a living organism. One of his first research projects, in 1991, involved the “in vitro production of monoclonal antibodies (mAbs)” to replace the standard method to “culture […] hybridoma cells producing monoclonal antibodies in the ascites mouse.” He and his team developed a hollow-fiber culture system for culturing the hybridoma cells that produce mAbs. [ 3 ] He was thereby able to show that an in vitro method, applying advanced biotechnology to cell culture, could replace the highly criticized ascites mouse method. [ 4 ]
In 1994, Uwe Marx was one of the founders of ProBioGen AG in Berlin, which he joined as Chief Scientific Officer in 2000. Within this biotech company, he developed, among other products, a Human Artificial Lymph Node (HuALN) model [ 5 ] [ 6 ] which enables the prediction of human immune responses triggered by drug candidates in vitro. Using primary cells, this three-dimensional matrix bioreactor technology emulates in vivo-like reactions of a human lymph node to analyze, for example, immunogenicity, immune functions and the immunotoxicity of substances. He was the CSO of ProBioGen AG until 2010. In 1997, Uwe Marx was one of the founders of VITA 34 AG in Leipzig, where he established the GMP-compliant cryopreservation process for umbilical cord blood samples. Between 1999 and 2003, he was one of the founders and a supervisory board member of Novoplant AG in Gatersleben, which ceased trading in 2008. In 2007, while still at ProBioGen AG, Uwe Marx published his vision of the multi-organ-chip (MOC) concept for the first time in his book “Drug Testing In Vitro – Breakthroughs and Trends in Cell Culture Technology”, [ 7 ] describing the concept of the “micro-organoids” on which the perfused human MOC is based in Chapter 11 (p. 318), “How drug development of the 21st century could benefit from human micro-organoid in vitro technologies.” In order to overcome the limited predictive power of preclinical testing in animal models, which has been the main dilemma of drug development, Uwe Marx proposed developing the perfused MOC using human cells, tissues and organoids. He is internationally recognized as one of the inventors of the perfused MOC technology. [ 8 ] Since 2009, Uwe Marx has been working together with other scientists to reproduce the human organism on a microfluidic chip at a scale of 1:100,000. [ 9 ] The aim is to shorten the entire drug development process and reduce animal experiments and drug testing in humans during clinical trials. In 2010, Uwe Marx founded TissUse GmbH, the first MOC company worldwide, [ 10 ] as a spin-off biotech company of the Department of Medical Biotechnology of the Technische Universität Berlin (chair: Prof. Roland Lauster), which has been at the forefront of “pioneering human-on-a-chip developments.” He was CEO of the company from 2010 until 2020, when he became CSO. In 2013, TissUse GmbH published the proof of concept of the perfused human two-tissue MOC, “A dynamic multi-organ-chip for long-term cultivation and substance testing proven by 3D human liver and skin tissue co-culture.” [ 11 ] The final aim is to combine different “organoids” to generate a human-on-a-chip, which allows studies of complex physiological organ interactions. The inclusion of human organ equivalents for liver, intestine, kidney and skin for ADME and toxicity (ADMET) testing was developed in a four-organ chip. The system is being developed for disease models for preclinical efficacy and toxicity testing of new drugs. An example is the human microfluidic two-organ chip model which maintained a functional circulation between pancreatic islet micro-tissues and liver spheroids in an insulin-free medium; [ 12 ] this is a promising simulation of human type 2 diabetes mellitus. Uwe Marx’s team at TissUse GmbH has also succeeded in generating different human organoids from induced pluripotent stem cell (hiPSC) lines from donors, generated by reprogramming peripheral blood mononuclear cells with episomal vectors. [ 13 ]
This technology allows one to study drug effects in MOCs on individuals with different genetic backgrounds, for example in human disease models. Meanwhile, TissUse has developed a wide range of fit-for-purpose MOCs for the drug industry, and four of these assays are used for internal portfolio decision-making in drug development: the bone marrow model and three of the two-organ models – the liver-pancreas, the liver-thyroid and the skin-tumor model. These four commercial assays have entered industrial decision-making at six sites of international drug companies, such as AstraZeneca, Roche and Bayer. Since 2014, Uwe Marx has promoted the use of human MOCs by industry and regulators around the world as a keynote speaker and workshop organizer. He gave a keynote lecture entitled “Human-on-a-chip – a paradigm shift from animal testing” at the 9th World Congress on Alternatives in Prague in 2014, and was co-organizer of the round table discussion “Human-on-a-chip – Advancing regulatory science through innovation and worldwide networking for alternative testing” at the same congress. In 2014, Uwe Marx hosted the t4 workshop “Biology-inspired micro-physiological system approaches to solve the predictive dilemma of substance testing”, [ 14 ] and in 2019 the t4 workshop “Biology-inspired micro-physiological systems to advance medicines for patient benefit and animal welfare.” [ 15 ] In 2021, Uwe Marx and his colleagues introduced the “Organismoid Theory”, based on the human-on-a-chip concepts of the past, in Frontiers in Medicine. His team “describe the current concept and principles to create a series of organismoids – minute, mindless and emotion-free physiological in vitro equivalents of an individual’s mature human body – by an artificially short process of morphogenetic self-assembly mimicking an individual’s ontogenesis from egg cell to sexually mature organism. Subsequently, we provide the concept and principles to maintain such an individual’s set of organismoids at self-sustained functional healthy homeostasis over very long time frames in vitro. Principles how to perturb a subset of healthy organismoids by means of the natural or artificial induction of diseases are enrolled to emulate an individual’s disease process. Finally, we discuss using such series of healthy and perturbed organismoids in predictively selecting, scheduling and dosing an individual patient’s personalized therapy or medicine precisely. The potential impact of the organismoid theory on our healthcare system generally and the rapid adoption of disruptive personalized T-cell therapies particularly is highlighted.” [ 16 ] Uwe Marx continues to develop solutions for patients’ benefit and for the reduction and replacement of animal experimentation. His work regarding the latter has led to several awards in the field, including the prestigious 2021 Russell & Burch Award from the Humane Society of the United States. [ 17 ] The most challenging aspect in achieving patient benefit from MPS-based technologies is bringing such platforms into general use in medicine; scientific, standardization and regulatory acceptance hurdles remain to be overcome. Uwe Marx’s active work in the stakeholder community contributes to this, for example by hosting the 2nd MPS World Summit in Berlin, Germany, in 2023, [ 18 ] and the CAAT stakeholder workshops in the MPS field.
https://en.wikipedia.org/wiki/Uwe_Marx
Uwe Storch (born 12 July 1940 in Leopoldshall; died 17 September 2017 on Lanzarote) was a German mathematician. His fields of research were commutative algebra and analytic and algebraic geometry, in particular derivations, divisor class groups, and resultants. Storch studied mathematics, physics and mathematical logic in Münster and in Heidelberg. He received his PhD in 1966 under the supervision of Heinrich Behnke with a thesis on almost (or Q-) factorial rings. He completed his habilitation in Bochum in 1972, became a professor in Osnabrück in 1974, and was professor of algebra and geometry in Bochum from 1981 until his retirement as emeritus in 2005. Uwe Storch was married and had four sons. The theorem of Eisenbud–Evans–Storch states that every algebraic variety in n-dimensional affine space is given geometrically (i.e. up to radical) by n polynomials. With Günther Scheja he wrote the Lehrbuch der Algebra (2 volumes, Stuttgart 1980 and 1988; the 1st edition was in 3 volumes), and with Hartmut Wiebe the Lehrbuch der Mathematik (4 volumes).
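In the language of commutative algebra, the theorem says that every algebraic set in affine n-space can be cut out set-theoretically by n equations. The following display is a standard modern formulation, offered here as a gloss on the sentence above rather than a quotation from Storch's papers:

```latex
% Eisenbud--Evans--Storch theorem: for every ideal I of the polynomial
% ring in n variables over a field k, there are n polynomials whose
% zero set coincides with that of I (equality up to radical).
\forall\, I \subseteq k[x_1,\dots,x_n]\;\;
\exists\, f_1,\dots,f_n \in k[x_1,\dots,x_n]:
\quad \sqrt{I} \;=\; \sqrt{(f_1,\dots,f_n)}
```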
https://en.wikipedia.org/wiki/Uwe_Storch
Uwingu is a private, for-profit company founded by Alan Stern, a former NASA associate administrator. The company lets the public nominate names for exoplanets and craters on Uwingu's new Mars map, in return for a fee. [ 1 ] [ 2 ] Uwingu’s mission is to create new ways for people to personally connect with space exploration and astronomy. The profits of the company are dedicated to funding space researchers, educators, and projects. [ 3 ] [ 4 ] Since the launch of the Mars Crater Map feature in February 2014, over 19,000 craters have been named on the Uwingu Mars Map, and almost $100,000 has been generated for the Uwingu fund for space research, education, and exploration. Uwingu Grant Fund recipients include Astronomers Without Borders, the Galileo Teacher Training Program, SEDS, the Allen Telescope Array, the International Dark-Sky Association and Explore Mars. Uwingu also awarded over $15,000 in Student Research Travel Grants in 2014. The selected students completed their PhDs in 2014; their research topics range from Martian and lunar science, to astrobiology, to the study of planets around other stars. The travel grants enable these students to report their results and further their professional advancement in the scientific community. [ 5 ] On 3 March 2014, the company announced a partnership with Mars One, which plans to use Uwingu's map of Mars in its mission. [ 6 ] [ 7 ] It was announced on 12 June 2014 that a second space mission, Time Capsule to Mars, would carry Uwingu's Mars Map to Mars as well. [ 8 ] Uwingu’s board of advisors consists of thinkers from the astronomy, planetary science, IT, and business worlds. The International Astronomical Union has strongly condemned Uwingu, calling it a "scam" for charging money to buy planet names, stressing that the IAU is the only official authority on naming astronomical objects. [ 10 ] [ 11 ] Uwingu denies these accusations, saying it represents the "people's choice" rather than any official position. [ 12 ]
https://en.wikipedia.org/wiki/Uwingu
Uzawa's theorem, also known as the steady-state growth theorem, is a theorem in economic growth that identifies the necessary functional form of technological change for achieving a balanced growth path in the Solow–Swan and Ramsey–Cass–Koopmans growth models. It was proved by Japanese economist Hirofumi Uzawa in 1961. [ 1 ] A general version of the theorem consists of two parts. [ 2 ] [ 3 ] The first states that, under the normal assumptions of the Solow–Swan and Ramsey models, if capital, investment, consumption, and output are increasing at constant exponential rates, these rates must be equal. The second part asserts that, on such a balanced growth path, the production function $Y = \tilde{F}(\tilde{A}, K, L)$ (where $\tilde{A}$ is technology, $K$ is capital, and $L$ is labor) can be rewritten so that technological change affects output solely as a scalar on labor (i.e. $Y = F(K, AL)$), a property known as labor-augmenting or Harrod-neutral technological change. Uzawa's theorem demonstrates a limitation of the Solow–Swan and Ramsey models: imposing the assumption of balanced growth within such models requires that technological change be labor-augmenting. Conversely, a production function that cannot represent the effect of technology as a scalar augmentation of labor cannot produce a balanced growth path. [ 2 ] Throughout this page, a dot over a variable denotes its derivative with respect to time, $\dot{X}(t) \equiv dX(t)/dt$, and the growth rate of a variable $X(t)$ is denoted $g_X \equiv \dot{X}(t)/X(t)$. Uzawa's theorem (the following version is found in Acemoglu (2009) and adapted from Schlicht (2006)): Consider a model with aggregate production function $Y(t) = \tilde{F}(\tilde{A}(t), K(t), L(t))$, where $\tilde{F} : \mathbb{R}_+^2 \times \mathcal{A} \to \mathbb{R}_+$ and $\tilde{A}(t) \in \mathcal{A}$ represents technology at time $t$ (where $\mathcal{A}$ is an arbitrary subset of $\mathbb{R}^N$ for some natural number $N$). Assume that $\tilde{F}$ exhibits constant returns to scale in $K$ and $L$. The growth of capital at time $t$ is given by $\dot{K}(t) = Y(t) - C(t) - \delta K(t)$, where $\delta$ is the depreciation rate and $C(t)$ is consumption at time $t$. Suppose that population grows at a constant rate, $L(t) = \exp(nt)L(0)$, and that there exists some time $T < \infty$ such that for all $t \geq T$, $\dot{Y}(t)/Y(t) = g_Y > 0$, $\dot{K}(t)/K(t) = g_K > 0$, and $\dot{C}(t)/C(t) = g_C > 0$. Then: 1. $g_Y = g_K = g_C$; and 2.
There exists a function $F : \mathbb{R}_+^2 \to \mathbb{R}_+$, homogeneous of degree 1 in its two arguments, such that, for any $t \geq T$, the aggregate production function can be represented as $Y(t) = F(K(t), A(t)L(t))$, where $A(t) \in \mathbb{R}_+$ and $g \equiv \dot{A}(t)/A(t) = g_Y - n$. A useful lemma: for any constant $\alpha$, $g_{X^\alpha Y} = \alpha g_X + g_Y$. Proof: Observe that for any $Z(t)$, $g_Z = \dot{Z}(t)/Z(t) = d\ln Z(t)/dt$. Therefore, $g_{X^\alpha Y} = \frac{d}{dt}\ln[(X(t))^\alpha Y(t)] = \alpha \frac{d\ln X(t)}{dt} + \frac{d\ln Y(t)}{dt} = \alpha g_X + g_Y$. We first show that the growth rate of investment $I(t) = Y(t) - C(t)$ must equal the growth rate of capital $K(t)$, i.e. $g_I = g_K$. The resource constraint at time $t$ implies $\dot{K}(t) = I(t) - \delta K(t)$. By definition of $g_K$, $\dot{K}(t) = g_K K(t)$ for all $t \geq T$. Therefore, the previous equation implies $g_K + \delta = I(t)/K(t)$ for all $t \geq T$. The left-hand side is a constant, while the right-hand side grows at rate $g_I - g_K$ (by the lemma). Therefore, $0 = g_I - g_K$ and thus $g_I = g_K$. From national income accounting for a closed economy, final goods in the economy must either be consumed or invested, so $Y(t) = C(t) + I(t)$ for all $t$. Differentiating with respect to time yields $\dot{Y}(t) = \dot{C}(t) + \dot{I}(t)$. Dividing both sides by $Y(t)$ yields $g_Y = \frac{C(t)}{Y(t)} g_C + \frac{I(t)}{Y(t)} g_I$. Since $g_Y$, $g_C$ and $g_I$ are constants, $\frac{C(t)}{Y(t)}$ is a constant. Therefore, the growth rate of $\frac{C(t)}{Y(t)}$ is zero, which by the lemma implies $g_C - g_Y = 0$, i.e. $g_C = g_Y$. Similarly, $g_Y = g_I$. Therefore, $g_Y = g_C = g_K$. Next we show that for any $t \geq T$, the production function can be represented as one with labor-augmenting technology. The production function at time $T$ is $Y(T) = \tilde{F}(\tilde{A}(T), K(T), L(T))$. The constant-returns-to-scale property ($\tilde{F}$ is homogeneous of degree one in $K$ and $L$) implies that, multiplying both sides of the previous equation by $\frac{Y(t)}{Y(T)}$ for any $t \geq T$, $Y(t) = \tilde{F}\bigl(\tilde{A}(T), \frac{Y(t)}{Y(T)}K(T), \frac{Y(t)}{Y(T)}L(T)\bigr)$. Note that $\frac{Y(t)}{Y(T)} = \frac{K(t)}{K(T)}$, because $g_Y = g_K$ implies $Y(t)/Y(T) = e^{g_Y (t-T)} = e^{g_K (t-T)} = K(t)/K(T)$. Thus the above equation can be rewritten as $Y(t) = \tilde{F}\bigl(\tilde{A}(T), K(t), \frac{Y(t)}{Y(T)}L(T)\bigr)$. For any $t \geq T$, define $A(t) \equiv \frac{Y(t)/Y(T)}{L(t)/L(T)}$ and $F(K(t), A(t)L(t)) \equiv \tilde{F}\bigl(\tilde{A}(T), K(t), A(t)L(t)\bigr)$. Combining the two equations yields $Y(t) = F(K(t), A(t)L(t))$ for any $t \geq T$. By construction, $F(K, AL)$ is also homogeneous of degree one in its two arguments. Moreover, by the lemma, the growth rate of $A(t)$ is given by $g = g_Y - n$.
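The first part of the theorem can be illustrated numerically: a discrete-time Solow-style economy with labor-augmenting technology and a constant saving rate converges to a path on which output, capital, and consumption all grow at the common rate $(1+g)(1+n)-1$. The sketch below is an illustration with arbitrary parameter values, not part of the original proof.

```python
# Numerical sketch of balanced growth in a Solow model with
# labor-augmenting technology: Y = K^alpha * (A*L)^(1-alpha).
# Parameter values are arbitrary illustrative assumptions.
alpha, s, delta = 0.3, 0.25, 0.05   # output elasticity, saving rate, depreciation
g, n = 0.02, 0.01                   # technology growth, population growth

K, A, L = 1.0, 1.0, 1.0
prev = None
for t in range(3001):
    Y = K ** alpha * (A * L) ** (1 - alpha)
    C = (1 - s) * Y                 # consumption under a constant saving rate
    if prev is not None and t > 2995:
        gY, gK, gC = Y / prev[0] - 1, K / prev[1] - 1, C / prev[2] - 1
        print(f"t={t}  g_Y={gY:.4f}  g_K={gK:.4f}  g_C={gC:.4f}")
    prev = (Y, K, C)
    K = s * Y + (1 - delta) * K     # capital accumulation
    A *= 1 + g                      # labor-augmenting technical change
    L *= 1 + n                      # population growth

# All three printed growth rates converge to (1+g)*(1+n) - 1 ~= 0.0302,
# as required on a balanced growth path.
```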
https://en.wikipedia.org/wiki/Uzawa's_theorem
Uzel was the Soviet Union's first digital computer used on submarines, assisting in tracking multiple targets and calculating torpedo firing solutions. Uzel's design team was headed by two American defectors to the Soviet Union, Alfred Sarant (a.k.a. Philip Staros) and Joel Barr (a.k.a. Joseph Berg). [ 1 ] An upgraded version of the Uzel computer is still in use on Kilo-class submarines today. This computing article is a stub. You can help Wikipedia by expanding it. This Soviet Union–related article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Uzel_(computer)
V(D)J recombination (variable–diversity–joining rearrangement) is the mechanism of somatic recombination that occurs only in developing lymphocytes during the early stages of T and B cell maturation. It results in the highly diverse repertoire of antibodies/immunoglobulins and T cell receptors (TCRs) found in B cells and T cells, respectively. The process is a defining feature of the adaptive immune system. V(D)J recombination in mammals occurs in the primary lymphoid organs (bone marrow for B cells and thymus for T cells) and in a nearly random fashion rearranges variable (V), joining (J), and in some cases, diversity (D) gene segments. The process ultimately results in novel amino acid sequences in the antigen-binding regions of immunoglobulins and TCRs that allow for the recognition of antigens from nearly all pathogens, including bacteria, viruses, parasites, and worms, as well as "altered self cells" as seen in cancer. The recognition can also be allergic in nature (e.g. to pollen or other allergens) or may match host tissues and lead to autoimmunity. In 1987, Susumu Tonegawa was awarded the Nobel Prize in Physiology or Medicine "for his discovery of the genetic principle for generation of antibody diversity". [ 1 ] Human antibody molecules (including B cell receptors) are composed of heavy and light chains, each of which contains both constant (C) and variable (V) regions, genetically encoded on three loci (the heavy chain locus and the kappa and lambda light chain loci). Each heavy chain or light chain gene contains multiple copies of three different types of gene segments for the variable regions of the antibody proteins. For example, the human immunoglobulin heavy chain region contains 2 constant (Cμ and Cδ) gene segments and 44 variable (V) gene segments, plus 27 diversity (D) gene segments and 6 joining (J) gene segments. [ 2 ] The light chain genes possess either a single (Cκ) or four (Cλ) constant gene segments with numerous V and J gene segments, but do not have D gene segments. [ 3 ] DNA rearrangement causes one copy of each type of gene segment to be used in any given lymphocyte, generating an enormous antibody repertoire; roughly 3×10^11 combinations are possible, although some are removed due to self-reactivity. Most T cell receptors are composed of a variable alpha chain and a beta chain. The T cell receptor genes are similar to immunoglobulin genes in that they too contain multiple V, D, and J gene segments in their beta chains (and V and J gene segments in their alpha chains) that are rearranged during the development of the lymphocyte to provide that cell with a unique antigen receptor. The T cell receptor in this sense is the topological equivalent of an antigen-binding fragment of the antibody, both being part of the immunoglobulin superfamily. An autoimmune response is prevented by eliminating cells that react against self. For T cells, this occurs in the thymus by testing the cell against an array of self antigens expressed through the function of the autoimmune regulator (AIRE). The immunoglobulin lambda light chain locus contains protein-coding genes that can be lost with its rearrangement; this is a physiological mechanism and is not pathogenetic for leukemias or lymphomas. A cell persists if it creates a successful product that does not self-react; otherwise it is pruned via apoptosis. In the developing B cell, the first recombination event to occur is between one D and one J gene segment of the heavy chain locus. Any DNA between these two gene segments is deleted.
This D-J recombination is followed by the joining of one V gene segment, from a region upstream of the newly formed DJ complex, forming a rearranged VDJ gene segment. All other gene segments between the V and D segments are now deleted from the cell's genome. A primary transcript (unspliced RNA) is generated containing the VDJ region of the heavy chain and both the constant mu and delta chains (Cμ and Cδ); i.e., the primary transcript contains the segments V-D-J-Cμ-Cδ. The primary RNA is processed to add a polyadenylated (poly-A) tail after the Cμ chain and to remove the sequence between the VDJ segment and this constant gene segment. Translation of this mRNA leads to the production of the IgM heavy chain protein. The kappa (κ) and lambda (λ) chains of the immunoglobulin light chain loci rearrange in a very similar way, except that the light chains lack a D segment. In other words, the first step of recombination for the light chains involves the joining of the V and J gene segments to give a VJ complex before the addition of the constant chain gene during primary transcription. Translation of the spliced mRNA for either the kappa or lambda chains results in formation of the Igκ or Igλ light chain protein. Assembly of the Igμ heavy chain and one of the light chains results in the formation of the membrane-bound form of the immunoglobulin IgM that is expressed on the surface of the immature B cell. During thymocyte development, the T cell receptor (TCR) chains undergo essentially the same sequence of ordered recombination events as that described for immunoglobulins. D-to-J recombination occurs first in the β-chain of the TCR. This process can involve either the joining of the Dβ1 gene segment to one of six Jβ1 segments or the joining of the Dβ2 gene segment to one of six Jβ2 segments. [ 3 ] DJ recombination is followed (as above) by Vβ-to-DβJβ rearrangements. All gene segments between the Vβ and Dβ-Jβ gene segments in the newly formed complex are deleted, and a primary transcript is synthesized that incorporates the constant domain gene (Vβ-Dβ-Jβ-Cβ). RNA splicing removes any intervening sequence and allows translation of the full-length protein for the TCR β-chain. The rearrangement of the alpha (α) chain of the TCR follows β-chain rearrangement, and resembles the V-to-J rearrangement described for Ig light chains (see above). The assembly of the β- and α-chains results in formation of the αβ-TCR that is expressed on a majority of T cells. The process of V(D)J recombination is mediated by VDJ recombinase, a diverse collection of enzymes. The key enzymes involved are recombination activating genes 1 and 2 (RAG), terminal deoxynucleotidyl transferase (TdT), and Artemis nuclease, a member of the ubiquitous non-homologous end joining (NHEJ) pathway for DNA repair. [ 4 ] Several other enzymes are known to be involved in the process, including DNA-dependent protein kinase (DNA-PK), X-ray repair cross-complementing protein 4 (XRCC4), DNA ligase IV, non-homologous end-joining factor 1 (NHEJ1; also known as Cernunnos or XRCC4-like factor [XLF]), the recently discovered Paralog of XRCC4 and XLF (PAXX), and DNA polymerases λ and μ. [ 5 ] Some enzymes involved are specific to lymphocytes (e.g., RAG, TdT), while others are found in other cell types and even ubiquitously (e.g., NHEJ components).
To maintain the specificity of recombination, V(D)J recombinase recognizes and binds to recombination signal sequences (RSSs) flanking the variable (V), diversity (D), and joining (J) gene segments. RSSs are composed of three elements: a heptamer of seven conserved nucleotides, a spacer region of 12 or 23 base pairs in length, and a nonamer of nine conserved nucleotides. While the majority of RSSs vary in sequence, the consensus heptamer and nonamer sequences are CACAGTG and ACAAAAACC, respectively; and although the sequence of the spacer region is poorly conserved, its length is highly conserved. [ 6 ] [ 7 ] The length of the spacer region corresponds to approximately one (12 base pairs) or two turns (23 base pairs) of the DNA helix. Following what is known as the 12/23 rule, gene segments to be recombined are usually adjacent to RSSs of different spacer lengths (i.e., one has a "12RSS" and one has a "23RSS"). [ 8 ] This is an important feature in the regulation of V(D)J recombination. [ 9 ] V(D)J recombination begins when V(D)J recombinase (through the activity of RAG1) binds an RSS flanking a coding gene segment (V, D, or J) and creates a single-strand nick in the DNA between the first base of the RSS (just before the heptamer) and the coding segment. This is essentially energetically neutral (no need for ATP hydrolysis) and results in the formation of a free 3' hydroxyl group and a 5' phosphate group on the same strand. The reactive hydroxyl group is positioned by the recombinase to attack the phosphodiester bond of the opposite strand, forming two DNA ends: a hairpin (stem-loop) on the coding segment and a blunt end on the signal segment. [ 10 ] The current model is that DNA nicking and hairpin formation occur on both strands simultaneously (or nearly so) in a complex known as a recombination center. [ 11 ] [ 12 ] [ 13 ] [ 14 ] The blunt signal ends are flush-ligated together to form a circular piece of DNA containing all of the intervening sequences between the coding segments, known as a signal joint (although circular in nature, this is not to be confused with a plasmid). While originally thought to be lost during successive cell divisions, there is evidence that signal joints may re-enter the genome and lead to pathologies by activating oncogenes or interrupting tumor suppressor gene function(s). The coding ends are processed further prior to their ligation by several events that ultimately lead to junctional diversity. [ 15 ] Processing begins when DNA-PK binds to each broken DNA end and recruits several other proteins, including Artemis, XRCC4, DNA ligase IV, Cernunnos, and several DNA polymerases. [ 16 ] DNA-PK forms a complex that leads to its autophosphorylation, resulting in activation of Artemis. The coding end hairpins are opened by the activity of Artemis. [ 17 ] If they are opened at the center, a blunt DNA end will result; however, in many cases the opening is "off-center" and results in extra bases remaining on one strand (an overhang). These are known as palindromic (P) nucleotides due to the palindromic nature of the sequence produced when DNA repair enzymes resolve the overhang. [ 18 ] The process of hairpin opening by Artemis is a crucial step of V(D)J recombination and is defective in the severe combined immunodeficiency (scid) mouse model. Next, XRCC4, Cernunnos, and DNA-PK align the DNA ends and recruit terminal deoxynucleotidyl transferase (TdT), a template-independent DNA polymerase that adds non-templated (N) nucleotides to the coding end.
The addition is mostly random, but TdT does exhibit a preference for G/C nucleotides. [ 19 ] As with all known DNA polymerases, TdT adds nucleotides to one strand in a 5' to 3' direction. [ 20 ] Lastly, exonucleases can remove bases from the coding ends (including any P or N nucleotides that may have formed). DNA polymerases λ and μ then insert additional nucleotides as needed to make the two ends compatible for joining. This is a stochastic process, so any combination of the addition of P and N nucleotides and exonucleolytic removal can occur (or none at all). Finally, the processed coding ends are ligated together by DNA ligase IV. [ 21 ] All of these processing events result in a paratope that is highly variable, even when the same gene segments are recombined. V(D)J recombination allows for the generation of immunoglobulins and T cell receptors directed against antigens that neither the organism nor its ancestor(s) need have previously encountered, allowing for an adaptive immune response to novel pathogens and to those that frequently change (e.g., seasonal influenza). However, a major caveat of this process is that the DNA sequence must remain in-frame in order to maintain the correct amino acid sequence in the final protein product. If the resulting sequence is out-of-frame, the development of the cell will be arrested, and the cell will not survive to maturity. V(D)J recombination is therefore a very costly process that must be (and is) strictly regulated and controlled.
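The combinatorial part of this diversity is simple arithmetic, with junctional diversity multiplying it further. The sketch below reproduces the back-of-envelope calculation from the heavy-chain segment counts given above; the light-chain counts and the junctional factor are rough illustrative assumptions, not figures from this article.

```python
# Back-of-envelope estimate of antibody diversity from V(D)J combinatorics.
v_h, d_h, j_h = 44, 27, 6                  # heavy-chain segment counts (from the text)
heavy = v_h * d_h * j_h                    # 7,128 heavy-chain VDJ combinations

# Light-chain segment counts are not given above; these are rough,
# commonly quoted approximations (assumptions, for illustration only).
kappa = 40 * 5                             # ~200 kappa VJ combinations
lam = 30 * 4                               # ~120 lambda VJ combinations

pairs = heavy * (kappa + lam)              # ~2.3 million heavy/light pairings
print(f"combinatorial diversity ~ {pairs:.2e}")

# Junctional diversity (P/N nucleotide addition and exonucleolytic trimming
# at the joints) multiplies this by several orders of magnitude, which is
# how repertoire estimates on the order of 3e11 arise.
print(f"with a ~1e5 junctional factor ~ {pairs * 1e5:.2e}")
```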
https://en.wikipedia.org/wiki/V(D)J_recombination
V-Key is a software-based digital security provider. Headquartered in Singapore, it provides products to financial institutions, mobile payment providers and governments to implement cloud-based payments, authentication for mobile banking, and secured mobile applications for user access and data protection. [ 1 ] [ 2 ] V-Key was founded in 2011 by entrepreneurs Eddie Chau, Benjamin Mah and Joseph Gan. Eddie Chau, a founder of the digital agency Brandtology, [ 3 ] which was acquired by iSentia in 2014, started V-Key primarily to secure mobile devices and applications with patented technology. [ 4 ] [ 5 ] Benjamin Mah is the co-founder and chief executive officer of V-Key. He was general manager at e-Cop (acquired by a wholly owned subsidiary of Temasek Holdings) and regional director at Encentuate (acquired by IBM) before he co-founded V-Key. [ 6 ] He is concurrently a venture partner of Venture Craft, chairman of Jump Start Asia and a mentor at UOB Finlabs. [ 7 ] Joseph Gan is the third co-founder of V-Key. Before joining V-Key, he was at the Centre for Strategic Infocomm Technologies (CSIT) as the head of the Cryptography Lab, where he oversaw research and development into cryptographic software for the Ministry of Defence (Singapore). [ 8 ] [ 9 ] Companies that have funded V-Key are IPV Capital and Ant Financial Services, which runs the Alipay mobile wallet app. [ 10 ] [ 11 ] V-Key provides security to businesses to support cloud-based payments, digital identity and authentication for mobile banking, as well as other secured mobile applications, [ 11 ] [ 12 ] via its core technology, V-OS. V-Key's partners are financial institutions, governments and mobile payment providers in various markets. [ 13 ]
https://en.wikipedia.org/wiki/V-Key
The V-model is a graphical representation of a systems development lifecycle. It is used to produce rigorous development lifecycle models and project management models. The V-model falls into three broad categories: the German V-Modell, a general testing model, and the US government standard. [ 2 ] The V-model summarizes the main steps to be taken, in conjunction with the corresponding deliverables, within a computerized system validation framework or project life cycle development. It describes the activities to be performed and the results that have to be produced during product development. The left side of the "V" represents the decomposition of requirements and the creation of system specifications. The right side of the "V" represents the integration of parts and their validation. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] However, requirements need to be validated first against the higher-level requirements or user needs. Furthermore, there is also such a thing as validation of system models, which can partially be done on the left side as well, so to claim that validation occurs only on the right side would not be correct. The simplest formulation is that verification is always against the requirements (technical terms) and validation is always against the real world or the user's needs. The aerospace standard RTCA DO-178B states that requirements are validated (confirmed to be true) and the end product is verified to ensure it satisfies those requirements. Validation can be expressed with the query "Are you building the right thing?" and verification with "Are you building it right?" There are three general types of V-model. "V-Modell" is the official project management method of the German government. It is roughly equivalent to PRINCE2, but more directly relevant to software development. [ 8 ] The key attribute of using a "V" representation was to require proof that the products from the left side of the V were acceptable to the appropriate test and integration organization implementing the right side of the V. [ 9 ] [ 10 ] [ 11 ] Throughout the testing community worldwide, the V-model is widely seen as a vaguer, illustrative depiction of the software development process, as described in the International Software Testing Qualifications Board Foundation Syllabus for software testers. [ 12 ] There is no single definition of this model, which is covered more directly in the alternative article on the V-Model (software development). The US also has a government-standard V-model. Its scope is a narrower systems development lifecycle model, but it is far more detailed and rigorous than most UK practitioners and testers would understand by the V-model. [ 13 ] [ 14 ] [ 3 ] [ 4 ] [ 15 ] [ 16 ] In practice, the usage of the terms validation and verification varies. The PMBOK guide, also adopted by the IEEE as a standard (jointly maintained by INCOSE, the Systems Engineering Research Council SERC, and the IEEE Computer Society), defines them in its 4th edition. [ 17 ] The V-model provides guidance for the planning and realization of projects, with a set of objectives intended to be achieved by project execution. The systems engineering process (SEP) provides a path for improving the cost-effectiveness of complex systems as experienced by the system owner over the entire life of the system, from conception to retirement. [ 1 ]
It involves early and comprehensive identification of goals, a concept of operations that describes user needs and the operating environment, thorough and testable system requirements, detailed design, implementation, rigorous acceptance testing of the implemented system to ensure it meets the stated requirements (system verification), measuring its effectiveness in addressing goals (system validation), ongoing operation and maintenance, system upgrades over time, and eventual retirement. [ 1 ] [ 3 ] [ 4 ] [ 7 ] The process emphasizes requirements-driven design and testing. All design elements and acceptance tests must be traceable to one or more system requirements, and every requirement must be addressed by at least one design element and acceptance test (a rule sketched in code after this article). Such rigor ensures nothing is done unnecessarily and everything that is necessary is accomplished. [ 1 ] [ 3 ] The V-model distinguishes a specification stream, a testing stream, and a development stream; the development stream can consist (depending on the system type and the development scope) of customization, configuration or coding. The V-model is used to regulate the software development process within the German federal administration, where it remains the standard for federal administration and defense projects, as well as for software developers within the region. The concept of the V-model was developed simultaneously, but independently, in Germany and in the United States in the late 1980s. It has since found widespread application in commercial as well as defense programs. Its primary use is in project management [ 3 ] [ 4 ] and throughout the project lifecycle. One fundamental characteristic of the US V-model is that time and maturity move from left to right and one cannot move back in time; all iteration is along a vertical line to higher or lower levels in the system hierarchy, as shown in the figure. [ 3 ] [ 4 ] [ 7 ] This has proven to be an important aspect of the model. The expansion of the model to a dual-Vee concept is treated in reference [ 3 ]. As the V-model is publicly available, many companies also use it. In project management it is a method comparable to PRINCE2 and describes methods for project management as well as methods for system development. The V-model, while rigid in process, can be very flexible in application, especially as it pertains to scope outside the realm of the normal parameters of the systems development lifecycle. The V-model offers several advantages over other systems development models, although some aspects are not covered by it; these must be regulated in addition, or the V-model must be adapted accordingly. [ 25 ] [ 26 ]
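The traceability rule stated above (every design element and acceptance test traces to at least one requirement, and every requirement is covered by at least one design element and one test) is mechanically checkable. The sketch below is illustrative only; the data structures and identifiers are invented for the example and are not part of any V-model standard.

```python
# Toy traceability check in the spirit of the V-model's requirements-driven
# rigor. All identifiers here are invented for illustration.
requirements = {"R1", "R2", "R3"}
design_elements = {"D1": {"R1"}, "D2": {"R2", "R3"}}   # element -> requirements it satisfies
acceptance_tests = {"T1": {"R1", "R2"}, "T2": {"R3"}}  # test -> requirements it verifies

def untraced(items):
    """Artifacts that do not trace to any known requirement."""
    return {name for name, reqs in items.items() if not reqs & requirements}

def uncovered(items):
    """Requirements not addressed by any artifact."""
    covered = set().union(*items.values())
    return requirements - covered

for label, items in [("design", design_elements), ("test", acceptance_tests)]:
    assert not untraced(items), f"{label} artifacts lack requirement traces"
    assert not uncovered(items), f"requirements lack {label} coverage"
print("traceability complete: nothing unnecessary, nothing missed")
```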
https://en.wikipedia.org/wiki/V-model
In mathematics, a V-ring is a ring R such that every simple R-module is injective. Several equivalent characterizations of V-rings are known. [ 1 ] A commutative ring is a V-ring if and only if it is von Neumann regular. [ 2 ] This algebra-related article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/V-ring_(ring_theory)
The V. M. Goldschmidt Award is an award given by the Geochemical Society at the V. M. Goldschmidt Conference for achievements in the fields of geochemistry and cosmochemistry. The award is named in honor of Victor Moritz Goldschmidt, a pioneer in both those fields. [ 1 ] This geochemistry article is a stub. You can help Wikipedia by expanding it. This science awards article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/V._M._Goldschmidt_Award
A V0-morph is an organism whose surface area remains constant as the organism grows. [ 1 ] The concept is important in the context of the Dynamic Energy Budget theory because food (substrate) uptake is proportional to surface area, while maintenance is proportional to volume; the surface area that matters is the part involved in substrate uptake. Biofilms on a flat solid substrate are examples of V0-morphs; they grow in thickness, but not in the surface area involved in nutrient exchange. Other examples are dinophyta and diatoms, which have a cell wall that does not change during the cell cycle. During cell growth, when the amounts of protein and carbohydrates increase, the vacuole shrinks, while the outer membrane involved in nutrient uptake remains constant. At cell division, the daughter cells rapidly take up water, complete a new cell wall, and the cycle repeats. Rods (bacteria that have the shape of a rod and grow in length, but not in diameter) are a static mixture between a V0- and a V1-morph, where the caps act as V0-morphs and the cylinder between the caps as a V1-morph. The mixture is called static because the weight coefficients of the contributions of the V0- and V1-morph terms in the shape correction function are constant during growth. Crusts, such as lichens that grow on a solid substrate, are a dynamic mixture between a V0- and a V1-morph, where the inner part acts as a V0-morph and the outer annulus as a V1-morph. The mixture is called dynamic because the weight coefficients of the contributions of the V0- and V1-morph terms in the shape correction function change during growth. The Dynamic Energy Budget theory explains why the diameter of crusts grows linearly in time at constant substrate availability.
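A minimal caricature of the growth dynamics makes the size control explicit. Assuming uptake proportional to a constant intake surface S and maintenance proportional to structural volume V (a deliberately simplified reading of the reasoning above, not the full DEB model), growth saturates:

```latex
% Toy V0-morph growth: constant intake surface S, volume-linked maintenance.
\frac{dV}{dt} = aS - bV
\qquad\Longrightarrow\qquad
V(t) = \frac{aS}{b} + \Bigl(V(0) - \frac{aS}{b}\Bigr)e^{-bt}
% V saturates at aS/b: uptake through a fixed surface caps the size.
```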
https://en.wikipedia.org/wiki/V0-morph
A V1-morph is an organism that changes in shape during growth such that its surface area is proportional to its volume. [ 1 ] In most cases both volume and surface area are proportional to length. The concept is important in the context of the Dynamic Energy Budget theory because food (substrate) uptake is proportional to surface area, and maintenance to volume; the surface area that matters is the part involved in substrate uptake. Since uptake is proportional to maintenance for V1-morphs, there is no size control, and an organism grows exponentially at constant food (substrate) availability. Filaments, such as fungi that form hyphae growing in length but not in diameter, are examples of V1-morphs. Sheets that extend but do not change in thickness, like some colonial bacteria and algae, are another example. An important property of V1-morphs is that the distinction between the individual and the population level disappears: a single long filament grows as fast as many small ones of the same diameter and the same total length.
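The corresponding caricature for a V1-morph, with intake surface proportional to volume (the same simplifying assumptions as in the V0-morph sketch above), loses the saturating term and gives exponential growth, which is why there is no size control:

```latex
% Toy V1-morph growth: intake surface proportional to volume, S = cV.
\frac{dV}{dt} = acV - bV = (ac - b)\,V
\qquad\Longrightarrow\qquad
V(t) = V(0)\,e^{(ac-b)t}
```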
https://en.wikipedia.org/wiki/V1-morph
The V1 Saliency Hypothesis, or V1SH (pronounced ‘vish’), is a theory [ 1 ] [ 2 ] about the primary visual cortex (V1). It proposes that V1 in primates creates a saliency map of the visual field to guide visual attention or gaze shifts exogenously. V1SH is so far the only theory to endow V1 with an important cognitive function while also providing multiple non-trivial theoretical predictions that have subsequently been confirmed experimentally. [ 2 ] [ 3 ] According to V1SH, V1 creates a saliency map from retinal inputs to guide visual attention or gaze shifts. [ 1 ] Anatomically, V1 is the gate through which retinal visual inputs enter the neocortex, and it is also the largest cortical area devoted to vision. In the 1960s, David Hubel and Torsten Wiesel discovered that V1 neurons are activated by tiny image patches that are large enough to depict a small bar [ 4 ] but not a discernible face. This work led to a Nobel prize, [ 5 ] and V1 has since been seen as merely serving a back-office function (of image processing) for the subsequent cognitive processing in the brain beyond V1. However, progress in understanding that subsequent processing has been slower and more difficult than expected (by, e.g., Hubel and Wiesel [ 6 ] ). Departing from the traditional views, V1SH is catalyzing a change of framework [ 7 ] to enable fresh progress in understanding vision. V1SH states that V1 transforms the visual inputs into a saliency map of the visual field to guide visual attention or the direction of gaze. [ 2 ] [ 1 ] Humans are essentially blind to visual inputs outside their window of attention. Therefore, attention gates visual perception and awareness, and theories of visual attention are cornerstones of theories of visual functions in the brain. A saliency map is by definition computed from, or caused by, the external visual input rather than from internal factors such as the animal’s expectations or goals (e.g., to read a book). A saliency map is therefore said to guide attention exogenously rather than endogenously, and is accordingly also called the bottom-up saliency map, guiding reflexive or involuntary shifts of attention. For example, it guides our gaze towards an insect flying in our peripheral visual field while we are reading a book. Note that this saliency map, which is constructed by a biological or natural brain, is not the same as the sort of saliency map engineered in artificial or computer vision, partly because artificial saliency maps often include attentional guidance factors that are endogenous in nature. In this (biological) saliency map of the visual field, each visual location has a saliency value, defined as the strength of that location to attract attention exogenously. [ 2 ] So if location A has a higher saliency value than location B, then location A is more likely to attract visual attention or gaze shifts towards it. In V1, each neuron can be activated only by visual inputs in a small region of the visual field, called the receptive field of this neuron, which typically covers no more than the size of a coin at an arm’s length. [ 8 ] Neighbouring V1 neurons have neighbouring and overlapping receptive fields. [ 8 ] Hence, each visual location can simultaneously activate many V1 neurons.
According to V1SH, the most activated neuron among these neurons signals the saliency value at this location by its neural activity. [ 1 ] [ 2 ] A V1 neuron’s response to visual inputs within its receptive field is also influenced by visual inputs outside the receptive field. [ 9 ] Hence the saliency value at each location depends on the visual input context, [ 1 ] [ 2 ] as it should, since saliency depends on context. For example, a vertical bar is salient in an image in which all the other visual items surrounding it are horizontal bars, but this same vertical bar is not salient if these other items are all vertical bars instead. The figure above gives a schematic of the neural mechanisms in V1 that generate the saliency map. In this example, the retinal image has many purple bars, all uniformly oriented (right-tilted) except for one bar that is oriented uniquely (left-tilted). This orientation singleton is the most salient in this image, so it attracts attention or gaze, as observed in psychological experiments. [ 10 ] In V1, many neurons have preferred orientations for visual inputs. [ 8 ] For example, a neuron's response to a bar in its receptive field is higher when this bar is oriented in its preferred orientation. Analogously, many V1 neurons have preferred colours. [ 8 ] In this schematic, each input bar to the retina activates two (groups of) V1 neurons, one preferring its orientation and the other preferring its colour. The responses from neurons activated by their preferred orientations in their receptive fields are visualized in the schematic by the black dots in the plane representing the V1 neural responses; similarly, responses from neurons activated by their preferred colours are visualized by the purple dots. The sizes of the dots visualize the strengths of the V1 neural responses. In this example, the largest response comes from the neurons preferring and responding to the uniquely oriented bar. This is because of iso-orientation suppression: when two V1 neurons are near each other and have the same or similar preferred orientations, they tend to suppress each other’s activities. [ 9 ] [ 11 ] Therefore, among the group of neurons that prefer and respond to the uniformly oriented background bars, each neuron receives iso-orientation suppression from other neurons of this group. [ 1 ] [ 9 ] Meanwhile, the neuron responding to the orientation singleton does not belong to this group and thus escapes this suppression, [ 1 ] hence its response is higher than the other neural responses. Iso-colour suppression [ 12 ] is analogous to iso-orientation suppression, so all neurons preferring and responding to the purple colours of the input bars are under iso-colour suppression. According to V1SH, the maximum response at each bar’s location represents the saliency value at that location. [ 1 ] [ 2 ] This saliency value is thus highest at the location of the orientation singleton, and is represented by the response from the neurons preferring and responding to the orientation of this singleton. These saliency values are sent to the superior colliculus, [ 13 ] a midbrain area, to execute gaze shifts towards the receptive field of the most activated neuron. [ 13 ] Hence, for the input image in the figure above, the orientation singleton, which evokes the highest V1 response to this image, attracts visual attention or gaze.
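The computational content of this mechanism (feature-tuned responses, iso-feature suppression, then a maximum taken across neurons at each location) can be sketched in a few lines. This toy sketch is our illustration of the idea, not Zhaoping's model; the suppression rule is deliberately simplistic.

```python
import numpy as np

# Toy V1SH-style saliency: responses of orientation-tuned "neurons" are
# suppressed in proportion to how many items share their preferred feature
# (crude iso-feature suppression); saliency at a location is the maximum
# response across the neurons responding there. Illustrative only.
orientations = np.full((5, 5), 45.0)   # a texture of right-tilted bars
orientations[2, 3] = -45.0             # one left-tilted singleton

responses = np.ones_like(orientations)             # equal feedforward drive
for feature in np.unique(orientations):
    mask = orientations == feature
    n_same = mask.sum()                            # items sharing this feature
    responses[mask] /= n_same                      # suppression within the group

saliency = responses                               # max over one feature map here
peak = np.unravel_index(np.argmax(saliency), saliency.shape)
print(peak)  # (2, 3): the singleton wins the saliency competition
```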
V1SH can explain data on visual search, such as the short response times to find a uniquely red item among green items, a uniquely vertical bar among horizontal bars, or an item uniquely moving to the right among items moving to the left. These kinds of visual searches are called feature searches, in which the search target is unique in a basic feature value like orientation, colour, or motion direction. [ 10 ] [ 14 ] The shortness of the search response time reflects a higher saliency value at the location of the search target. V1SH also explains why it takes longer to find a unique red-vertical bar among red-horizontal bars and green-vertical bars. This is an example of conjunction search, in which the search target is unique only in the conjunction of two features, each of which is present elsewhere in the visual scene. [ 10 ] Furthermore, V1SH explains data that are difficult to explain within alternative frameworks. [ 10 ] [ 15 ] The figure above illustrates an example: two neighbouring textures in A, one made of uniformly left-tilted bars and the other of uniformly right-tilted bars, are very easy for human vision to segment from each other. This is because the texture bars at the border between the two textures evoke the highest V1 neural responses (since they are least suppressed by iso-orientation suppression); therefore, the border bars are the most salient in the image and attract attention to the border. However, the segmentation becomes much more difficult if the texture in B is superposed on the original image in A (the result is depicted in C). This is because, at non-border texture locations, the V1 neural responses to the horizontal and vertical bars (from B) are higher than those to the oblique bars (from A); these higher responses dictate and raise the saliency values at the non-border locations, making the border no longer as competitive for saliency. [ 16 ] V1SH was proposed in the late 1990s [ 17 ] [ 18 ] by Li Zhaoping. It was initially uninfluential, since for decades it had been believed that attentional guidance is essentially or only controlled by higher-level brain areas. These higher-level brain areas include the frontal eye field and parietal cortical areas [ 19 ] in the frontal and more anterior parts of the brain, which are believed to implement intelligent attentional and executive control. In addition, the primary visual cortex, V1, located in the occipital lobe at the back or posterior part of the brain, has traditionally been thought of as a low-level visual area that plays mainly a supporting role to other brain areas in their more important visual functions. [ 8 ] Opinions started to change with a surprising piece of behavioral data: an item uniquely shown to one eye (an ocular singleton) among similarly appearing items shown to the other eye (using, e.g., a pair of glasses for watching 3D movies) can attract gaze or attention automatically. [ 20 ] [ 21 ] An example is illustrated in this figure. Here, an image containing a single letter 'X' is shown to the right eye, and another image containing an array of the same 'X's and a letter 'O' is shown to the left eye. In such a situation, human observers normally perceive an image resembling a superposition of the two monocular images, such that they see an array of all the 'X's and the single 'O'. The 'X' arising from the right-eye image does not appear distinctive.
Nevertheless, even when they are doing a task to search (in their perceived image) for the unique and perceptually distinctive 'O' as quickly as possible, their gaze automatically or involuntarily shifts to the 'X' arising from the right-eye image, often before their gaze shifts to the 'O'. Attention capture by such an ocular singleton occurs even when observers fail to guess whether this singleton is present (if it were absent in this example figure, all 'X's and the single 'O' would be shown to the left eye only). [ 20 ] This observation was counter-intuitive, [ 22 ] was easily reproduced by other vision researchers, and was uniquely predicted by V1SH. Since V1 is the only visual cortical area with neurons tuned to the eye of origin of visual inputs, [ 4 ] this observation strongly supports V1's role in guiding attention. More experiments followed to further investigate V1SH, [ 2 ] and supporting data emerged from functional brain imaging, [ 23 ] visual psychophysics, [ 24 ] [ 25 ] and monkey electrophysiology [ 3 ] [ 26 ] [ 27 ] [ 28 ] (although see some conflicting data [ 29 ] ). V1SH has since become more popular. [ 30 ] [ 31 ] V1 is now seen as one of the cornerstones in the brain's network of attentional mechanisms, [ 32 ] [ 33 ] and its functional role in guiding visual attention is appearing in handbooks [ 34 ] [ 35 ] and textbooks. [ 36 ] [ 37 ] Zhaoping argues that if V1SH is correct, the prevailing ideas [ 38 ] [ 39 ] about how the visual system works, and consequently the questions to ask in future vision research, should be fundamentally changed. [ 7 ]
https://en.wikipedia.org/wiki/V1_Saliency_Hypothesis
Vanadium(III) sulfate is the inorganic compound with the formula V 2 (SO 4 ) 3 . It is a pale yellow solid that is stable to air, in contrast to most vanadium(III) compounds. It slowly dissolves in water to give the green aquo complex [V(H 2 O) 6 ] 3+ . The compound is prepared by treating V 2 O 5 in sulfuric acid with elemental sulfur . [ 2 ] This transformation is a rare example of a reduction by elemental sulfur. When heated in a vacuum at or slightly below 410 °C, it decomposes into vanadyl sulfate (VOSO 4 ) and SO 2 . Vanadium(III) sulfate is stable in dry air, but upon exposure to moist air for several weeks it forms a green hydrate. Vanadium(III) sulfate is a reducing agent . This inorganic compound –related article is a stub . You can help Wikipedia by expanding it .
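Balanced equations consistent with the preparation and decomposition described above (reconstructed from standard vanadium chemistry, not quoted from the cited sources) are:

V2O5 + S + 3 H2SO4 → V2(SO4)3 + SO2 + 3 H2O
V2(SO4)3 → 2 VOSO4 + SO2 (in vacuum, at or slightly below 410 °C)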
https://en.wikipedia.org/wiki/V2(SO4)3
Vanadium(III) oxide is the inorganic compound with the formula V 2 O 3 . It is a black solid prepared by reduction of V 2 O 5 with hydrogen or carbon monoxide . [ 3 ] [ 4 ] It is a basic oxide dissolving in acids to give solutions of vanadium (III) complexes. [ 4 ] V 2 O 3 has the corundum structure. [ 4 ] It is antiferromagnetic with a critical temperature of 160 K, below which there is an abrupt change in conductivity from metallic to insulating. [ 5 ] This also distorts the crystal structure to a monoclinic space group: C2/c. [ 1 ] Upon exposure to air it gradually converts into indigo-blue V 2 O 4 . [ 5 ] In nature it occurs as the rare mineral karelianite . [ 6 ] This inorganic compound –related article is a stub . You can help Wikipedia by expanding it .
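Balanced equations consistent with the reductions described above (reconstructed, not quoted from the cited sources) are:

V2O5 + 2 H2 → V2O3 + 2 H2O
V2O5 + 2 CO → V2O3 + 2 CO2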
https://en.wikipedia.org/wiki/V2O3
Vanadium(V) oxide ( vanadia ) is the inorganic compound with the formula V 2 O 5 . Commonly known as vanadium pentoxide , it is a dark yellow solid, although when freshly precipitated from aqueous solution, its colour is deep orange. Because of its high oxidation state , it is both an amphoteric oxide and an oxidizing agent . From the industrial perspective, it is the most important compound of vanadium , being the principal precursor to alloys of vanadium and a widely used industrial catalyst . [ 8 ] The mineral form of this compound, shcherbinaite, is extremely rare, almost always found among fumaroles . A mineral trihydrate , V 2 O 5 ·3H 2 O, is also known under the name of navajoite. Upon heating a mixture of vanadium(V) oxide and vanadium(III) oxide , comproportionation occurs to give vanadium(IV) oxide as a deep-blue solid. [ 9 ] The reduction can also be effected by oxalic acid , carbon monoxide , and sulfur dioxide . Further reduction using hydrogen or excess CO can lead to complex mixtures of oxides such as V 4 O 7 and V 5 O 9 before black V 2 O 3 is reached. V 2 O 5 is an amphoteric oxide, and unlike most transition metal oxides, it is slightly water soluble , giving a pale yellow, acidic solution. Thus V 2 O 5 reacts with strong non-reducing acids to form solutions containing pale yellow salts with dioxovanadium(V) centers. It also reacts with strong alkali to form polyoxovanadates , which have a complex structure that depends on pH . [ 10 ] If excess aqueous sodium hydroxide is used, the product is a colourless salt , sodium orthovanadate , Na 3 VO 4 . If acid is slowly added to a solution of Na 3 VO 4 , the colour gradually deepens through orange to red before brown hydrated V 2 O 5 precipitates around pH 2. These solutions contain mainly the ions HVO 4 2− and V 2 O 7 4− between pH 9 and pH 13, but below pH 9 more exotic species such as V 4 O 12 4− and HV 10 O 28 5− ( decavanadate ) predominate. Upon treatment with thionyl chloride , it converts to the volatile liquid vanadium oxychloride , VOCl 3 . [ 11 ] Hydrochloric acid and hydrobromic acid are oxidised to the corresponding halogen . Vanadates or vanadyl compounds in acid solution are reduced by zinc amalgam through a colourful pathway of successively lower oxidation states; the ions are all hydrated to varying degrees. Technical grade V 2 O 5 is produced as a black powder used for the production of vanadium metal and ferrovanadium . [ 10 ] A vanadium ore or vanadium-rich residue is treated with sodium carbonate and an ammonium salt to produce sodium metavanadate , NaVO 3 . This material is then acidified to pH 2–3 using H 2 SO 4 to yield a precipitate of "red cake" (see above ). The red cake is then melted at 690 °C to produce the crude V 2 O 5 . Vanadium(V) oxide is produced when vanadium metal is heated with excess oxygen , but this product is contaminated with other, lower oxides. A more satisfactory laboratory preparation involves the decomposition of ammonium metavanadate at 500–550 °C. [ 13 ] In terms of quantity, the dominant use for vanadium(V) oxide is in the production of ferrovanadium (see above ). The oxide is heated with scrap iron and ferrosilicon , with lime added to form a calcium silicate slag . Aluminium may also be used, producing the iron-vanadium alloy along with alumina as a byproduct.
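The transformations described above correspond to the following balanced equations (reconstructed from standard vanadium chemistry, not quoted from the cited sources; the colour sequence for the zinc-amalgam reduction is likewise supplied from well-known aqueous vanadium chemistry):

Comproportionation: V2O5 + V2O3 → 4 VO2
Dissolution in strong non-reducing acid: V2O5 + 2 H^+ → 2 VO2^+ + H2O
Reaction with excess sodium hydroxide: V2O5 + 6 NaOH → 2 Na3VO4 + 3 H2O
Chlorination by thionyl chloride: V2O5 + 3 SOCl2 → 2 VOCl3 + 3 SO2
Oxidation of hydrochloric acid: V2O5 + 6 HCl → 2 VOCl2 + Cl2 + 3 H2O
Zinc-amalgam reduction: VO2^+ (yellow) → VO^2+ (blue) → V^3+ (green) → V^2+ (violet)
Laboratory preparation: 2 NH4VO3 → V2O5 + 2 NH3 + H2O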
Another important use of vanadium(V) oxide is in the manufacture of sulfuric acid , an important industrial chemical with an annual worldwide production of 165 million tonnes in 2001, with an approximate value of US$8 billion. Vanadium(V) oxide serves the crucial purpose of catalysing the mildly exothermic oxidation of sulfur dioxide to sulfur trioxide by air in the contact process . The discovery of this simple reaction, for which V 2 O 5 is the most effective catalyst, allowed sulfuric acid to become the cheap commodity chemical it is today. The reaction is performed between 400 and 620 °C; below 400 °C the V 2 O 5 is inactive as a catalyst, and above 620 °C it begins to break down. Since it is known that V 2 O 5 can be reduced to VO 2 by SO 2 , one likely catalytic cycle involves reduction of V 2 O 5 by sulfur dioxide followed by reoxidation by air. It is also used as a catalyst in the selective catalytic reduction (SCR) of NO x emissions in some power plants and diesel engines. Due to its effectiveness in converting sulfur dioxide into sulfur trioxide, and thereby sulfuric acid, special care must be taken with the operating temperatures and placement of a power plant's SCR unit when firing sulfur-containing fuels. Maleic anhydride is produced by the V 2 O 5 -catalysed oxidation of butane with air. Maleic anhydride is used for the production of polyester resins and alkyd resins . [ 15 ] Phthalic anhydride is produced similarly by V 2 O 5 -catalysed oxidation of ortho - xylene or naphthalene at 350–400 °C. [ 16 ] Phthalic anhydride is a precursor to plasticisers , used for conferring pliability to polymers. A variety of other industrial compounds are produced similarly, including adipic acid , acrylic acid , oxalic acid , and anthraquinone . [ 8 ] Due to its high temperature coefficient of resistance , vanadium(V) oxide finds use as a detector material in bolometers and microbolometer arrays for thermal imaging . It also finds application as an ethanol sensor at ppm levels (down to 0.1 ppm). Vanadium redox batteries are a type of flow battery used for energy storage , including large power facilities such as wind farms . [ 17 ] Vanadium oxide is also used as a cathode in lithium-ion batteries . [ 18 ] Vanadium(V) oxide exhibits very modest acute toxicity to humans, with an LD 50 of about 470 mg/kg. The greater hazard is with inhalation of the dust, where the LD 50 ranges from 4–11 mg/kg for a 14-day exposure. [ 8 ] Vanadate ( VO 3− 4 ), formed by hydrolysis of V 2 O 5 at high pH, appears to inhibit enzymes that process phosphate (PO 4 3− ). However the mode of action remains elusive. [ 10 ] [ better source needed ]
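The key equations behind these catalytic processes, reconstructed here in balanced form from standard industrial chemistry (not quoted from the cited sources), are:

Contact process (overall): 2 SO2 + O2 → 2 SO3
Likely catalytic cycle: SO2 + V2O5 → SO3 + 2 VO2, followed by 4 VO2 + O2 → 2 V2O5
Butane to maleic anhydride: 2 C4H10 + 7 O2 → 2 C4H2O3 + 8 H2O
o-Xylene to phthalic anhydride: C8H10 + 3 O2 → C8H4O3 + 3 H2O
Naphthalene to phthalic anhydride: 2 C10H8 + 9 O2 → 2 C8H4O3 + 4 CO2 + 4 H2O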
https://en.wikipedia.org/wiki/V2O5
VAA : Chemistry Executives is a German organisation for managers and executives in the chemical industries. It safeguards the interests of 27,000 members and is one of the most important German professional associations and unions for executives, academics and employees with managerial responsibility. It was founded in 1919 as the Union of Employed Chemists and Engineers “Budaci”. The VAA champions the material, legal and social interests of its members. It gives them advice on different aspects of working life and works to secure better labor conditions by negotiating collective agreements . The association offers special legal services for retirees and older employees who participate in part-time work schemes. The VAA represents executive staff, managers, academics, and professionals. Members of the VAA belong to different occupational groups, with over 70% of the VAA members having an academic background, many from the field of natural or applied sciences . Thus many of them work as scientists , engineers or managers. But there are also a significant number of economists , agronomists and lawyers working in responsible positions for small and medium enterprises (SMEs), larger companies and corporations in the chemical industry. The VAA is divided into eight state groups, the structure of which is related to the regional concentration of chemical companies or corporations. The state groups do not adhere to the administrative territory of the sixteen German states. Each state group consists of a certain number of so-called factory groups, which are made up of VAA members on-site in the respective companies. As a rule of thumb, the bigger the company, the greater the number of people organized in a factory group. Hence there are strong factory groups in companies like BASF , Bayer , Clariant , Evonik , Henkel , Merck , Roche Diagnostics , Sanofi Aventis or Wacker . Where there is no factory group in a company (e.g. in some SMEs), people join the VAA as individual members. As a politically independent professional association, the VAA cooperates with its German social partners, the Federal Association of Employers in Chemistry BAVC and the Mining, Chemical and Energy Industrial Union IG BCE . One of the main tasks of the VAA is collective bargaining; to this end, it holds regular talks with the BAVC. The setting-up of a collective agreement for academics and non-tariff employees (i.e. those professionals who are not covered by the collective agreement between IG BCE and BAVC) can be considered one of the association's main achievements. Furthermore, the VAA has managed to sign a special collective agreement on annual remuneration for employees in their second professional year. The VAA also promotes the interests of its members on the political level. It is a member of the German Managers’ Confederation ULA, an interest group for executives which has close links and ties to political players and organisations in both Berlin and Brussels . The VAA is also a member of the European Federation of Managers in the Chemical and Allied Industries FECCIA and the European Confederation of Executives and Managerial Staff CEC.
https://en.wikipedia.org/wiki/VAA_–_Chemistry_Executives
VALO-CD , a distribution of open-source software on a CD for Microsoft Windows , aims to spread knowledge and the use of open-source software. [ 1 ] VALO-CD originates from Finland , and was originally available only in Finnish. Since version 7, an international version of VALO-CD has also been available in English. [ 2 ] The acronym VALO means "Free/Libre Open-Source Software" in Finnish. Valo is also a Finnish word that means "light". [ 3 ] Therefore the name of the project has a connotation of bringing enlightenment. The Finnish version of VALO-CD had the special goal of concentrating on completely localized open-source software. The project started in 2008, and aims to support technological and economic development in Finland. [ 4 ] Version 8 of VALO-CD includes the following software: This free and open-source software article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/VALO-CD
The VAT Information Exchange System ( VIES ) is an electronic means of transmitting information relating to VAT registration (i.e., validity of VAT numbers) of companies registered in the European Union . EU law requires that, where goods or services are procured within the EU by a VAT taxpayer, VAT must be paid only in the member state where the purchaser resides, while in other cases, VAT must be paid in the member state where the supplier resides. For this reason, suppliers need an easy way to validate the VAT numbers presented by purchasers. This validation is performed through VIES. VIES does not itself maintain a VAT number database. Instead, it forwards the VAT number validation query to the database of the member state concerned and, upon reply, it transmits back to the inquirer the information provided by the member state. This information includes at least a "YES/NO" answer on the existence and validity of the supplied number. It may also include additional information, such as the holder's name and address, if this is provided by the member state. VIES optionally provides a unique reference number which can be used to prove to a tax authority that a particular VAT number was confirmed at the time of purchase. A negative VIES validation result does not necessarily mean, however, that the VAT number of the purchaser is invalid. For example, Polish law stipulates that an EU VAT number (registered in VIES) must only be obtained by Polish VAT taxpayers performing intra-community acquisition of goods worth PLN 50,000 or more in a year. All other Polish VAT taxpayers may use the ordinary Polish NIP number (without the PL prefix and VIES registration) instead when performing their intra-community acquisitions (purchases), in spite of the fact that VIES will return a negative validation. Moreover, a VAT taxpayer with a VIES registration who declares PLN 0 worth of intra-community acquisitions in three consecutive months will automatically be delisted from VIES. Therefore, when the VIES validation returns a negative result, only validation of the NIP number's VAT taxpayer status at the national level can decisively clarify whether it is a valid VAT number (a status check is available free of charge at the national level). [ 1 ] Germany , Italy , Spain and Poland all provide such services at the national level. [ 2 ] This tax -related article is a stub . You can help Wikipedia by expanding it .
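For illustration, a validation query can be scripted against the VIES web service. The SOAP endpoint URL and message schema below follow commonly documented values, but they should be treated as assumptions to verify against the European Commission's current documentation; this is a sketch, not an official client.

import urllib.request

# Hypothetical sketch of a VIES checkVat SOAP request (endpoint and schema
# assumed from common documentation; verify before relying on them).
VIES_URL = "https://ec.europa.eu/taxation_customs/vies/services/checkVatService"

ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:v="urn:ec.europa.eu:taxud:vies:services:checkVat:types">
  <soapenv:Body>
    <v:checkVat>
      <v:countryCode>{country}</v:countryCode>
      <v:vatNumber>{number}</v:vatNumber>
    </v:checkVat>
  </soapenv:Body>
</soapenv:Envelope>"""

def check_vat(country_code: str, vat_number: str) -> str:
    """Send a checkVat query; returns the raw SOAP response XML.
    The <valid>true/false</valid> element carries the YES/NO answer."""
    body = ENVELOPE.format(country=country_code, number=vat_number).encode()
    req = urllib.request.Request(
        VIES_URL, data=body,
        headers={"Content-Type": "text/xml; charset=utf-8"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode()

# Example (performs a live network call):
# print(check_vat("PL", "1234567890"))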
https://en.wikipedia.org/wiki/VAT_Information_Exchange_System
The VAX Unit of Performance , or VUP for short, is an obsolete measurement of computer performance used by Digital Equipment Corporation (DEC). 1 VUP was equivalent to the performance of a VAX 11/780 completing a given task. [ 1 ] One VUP was roughly equivalent to 1 MIPS , and the two can be used interchangeably in most cases. [ 2 ] Other VAX machines and later workstation designs were compared in performance terms by defining system speed in VUPs; for instance, the VAXft Model 310 ran at 3.8 VUPs, meaning it ran roughly 3.8 times as fast as the 11/780. The term was used largely within DEC and its community, and fell from use as more standard ratings like SPEC became more widely used. This was especially true with the introduction of DEC workstations running Unix , in which case the VUP was of little use in comparing the platforms to competing systems. This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/VAX_Unit_of_Performance
Vanadium(II) bromide is an inorganic compound with the formula VBr 2 . It adopts the cadmium iodide structure, featuring octahedral V(II) centers. [ 1 ] A hexahydrate is also known. The hexahydrate undergoes partial dehydration to give the tetrahydrate. Both the hexa- and tetrahydrates are bluish in color. [ 2 ] The compound is produced by the reduction of vanadium(III) bromide with hydrogen .
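A balanced equation consistent with this preparation (reconstructed, not quoted from the source) is:

2 VBr3 + H2 → 2 VBr2 + 2 HBr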
https://en.wikipedia.org/wiki/VBr2
Vanadium(III) bromide , also known as vanadium tribromide, describes the inorganic compounds with the formula VBr 3 and its hydrates. The anhydrous material is a green-black solid. In terms of its structure, the compound is polymeric, with octahedral vanadium(III) surrounded by six bromide ligands. VBr 3 has been prepared by treatment of vanadium tetrachloride with hydrogen bromide . The reaction proceeds via the unstable vanadium(IV) bromide (VBr 4 ), which releases Br 2 near room temperature. [ 1 ] It is also possible to prepare vanadium(III) bromide by reacting bromine with vanadium or ferrovanadium: [ 2 ] Vanadium(III) bromide forms black, leafy, very hygroscopic crystals with a sometimes greenish sheen. It dissolves in water to give green solutions. Its crystal structure is isotypic to that of vanadium(III) chloride , with space group R 3 c (space group no. 167), a = 6.400 Å, c = 18.53 Å. When heated to temperatures of around 500 °C, a violet gas phase is formed, from which, under suitable conditions, red vanadium(IV) bromide, which decomposes at −23 °C, can be separated by rapid cooling. [ 2 ] Like vanadium(III) chloride , vanadium(III) bromide forms red-brown soluble complexes with dimethoxyethane and THF , such as mer -VBr 3 (THF) 3 . [ 3 ] Aqueous solutions prepared from VBr 3 contain the cation trans -[VBr 2 (H 2 O) 4 ] + . Evaporation of these solutions gives the salt trans -[VBr 2 (H 2 O) 4 ]Br·2H 2 O. [ 4 ]
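Balanced equations consistent with the preparations described above (reconstructed, not quoted from the cited sources) are:

VCl4 + 4 HBr → VBr4 + 4 HCl (at low temperature)
2 VBr4 → 2 VBr3 + Br2 (on warming toward room temperature)
2 V + 3 Br2 → 2 VBr3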
https://en.wikipedia.org/wiki/VBr3
VCDIFF is a format and an algorithm for delta encoding , described in IETF 's RFC 3284 . The algorithm is based on Jon Bentley and Douglas McIlroy 's paper "Data Compression Using Long Common Strings" [ 1 ] written in 1999. [ citation needed ] VCDIFF is used as one of the delta encoding algorithms in "Delta encoding in HTTP" ( RFC 3229 ) and was employed in Google 's Shared Dictionary Compression Over HTTP technology, formerly used in their Chrome browser . VCDIFF has three delta instructions: ADD, COPY, and RUN. ADD appends a new sequence of bytes, COPY copies a sequence from previously seen data, and RUN appends a single byte repeated many times. Free software implementations include xdelta (version 3) and open-vcdiff . This computing article is a stub . You can help Wikipedia by expanding it .
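The effect of the three instructions can be shown with a toy delta applier. RFC 3284 itself specifies a compact binary encoding with address caches; the plain-tuple instruction format below is invented purely for illustration.

# Toy illustration of VCDIFF's three delta instructions (RFC 3284 uses a
# compact binary encoding; plain Python tuples are used here for clarity).

def apply_delta(source: bytes, delta) -> bytes:
    """Reconstruct the target from the source and a list of instructions.
    ('ADD', data)            append new bytes
    ('COPY', offset, size)   copy bytes from the source at offset
    ('RUN', byte, size)      append one byte repeated size times
    """
    target = bytearray()
    for instr in delta:
        if instr[0] == "ADD":
            target += instr[1]
        elif instr[0] == "COPY":
            _, offset, size = instr
            target += source[offset:offset + size]
        elif instr[0] == "RUN":
            _, byte, size = instr
            target += bytes([byte]) * size
        else:
            raise ValueError("unknown instruction %r" % (instr[0],))
    return bytes(target)

source = b"hello, delta encoding"
delta = [("COPY", 0, 5),          # reuse "hello" from the source
         ("ADD", b" world"),      # brand-new data
         ("RUN", ord("!"), 3)]    # "!!!"
assert apply_delta(source, delta) == b"hello world!!!"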
https://en.wikipedia.org/wiki/VCDIFF
VCX score is a smartphone camera benchmarking score described as "designed to reflect the user experience regarding the image quality and the performance of a camera in a mobile device", [ 1 ] [ 2 ] developed by a non-profit organization, the VCX-Forum. [ 3 ] VCX scores [ 4 ] are used by specialist media [ 5 ] and by VCX-Forum members to showcase the benchmarking of smartphones, [ 6 ] as well as to market photography [ 7 ] technology. VCX scoring methodology has been cited in various published books and by independent imaging organizations. VCX-Forum (where VCX is an acronym for Valued Camera eXperience) is an independent, non-governmental, standard-setting organization for image quality measurement and benchmarking (VCX score). Its members are drawn from mobile phone manufacturers, mobile operators, imaging labs, mobile and computer chipset manufacturers, sensor manufacturers, device manufacturers, software companies, equipment providers, and camera & accessory manufacturers, among others. [ 11 ] VCX score methodologies are based on five tenets. To ensure the test results accurately reflect the user experience, the image quality is evaluated for five parameters. A detailed description of the setup and procedure is available as a whitepaper on the VCX-Forum website, [ 12 ] as well as in the book Camera Image Quality Benchmarking, [ 13 ] page 318, section 9.4.3. Tests and benchmarks are conducted by independent labs. The test procedure, metrics, and weighting are dictated by the standard developed by VCX-Forum. VCX scores are published on the VCX-Forum website. Parts of this publication are often reproduced in specialist media and smartphone vendor social media channels as part of their marketing campaigns. VCX-Forum claims that all test measurements must ensure the out-of-the-box experience (Tenet 1 of VCX-Forum) but does not specify what happens when the devices are updated later on. VCX-Forum claims to be objective (Tenet 2 of VCX-Forum) but uses subjective components in the formation of the weighting itself. This subjective base is claimed to have come from blind tests for which no evidence has been provided on the website. Despite the claim that VCX is an open and transparent standard (Tenet 3 of VCX-Forum), the details of weighting and scoring are only visible to members of the VCX-Forum. Most measurements are done with the device on a tripod [ 14 ] and aimed at test charts. This does not reflect the common user scenario that VCX-Forum claims to reflect.
https://en.wikipedia.org/wiki/VCX_score
Vanadium(II) chloride is the inorganic compound with the formula VCl 2 , and is the most reduced vanadium chloride. Vanadium(II) chloride is an apple-green solid that dissolves in water to give purple solutions. [ 2 ] Solid VCl 2 is prepared by thermal decomposition of VCl 3 , which leaves a residue of VCl 2 : [ 2 ] VCl 2 dissolves in water to give the purple hexaaquo ion [V(H 2 O) 6 ] 2+ . Evaporation of such solutions produces crystals of [V(H 2 O) 6 ]Cl 2 . [ 3 ] Vanadium dichloride is used as a specialty reductant in organic chemistry . As an aqueous solution, it converts cyclohexylnitrate to cyclohexanone . It reduces phenyl azide into aniline . [ 4 ] Solid VCl 2 adopts the cadmium iodide structure, featuring octahedral coordination geometry. VBr 2 and VI 2 are structurally and chemically similar to the dichloride. All have the d 3 configuration, with a quartet ground state, akin to Cr(III). [ 5 ]
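A balanced equation for this preparation (reconstructed, not quoted from the source) is the thermal disproportionation of VCl3, in which the volatile VCl4 is carried away:

2 VCl3 → VCl2 + VCl4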
https://en.wikipedia.org/wiki/VCl2
Vanadium(III) chloride describes the inorganic compound with the formula VCl 3 and its hydrates. It forms a purple anhydrous form and a green hexahydrate, [VCl 2 (H 2 O) 4 ]Cl·2H 2 O. These hygroscopic salts are common precursors to other vanadium(III) complexes and are used as mild reducing agents . [ 5 ] VCl 3 has the common layered BiI 3 structure, a motif that features a hexagonally closest-packed chloride framework with vanadium ions occupying the octahedral holes. [ 6 ] VBr 3 and VI 3 adopt the same structure, but VF 3 features a structure more closely related to ReO 3 . [ 3 ] The V 3+ cation has a d 2 electronic configuration with two unpaired electrons, making the compound paramagnetic . [ 7 ] VCl 3 is a Mott insulator and undergoes an antiferromagnetic transition at low temperatures. [ 6 ] [ 8 ] The solid hexahydrate, [VCl 2 (H 2 O) 4 ]Cl·2H 2 O, has a monoclinic crystal structure and consists of slightly distorted octahedral trans -[VCl 2 (H 2 O) 4 ] + centers as well as chloride and two molecules of water of crystallization . [ 9 ] [ 10 ] The hexahydrate loses two molecules of water of crystallization to form the tetrahydrate if heated to 90 °C in a stream of hydrogen chloride gas. [ 1 ] Solutions of vanadium(III) chloride in sulfuric acid and hydrochloric acid are used as electrolytes in vanadium redox batteries . [ 11 ] It is also used as a mild Lewis acid in organic synthesis, one example being its use as a catalyst in the cleavage of the acetonide group. [ 12 ] Another example of the use of VCl 3 as a reducing agent is in the determination of nitrate and nitrite concentrations in water, where VCl 3 reduces nitrate to nitrite. This method is a safer alternative to the cadmium column method. [ 13 ] VCl 3 is prepared by heating VCl 4 at 160–170 °C under a flowing stream of inert gas, which sweeps out the Cl 2 . The bright red liquid converts to a purple solid. [ 14 ] The vanadium oxides can also be used to produce vanadium(III) chloride. For example, vanadium(III) oxide reacts with thionyl chloride at 200 °C. [ 14 ] The reaction of vanadium(V) oxide and disulfur dichloride also produces vanadium(III) chloride, with the release of sulfur dioxide and sulfur. [ 14 ] The hexahydrate can be prepared by evaporation of acidic aqueous solutions of the trichloride. [ 1 ] On heating, VCl 3 disproportionates, with volatilization of VCl 4 leaving VCl 2 above 350 °C. [ 2 ] [ 15 ] Upon heating under H 2 at 675 °C (but less than 700 °C), VCl 3 is reduced to greenish VCl 2 . Comproportionation of vanadium trichloride and vanadium(V) oxide gives vanadium oxydichloride . [ 16 ] Heating the hexahydrate does not give the anhydrous form; instead it undergoes partial hydrolysis and forms vanadium oxydichloride at 160 °C. In an inert atmosphere, it forms a trihydrate at 130 °C, and at higher temperatures it forms vanadium oxychloride. [ 17 ] Vanadium trichloride catalyses the pinacol coupling reaction of benzaldehyde (PhCHO) to 1,2-diphenyl-1,2-ethanediol in the presence of reducing metals such as zinc. [ 18 ] VCl 3 forms colorful adducts and derivatives with a broad range of ligands. VCl 3 dissolves in water to give aquo complexes . From these solutions, the hexahydrate [VCl 2 (H 2 O) 4 ]Cl·2H 2 O crystallizes; in other words, two of the water molecules are not bound to the vanadium. The structure resembles that of the corresponding Fe(III) derivative. Removal of the two bound chloride ligands gives the green hexaaquo complex [V(H 2 O) 6 ] 3+ .
[ 9 ] [ 19 ] With tetrahydrofuran , VCl 3 forms the red/pink complex VCl 3 (THF) 3 , [ 21 ] and zinc-powder reduction of the latter gives Caulton's reagent , [(V(THF) 3 ) 2 Cl 3 ] 2 [Zn 2 Cl 6 ]. [ 22 ] Vanadium(III) chloride reacts with acetonitrile to give the green adduct VCl 3 (MeCN) 3 . When treated with KCN, VCl 3 converts to [V(CN) 7 ] 4− (early metals commonly adopt coordination numbers greater than 6 with compact ligands). Complementarily, larger metals can form complexes with rather bulky ligands. This aspect is illustrated by the isolation of VCl 3 (NMe 3 ) 2 , containing two bulky NMe 3 ligands. Vanadium(III) chloride is also able to form complexes with other donors, such as pyridine or triphenylphosphine oxide . [ 19 ] Vanadium(III) chloride, as its THF complex, is a precursor to V(mesityl) 3 . [ 23 ]
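Balanced equations consistent with the preparations described above (reconstructed, not quoted from the cited sources) are:

2 VCl4 → 2 VCl3 + Cl2 (160–170 °C, under flowing inert gas)
V2O3 + 3 SOCl2 → 2 VCl3 + 3 SO2 (200 °C)
2 V2O5 + 6 S2Cl2 → 4 VCl3 + 5 SO2 + 7 S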
https://en.wikipedia.org/wiki/VCl3
Vanadium(V) chloride chlorimide is a chemical compound containing vanadium in the +5 oxidation state bound to three chlorine atoms and with a double bond to a chlorimide group (=NCl). It has the formula VNCl 4 . It can also be considered a chloroiminato complex. [ 3 ] Vanadium(V) chloride chlorimide can be made by chlorinating vanadium nitride at 120 °C. Alternatively, chlorine azide can react with vanadium tetrachloride to yield it; more conveniently, it can be made by heating vanadium tetrachloride with sodium azide and chlorine gas. [ 4 ] The melting point of vanadium(V) chloride chlorimide is 136 °C. It can be sublimed in a vacuum below its melting point. The density of the solid is 2.48 g/cm 3 . [ 5 ] The solid has a triclinic crystal structure with unit cell a = 7.64, b = 7.14, c = 5.91 Å; α = 112.4°, β = 94.9°, γ = 107.8°; with Z = 2 (formula units per unit cell). [ 6 ] The space group is P 1 . [ 7 ] Vanadium(V) chloride chlorimide is molecular. In the gas phase the V–N bond length is 1.651 Å, the V–Cl bond length is 2.138 Å, and the N–Cl bond length is 1.597 Å. The bond angles are ∠ClVCl 113.4°, ∠VNCl 169.7° and ∠NVCl 106.0°. [ 8 ] In the solid form, the bond lengths are slightly different: V–Cl 2.20, 2.30 and 2.38 Å; the V=N bond is 1.64 Å long and the N–Cl length is 1.59 Å. The ∠VNCl angle is 175.2°. [ 7 ] VCl 3 NCl can crystallise with antimony pentachloride . Phosphines and nitrogen bases can form complexes with the vanadium in this compound ( triphenylphosphine , tributylphosphine , pyridine , bipyridine ). A reaction with ammonium chloride yields the [Cl 5 VNCl] 2− ion. [ 9 ]
https://en.wikipedia.org/wiki/VCl3NCl
Vanadium tetrachloride is the inorganic compound with the formula V Cl 4 . This reddish-brown liquid serves as a useful reagent for the preparation of other vanadium compounds. With one more valence electron than diamagnetic TiCl 4 , VCl 4 is a paramagnetic liquid. It is one of only a few paramagnetic compounds that are liquid at room temperature. VCl 4 is prepared by chlorination of vanadium metal. VCl 5 does not form in this reaction; Cl 2 lacks the oxidizing power to attack VCl 4 . VCl 5 can however be prepared indirectly, from VF 5 at −78 °C. [ 1 ] Consistent with its high oxidizing power, VCl 4 reacts with HBr at −50 °C to produce VBr 3 . The reaction proceeds via VBr 4 , which releases Br 2 on warming to room temperature. [ 2 ] VCl 4 forms adducts with many donor ligands, for example, VCl 4 ( THF ) 2 . It is the precursor to vanadocene dichloride . In organic synthesis , VCl 4 is used for the oxidative coupling of phenols. For example, it converts phenol into a mixture of 4,4'-, 2,4'-, and 2,2'- biphenols : [ 3 ] VCl 4 is a catalyst for the polymerization of alkenes, especially those useful in the rubber industry. The underlying technology is related to Ziegler–Natta catalysis , which involves the intermediacy of vanadium alkyls. VCl 4 is a volatile, aggressive oxidant that readily hydrolyzes to release HCl .
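A balanced equation for the chlorination of the metal (reconstructed, not quoted from the source) is:

V + 2 Cl2 → VCl4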
https://en.wikipedia.org/wiki/VCl4
Cadherin-5 , or VE-cadherin (vascular endothelial cadherin), also known as CD144 ( C luster of D ifferentiation 144), is a type of cadherin . It is encoded by the human gene CDH5 . [ 5 ] VE-cadherin is a classical cadherin from the cadherin superfamily, and the gene is located in a six-cadherin cluster in a region on the long arm of chromosome 16 that is involved in loss of heterozygosity events in breast and prostate cancer. The encoded protein is a calcium-dependent cell–cell adhesion glycoprotein composed of five extracellular cadherin repeats, a transmembrane region and a highly conserved cytoplasmic tail . Functioning as a classic cadherin by imparting to cells the ability to adhere in a homophilic manner, the protein may play an important role in endothelial cell biology through control of the cohesion and organization of the intercellular junctions. [ 6 ] Integrity of intercellular junctions is a major determinant of permeability of the endothelium , and the VE-cadherin-based adherens junction is thought to be particularly important. VE-cadherin is known to be required for maintaining a restrictive endothelial barrier – early studies using blocking antibodies to VE-cadherin increased monolayer permeability in cultured cells [ 7 ] and resulted in interstitial edema and hemorrhage in vivo. [ 8 ] A recent study has shown that TNFAIP3 (A20, a dual- ubiquitin editing enzyme) is essential for the stability and expression of VE-cadherin. The deubiquitinase function of A20 was shown to remove ubiquitin chains from VE-cadherin, thereby preventing loss of VE-cadherin expression at the endothelial adherens junctions. [ 9 ] VE-cadherin is indispensable for proper vascular development – there have been two transgenic mouse models of VE-cadherin deficiency, both embryonic lethal due to vascular defects. [ 10 ] [ 11 ] Further studies using one of these models revealed that although vasculogenesis occurred, nascent vessels collapsed or disassembled in the absence of VE-cadherin. [ 12 ] Therefore, it was concluded that VE-cadherin serves the purpose of maintaining newly formed vessels. VE-cadherin has been shown to interact with several other proteins. VE-cadherin may serve as a biomarker for radiation exposure . [ 20 ] This article incorporates text from the United States National Library of Medicine , which is in the public domain .
https://en.wikipedia.org/wiki/VE-cadherin
The VEGAS algorithm , due to G. Peter Lepage , [ 1 ] [ 2 ] [ 3 ] is a method for reducing error in Monte Carlo simulations by using a known or approximate probability distribution function to concentrate the search in those areas of the integrand that make the greatest contribution to the final integral . The VEGAS algorithm is based on importance sampling . It samples points from the probability distribution described by the function |f|, so that the points are concentrated in the regions that make the largest contribution to the integral. The GNU Scientific Library (GSL) provides a VEGAS routine. In general, if the Monte Carlo integral of f over a volume Ω is sampled with points distributed according to a probability distribution described by the function g, we obtain an estimate E_g(f; N) = E(f/g; N). The variance of the new estimate is then Var_g(f; N) = Var(f/g; N), where Var(f; N) = E(f²; N) − (E(f; N))² is the variance of the original estimate. If the probability distribution is chosen as g = |f| / ∫_Ω |f(x)| dx, then it can be shown that the variance Var_g(f; N) vanishes, and the error in the estimate will be zero. In practice it is not possible to sample from the exact distribution g for an arbitrary function, so importance sampling algorithms aim to produce efficient approximations to the desired distribution. The VEGAS algorithm approximates the exact distribution by making a number of passes over the integration region while histogramming the function f. Each histogram is used to define a sampling distribution for the next pass. Asymptotically this procedure converges to the desired distribution. In order to avoid the number of histogram bins growing like K^d with dimension d, the probability distribution is approximated by a separable function, g(x_1, x_2, …) = g_1(x_1) g_2(x_2) ⋯, so that the number of bins required is only K·d. This is equivalent to locating the peaks of the function from the projections of the integrand onto the coordinate axes. The efficiency of VEGAS depends on the validity of this assumption. It is most efficient when the peaks of the integrand are well-localized. If an integrand can be rewritten in a form which is approximately separable this will increase the efficiency of integration with VEGAS. This computational physics -related article is a stub . You can help Wikipedia by expanding it .
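As an illustration of the rebinning idea, the following minimal one-dimensional sketch was written for this article; GSL's actual implementation differs, and the bin count, sample count, and smoothing constant here are arbitrary choices. It samples from a piecewise-constant distribution and then redraws the bin edges so that each bin carries roughly equal |f| weight:

import numpy as np

rng = np.random.default_rng(0)

def vegas_1d(f, a, b, nbins=50, iters=5, samples=2000):
    """Toy VEGAS-style integration on [a, b]: sample within equal-probability
    bins, then reshape the bin edges so regions where |f| is large get
    narrower (hence more densely sampled) bins."""
    edges = np.linspace(a, b, nbins + 1)
    estimate = 0.0
    for _ in range(iters):
        lo = edges[:-1]
        widths = np.diff(edges)
        # one uniform draw per sample inside a randomly chosen bin
        k = rng.integers(0, nbins, samples)
        x = lo[k] + widths[k] * rng.random(samples)
        # each bin has probability 1/nbins, so g(x) = 1/(nbins*width[k])
        g = 1.0 / (nbins * widths[k])
        fx = f(x)
        estimate = np.mean(fx / g)   # importance-sampling estimate E(f/g)
        # histogram |f| per bin, then rebuild the edges so each new bin holds
        # roughly equal weight (the separable-distribution approximation)
        weight, _ = np.histogram(x, bins=edges, weights=np.abs(fx))
        cdf = np.concatenate([[0.0], np.cumsum(weight + 1e-12)])
        cdf /= cdf[-1]
        edges = np.interp(np.linspace(0, 1, nbins + 1), cdf, edges)
    return estimate

# Sharply peaked integrand; the exact integral of exp(-100*(x-0.5)^2)
# over [0, 1] is ~ sqrt(pi)/10 = 0.17725.
print(vegas_1d(lambda x: np.exp(-100 * (x - 0.5) ** 2), 0.0, 1.0))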
https://en.wikipedia.org/wiki/VEGAS_algorithm
Velocity Energy-efficient and Link-aware Cluster-Tree (VELCT) is a cluster and tree-based topology management protocol for mobile wireless sensor networks ( MWSNs ). [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/VELCT
VENOM (short for Virtualized Environment Neglected Operations Manipulation [ 1 ] ) is a computer security flaw that was discovered in 2015 by Jason Geffner, then a security researcher at CrowdStrike . [ 2 ] The flaw was introduced in 2004 and affected versions of QEMU , Xen , KVM , and VirtualBox from that date until it was patched following disclosure. [ 3 ] [ 4 ] The existence of the vulnerability was due to a flaw in QEMU's virtual floppy disk controller. [ 5 ] VENOM is registered in the Common Vulnerabilities and Exposures database as CVE - 2015-3456 . [ 6 ] This computer security article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/VENOM
The VHF Data Link or VHF Digital Link ( VDL ) is a means of sending information between aircraft and ground stations (and in the case of VDL Mode 4, other aircraft) over VHF . Aeronautical VHF data links use the band 117.975–137 MHz assigned by the International Telecommunication Union to the Aeronautical mobile (R) service . There are ARINC standards for ACARS on VHF and other data links installed on approximately 14,000 aircraft, and a range of ICAO standards defined by the Aeronautical Mobile Communications Panel (AMCP) in the 1990s. Mode 2 is the only VDL mode being implemented operationally to support Controller Pilot Data Link Communications (CPDLC). The ICAO AMCP defined VDL Mode 1 for validation purposes. It was the same as VDL Mode 2 except that it used the same VHF link as VHF ACARS, so it could be implemented using analog radios before VHF Digital Radio implementation was completed. The ICAO AMCP completed validation of VDL Modes 1 and 2 in 1994, after which Mode 1 was no longer needed and was deleted from the ICAO standards. The ICAO VDL Mode 2 is the main version of VDL. It has been implemented in the Eurocontrol Link 2000+ program and is specified as the primary link in the EU Single European Sky rule adopted in January 2009, requiring all new aircraft flying in Europe after January 1, 2014 to be equipped with CPDLC . [ 1 ] In advance of CPDLC implementation, VDL Mode 2 has already been implemented in approximately 2,000 aircraft to transport ACARS messages, simplifying the addition of CPDLC. Networks of ground stations providing VDL Mode 2 service have been deployed by ARINC and SITA with varying levels of coverage. The ICAO standard for VDL Mode 2 specifies three layers: the Subnetwork , Link , and Physical Layer . The Subnetwork Layer complies with the requirements of the ICAO Aeronautical Telecommunication Network (ATN) standard, which specifies an end-to-end data protocol to be used over multiple air-ground and ground subnetworks including VDL. The VDL Mode 2 Link Layer is made up of two sublayers: a Data Link service and a media access control (MAC) sublayer. The Data Link protocol is based on the ISO standards used for dial-up HDLC access to X.25 networks. It provides aircraft with a positive link establishment to a ground station, and defines an addressing scheme for ground stations. The MAC protocol is a version of Carrier Sense Multiple Access (CSMA). The VDL Mode 2 Physical Layer specifies the use, in a 25 kHz wide VHF channel, of a modulation scheme called differential 8- phase-shift keying (D8PSK) with a symbol rate of 10,500 symbols per second. Since each 8-PSK symbol carries 3 bits, the raw (uncoded) physical layer bit rate is thus 10,500 × 3 = 31.5 kilobits per second. [ 2 ] This required the implementation of VHF digital radios. The ICAO standard for VDL Mode 3 defines a protocol providing aircraft with both data and digitized voice communications, defined by the US FAA with support from Mitre. The digitized voice support made the Mode 3 protocol much more complex than VDL Mode 2. The data and digitized voice packets go in Time Division Multiple Access (TDMA) slots assigned by ground stations. The FAA implemented a prototype system around 2003 but did not manage to convince airlines to install VDL Mode 3 avionics, and in 2004 abandoned its implementation. The ICAO standard for VDL Mode 4 specifies a protocol enabling aircraft to exchange data with ground stations and other aircraft.
VDL Mode 4 uses a protocol (Self-organized Time Division Multiple Access, STDMA , invented by the Swede Håkan Lans in 1988) that allows it to be self-organizing, meaning no master ground station is required. This made it much simpler to implement than VDL Mode 3. In November 2001 this protocol was adopted by ICAO as a global standard. Its primary function was to provide a VHF-frequency physical layer for ADS-B transmissions. However, it was overtaken as the link for ADS-B by the Mode S radar link operating in the 1,090 MHz band, which was selected as the primary link by the ICAO Air Navigation Conference in 2003. The VDL Mode 4 medium can also be used for air-ground exchanges. It is best used for short message transmissions between a large number of users, e.g. providing situational awareness, Digital Aeronautical Information Management ( D-AIM ), etc. European Air Traffic Management modernization trials have implemented ADS-B and air-ground exchanges using VDL Mode 4 systems. However, on air transport aircraft the operational implementations of ADS-B will use the Mode S link, and those of CPDLC will use VDL Mode 2. [ 3 ] The European Frequency Management Manual of the International Civil Aviation Organization ( ICAO ) contains, among other things, regulations for the use of frequency channels for the VHF Data Link. [ 4 ]
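As a rough numeric illustration of the Mode 2 physical layer described above: an 8-PSK symbol carries log2(8) = 3 bits, so 10,500 symbols/s × 3 bits/symbol = 31,500 bit/s. The differential mapper sketched below is illustrative only; its 3-bit-to-phase-shift table is a naive binary mapping, not necessarily the assignment used by the actual ARINC/ICAO specification.

import numpy as np

# Illustrative differential 8-PSK (D8PSK) mapper: each 3-bit group selects a
# phase CHANGE of k*pi/4, so the information rides on phase transitions.

SYMBOL_RATE = 10_500                      # symbols per second (VDL Mode 2)
BITS_PER_SYMBOL = 3                       # log2(8) for 8-PSK
print(SYMBOL_RATE * BITS_PER_SYMBOL)      # 31500 bit/s raw physical rate

def d8psk_phases(bits):
    """Map a bit sequence (length divisible by 3) to absolute carrier phases."""
    assert len(bits) % BITS_PER_SYMBOL == 0
    phase, out = 0.0, []
    for i in range(0, len(bits), BITS_PER_SYMBOL):
        k = bits[i] * 4 + bits[i + 1] * 2 + bits[i + 2]   # 3 bits -> 0..7
        phase = (phase + k * np.pi / 4) % (2 * np.pi)     # differential step
        out.append(phase)
    return np.array(out)

print(d8psk_phases([0, 0, 1, 1, 1, 1]))   # phase steps of +pi/4, then +7pi/4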
https://en.wikipedia.org/wiki/VHF_Data_Link
Very High Frequency Omnidirectional Range Station ( VOR ) [ 1 ] is a type of short-range VHF radio navigation system for aircraft , enabling an aircraft with a VOR receiver to determine its azimuth (also called the radial), referenced to magnetic north, to or from fixed VOR ground radio beacons . VOR [ 2 ] and the first DME (1950) [ 3 ] system to provide the slant range distance (labelled 1950 because it differs from today's DME/N) were developed in the United States as part of a U.S. civil/military program for Aeronautical Navigation Aids in 1945. Deployment of VOR and DME (1950) began in 1949 by the U.S. CAA (Civil Aeronautics Administration). ICAO standardized VOR and DME (1950) in 1950 in ICAO Annex ed.1. [ 4 ] Frequencies for the use of VOR are standardized in the very high frequency (VHF) band between 108.00 and 117.95 MHz. [ 5 ] Chapter 3, Table A To improve the azimuth accuracy of VOR even under difficult siting conditions, Doppler VOR (DVOR) was developed in the 1960s. Under ICAO rules, VOR is a primary-means navigation system for commercial and general aviation. [ 6 ] [ 7 ] (D)VORs are gradually being decommissioned [ 8 ] [ 9 ] and replaced by DME-DME RNAV (area navigation) [ 5 ] 7.2.3 and satellite-based navigation systems such as GPS in the early 21st century. In 2000 there were about 3,000 VOR stations operating around the world, including 1,033 in the US, but by 2013 the number in the US had been reduced to 967. [ 10 ] The United States is decommissioning approximately half of its VOR stations and other legacy navigation aids as part of a move to performance-based navigation , while still retaining a "Minimum Operational Network" of VOR stations as a backup to GPS. [ 11 ] In 2015, the UK planned to reduce the number of stations from 44 to 19 by 2020. [ 8 ] A VOR beacon radiates, via two or more antennas, an amplitude modulated signal and a frequency modulated subcarrier . By comparing the fixed 30 Hz reference signal with the rotating azimuth-dependent 30 Hz signal, the azimuth from an aircraft to a (D)VOR is detected. The phase difference is indicative of the bearing from the (D)VOR station to the receiver relative to magnetic north. This line of position is called the VOR "radial". While providing the same signal over the air at the VOR receiver antenna, DVOR is based on the Doppler shift to modulate the azimuth-dependent 30 Hz signal in space, by continuously switching the signal among about 25 antenna pairs that form a circle around the central 30 Hz reference antenna. The intersection of radials from two different VOR stations can be used to fix the position of the aircraft, as in earlier radio direction finding (RDF) systems. VOR stations are short-range navigation aids limited to the radio line-of-sight (RLOS) between the transmitter and the receiver in an aircraft. Depending on the site elevation of the VOR and the altitude of the aircraft, Designated Operational Coverages (DOC) of at most about 200 nautical miles (370 kilometres) [ 5 ] Att.C, Fig.C-13 can be achieved. The prerequisite is that the EIRP provides, in spite of losses (e.g. due to propagation and antenna pattern lobing), a sufficiently strong signal at the aircraft VOR antenna that it can be processed successfully by the VOR receiver. Each (D)VOR station broadcasts a VHF radio composite signal including the mentioned navigation and reference signals, a station identifier, and optional additional voice. [ 5 ] 3.3.5 The station's identifier is typically a three-letter string in Morse code .
While defined in Annex 10, the voice channel is seldom used today, e.g. for recorded advisories like ATIS . [ 5 ] 3.3.6 A VORTAC is a radio-based navigational aid for aircraft pilots consisting of a co-located VHF omnidirectional range and a tactical air navigation system (TACAN) beacon. Both types of beacons provide pilots with azimuth information, but the VOR system is generally used by civil aircraft and the TACAN system by military aircraft. However, the TACAN distance measuring equipment is also used for civil purposes because civil DME equipment is built to match the military DME specifications. Most VOR installations in the United States are VORTACs. The system was designed and developed by the Cardion Corporation. The Research, Development, Test, and Evaluation (RDT&E) contract was awarded 28 December 1981. [ 12 ] VOR was developed from the earlier Visual Aural Radio Range (VAR) systems, as part of a U.S. civil/military program for Aeronautical Navigation Aids. [ 2 ] In 1949, VOR, for the azimuth/bearing of an aircraft to/from a VOR installation, and UHF DME (1950), [ 3 ] the subject of the first ICAO Distance Measuring Equipment standard, [ 4 ] were put into operation by the U.S. CAA (Civil Aeronautics Administration). In 1950 ICAO standardized VOR and DME (1950) in Annex 10 ed.1. [ 4 ] The VOR was designed to provide 360 courses to and from the station, selectable by the pilot. Early vacuum tube transmitters with mechanically rotated antennas were widely installed in the 1950s, and began to be replaced with fully solid-state units in the early 1960s. DVORs were gradually implemented. VORs became the major radio navigation system in the 1960s, when they took over from the older radio beacon and four-course (low/medium frequency range) system . Some of the older range stations survived, with the four-course directional features removed, as non-directional low or medium frequency radiobeacons ( NDBs ). A worldwide land-based network of "air highways", known in the US as Victor airways (below 18,000 ft or 5,500 m) and "jet routes" (at and above 18,000 feet), was set up linking VORs. An aircraft can follow a specific path from station to station by tuning into the successive stations on the VOR receiver, and then either following the desired course on a Radio Magnetic Indicator, or setting it on a course deviation indicator (CDI) or a horizontal situation indicator (HSI, a more sophisticated version of the VOR indicator) and keeping a course pointer centered on the display. As of 2005, due to advances in technology, many airports are replacing VOR and NDB approaches with RNAV (GNSS) approach procedures; however, receiver and data update costs [ 13 ] are still significant enough that many small general aviation aircraft are not equipped with GNSS equipment certified for primary navigation or approaches. VOR signals provide considerably greater accuracy and reliability than NDBs due to a combination of factors. Most significant is that VOR provides a bearing from the station to the aircraft which does not vary with wind or orientation of the aircraft. VHF radio is less vulnerable to diffraction (course bending) around terrain features and coastlines. Phase encoding suffers less interference from thunderstorms. VOR signals offer a predictable accuracy of 90 m (300 ft), 2 sigma, at 2 NM from a pair of VOR beacons, [ 14 ] as compared to the accuracy of unaugmented Global Positioning System (GPS) , which is less than 13 meters, 95%. [ 14 ] VOR stations, being VHF, operate on a "line of sight" basis.
This means that if, on a perfectly clear day, you cannot see the transmitter from the receiver antenna, or vice versa, the signal will be either imperceptible or unusable. This limits VOR (and DME ) range to the horizon, or closer if mountains intervene. Although the modern solid-state transmitting equipment requires much less maintenance than the older units, an extensive network of stations, needed to provide reasonable coverage along main air routes, is a significant cost in operating current airway systems. Typically, a VOR station's identifier represents a nearby town, city or airport. For example, the VOR station located on the grounds of John F. Kennedy International Airport has the identifier JFK. VORs are assigned radio channels between 108.0 MHz and 117.95 MHz (with 50 kHz spacing); this is in the very high frequency (VHF) range. The first 4 MHz is shared with the instrument landing system (ILS) band. In the United States, frequencies within the pass band of 108.00 to 111.95 MHz which have an even 100 kHz first digit after the decimal point (108.00, 108.05, 108.20, 108.25, and so on) are reserved for VOR frequencies, while frequencies within the 108.00 to 111.95 MHz pass band with an odd 100 kHz first digit after the decimal point (108.10, 108.15, 108.30, 108.35, and so on) are reserved for ILS. [ 15 ] The VOR encodes azimuth (direction from the station) as the phase relationship between a reference signal and a variable signal. One of them is amplitude modulated, and one is frequency modulated. On conventional VORs (CVOR), the 30 Hz reference signal is frequency modulated (FM) on a 9,960 Hz subcarrier . On these VORs, the amplitude modulation is achieved by rotating a slightly directional antenna exactly in phase with the reference signal, at 30 revolutions per second. Modern installations are Doppler VORs (DVOR), which use a circular array of typically 48 omnidirectional antennas and no moving parts. The active antenna is moved around the circular array electronically to create a Doppler effect , resulting in frequency modulation. The amplitude modulation is created by making the transmission power of antennas at, for example, the north position lower than at the south position. The roles of amplitude and frequency modulation are thus swapped in this type of VOR. Decoding in the receiving aircraft happens in the same way for both types of VOR: the AM and FM 30 Hz components are detected and then compared to determine the phase angle between them. The VOR signal also contains a modulated continuous wave (MCW) 7 wpm Morse code station identifier, and usually contains an amplitude modulated (AM) voice channel. This information is then fed over an analog or digital interface to one of four common types of indicators: In many cases, VOR stations have co-located distance measuring equipment (DME) or military Tactical Air Navigation ( TACAN ) – the latter includes both the DME distance feature and a separate TACAN azimuth feature that provides military pilots data similar to the civilian VOR. A co-located VOR and TACAN beacon is called a VORTAC . A VOR co-located only with DME is called a VOR-DME. A VOR radial with a DME distance allows a one-station position fix. Both VOR-DMEs and TACANs share the same DME system. VORTACs and VOR-DMEs use a standardized scheme of VOR frequency to TACAN/DME channel pairing [ 15 ] so that a specific VOR frequency is always paired with a specific co-located TACAN or DME channel.
On civilian equipment, the VHF frequency is tuned and the appropriate TACAN/DME channel is automatically selected. While the operating principles are different, VORs share some characteristics with the localizer portion of the ILS , and the same antenna, receiving equipment and indicator are used in the cockpit for both. When a VOR station is selected, the OBS is functional and allows the pilot to select the desired radial to use for navigation. When a localizer frequency is selected, the OBS is not functional and the indicator is driven by a localizer converter, typically built into the receiver or indicator. A VOR station serves a volume of airspace called its Service Volume. Some VORs have a relatively small geographic area protected from interference by other stations on the same frequency, called "terminal" or T-VORs. Other stations may have protection out to 130 nautical miles (240 kilometres) or more. It is popularly thought that there is a standard difference in power output between T-VORs and other stations, but in fact the stations' power output is set to provide adequate signal strength in the specific site's service volume. In the United States, there are three standard service volumes (SSV): terminal, low, and high (standard service volumes do not apply to published instrument flight rules (IFR) routes). [ 17 ] Additionally, two new service volumes – "VOR low" and "VOR high" – were added in 2021, providing expanded coverage above 5,000 feet AGL. This allows aircraft to continue to receive off-route VOR signals despite the reduced number of VOR ground stations provided by the VOR Minimum Operational Network. [ 18 ] VOR and the older NDB stations were traditionally used as intersections along airways . A typical airway will hop from station to station in straight lines. When flying in a commercial airliner , an observer will notice that the aircraft flies in straight lines occasionally broken by a turn to a new course. These turns are often made as the aircraft passes over a VOR station or at an intersection in the air defined by one or more VORs. Navigational reference points can also be defined by the point at which two radials from different VOR stations intersect, or by a VOR radial and a DME distance. This is the basic form of RNAV and allows navigation to points located away from VOR stations. As RNAV systems have become more common, in particular those based on GPS , more and more airways have been defined by such points, removing the need for some of the expensive ground-based VORs. In many countries there are two separate systems of airways at lower and higher levels: the lower Airways (known in the US as Victor Airways ) and Upper Air Routes (known in the US as Jet routes ). Most aircraft equipped for instrument flight (IFR) have at least two VOR receivers. As well as providing a backup to the primary receiver, the second receiver allows the pilot to easily follow a radial to or from one VOR station while watching the second receiver to see when a certain radial from another VOR station is crossed, allowing the aircraft's exact position at that moment to be determined, and giving the pilot the option of changing to the new radial if they wish. As of 2008, space-based Global Navigation Satellite Systems (GNSS) such as the Global Positioning System ( GPS ) are increasingly replacing VOR and other ground-based systems. [ 20 ] In 2016, GNSS was mandated as the primary means of navigation for IFR aircraft in Australia.
[ 9 ] GNSS systems have a lower transmitter cost per customer and provide distance and altitude data. Future satellite navigation systems, such as the European Union Galileo, and GPS augmentation systems are developing techniques to eventually equal or exceed VOR accuracy. However, low VOR receiver cost, a broad installed base and commonality of receiver equipment with ILS are likely to extend VOR dominance in aircraft until space receiver cost falls to a comparable level. As of 2008 in the United States, GPS-based approaches outnumbered VOR-based approaches, but VOR-equipped IFR aircraft outnumber GPS-equipped IFR aircraft. [ citation needed ] There is some concern that GNSS navigation is subject to interference or sabotage, leading in many countries to the retention of VOR stations for use as a backup. [ citation needed ] The VOR signal has the advantage of static mapping to local terrain. [ clarification needed ]

The US FAA plans [ 21 ] to decommission roughly half of the 967 [ 22 ] VOR stations in the US by 2020, retaining a "Minimum Operational Network" to provide coverage to all aircraft more than 5,000 feet above the ground. Most of the decommissioned stations will be east of the Rocky Mountains, where there is more overlap in coverage between them. [ citation needed ] On July 27, 2016, a final policy statement was released [ 23 ] specifying stations to be decommissioned by 2025. A total of 74 stations are to be decommissioned in Phase 1 (2016–2020), and 234 more stations are scheduled to be taken out of service in Phase 2 (2021–2025). In the UK, 19 VOR transmitters are to be kept operational until at least 2020. Those at Cranfield and Dean Cross were decommissioned in 2014, with the remaining 25 to be assessed between 2015 and 2020. [ 24 ] [ 25 ] Similar efforts are underway in Australia, [ 26 ] and elsewhere. In the UK and the United States, DME transmitters are planned to be retained in the near future even after co-located VORs are decommissioned. [ 8 ] [ 11 ] However, there are long-term plans to decommission DME, TACAN and NDBs.

The VOR signal encodes a Morse code identifier, optional voice, and a pair of navigation tones. The radial azimuth is equal to the phase angle between the lagging and leading navigation tones. The conventional (CVOR) signal encodes the station identifier i(t), optional voice a(t), and the navigation reference signal within c(t), all carried on the isotropic (i.e. omnidirectional) component; the reference signal rides on an F3 (FM) subcarrier. The navigation variable signal is encoded by mechanically or electrically rotating a directional antenna pattern, g(A,t), to produce A3 (AM) modulation. Receivers in different directions from the station see a different phase alignment between the demodulated F3 and A3 signals:

{\displaystyle {\begin{aligned}e(A,t)&=\cos(2\pi F_{c}t)\,{\bigl (}1+c(t)+g(A,t){\bigr )}\\c(t)&=M_{i}\cos(2\pi F_{i}t)\,i(t)+M_{a}\,a(t)+M_{d}\cos \!\left(2\pi \int _{0}^{t}{\bigl (}F_{s}+F_{d}\cos(2\pi F_{n}\tau ){\bigr )}\,d\tau \right)\\g(A,t)&=M_{n}\cos(2\pi F_{n}t-A)\end{aligned}}}

where F_c is the VHF carrier frequency, F_n = 30 Hz is the navigation tone frequency, F_s = 9,960 Hz is the subcarrier frequency, F_d = 480 Hz is the subcarrier peak deviation, F_i = 1,020 Hz is the identifier tone frequency, A is the receiver's azimuth from the station, and the M factors are the respective modulation indices.

The Doppler (DVOR) signal encodes the station identifier i(t), optional voice a(t), and the navigation variable signal within c(t), the isotropic (i.e. omnidirectional) component.
The navigation variable signal is A3 (AM) modulated. The navigation reference signal is delayed, t₊ and t₋, by electrically revolving a pair of transmitters. The cyclic Doppler blue shift, and corresponding Doppler red shift, as a transmitter closes on and recedes from the receiver, results in F3 (FM) modulation. The pairing of transmitters offset equally above and below the isotropic carrier frequency produces the upper and lower sidebands. Closing and receding equally on opposite sides of the same circle around the isotropic transmitter produces the F3 subcarrier modulation, g(A,t):

{\displaystyle {\begin{aligned}t&=t_{+}(A,t)-{\frac {R}{C}}\sin \!{\bigl (}2\pi F_{n}\,t_{+}(A,t)+A{\bigr )}\\t&=t_{-}(A,t)+{\frac {R}{C}}\sin \!{\bigl (}2\pi F_{n}\,t_{-}(A,t)+A{\bigr )}\\e(A,t)&={\bigl (}1+c(t){\bigr )}\cos(2\pi F_{c}t)+g(A,t)\\c(t)&=M_{i}\cos(2\pi F_{i}t)\,i(t)+M_{a}\,a(t)+M_{n}\cos(2\pi F_{n}t)\\g(A,t)&={\tfrac {1}{2}}M_{d}\cos \!{\bigl (}2\pi (F_{c}+F_{s})\,t_{+}(A,t){\bigr )}+{\tfrac {1}{2}}M_{d}\cos \!{\bigl (}2\pi (F_{c}-F_{s})\,t_{-}(A,t){\bigr )}\end{aligned}}}

where the revolution radius {\displaystyle R={\frac {F_{d}\,C}{2\pi F_{n}F_{c}}}} is 6.76 ± 0.3 m. The transmitter acceleration {\displaystyle 4\pi ^{2}F_{n}^{2}R} (about 24,000 g) makes mechanical revolution impractical, and halves (gravitational redshift) the frequency change ratio compared to transmitters in free-fall.

The mathematics describing the operation of a DVOR is far more complex than indicated above; the reference to "electronically rotated" is a vast simplification. The primary complication relates to a process called "blending". [ citation needed ] Another complication is that the phases of the upper and lower sideband signals have to be locked to each other. The composite signal is detected by the receiver. The electronic operation of detection effectively shifts the carrier down to 0 Hz, folding the signals with frequencies below the carrier on top of the frequencies above the carrier. Thus the upper and lower sidebands are summed. If there is a phase shift φ between these two, then the combination will have a relative amplitude of 1 + cos φ. If φ were 180°, the aircraft's receiver would not detect any sub-carrier (signal A3).

"Blending" describes the process by which a sideband signal is switched from one antenna to the next. The switching is not discontinuous: the amplitude of the next antenna rises as the amplitude of the current antenna falls. When one antenna reaches its peak amplitude, the next and previous antennas have zero amplitude. By radiating from two antennas, the effective phase center becomes a point between the two. Thus the phase reference is swept continuously around the ring – not stepped, as would be the case with discontinuous antenna-to-antenna switching.
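The blending crossfade just described can be sketched in a few lines. The following Python fragment is an illustration under assumed parameters: the raised-cosine amplitude law is one plausible choice (real DVOR blending laws are equipment-specific), and the function names are made up for the example. It crossfades a sideband between adjacent antennas of a 48-element ring and shows the effective phase centre sweeping smoothly between them:

```python
# Illustrative sketch of DVOR "blending": the outgoing antenna's amplitude
# falls while the next antenna's rises, so the effective phase centre sweeps
# smoothly around the ring instead of stepping antenna-to-antenna.
import numpy as np

N_ANT = 48                       # antennas in the ring (48 or 50 are typical)
F_N = 30.0                       # ring rotation rate in rev/s

def blended_amplitudes(t: float) -> np.ndarray:
    """Amplitude of every antenna at time t for one sideband transmitter."""
    pos = (F_N * t % 1.0) * N_ANT          # continuous position, in units of antennas
    k = int(pos) % N_ANT                   # current antenna index
    frac = pos - int(pos)                  # progress toward the next antenna
    amps = np.zeros(N_ANT)
    amps[k] = np.cos(0.5 * np.pi * frac)              # falls from 1 to 0
    amps[(k + 1) % N_ANT] = np.sin(0.5 * np.pi * frac)  # rises from 0 to 1
    return amps

# The effective phase centre lies between the two driven antennas:
angles = 2 * np.pi * np.arange(N_ANT) / N_ANT
step = 1.0 / (F_N * N_ANT)                 # time spent per antenna (~1/1440 s)
for t in (0.0, 0.25 * step, 0.5 * step):
    a = blended_amplitudes(t)
    x, y = np.sum(a * np.cos(angles)), np.sum(a * np.sin(angles))
    print(f"t={t:.2e} s  phase centre at {np.degrees(np.arctan2(y, x)):.2f} deg")
```

A raised-cosine pair has the convenient property that the squares of the two amplitudes sum to one, so the total radiated sideband power stays constant through the crossfade.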
In the electromechanical antenna switching systems employed before solid state antenna switching systems were introduced, the blending was a by-product of the way the motorized switches worked. These switches brushed a coaxial cable past 48 or 50 antenna feeds. As the cable moved between two antenna feeds, it would couple signal into both.

But blending accentuates another complication of a DVOR. Each element in a DVOR array is an omnidirectional antenna, usually an Alford loop (see Andrew Alford). Unfortunately, the sideband antennas are very close together, so that approximately 55% of the energy radiated is absorbed by the adjacent antennas. [ citation needed ] Half of that is re-radiated, and half is sent back along the antenna feeds of the adjacent antennas. [ citation needed ] The result is an antenna pattern that is no longer omnidirectional. This causes the effective sideband signal to be amplitude modulated at 60 Hz as far as the aircraft's receiver is concerned. The phase of this modulation can affect the detected phase of the sub-carrier. This effect is called "coupling".

Blending complicates this effect, because when two adjacent antennas radiate a signal, they create a composite antenna. Imagine two antennas that are separated by half their wavelength: in the transverse direction the two signals will sum, but in the tangential direction they will cancel. Thus as the signal "moves" from one antenna to the next, the distortion in the antenna pattern will increase and then decrease, with the peak distortion at the midpoint. This creates a half-sinusoidal 1,500 Hz amplitude distortion in the case of a 50 antenna system (1,440 Hz in a 48 antenna system). This distortion is itself amplitude modulated at 60 Hz (with some 30 Hz content as well). It can add to or subtract from the above-mentioned 60 Hz distortion depending on the carrier phase; in fact, one can add an offset to the carrier phase (relative to the sideband phases) so that the 60 Hz components tend to null one another. There is a 30 Hz component, though, which has some pernicious effects. DVOR designs use all sorts of mechanisms to try to compensate for these effects. The methods chosen are major selling points for each manufacturer, with each extolling the benefits of its technique over its rivals'. Note that ICAO Annex 10 limits the worst case amplitude modulation of the sub-carrier to 40%. A DVOR that did not employ some technique to compensate for coupling and blending effects would not meet this requirement.

The predicted accuracy of the VOR system is ±1.4°. However, test data indicate that 99.94% of the time a VOR system has less than ±0.35° of error. [ citation needed ] Internal monitoring of a VOR station will shut it down, or change over to a standby system, if the station error exceeds some limit. A Doppler VOR beacon will typically change over or shut down when the bearing error exceeds 1.0°. [ 14 ] National airspace authorities may often set tighter limits; for instance, in Australia a primary alarm limit may be set as low as ±0.5° on some Doppler VOR beacons. [ citation needed ] ARINC 711-10 (January 30, 2002) states that receiver accuracy should be within 0.4° with a statistical probability of 95% under various conditions. Any receiver compliant with this standard can be expected to perform within these tolerances. All radio navigation beacons are required to monitor their own output.
Most have redundant systems, so that the failure of one system will cause automatic change-over to one or more standby systems. The monitoring and redundancy requirements in some instrument landing systems (ILS) can be very strict. The general philosophy followed is that no signal is preferable to a poor signal.

VOR beacons monitor themselves by having one or more receiving antennas located away from the beacon. The signals from these antennas are processed to monitor many aspects of the signals. The signals monitored are defined in various US and European standards; the principal standard is European Organisation for Civil Aviation Equipment (EuroCAE) Standard ED-52. The five main parameters monitored are the bearing accuracy, the reference and variable signal modulation indices, the signal level, and the presence of notches (caused by individual antenna failures).

Note that the signals received by these monitor antennas at a Doppler VOR beacon are different from the signals received by an aircraft, because the antennas are close to the transmitter and are affected by proximity effects. For example, at 113 MHz and a monitor distance of 80 m, the free-space path loss from the nearest sideband antennas differs by about 1.5 dB from that of the sideband antennas on the far side of the array; for a distant aircraft there will be no measurable difference. Similarly, the peak rate of phase change seen by a receiver comes from the tangential antennas. For the aircraft these tangential paths will be almost parallel, but this is not the case for an antenna near the DVOR.

The bearing accuracy specification for all VOR beacons is defined in the International Civil Aviation Organization Convention on International Civil Aviation Annex 10, Volume 1. This document sets the worst case bearing accuracy performance of a Conventional VOR (CVOR) at ±4°; a Doppler VOR (DVOR) is required to be within ±1°.

All radio-navigation beacons are checked periodically to ensure that they are performing to the appropriate international and national standards. This includes VOR beacons, distance measuring equipment (DME), instrument landing systems (ILS), and non-directional beacons (NDB). Their performance is measured by aircraft fitted with test equipment. The VOR test procedure is to fly around the beacon in circles at defined distances and altitudes, and also along several radials. These aircraft measure signal strength, the modulation indices of the reference and variable signals, and the bearing error. They will also measure other selected parameters, as requested by local/national airspace authorities. Note that the same procedure is used (often in the same flight test) to check distance measuring equipment (DME).

In practice, bearing errors can often exceed those defined in Annex 10 in some directions. This is usually due to terrain effects, buildings near the VOR, or, in the case of a DVOR, some counterpoise effects. Doppler VOR beacons use an elevated ground plane, called a counterpoise, to lift the effective antenna pattern: it creates a strong lobe at an elevation angle of 30° that complements the 0° lobe of the antennas themselves. A counterpoise, though, rarely works exactly as one would hope. For example, the edge of the counterpoise can absorb and re-radiate signals from the antennas, and it may do this differently in some directions than in others.
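As a rough check on the 1.5 dB figure quoted above (assuming a monitor antenna 80 m from the array centre and the nominal sideband ring radius of about 6.76 m derived earlier), the near-side and far-side free-space path losses differ by {\displaystyle 20\log _{10}{\frac {80+6.76}{80-6.76}}\approx 20\log _{10}(1.18)\approx 1.5{\text{ dB}}}, a difference that vanishes at aircraft ranges.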
National airspace authorities will accept these bearing errors when they occur along directions that are not on defined air traffic routes. For example, in mountainous areas the VOR may only provide sufficient signal strength and bearing accuracy along one runway approach path.

Doppler VOR beacons are inherently more accurate than conventional VORs because they are less affected by reflections from hills and buildings. The variable signal in a DVOR is the 30 Hz FM signal; in a CVOR it is the 30 Hz AM signal. If the AM signal from a CVOR beacon bounces off a building or hill, the aircraft will see a phase that appears to lie at the phase centre of the main signal and the reflected signal, and this phase centre will move as the beam rotates. In a DVOR beacon, the variable signal, if reflected, will appear as two FM signals of unequal strengths and different phases. Twice per 30 Hz cycle, the instantaneous deviation of the two signals will be the same, and the phase locked loop will get (briefly) confused. As the two instantaneous deviations drift apart again, the phase locked loop will follow the signal with the greater strength, which will be the line-of-sight signal. If the phase separation of the two deviations is small, however, the phase locked loop will be less likely to lock on to the true signal for a larger percentage of the 30 Hz cycle (this will depend on the bandwidth of the output of the phase comparator in the aircraft). In general, some reflections can cause minor problems, but these are usually about an order of magnitude smaller than in a CVOR beacon.

If a pilot wants to approach the VOR station from due east then the aircraft will have to fly due west to reach the station. The pilot will use the OBS to rotate the compass dial until the number 27 (270°) aligns with the pointer (called the primary index) at the top of the dial. When the aircraft intercepts the 90° radial (due east of the VOR station) the needle will be centred and the To/From indicator will show "To". Notice that the pilot sets the VOR to indicate the reciprocal; the aircraft will follow the 90° radial while the VOR indicates that the course "to" the VOR station is 270°. This is called "proceeding inbound on the 090 radial". The pilot needs only to keep the needle centred to follow the course to the VOR station. If the needle drifts off-centre, the aircraft is turned towards the needle until it is centred again. After the aircraft passes over the VOR station, the To/From indicator will indicate "From" and the aircraft is then proceeding outbound on the 270° radial. The CDI needle may oscillate or go to full scale in the "cone of confusion" directly over the station but will recentre once the aircraft has flown a short distance beyond the station.

As an example, suppose the heading ring is set with 360° (north) at the primary index, the needle is centred and the To/From indicator is showing "TO". The VOR is indicating that the aircraft is on the 360° course (north) to the VOR station (i.e. the aircraft is south of the VOR station). If the To/From indicator were showing "From", it would mean the aircraft was on the 360° radial from the VOR station (i.e. the aircraft is north of the VOR). Note that there is absolutely no indication of what direction the aircraft is flying. The aircraft could be flying due west, and this snapshot of the VOR could be the moment when it crossed the 360° radial.
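The heading-independence of the indication lends itself to a small worked example. The following Python sketch is an illustration, not a certified avionics algorithm; the ±10° full-scale deflection is a typical value assumed here, and the sign convention and function names are made up for the example. It computes the needle deflection and To/From flag purely from the radial the aircraft occupies and the selected OBS course:

```python
# Sketch of OBS/CDI logic: the indication depends only on which radial the
# aircraft is on and the selected course, never on aircraft heading.
FULL_SCALE_DEG = 10.0   # typical VOR CDI full-scale deflection (assumption)

def wrap180(angle: float) -> float:
    """Wrap an angle in degrees into [-180, 180)."""
    return (angle + 180.0) % 360.0 - 180.0

def vor_indication(radial_deg: float, obs_deg: float):
    """Return (needle, flag): needle in degrees, + meaning deflect right."""
    off_from = wrap180(radial_deg - obs_deg)
    if abs(off_from) <= 90.0:
        flag, needle = "FROM", -off_from
    else:                                  # station lies ahead on this course
        flag, needle = "TO", wrap180(radial_deg - 180.0 - obs_deg)
    needle = max(-FULL_SCALE_DEG, min(FULL_SCALE_DEG, needle))
    return needle, flag

# The examples from the text:
print(vor_indication(radial_deg=90.0, obs_deg=270.0))   # inbound due west: (0.0, 'TO')
print(vor_indication(radial_deg=180.0, obs_deg=360.0))  # south of station: (0.0, 'TO')
print(vor_indication(radial_deg=315.0, obs_deg=360.0))  # needle pegged right: (10.0, 'FROM')
```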
Before using a VOR indicator for the first time, it can be tested and calibrated at an airport with a VOR test facility, or VOT. A VOT differs from a VOR in that it replaces the variable directional signal with another omnidirectional signal, in a sense transmitting a 360° radial in all directions. The NAV receiver is tuned to the VOT frequency, then the OBS is rotated until the needle is centred. If the indicator reads within four degrees of 000 with the FROM flag visible, or 180 with the TO flag visible, it is considered usable for navigation. The FAA requires testing and calibration of a VOR indicator no more than 30 days before any flight under IFR. [ 27 ]

There are many methods available to determine what heading to fly to intercept a radial from the station or a course to the station. The most common method involves the acronym T-I-T-P-I-T, which stands for Tune – Identify – Twist – Parallel – Intercept – Track. Each of these steps is important to ensure the aircraft is headed where it is being directed. First, tune the desired VOR frequency into the navigation radio. Second, and most importantly, identify the correct VOR station by verifying the Morse code heard against the sectional chart. Third, twist the VOR OBS knob to the desired radial (FROM) or course (TO) the station. Fourth, parallel: turn the aircraft until the heading indicator matches the radial or course set in the OBS. Fifth, intercept by flying towards the needle: if the needle is to the left, turn left by 30–45°, and vice versa. Last, once the VOR needle is centred, turn back to the heading of the radial or course to track it. If there is wind, a wind correction angle will be necessary to keep the VOR needle centred.

Another method of intercepting a VOR radial more closely mirrors the operation of an HSI (Horizontal Situation Indicator). The first three steps above are the same: tune, identify and twist. At this point, the VOR needle should be displaced to either the left or the right. Looking at the VOR indicator, the numbers on the same side as the needle will always be the headings needed to return the needle back to centre. The aircraft heading should then be turned to align with one of those shaded headings. If done properly, this method will never produce reverse sensing. [ definition needed ] Using this method also builds a quick understanding of how an HSI works, since the HSI shows visually what this method works out mentally.

As an example, suppose an aircraft is flying a heading of 180° while located on a bearing of 315° from the VOR. After the OBS knob is twisted to 360°, the needle deflects to the right, shading the numbers between 360 and 090. If the aircraft turns to a heading anywhere in this range, it will intercept the radial. Although the needle deflects to the right, the shortest way of turning to the shaded range is a turn to the left.
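The "shaded numbers" rule above reduces to simple arithmetic: the intercepting headings span the quarter of the dial from the selected course to 90° beyond it, on the needle's side. A hypothetical helper (the name and return convention are illustrative only), building on the `vor_indication` sketch earlier:

```python
# Sketch of the "shaded numbers" interception rule: headings on the needle's
# side of the dial, within 90 degrees of the selected course, re-centre the needle.
def intercept_headings(obs_deg: float, needle_right: bool):
    """Return the (start, end) heading range, clockwise, that intercepts the course."""
    if needle_right:
        return obs_deg % 360.0, (obs_deg + 90.0) % 360.0
    return (obs_deg - 90.0) % 360.0, obs_deg % 360.0

# Example from the text: OBS twisted to 360, needle deflected right.
lo, hi = intercept_headings(360.0, needle_right=True)
print(lo, hi)   # -> 0.0 90.0, i.e. any heading from 360 through 090 intercepts
```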
https://en.wikipedia.org/wiki/VHF_omnidirectional_range
Vanadium(II) iodide is the inorganic compound with the formula VI₂. It is a black micaceous solid. It adopts the cadmium iodide structure, featuring octahedral V(II) centers. [ 1 ] The hexahydrate [V(H₂O)₆]I₂, an aquo complex, is also known. It forms red-violet crystals. The hexahydrate dehydrates under vacuum to give a red-brown tetrahydrate with the formula V(H₂O)₄I₂. [ 2 ] The original synthesis of VI₂ involved reaction of the elements. [ 1 ] Solvated vanadium(II) iodides can be prepared by reduction of vanadium(III) chlorides with trimethylsilyl iodide. [ 3 ] It reacts with anhydrous ammonia to give the hexaammine complex. [ 4 ]
https://en.wikipedia.org/wiki/VI2
The VIA APC is a low-cost ($49) single-board computer from VIA Technologies designed to run the Android operating system. It has been available for purchase since July 2012. Since January 2013, enhanced versions have been available for purchase at a higher price. [ 1 ] The APC Paper version, housed in a recycled cardboard case resembling a book, won a Design and Innovation Award at Computex 2013. [ 2 ] [ 3 ] This computing article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/VIA_APC
The VIKOR method is a multi-criteria decision making (MCDM) method. It was originally developed by Serafim Opricovic in 1979 to solve decision problems with conflicting and noncommensurable (different units) criteria. It assumes that compromise is acceptable for conflict resolution and that the decision maker wants a solution that is the closest to the ideal, so the alternatives are evaluated according to all established criteria. VIKOR then ranks alternatives and determines the solution, named the compromise solution, that is the closest to the ideal.

The idea of the compromise solution was introduced in MCDM by Po-Lung Yu in 1973 [ 1 ] and by Milan Zeleny. [ 2 ] S. Opricovic developed the basic ideas of VIKOR in his Ph.D. dissertation in 1979, and an application was published in 1980. [ 3 ] The name VIKOR appeared in 1990, [ 4 ] from Serbian: VIseKriterijumska Optimizacija I Kompromisno Resenje, meaning Multicriteria Optimization and Compromise Solution, with pronunciation "vikor". The first real applications were presented in 1998. [ 5 ] A 2004 paper contributed to the international recognition of the VIKOR method [ 6 ] (the most cited paper in the field of Economics, Science Watch, April 2009).

The MCDM problem is stated as follows: determine the best (compromise) solution in the multicriteria sense from the set of J feasible alternatives A_1, A_2, ..., A_J, evaluated according to a set of n criterion functions. The input data are the elements f_ij of the performance (decision) matrix, where f_ij is the value of the i-th criterion function for the alternative A_j. The VIKOR procedure has the following steps:

Step 1. Determine the best f_i* and the worst f_i^ values of all criterion functions, i = 1, 2, ..., n: f_i* = max_j f_ij and f_i^ = min_j f_ij if the i-th function is a benefit; f_i* = min_j f_ij and f_i^ = max_j f_ij if the i-th function is a cost.

Step 2. Compute the values S_j and R_j, j = 1, 2, ..., J, by the relations S_j = sum_i [ w_i (f_i* − f_ij) / (f_i* − f_i^) ], a weighted and normalized Manhattan distance, and R_j = max_i [ w_i (f_i* − f_ij) / (f_i* − f_i^) ], a weighted and normalized Chebyshev distance, where the w_i are the weights of the criteria, expressing the DM's preference as the relative importance of the criteria.

Step 3. Compute the values Q_j, j = 1, 2, ..., J, by the relation Q_j = v (S_j − S*) / (S^ − S*) + (1 − v)(R_j − R*) / (R^ − R*), where S* = min_j S_j, S^ = max_j S_j, R* = min_j R_j, R^ = max_j R_j, and v is introduced as a weight for the strategy of maximum group utility, whereas 1 − v is the weight of the individual regret. These strategies can be compromised by v = 0.5, and here v may be modified as v = (n + 1) / (2n) (from v + 0.5(n − 1)/n = 1), since the criterion (1 of n) related to R is included in S too.

Step 4. Rank the alternatives, sorting by the values S, R and Q, from the minimum value. The results are three ranking lists.

Step 5. Propose as a compromise solution the alternative A(1), which is the best ranked by the measure Q (minimum), if the following two conditions are satisfied: C1, "acceptable advantage": Q(A(2)) − Q(A(1)) ≥ DQ, where A(2) is the alternative in second position in the ranking list by Q and DQ = 1/(J − 1); and C2, "acceptable stability in decision making": the alternative A(1) must also be the best ranked by S and/or R.
This compromise solution is stable within a decision making process, which could be the strategy of maximum group utility (when v > 0.5 is needed), "by consensus" (v ≈ 0.5), or "with veto" (v < 0.5). If one of the conditions is not satisfied, then a set of compromise solutions is proposed, which consists of:

- alternatives A(1) and A(2) if only condition C2 is not satisfied, or
- alternatives A(1), A(2), ..., A(M) if condition C1 is not satisfied, where A(M) is determined by the relation Q(A(M)) − Q(A(1)) < DQ for maximum M (the positions of these alternatives are "in closeness").

The obtained compromise solution can be accepted by the decision makers because it provides a maximum utility of the majority (represented by min S) and a minimum individual regret of the opponent (represented by min R). The measures S and R are integrated into Q for the compromise solution, the basis for an agreement established by mutual concessions.

A comparative analysis of the MCDM methods VIKOR, TOPSIS, ELECTRE and PROMETHEE was presented in a 2007 paper, through a discussion of their distinctive features and their application results. [ 7 ] Sayadi et al. extended the VIKOR method for decision making with interval data; [ 8 ] Heydari et al. extended the method to multiple objective large-scale nonlinear programming problems. [ 9 ]

The Fuzzy VIKOR method has been developed to solve problems in a fuzzy environment where both criteria and weights can be fuzzy sets. Triangular fuzzy numbers are used to handle imprecise numerical quantities. Fuzzy VIKOR is based on an aggregating fuzzy merit that represents the distance of an alternative to the ideal solution. Fuzzy operations and procedures for ranking fuzzy numbers are used in developing the fuzzy VIKOR algorithm. [ 10 ]
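The steps above translate directly into a few lines of code. The following Python sketch is a minimal illustration assuming all criteria are benefit criteria and v = 0.5; the function name and the toy data are made up for the example. It computes S, R and Q as defined in Steps 1–3 and checks the two compromise conditions of Step 5:

```python
# Minimal VIKOR sketch: f[i][j] is criterion i for alternative j (benefit
# criteria assumed), w holds the criteria weights, v weighs group utility
# against individual regret.
import numpy as np

def vikor(f: np.ndarray, w: np.ndarray, v: float = 0.5):
    """Return (S, R, Q, compromise_ok) for an n x J performance matrix f."""
    f_best = f.max(axis=1)           # f_i* (all criteria assumed benefit)
    f_worst = f.min(axis=1)          # f_i^
    span = np.where(f_best != f_worst, f_best - f_worst, 1.0)
    d = w[:, None] * (f_best[:, None] - f) / span[:, None]
    S = d.sum(axis=0)                # weighted, normalized Manhattan distance
    R = d.max(axis=0)                # weighted, normalized Chebyshev distance
    def norm(x):
        rng = x.max() - x.min()
        return (x - x.min()) / (rng if rng else 1.0)
    Q = v * norm(S) + (1.0 - v) * norm(R)
    order = np.argsort(Q)            # ranking by Q, best first
    J = f.shape[1]
    c1 = Q[order[1]] - Q[order[0]] >= 1.0 / (J - 1)   # C1: acceptable advantage
    c2 = order[0] in (np.argmin(S), np.argmin(R))     # C2: acceptable stability
    return S, R, Q, bool(c1 and c2)

# Toy problem: three alternatives scored on two equally weighted benefit criteria.
f = np.array([[9.0, 5.0, 6.0],
              [8.0, 4.0, 5.0]])
S, R, Q, ok = vikor(f, np.array([0.5, 0.5]))
print(np.round(Q, 3), "compromise conditions satisfied:", ok)
```

Running it ranks the first alternative best by S, R and Q, and both conditions hold, so it is proposed as the single compromise solution; with closer scores, condition C1 fails and the sketch would instead signal that a set of compromise solutions is needed.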
https://en.wikipedia.org/wiki/VIKOR_method