Dataset columns: id (int64, 580 to 79M), url (string, lengths 31 to 175), text (string, lengths 9 to 245k), source (string, lengths 1 to 109), categories (string, 160 classes), token_count (int64, 3 to 51.8k).
44,363
https://en.wikipedia.org/wiki/Wien%27s%20displacement%20law
In physics, Wien's displacement law states that the black-body radiation curve for different temperatures will peak at different wavelengths that are inversely proportional to the temperature. The shift of that peak is a direct consequence of the Planck radiation law, which describes the spectral brightness or intensity of black-body radiation as a function of wavelength at any given temperature. However, it had been discovered by German physicist Wilhelm Wien several years before Max Planck developed that more general equation, and describes the entire shift of the spectrum of black-body radiation toward shorter wavelengths as temperature increases. Formally, the wavelength version of Wien's displacement law states that the spectral radiance of black-body radiation per unit wavelength peaks at the wavelength λ_peak given by: λ_peak = b/T, where T is the absolute temperature and b is a constant of proportionality called Wien's displacement constant, equal to approximately 2.898 × 10^-3 m·K, or b ≈ 2898 μm·K. This is an inverse relationship between wavelength and temperature. So the higher the temperature, the shorter or smaller the wavelength of the thermal radiation. The lower the temperature, the longer or larger the wavelength of the thermal radiation. For visible radiation, hot objects emit bluer light than cool objects. If one is considering the peak of black body emission per unit frequency or per proportional bandwidth, one must use a different proportionality constant. However, the form of the law remains the same: the peak wavelength is inversely proportional to temperature, and the peak frequency is directly proportional to temperature. There are other formulations of Wien's displacement law, which are parameterized relative to other quantities. For these alternate formulations, the form of the relationship is similar, but the proportionality constant, b, differs. Wien's displacement law may be referred to as "Wien's law", a term which is also used for the Wien approximation. In "Wien's displacement law", the word displacement refers to how the intensity-wavelength graphs appear shifted (displaced) for different temperatures.
Examples
Wien's displacement law is relevant to some everyday experiences: A piece of metal heated by a blow torch first becomes "red hot" as the very longest visible wavelengths appear red, then becomes more orange-red as the temperature is increased, and at very high temperatures would be described as "white hot" as shorter and shorter wavelengths come to predominate the black-body emission spectrum. Before it has even reached the red-hot temperature, the thermal emission is mainly at longer infrared wavelengths, which are not visible; nevertheless, that radiation can be felt as it warms one's nearby skin. One easily observes changes in the color of an incandescent light bulb (which produces light through thermal radiation) as the temperature of its filament is varied by a light dimmer. As the light is dimmed and the filament temperature decreases, the distribution of color shifts toward longer wavelengths and the light appears redder, as well as dimmer. A wood fire at 1500 K puts out peak radiation at about 2000 nanometers. 98% of its radiation is at wavelengths longer than 1000 nm, and only a tiny proportion at visible wavelengths (390–700 nanometers). Consequently, a campfire can keep one warm but is a poor source of visible light. The effective temperature of the Sun is 5778 K.
Using Wien's law, one finds a peak emission per nanometer (of wavelength) at a wavelength of about 500 nm, in the green portion of the spectrum near the peak sensitivity of the human eye. On the other hand, in terms of power per unit optical frequency, the Sun's peak emission is at 343 THz or a wavelength of 883 nm in the near infrared. In terms of power per percentage bandwidth, the peak is at about 635 nm, a red wavelength. About half of the Sun's radiation is at wavelengths shorter than 710 nm, about the limit of human vision. Of that, about 12% is at wavelengths shorter than 400 nm, ultraviolet wavelengths, which are invisible to an unaided human eye. A large amount of the Sun's radiation falls in the fairly small visible spectrum and passes through the atmosphere. The preponderance of emission in the visible range, however, is not the case in most stars. The hot supergiant Rigel emits 60% of its light in the ultraviolet, while the cool supergiant Betelgeuse emits 85% of its light at infrared wavelengths. With both stars prominent in the constellation of Orion, one can easily appreciate the color difference between the blue-white Rigel (T = 12100 K) and the red Betelgeuse (T ≈ 3800 K). While few stars are as hot as Rigel, stars cooler than the Sun or even as cool as Betelgeuse are very commonplace. Mammals with a skin temperature of about 300 K emit peak radiation at around 10 μm in the far infrared. This is therefore the range of infrared wavelengths that pit viper snakes and passive IR cameras must sense. When comparing the apparent color of lighting sources (including fluorescent lights, LED lighting, computer monitors, and photoflash), it is customary to cite the color temperature. Although the spectra of such lights are not accurately described by the black-body radiation curve, a color temperature (the correlated color temperature) is quoted for which black-body radiation would most closely match the subjective color of that source. For instance, the blue-white fluorescent light sometimes used in an office may have a color temperature of 6500 K, whereas the reddish tint of a dimmed incandescent light may have a color temperature (and an actual filament temperature) of 2000 K. Note that the informal description of the former (bluish) color as "cool" and the latter (reddish) as "warm" is exactly opposite the actual temperature change involved in black-body radiation.
Discovery
The law is named for Wilhelm Wien, who derived it in 1893 based on a thermodynamic argument. Wien considered adiabatic expansion of a cavity containing waves of light in thermal equilibrium. Using Doppler's principle, he showed that, under slow expansion or contraction, the energy of light reflecting off the walls changes in exactly the same way as the frequency. A general principle of thermodynamics is that a thermal equilibrium state, when expanded very slowly, stays in thermal equilibrium. Wien himself deduced this law theoretically in 1893, following Boltzmann's thermodynamic reasoning. It had previously been observed, at least semi-quantitatively, by an American astronomer, Langley. This upward shift in the peak frequency with T is familiar to everyone: when an iron is heated in a fire, the first visible radiation (at around 900 K) is deep red, the lowest-frequency visible light. Further increase in T causes the color to change to orange then yellow, and finally blue at very high temperatures (10,000 K or more), for which the peak in radiation intensity has moved beyond the visible into the ultraviolet.
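These color shifts can be checked directly against λ_peak = b/T. The following is a minimal Python sketch (not part of the original article) that evaluates the law for the temperatures mentioned above; the constant is the wavelength-parameterized value quoted earlier.

```python
# Minimal sketch: peak emission wavelength from Wien's displacement law,
# using the wavelength-parameterized constant b ~ 2.898e-3 m*K quoted above.

WIEN_B = 2.898e-3  # Wien's displacement constant, in m*K

def peak_wavelength_nm(temperature_k: float) -> float:
    """Wavelength (in nm) at which spectral radiance per unit wavelength peaks."""
    return WIEN_B / temperature_k * 1e9

examples = [
    ("mammal skin",           300.0),   # about 9.7 um, far infrared
    ("iron glowing deep red", 900.0),   # peak still infrared; the visible glow is the short-wavelength tail
    ("wood fire",            1500.0),   # about 1900 nm
    ("Sun (effective T)",    5778.0),   # about 500 nm, green
    ("Rigel",               12100.0),   # about 240 nm, ultraviolet
]

for label, t in examples:
    print(f"{label:24s} T = {t:7.0f} K  ->  peak ~ {peak_wavelength_nm(t):7.0f} nm")
```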
The adiabatic principle allowed Wien to conclude that for each mode, the adiabatic invariant energy/frequency is only a function of the other adiabatic invariant, the frequency/temperature. From this, he derived the "strong version" of Wien's displacement law: the statement that the blackbody spectral radiance is proportional to ν^3 F(ν/T) for some function F of a single variable. A modern variant of Wien's derivation can be found in the textbook by Wannier and in a paper by E. Buckingham. The consequence is that the shape of the black-body radiation function (which was not yet understood) would shift proportionally in frequency (or inversely proportionally in wavelength) with temperature. When Max Planck later formulated the correct black-body radiation function, it did not explicitly include Wien's constant b. Rather, the Planck constant h was created and introduced into his new formula. From the Planck constant h and the Boltzmann constant k, Wien's constant b can be obtained.
Peak differs according to parameterization
The results in the tables above summarize results from other sections of this article. Percentiles are percentiles of the Planck blackbody spectrum. Only 25 percent of the energy in the black-body spectrum is associated with wavelengths shorter than the value given by the peak-wavelength version of Wien's law. Notice that for a given temperature, different parameterizations imply different maximal wavelengths. In particular, the curve of intensity per unit frequency peaks at a different wavelength than the curve of intensity per unit wavelength. For example, using T = 6000 K and parameterization by wavelength, the wavelength for maximal spectral radiance is λ ≈ 483 nm, with corresponding frequency ν ≈ 621 THz. For the same temperature, but parameterizing by frequency, the frequency for maximal spectral radiance is ν ≈ 353 THz, with corresponding wavelength λ ≈ 850 nm. These functions are radiance density functions, which are probability density functions scaled to give units of radiance. The density function has different shapes for different parameterizations, depending on relative stretching or compression of the abscissa, which measures the change in probability density relative to a linear change in a given parameter. Since wavelength and frequency have a reciprocal relation, they represent significantly non-linear shifts in probability density relative to one another. The total radiance is the integral of the distribution over all positive values, and that is invariant for a given temperature under any parameterization. Additionally, for a given temperature the radiance consisting of all photons between two wavelengths must be the same regardless of which distribution you use. That is to say, integrating the wavelength distribution from λ1 to λ2 will result in the same value as integrating the frequency distribution between the two frequencies that correspond to λ1 and λ2, namely from c/λ2 to c/λ1. However, the distribution shape depends on the parameterization, and for a different parameterization the distribution will typically have a different peak density, as these calculations demonstrate. The important point of Wien's law, however, is that any such wavelength marker, including the median wavelength (or, alternatively, the wavelength below which any specified percentage of the emission occurs) is proportional to the reciprocal of temperature.
That is, the shape of the distribution for a given parameterization scales and translates with temperature, and can be calculated once for a canonical temperature, then appropriately shifted and scaled to obtain the distribution for another temperature. This is a consequence of the strong statement of Wien's law.
Frequency-dependent formulation
For spectral flux considered per unit frequency (in hertz), Wien's displacement law describes a peak emission at the optical frequency ν_peak given by: ν_peak = x k T / h ≈ (5.88 × 10^10 Hz/K) · T, or equivalently h ν_peak = x k T, where x ≈ 2.8214 is a constant resulting from the maximization equation, k is the Boltzmann constant, h is the Planck constant, and T is the absolute temperature. With the emission now considered per unit frequency, this peak now corresponds to a wavelength about 76% longer than the peak considered per unit wavelength. The relevant math is detailed in the next section.
Derivation from Planck's law
Parameterization by wavelength
Planck's law for the spectrum of black-body radiation predicts the Wien displacement law and may be used to numerically evaluate the constant relating temperature and the peak parameter value for any particular parameterization. Commonly a wavelength parameterization is used, and in that case the black body spectral radiance (power per emitting area per solid angle) is: u(λ, T) = (2hc^2/λ^5) · 1/(e^(hc/λkT) − 1). Differentiating u(λ, T) with respect to λ and setting the derivative equal to zero gives an equation which can be simplified to give: (hc/λkT) · e^(hc/λkT)/(e^(hc/λkT) − 1) − 5 = 0. By defining: x ≡ hc/λkT, the equation becomes one in the single variable x: x e^x/(e^x − 1) − 5 = 0, which is equivalent to: x = 5(1 − e^(−x)). This equation is solved by x = 5 + W0(−5e^(−5)), where W0 is the principal branch of the Lambert W function, and gives x ≈ 4.965114. Solving for the wavelength λ in millimetres, and using kelvins for the temperature, yields: λ_peak = hc/(x k T) ≈ (2.898 mm·K)/T.
Parameterization by frequency
Another common parameterization is by frequency. The derivation yielding the peak parameter value is similar, but starts with the form of Planck's law as a function of frequency ν: u(ν, T) = (2hν^3/c^2) · 1/(e^(hν/kT) − 1). The preceding process using this equation yields (with x ≡ hν/kT): x e^x/(e^x − 1) − 3 = 0. The net result is: x = 3(1 − e^(−x)). This is similarly solved with the Lambert W function: x = 3 + W0(−3e^(−3)), giving x ≈ 2.821439. Solving for ν produces: ν_peak = x k T / h ≈ (5.879 × 10^10 Hz/K) · T.
Parameterization by the logarithm of wavelength or frequency
Using the implicit equation x = 4(1 − e^(−x)) yields the peak in the spectral radiance density function expressed in the parameter radiance per proportional bandwidth. (That is, the density of irradiance per frequency bandwidth proportional to the frequency itself, which can be calculated by considering infinitesimal intervals of ln ν (or equivalently ln λ) rather than of frequency itself.) This is perhaps a more intuitive way of presenting "wavelength of peak emission". That yields x ≈ 3.920690, corresponding to λT ≈ 3670 μm·K.
Mean photon energy as an alternate characterization
Another way of characterizing the radiance distribution is via the mean photon energy ⟨E⟩ = (π^4/(30 ζ(3))) kT ≈ 2.701 kT, where ζ is the Riemann zeta function. The wavelength corresponding to the mean photon energy is given by λ⟨E⟩ ≈ (5327 μm·K)/T.
Criticism
Marr and Wilkin (2012) contend that the widespread teaching of Wien's displacement law in introductory courses is undesirable, and that it would be better replaced by alternate material. They argue that teaching the law is problematic because: the Planck curve is too broad for the peak to stand out or be regarded as significant; the location of the peak depends on the parameterization, and they cite several sources as concurring that "the designation of any peak of the function is not meaningful and should, therefore, be de-emphasized"; the law is not used for determining temperatures in actual practice, direct use of the Planck function being relied upon instead.
They suggest that the average photon energy be presented in place of Wien's displacement law, as being a more physically meaningful indicator of changes that occur with changing temperature. In connection with this, they recommend that the average number of photons per second be discussed in connection with the Stefan–Boltzmann law. They recommend that the Planck spectrum be plotted as a "spectral energy density per fractional bandwidth distribution," using a logarithmic scale for the wavelength or frequency. See also Wien approximation Emissivity Sakuma–Hattori equation Stefan–Boltzmann law Thermometer Ultraviolet catastrophe References Further reading External links Eric Weisstein's World of Physics Eponymous laws of physics Statistical mechanics Foundational quantum physics Light 1893 in science 1893 in Germany
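As a numerical check of the maximization constants quoted in the derivation sections above, the following Python sketch (not part of the article; it assumes NumPy and SciPy are available) solves x = n(1 − e^(−x)) for n = 5, 3 and 4 with the Lambert W function and recovers the displacement constants.

```python
# Sketch: solve x = n*(1 - exp(-x)) via x = n + W0(-n*exp(-n)) and recover
# the displacement constants quoted in the derivation sections above.
import numpy as np
from scipy.special import lambertw
from scipy.constants import h, c, k  # Planck constant, speed of light, Boltzmann constant

def maximization_root(n: float) -> float:
    """Root of x = n*(1 - exp(-x)) using the principal Lambert W branch."""
    return float(np.real(n + lambertw(-n * np.exp(-n), k=0)))

x_wavelength = maximization_root(5)   # ~4.965114, per-unit-wavelength peak
x_frequency  = maximization_root(3)   # ~2.821439, per-unit-frequency peak
x_log        = maximization_root(4)   # ~3.920690, per proportional bandwidth

b = h * c / (x_wavelength * k)        # Wien's displacement constant, m*K
nu_over_T = x_frequency * k / h       # peak frequency per kelvin, Hz/K

print(f"x (wavelength)  = {x_wavelength:.6f}")
print(f"x (frequency)   = {x_frequency:.6f}")
print(f"x (log scale)   = {x_log:.6f}")
print(f"b = hc/(x k)    = {b:.6e} m*K")          # ~2.898e-3 m*K
print(f"nu_peak / T     = {nu_over_T:.4e} Hz/K")  # ~5.879e10 Hz/K
```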
Wien's displacement law
Physics
2,831
38,556,291
https://en.wikipedia.org/wiki/Pennant%20International
Pennant International Group Plc comprises a number of individual companies which provide engineering products and services to a wide range of markets in the UK, Canada, Australasia and the US. Its head office is in Cheltenham, UK. The Company operates through three segments: Training Systems, both hardware and software based, in the defence sector; Data Services, to the defence, rail, power and government sectors; and Software, including the Omega suite of software sold into the Canadian and Australian defence sectors.
GenFly
Origins
The GenFly Generic Flying Controls Trainer is an aircraft maintenance training rig developed in the late twentieth century for the Royal Air Force, for use in training airframe mechanics and technicians, particularly in maintenance activities involving flying controls and aircraft hydraulics.
Service
The RAF took delivery of four GenFly units at RAF Cosford, UK. The training manuals were written to RAF aircraft maintenance documentation standards, in order to familiarise students with the format. All four rigs were assigned tail numbers from the RAF military aircraft register, to allow even greater realism in simulating aircraft documentation, while avoiding the possibility of confusion from fabricated tail numbers coinciding with those of real airframes. The tail numbers are: ZJ695, ZJ696, ZJ697 and ZJ698. The device entered service in 2001. References Royal Air Force education and training Aircraft simulators Engineering software companies Flight training Military education and training Training companies
Pennant International
Technology
291
2,029,171
https://en.wikipedia.org/wiki/Chromoblastomycosis
Chromoblastomycosis is a long-term fungal infection of the skin and subcutaneous tissue (a chronic subcutaneous mycosis). It can be caused by many different types of fungi which become implanted under the skin, often by thorns or splinters. Chromoblastomycosis spreads very slowly. It is rarely fatal and usually has a good prognosis, but it can be very difficult to cure. The several treatment options include medication and surgery. The infection occurs most commonly in tropical or subtropical climates, often in rural areas. Symptoms and signs The initial trauma causing the infection is often forgotten or not noticed. The infection builds at the site over the years, and a small red papule (skin elevation) appears. The lesion is usually not painful, with few, if any symptoms. Patients rarely seek medical care at this point. Several complications may occur. Usually, the infection slowly spreads to the surrounding tissue while remaining localized to the area around the original wound. However, sometimes the fungi may spread through the blood vessels or lymph vessels, producing metastatic lesions at distant sites. Another possibility is secondary infection with bacteria. This may lead to lymph stasis (obstruction of the lymph vessels) and elephantiasis. The nodules may become ulcerated, or multiple nodules may grow and coalesce, affecting a large area of a limb. Cause Chromoblastomycosis is believed to originate in minor trauma to the skin, usually from vegetative material such as thorns or splinters; this trauma implants fungus in the subcutaneous tissue. In many cases, the patient will not notice or remember the initial trauma, as symptoms often do not appear for years. The fungi most commonly observed to cause chromoblastomycosis are: Fonsecaea pedrosoi Cladophialophora bantiana causes both cutaneous chromoblastomycosis and systemic phaeohyphomycosis Phialophora verrucosa Cladophialophora carrionii Fonsecaea compacta Mechanism Over months to years, an erythematous papule appears at the site of inoculation. Although the mycosis slowly spreads, it usually remains localized to the skin and subcutaneous tissue. Hematogenous and/or lymphatic spread may occur. Multiple nodules may appear on the same limb, sometimes coalescing into a large plaque. Secondary bacterial infection may occur, sometimes inducing lymphatic obstruction. The central portion of the lesion may heal, producing a scar, or it may ulcerate. Diagnosis The most informative test is to scrape the lesion and add potassium hydroxide (KOH), then examine it under a microscope. (KOH scrapings are commonly used to examine fungal infections.) The pathognomonic finding is observing medlar bodies (also called muriform bodies or sclerotic cells). Scrapings from the lesion can also be cultured to identify the organism involved. Blood tests and imaging studies are not commonly used. On histology, chromoblastomycosis manifests as pigmented yeasts resembling "copper pennies". Special stains, such as periodic acid Schiff and Gömöri methenamine silver, can be used to demonstrate the fungal organisms if needed. Prevention No preventive measure is known aside from avoiding the traumatic inoculation of fungi. At least one study found a correlation between walking barefoot in endemic areas and the occurrence of chromoblastomycosis on the foot. Treatment Chromoblastomycosis is very difficult to cure. The primary treatments of choice are: Itraconazole, an antifungal azole, is given orally, with or without flucytosine. 
Alternatively, cryosurgery with liquid nitrogen has also been shown to be effective. Other treatment options are the antifungal drug terbinafine, another antifungal azole posaconazole, and heat therapy. Antibiotics may be used to treat bacterial superinfections. Amphotericin B has also been used. Photodynamic therapy is a newer type of therapy used to treat Chromoblastomycosis. Prognosis The prognosis for chromoblastomycosis is very good for small lesions. Severe cases are difficult to cure, although the prognosis is still good. The primary complications are ulceration, lymphedema, and secondary bacterial infection. A few cases of malignant transformation to squamous cell carcinoma have been reported. Chromoblastomycosis is very rarely fatal. Epidemiology Chromoblastomycosis occurs globally, most commonly in rural areas in tropical or subtropical climates. It is most common in rural areas between approximately 30°N and 30°S latitude. Over two-thirds of patients are male, usually between the ages of 30 and 50. A correlation with HLA-A29 suggests genetic factors may play a role, as well. Social and cultural Chromoblastomycosis is considered a neglected tropical disease, affects mainly people living in poverty, and causes considerable morbidity, stigma, and discrimination. See also List of cutaneous conditions References External links Mycosis-related cutaneous conditions Tropical diseases Fungal diseases
Chromoblastomycosis
Biology
1,123
22,468,906
https://en.wikipedia.org/wiki/Hirsch%E2%80%93Plotkin%20radical
In mathematics, especially in the study of infinite groups, the Hirsch–Plotkin radical is a subgroup describing the normal locally nilpotent subgroups of the group. It was named after Kurt Hirsch and Boris I. Plotkin, who proved that the join of normal locally nilpotent subgroups is locally nilpotent; this fact is the key ingredient in its construction. The Hirsch–Plotkin radical is defined as the subgroup generated by the union of the normal locally nilpotent subgroups (that is, those normal subgroups such that every finitely generated subgroup is nilpotent). The Hirsch–Plotkin radical is itself a locally nilpotent normal subgroup, and so is the unique largest such subgroup. In a finite group, the Hirsch–Plotkin radical coincides with the Fitting subgroup, but for infinite groups the two subgroups can differ. The subgroup generated by the union of infinitely many normal nilpotent subgroups need not itself be nilpotent, so the Fitting subgroup must be modified in this case. References Functional subgroups Infinite group theory
Hirsch–Plotkin radical
Mathematics
223
23,609,968
https://en.wikipedia.org/wiki/List%20of%20veterinary%20drugs
This article lists veterinary pharmaceutical drugs alphabetically by name. Many veterinary drugs have more than one name and, therefore, the same drug may be listed more than once. Abbreviations are used in the list as follows: INN = International Nonproprietary Name BAN = British Approved Name USAN = United States Adopted Name A acepromazine – sedative, tranquilizer, and antiemetic afoxolaner - antiparasitic albendazole - anthelminthic alphaxolone - hypnotic/sedative alprazolam – benzodiazepine used as an anxiolytic and tranquilizer altrenogest – used to synchronize estrus amantadine – analgesic for chronic pain aminophylline – bronchodilator amitraz – antiparasitic amitriptyline – tricyclic antidepressant used to treat separation anxiety and excessive grooming in dogs and cats amlodipine – calcium channel blocker used to decrease blood pressure amoxicillin – antibacterial apomorphine – emetic (used to induce vomiting) artificial tears – lubricant eye drops used as a tear supplement atenolol – treats cardiac arrhythmias, hypertension, and diabetes plus other cardiovascular disorders atipamezole – α2-adrenergic antagonist used to reverse the sedative and analgesic effects of alpha-2 adrenergic receptor agonists B bedinvetmab - nerve growth factor inhibitor monoclonal antibody used for osteoarthritis in dogs benazepril – ACE-inhibitor used in heart failure, hypertension, chronic kidney failure and protein-losing nephropathy bethanechol – cholinergic agonist used to stimulate bladder contractions bexagliflozin - oral antidiabetic medication bupivacaine – local anesthetic primarily utilized pre- and post-operatively buprenorphine – narcotic for pain relief in cats after surgery butorphanol – mu agonist/kappa antagonist, used as a cough suppressant and for a muscle relaxation effect in horses C carprofen – COX-2 selective NSAID used to relieve pain and inflammation in dogs and cats cefpodoxime – antibiotic cephalexin – antibiotic, particularly useful for susceptible Staphylococcus infections ciprofloxacin – antibiotic of quinolone group clamoxyquine – antiparasitic to treat salmonids for infection with the myxozoan parasite, Myxobolus cerebralis clavamox – antibiotic, used to treat skin and other infections clindamycin – antibiotic with particular use in dental infections, with effects against most aerobic Gram-positive cocci.
clomipramine – primarily used in dogs to treat behavioral problems D deracoxib – nonsteroidal anti-inflammatory drug (NSAID) dexamethasone – anti-inflammatory steroid diazepam – benzodiazepine used to treat status epilepticus, also used as a preanaesthetic and a sedative dichlorophene – fungicide, germicide, and antimicrobial agent, also used for the removal of parasites diphenhydramine – histamine blocker doxycycline – antibiotic, also used to treat Lyme disease E enalapril – ACE-inhibitor used to treat high blood pressure and heart failure enrofloxacin – Broad spectrum antibiotic (Gram-positive and -negative) -- not recommended for streptococci, or anaerobic bacteria equine chorionic gonadotropin – gonadotropic hormone used to induce ovulation in livestock prior to artificial insemination F fenbendazole – antiparasitic - another medication "febantel" is metabolized to fenbendazole fipronil – antiparasitic flumazenil - reversal agent for benzodiazepines flunixin meglumine – nonsteroidal anti-inflammatory drug used as an analgesic and antipyretic in horses fluralaner - antiparasitic frunevetmab - nerve growth factor inhibitor monoclonal antibody used for osteoarthritis in cats furosemide – diuretic used to prevent exercise-induced pulmonary hemorrhage in horses G gabapentin – pain reliever gentamicin/betamethasone valerate/clotrimazole – combination drug product used to treat ear disease in dogs glycopyrrolate – emergency drug used for cardiac support grapiprant - non-cyclooxygenase inhibiting nonsteroidal anti-inflammatory drug (NSAID) H hydromorphone – opioid analgesic used as a premedication hydroxyzine – antihistamine drug used primarily for treatment of allergies I imidacloprid/moxidectin – antiparasitic isoxsuprine – vasodilator used for laminitis and navicular disease in horses ivermectin – a broad-spectrum antiparasitic used in horses, cattle, sheep, goats and dogs K ketamine – dissociative anesthetic and tranquilizer in cats, dogs, horses, and other animals ketoprofen – nonsteroidal anti-inflammatory drug (NSAID) L levamisole – antiparasitic levetiracetam – anti-convulsant used for seizures levothyroxine – used in the treatment of hypothyroidism lokivetmab -Anti IL31 monoclonal antibody used for atopic dermatitis in dogs lufenuron – insecticide used for flea control M marbofloxacin – antibiotic maropitant – antiemetic mavacoxib – nonsteroidal anti-inflammatory drug (NSAID) medetomidine – surgical anesthetic and analgesic meloxicam – nonsteroidal anti-inflammatory drug (NSAID) metacam – used to reduce inflammation and pain methimazole – used in treatment of hyperthyroidism methocarbamol - muscle relaxant used to reduce muscle spasms associated with inflammation, injury, intervertebral disc disease, and certain toxicities metoclopramide – potent antiemetic, secondarily as a prokinetic metronidazole – antibiotic against anaerobic bacteria milbemycin oxime – broad spectrum antiparasitic used as an anthelmintic, insecticide and miticide mirtazapine – antiemetic and appetite stimulant in cats and dogs mitratapide – used to help weight loss in dogs morphine – pure mu agonist/opioid analgesic used as a premedication moxifloxacin – antibiotic N neomycin – antibacterial nimesulide – nonsteroidal anti-inflammatory drug (NSAID) nitarsone – feed additive used in poultry to increase weight gain, improve feed efficiency, and prevent histomoniasis (blackhead disease) nitenpyram – insecticide nitroscanate – anthelmintic used to treat roundworms, hookworms and tapeworms nitroxynil – anthelmintic 
for fasciola and liver fluke infestations nystatin – antifungal O oclacitinib – antipruritic ofloxacin – fluoroquinolone antibiotic omeprazole – used for treatment and prevention of gastric ulcers in horses oxibendazole – anthelmintic oxymorphone – analgesic oxytetracycline – antibiotic P pentobarbital – humane euthanasia of animals not to be used for food pentoxyfylline – xanthine derivative used in as an antiinflammatory drug and in the prevention of endotoxemia pergolide – dopamine receptor agonist used for the treatment of pituitary pars intermedia dysfunction in horses phenobarbital – anti-convulsant used for seizures phenylbutazone – nonsteroidal anti-inflammatory drug (NSAID) phenylpropanolamine – controls urinary incontinence in dogs phenytoin/pentobarbital – animal euthanasia product containing phenytoin and pentobarbital pimobendan – phosphodiesterase 3 inhibitor used to manage heart failure in dogs pirlimycin – antimicrobial ponazuril – anticoccidial praziquantel – treatment of infestations of the tapeworms Dipylidium caninum, Taenia pisiformis, Echinococcus granulosus prazosin – sympatholytic used in hypertension and abnormal muscle contractions prednisolone – glucocorticoid (steroid) used in the management of inflammation and auto-immune disease, primarily in cats prednisone – glucocorticoid (steroid) used in the management of inflammation and auto immune disease pregabalin – neuropathic pain reliever and anti-convulsant propofol – short acting intravenous drug used to induce anesthesia pyrantel – effective against ascarids, hookworms and stomach worms R rafoxanide – antiparasitic rifampin – anti-microbial primarily used in conjunction with other erythromycin in the treatment of Rhodococcus equi infections in foals robenacoxib – nonsteroidal anti-inflammatory drug (NSAID) roxarsone – arsenical used as a coccidiostat and for increased weight gain S sarolaner - antiparasitic selamectin – antiparasitic treating fleas, roundworms, ear mites, heartworm, and hookworms silver sulfadiazine – antibacterial streptomycin – antibiotic used in large animals sucralfate – treats gastric ulcers sulfasalazine – anti-inflammatory and antirheumatic T Telazol – intravenous drug used to induce anesthesia; combination of tiletamine and zolazepam telmisartan - Angiotensin II receptor blocker tepoxalin – nonsteroidal anti-inflammatory drug (NSAID) theophylline – for bronchospasm and cardiogenic edema thiabendazole – antiparasitic thiostrepton – antibiotic tolfenamic acid — nonsteroidal anti-inflammatory drug (NSAID) tramadol – analgesic trazodone – antidepressant triamcinolone acetonide – corticosteroid trilostane – for canine Cushing's (hyperadrenocorticism) syndrome trimethoprim — used widely for bacterial infections, is in the family of sulfa drugs trimethoprim/sulfadoxine — antibacterial containing trimethoprim and sulfadoxine trimethoprim/sulfamethoxazole - antibacterial containing trimethoprim and sulfamethoxazole tylosin – antibiotic U ursodeoxycholic acid (INN) or ursodiol (USAN) — hydrophilic bile acid used to treat liver diseases V velagliflozin - oral antidiabetic medication X xylazine – α2-adrenergic agonist, used to temporarily sedate animals Y yohimbine – used to reverse effects of xylazine, also called an "antidote" to xylazine Z zonisamide – anti-convulsant used for seizures External links Animal Drugs @ FDA Veterinary Veterinary drugs
List of veterinary drugs
Chemistry
2,540
27,976,434
https://en.wikipedia.org/wiki/FlexGen%20B.V.
FlexGen was a biotechnology company based in Leiden, Netherlands. FlexGen was a spin-off from Leiden University Medical Centre and Dutch Space (part of EADS) and had proprietary technologies for laser-based in-situ synthesis of oligonucleotides and other biomolecules. On 21 December 2015, FlexGen B.V. in Leiden (South Holland) was declared bankrupt by the court in Gelderland.
Products
FleXelect
FleXelect oligopools consist of custom oligonucleotides in solution and can be used for in-solution target enrichment prior to next-generation DNA sequencing. Target enrichment, or in-solution hybrid selection, is a method for genomic selection in an increasing number of applications, such as: analysis of custom genomic regions of interest (e.g. specific genes, multiple variants and/or complete pathways); analysis of chromosomal translocations; validation of single-nucleotide polymorphisms or SNPs (typically after whole genome or whole exome studies); and other research and diagnostic applications (e.g. synthetic biology). An example of a recent application is testing of the BRCA1 and BRCA2 breast cancer risk genes.
FlexArrayer
The FlexArrayer is an in-house custom oligonucleotide synthesis instrument. The FlexArrayer facilitates high-throughput synthesis of FleXelect oligopools for in-solution target enrichment as well as custom microarray production. The FlexArrayer is also applicable for array-based re-sequencing. The FlexArrayer provides microarray and oligopool synthesis typically used by: genomics centres and sequencing facilities; health and safety institutes and microbiology labs; and technology innovators in the fields of surface chemistries, PNAs (peptide nucleic acids), siRNAs (small interfering RNAs) and more.
Technology
Production of microarrays and FleXelect oligopools is done with the FlexArrayer using proprietary technology. The FlexArrayer synthesizes custom probesets on a substrate based on oligonucleotide deprotection technology. Before the first oligonucleotide synthesis step, the complete DNA microarray surface is covered by photolabile groups. Those spots where the first nucleotide addition is to occur are individually activated by the laser. The nucleotide solution is washed over the microarray surface and the nucleotides chemically bind to the activated spots. All nucleotides contain a photolabile group that can in turn be activated. As many rounds of photoactivation and nucleotide addition are performed as are required to synthesize oligonucleotides of the desired length. This is repeated up to 60 times until the required sequences (up to 100,000) have been synthesized. Thus, the maximum length of any oligonucleotide produced on this platform is 60 nucleotides (a 60-mer). The microarray is now ready to be used; alternatively, the oligonucleotides can be cleaved off to produce FleXelect oligopools. References External links FlexGen's Former Company Website on WayBack Machine Biotechnology companies of the Netherlands Biotechnology companies established in 2004 Microarrays Companies based in Leiden 2004 establishments in the Netherlands
FlexGen B.V.
Chemistry,Materials_science,Biology
673
57,717,760
https://en.wikipedia.org/wiki/Armor%20Survivability%20Kit
The Armor Survivability Kit (ASK) is an armor kit developed by the U.S. Army Research Laboratory (ARL) in 2003 to protect vehicles like the Humvee from small arms, explosive device fragments, and rocket-propelled grenades (RPGs). Armor The Armor Survivability Kit consisted of armored steel doors with bullet-proof glass, protective armored plating, and a ballistic windshield and came in either a two-door kit variant (weighing 900 pounds/409 kilograms) or a four-door kit variant (weighing 1,300 pounds/590 kilograms). History The ASK was first produced in response to the lack of sufficient armor protecting Humvee vehicles and supply trucks during the war in Iraq and the rising number of deaths caused by improvised explosive devices (IEDs), sniper fire, and rocket-propelled grenade (RPGs). The Humvee was not designed for active combat and as early as 1996 people inside the Pentagon had called for the army to develop a vehicle to protect soldiers. Near the beginning of the Iraq war in 2003, the U.S. forces found themselves increasingly vulnerable to guerrilla attacks from roadside bombs and RPGs when driving in Humvees. By February 2004, more than 80 soldiers were killed by roadside bombs since the start of the war; soldiers improved armor but even that was not sufficient. U.S. troops had to rely on improvised vehicle armour and many soldiers resorted to jury-rigging scrap metal onto the doors of unprotected Humvees. At the same time military contractors in Iraq had protected vehicles like the Rhino Runner and the M1117, which had not been approved for procurement. The issue came to public attention when troops preparing to deploy to Iraq challenged Donald Rumsfeld as to why they had to resort to "hillbilly armor" scrounged from junk yards to protect themselves. In response to the demand, Central Command’s Combined Joint Task Force 7 requested the Tank-Automotive and Armaments Command Research Development and Engineering Center (TARDEC) and ARL to develop a temporary armor kit to install onto unprotected Humvees until more armored vehicles could be shipped to Iraq. Within a week, the engineering team led by Michael J. Zoltoski created the designs for the ASK, which integrated ballistic metals, glass, and ceramics as well as polymers in order to withstand 7.62mm machine gun fire and IEDs. In October 2003, less than 6 weeks after the initial design was created, 40 ASK prototypes were produced and field-tested at Aberdeen Proving Grounds in Maryland, after which they were ready to be shipped to Iraq along with two installers. By March 2004, 1,924 kits were shipped to Iraq and 1,636 kits were installed onto Humvee vehicles. That year, the Department of Defense rewarded ARL with an innovation award for the development of the ASK. By January 2005, more than 9,400 kits were reportedly delivered to soldiers in both Iraq and Afghanistan. The ASK also served as a precursor to the development of the Fragmentary Armor or Frag Kits for armored vehicles in 2004. While the introduction of ASKs onto unprotected Humvees did offer passengers more protection, other issues with the vehicle began to appear. The armor was still not sufficient to protect passengers from IEDs, which by that time were destroying even heavily armored vehicles. Also, due to the hot summer temperature in Iraq, occupants began to develop heat-related illnesses due to the heat buildup inside the vehicle. To resolve this problem, an air-conditioning system was installed inside many of the Humvees fitted with the ASKs. 
In addition, the additional weight brought on by the ASKs and other heavy armor plating increased the likelihood of the vehicle rolling over during serious accidents, which were sometimes fatal. From March 2003 through November 2005, an analysis of the U.S. Army’s ground-accident database found that 60 of the 85 soldiers who died in Humvee accidents in Iraq were killed when the vehicle rolled. References Explosion protection Vehicle armour
Armor Survivability Kit
Chemistry,Engineering
828
9,116,064
https://en.wikipedia.org/wiki/Pi2%20Cygni
Pi2 Cygni, Latinized from π2 Cygni, is a triple star system in the northern constellation of Cygnus. It is visible to the naked eye about 2.5° east-northeast of the open cluster M39, having an apparent visual magnitude of 4.24. Based upon an annual parallax shift of 2.95 mas, it is located at a distance of roughly 1,100 light years from the Sun. The inner pair of stars in this system form a single-lined spectroscopic binary with an orbital period of 72.0162 days and an eccentricity of 0.34. The primary, component A, is a B-type giant star with a stellar classification of B2.5 III. It is a Beta Cephei variable with an estimated 8.4 times the mass of the Sun and around 7.1 times the Sun's radius. The star is roughly 33 million years old and is spinning with a projected rotational velocity of 50 km/s. It is radiating 8,442 times the solar luminosity from its outer atmosphere at an effective temperature of around 20,815 K. The third member of this system is a magnitude 5.98 star at an angular separation of 0.10 arc seconds along a position angle of 129°, as of 1996.
Historical names
In Chinese, the name meaning Flying Serpent refers to an asterism consisting of π2 Cygni, α Lacertae, 4 Lacertae, π1 Cygni, HD 206267, ε Cephei, β Lacertae, σ Cassiopeiae, ρ Cassiopeiae, τ Cassiopeiae, AR Cassiopeiae, 9 Lacertae, 3 Andromedae, 7 Andromedae, 8 Andromedae, λ Andromedae, κ Andromedae, ψ Andromedae and ι Andromedae. Consequently, π2 Cygni itself takes its Chinese name from its membership in this asterism. References B-type giants Beta Cephei variables Spectroscopic binaries Triple star systems Cygni, Pi2 Cygnus (constellation) BD+58 3504 Cygni, 81 207330 107533 8335
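The distance quoted above follows directly from the parallax. A minimal Python sketch of the conversion (the parallax value is the one given in the article; the light-year conversion factor is standard):

```python
# Sketch: convert an annual parallax in milliarcseconds to a distance.
PARSEC_IN_LIGHT_YEARS = 3.2616

def distance_light_years(parallax_mas: float) -> float:
    """Distance in light years from a parallax in milliarcseconds (d_pc = 1000 / p)."""
    return (1000.0 / parallax_mas) * PARSEC_IN_LIGHT_YEARS

print(distance_light_years(2.95))  # ~1106 light years, i.e. "roughly 1,100"
```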
Pi2 Cygni
Astronomy
471
743,106
https://en.wikipedia.org/wiki/Scott%20continuity
In mathematics, given two partially ordered sets P and Q, a function f: P → Q between them is Scott-continuous (named after the mathematician Dana Scott) if it preserves all directed suprema. That is, for every directed subset D of P with supremum in P, its image has a supremum in Q, and that supremum is the image of the supremum of D, i.e. sup f(D) = f(sup D), where sup denotes the directed join. When Q is the poset of truth values, i.e. Sierpiński space, then Scott-continuous functions are characteristic functions of open sets, and thus Sierpiński space is the classifying space for open sets. A subset O of a partially ordered set P is called Scott-open if it is an upper set and if it is inaccessible by directed joins, i.e. if all directed sets D with supremum in O have non-empty intersection with O. The Scott-open subsets of a partially ordered set P form a topology on P, the Scott topology. A function between partially ordered sets is Scott-continuous if and only if it is continuous with respect to the Scott topology. The Scott topology was first defined by Dana Scott for complete lattices and later defined for arbitrary partially ordered sets. Scott-continuous functions are used in the study of models for lambda calculi and the denotational semantics of computer programs.
Properties
A Scott-continuous function is always monotonic, meaning that if x ≤ y in P, then f(x) ≤ f(y). A subset of a directed complete partial order is closed with respect to the Scott topology induced by the partial order if and only if it is a lower set and closed under suprema of directed subsets. A directed complete partial order (dcpo) with the Scott topology is always a Kolmogorov space (i.e., it satisfies the T0 separation axiom). However, a dcpo with the Scott topology is a Hausdorff space if and only if the order is trivial. The Scott-open sets form a complete lattice when ordered by inclusion. For any Kolmogorov space, the topology induces an order relation on that space, the specialization order: x ≤ y if and only if every open neighbourhood of x is also an open neighbourhood of y. The order relation of a dcpo D can be reconstructed from the Scott-open sets as the specialization order induced by the Scott topology. However, a dcpo equipped with the Scott topology need not be sober: the specialization order induced by the topology of a sober space makes that space into a dcpo, but the Scott topology derived from this order is finer than the original topology.
Examples
The open sets in a given topological space when ordered by inclusion form a lattice on which the Scott topology can be defined. A subset X of a topological space T is compact with respect to the topology on T (in the sense that every open cover of X contains a finite subcover of X) if and only if the set of open neighbourhoods of X is open with respect to the Scott topology. For CPO, the cartesian closed category of dcpo's, two particularly notable examples of Scott-continuous functions are curry and apply. Nuel Belnap used Scott continuity to extend logical connectives to a four-valued logic. See also Alexandrov topology Upper topology Footnotes References Order theory General topology Domain theory
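As a concrete illustration of the definition, the sketch below (not from the article) exhaustively checks whether a map between two small finite posets preserves directed suprema. On a finite poset every directed subset contains its own supremum, so there the check reduces to monotonicity, but it follows the definition literally and rejects non-monotone maps.

```python
# Sketch: check Scott continuity of f: P -> Q on finite posets by testing
# f(sup D) == sup f(D) for every directed subset D of P that has a supremum.
from itertools import combinations

def upper_bounds(elems, leq, subset):
    return [u for u in elems if all(leq(x, u) for x in subset)]

def supremum(elems, leq, subset):
    """Least upper bound of subset in (elems, leq), or None if it does not exist."""
    ubs = upper_bounds(elems, leq, subset)
    least = [u for u in ubs if all(leq(u, v) for v in ubs)]
    return least[0] if least else None

def directed_subsets(elems, leq):
    """Non-empty subsets in which every pair has an upper bound inside the subset."""
    for r in range(1, len(elems) + 1):
        for sub in combinations(elems, r):
            if all(any(leq(a, u) and leq(b, u) for u in sub) for a in sub for b in sub):
                yield sub

def is_scott_continuous(P, leq_P, Q, leq_Q, f):
    for D in directed_subsets(P, leq_P):
        sup_D = supremum(P, leq_P, D)
        if sup_D is None:
            continue  # only directed sets with a supremum matter
        if f(sup_D) != supremum(Q, leq_Q, [f(d) for d in D]):
            return False
    return True

# P: subsets of {0, 1} ordered by inclusion; Q: Sierpinski space, 0 <= 1.
P = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]
Q = [0, 1]
leq_P = lambda a, b: a <= b   # subset inclusion
leq_Q = lambda a, b: a <= b   # numeric order on {0, 1}

member_0 = lambda s: 1 if 0 in s else 0       # characteristic map of a Scott-open set
not_monotone = lambda s: 0 if 0 in s else 1   # fails, since Scott continuity implies monotonicity

print(is_scott_continuous(P, leq_P, Q, leq_Q, member_0))      # True
print(is_scott_continuous(P, leq_P, Q, leq_Q, not_monotone))  # False
```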
Scott continuity
Mathematics
691
54,582,337
https://en.wikipedia.org/wiki/NGC%207074
NGC 7074 is an edge-on lenticular galaxy located about 140 million light-years away in the constellation of Pegasus. NGC 7074 was discovered by astronomer Albert Marth on October 16, 1863. See also List of NGC objects (7001–7840) References External links Lenticular galaxies Pegasus (constellation) 7074 66850 Astronomical objects discovered in 1863
NGC 7074
Astronomy
79
8,247,462
https://en.wikipedia.org/wiki/Transit%20village
A transit village is a pedestrian-friendly mixed-use district or neighborhood oriented around the station of a high-quality transit system, such as rail or B.R.T. Often a civic square of public space abuts the train station, functioning as the hub or centerpiece of the surrounding community and encouraging social interaction. While mainly residential in nature, many transit villages offer convenience retail and services to residents heading to and from train stations. The term "transit villages" was popularized in the 1997 book by Michael Bernick and Robert Cervero, Transit Villages for the 21st Century, whose cover shows a mixed-use, pedestrian-friendly community infilling what then was a surface park-and-ride lot of the Pleasant Hill BART station area, and what is now the Contra Costa Centre Transit Village. In their book, the authors distinguished transit villages from transit-oriented development (TOD) as more residential-oriented in land-use composition, with neighborhood retail and services provided in and around the rail station and a prominent civic space immediate to the station. Portland, Oregon has actively pursued transit village style development along the Portland area light rail known as Metropolitan Area Express (MAX). California is also exploring transit village development options for its evolving transit systems. Miami, Florida has placed large affordable housing complexes at its two least used Metrorail stations, one is known as the Brownsville Transit Village and the other is Santa Clara Apartments. Miami-Dade Transit has its headquarters in the Overtown Transit Village building at one of its downtown stations. New Jersey Transit Village Initiative New Jersey has become a national leader in promoting Transit Village development through a program known as the Transit Village initiative. The New Jersey Department of Transportation established the Transit Village Initiative in 1999, offering multi-agency assistance and grants from the annual $1 million Transit village fund to any municipality with a ready to go project specifying appropriate mixed land-use strategy, available property, station-area management, and commitment to affordable housing, job growth, and culture. Transit village development must also preserve the architectural integrity of historically significant buildings. Transit Village districts are defined by the half mile radius surrounding the transit station. To become a Transit Village, towns must meet the following criteria: have existing transit, demonstrate a willingness to grow, adopt a transit-oriented-development redevelopment plan or zoning ordinance, identify specific TOD sites and projects, identify bicycle and pedestrian improvements, and identify "place making" efforts near the transit station, such as community events, celebrations, and other cultural or artistic events. 
Since 1999 the state has made 35 Transit Village designations, which are in different stages of development: Pleasantville (1999), Morristown (1999), Rutherford (1999), South Amboy (1999), South Orange (1999), Riverside (2001), Rahway (2002), Metuchen (2003), Belmar (2003), Bloomfield (2003), Bound Brook (2003), Collingswood (2003), Cranford (2003), Matawan (2003), New Brunswick (2005), Journal Square/Jersey City (2005), Netcong (2005), Elizabeth/Midtown (2007), Burlington City (2007), the City of Orange Township (2009), Montclair (2010), Somerville (2010), Linden (2010), West Windsor (2012), East Orange (2012), Dunellen (2012), Summit (2013), Plainfield (2014), Park Ridge (2015), Irvington (2015) Hackensack (2016), Long Branch (2016), Asbury Park (2017), Newark (2021), and Atlantic City. (2023). See also Commuter town New Urbanism Principles of Intelligent Urbanism Smart growth Urban sprawl References External links Transit Oriented Development transitvillages.org Urban planning
Transit village
Engineering
780
19,364,010
https://en.wikipedia.org/wiki/Reverse%20vaccinology
Reverse vaccinology is an improvement of vaccinology that employs bioinformatics and reverse pharmacology practices, pioneered by Rino Rappuoli and first used against serogroup B meningococcus. Since then, it has been used on several other bacterial vaccines.
Computational approach
The basic idea behind reverse vaccinology is that an entire pathogenic genome can be screened using bioinformatics approaches to find genes. The genes are screened for traits that may indicate antigenicity, such as coding for proteins with extracellular localization, signal peptides, and B-cell epitopes. Those genes are then filtered for desirable attributes that would make good vaccine targets, such as outer membrane proteins. Once the candidates are identified, they are produced synthetically and screened in animal models of the infection.
History
After Craig Venter published the genome of the first free-living organism in 1995, the genomes of other microorganisms became more readily available throughout the end of the twentieth century. Reverse vaccinology, designing vaccines using the pathogen's sequenced genome, came from this new wealth of genomic information, as well as technological advances. Reverse vaccinology is much more efficient than traditional vaccinology, which requires growing large amounts of specific microorganisms as well as extensive wet lab tests. In 2000, Rino Rappuoli and the J. Craig Venter Institute developed the first vaccine using reverse vaccinology against serogroup B meningococcus. The J. Craig Venter Institute and others then continued work on vaccines for A Streptococcus, B Streptococcus, Staphylococcus aureus, and Streptococcus pneumoniae.
Reverse vaccinology with Meningococcus B
Attempts at reverse vaccinology first began with Meningococcus B (MenB). Meningococcus B caused over 50% of meningococcal meningitis, and scientists had been unable to create a successful vaccine for the pathogen because of the bacterium's unique structure. This bacterium's polysaccharide shell is identical to that of a human self-antigen, but its surface proteins vary greatly, and the lack of information about the surface proteins made developing a vaccine extremely difficult. As a result, Rino Rappuoli and other scientists turned towards bioinformatics to design a functional vaccine. Rappuoli and others at the J. Craig Venter Institute first sequenced the MenB genome. Then, they scanned the sequenced genome for potential antigens. They found over 600 possible antigens, which were tested by expression in Escherichia coli. The most universally applicable antigens were used in the prototype vaccines. Several proved to function successfully in mice; however, these proteins alone did not induce a strong enough immune response in humans for protection to be achieved. Protection was later achieved by the addition of outer membrane vesicles that contain lipopolysaccharides, obtained from the purification of blebs on Gram-negative cultures. The addition of this adjuvant (previously identified by using conventional vaccinology approaches) enhanced the immune response to the level that was required. Later, the vaccine was proven to be safe and effective in adult humans.
Subsequent reverse vaccinology research
During the development of the MenB vaccine, scientists adopted the same reverse vaccinology methods for other bacterial pathogens. A Streptococcus and B Streptococcus vaccines were two of the first reverse vaccines created.
Because those bacterial strains induce antibodies that react with human antigens, the vaccines for those bacteria needed to not contain homologies to proteins encoded in the human genome, in order not to cause adverse reactions, thus establishing the need for genome-based reverse vaccinology. Later, reverse vaccinology was used to develop vaccines for antibiotic-resistant Staphylococcus aureus and Streptococcus pneumoniae.
Pros and cons
The major advantage of reverse vaccinology is finding vaccine targets quickly and efficiently. Traditional methods may take decades to unravel pathogens and antigens, diseases and immunity. However, in silico screening can be very fast, allowing new vaccine candidates to be identified for testing in only a few years. The downside is that only proteins can be targeted using this process, whereas conventional vaccinology approaches can find other biomolecular targets such as polysaccharides.
Available software
Though using bioinformatic technology to develop vaccines has become typical in the past ten years, general laboratories often do not have the advanced software that can do this. However, there are a growing number of programs making reverse vaccinology information more accessible. NERVE is one relatively new data processing program. Though it must be downloaded and does not include all epitope predictions, it does help save some time by combining the computational steps of reverse vaccinology into one program. Vaxign, an even more comprehensive program, was created in 2008. Vaxign is web-based and completely public-access. Though Vaxign has been found to be extremely accurate and efficient, some scientists still utilize the online software RANKPEP for peptide binding predictions. Both Vaxign and RANKPEP employ PSSMs (position-specific scoring matrices) when analyzing protein sequences or sequence alignments. Computer-aided bioinformatics projects are becoming extremely popular, as they help guide the laboratory experiments.
Other developments because of reverse vaccinology and bioinformatics
Reverse vaccinology has caused an increased focus on pathogenic biology. Reverse vaccinology led to the discovery of pili in gram-positive pathogens such as A streptococcus, B streptococcus, and pneumococcus. Previously, all gram-positive bacteria were thought to not have any pili. Reverse vaccinology also led to the discovery of factor H binding protein in meningococcus, which binds to complement factor H in humans. Binding to complement factor H allows meningococcus to grow in human blood while blocking the alternative complement pathway. This model does not fit many animal species, which do not have the same complement factor H as humans, indicating differentiation of meningococcus between differing species. References Vaccination
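The genome-screening step described above amounts to filtering predicted proteins on a few computed properties. The sketch below is purely illustrative: the records, the threshold, and the predicate names (has_signal_peptide, is_surface_exposed, similar_to_human_protein) are hypothetical placeholders standing in for real prediction tools, not an actual published pipeline.

```python
# Illustrative sketch of the reverse vaccinology filtering logic described above.
# The data and predicate fields are hypothetical placeholders, not a real pipeline.
from dataclasses import dataclass

@dataclass
class PredictedProtein:
    name: str
    has_signal_peptide: bool        # e.g. from a signal-peptide predictor
    is_surface_exposed: bool        # e.g. predicted outer-membrane / extracellular localization
    predicted_epitopes: int         # e.g. number of predicted B-cell epitopes
    similar_to_human_protein: bool  # candidates resembling human proteins are excluded

def select_candidates(proteins, min_epitopes=1):
    """Keep surface-exposed, secreted proteins with predicted epitopes and no human homology."""
    return [p for p in proteins
            if p.has_signal_peptide
            and p.is_surface_exposed
            and p.predicted_epitopes >= min_epitopes
            and not p.similar_to_human_protein]

genome_proteins = [  # hypothetical screening results for a few open reading frames
    PredictedProtein("orf_0001", True, True, 3, False),
    PredictedProtein("orf_0002", False, False, 0, False),
    PredictedProtein("orf_0003", True, True, 2, True),   # excluded: human-like
]

for candidate in select_candidates(genome_proteins):
    print("vaccine candidate:", candidate.name)
```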
Reverse vaccinology
Biology
1,282
309,246
https://en.wikipedia.org/wiki/Maxwell%27s%20theorem
In probability theory, Maxwell's theorem (known also as Herschel–Maxwell's theorem and Herschel–Maxwell's derivation) states that if the probability distribution of a random vector in R^n is unchanged by rotations, and if the components are independent, then the components are identically distributed and normally distributed.
Equivalent statements
If the probability distribution of a vector-valued random variable X = ( X1, ..., Xn )T is the same as the distribution of GX for every n×n orthogonal matrix G and the components are independent, then the components X1, ..., Xn are normally distributed with expected value 0 and all have the same variance. This theorem is one of many characterizations of the normal distribution. The only rotationally invariant probability distributions on Rn that have independent components are multivariate normal distributions with expected value 0 and variance σ2In, (where In = the n×n identity matrix), for some positive number σ2.
History
James Clerk Maxwell proved the theorem in Proposition IV of his 1860 paper. Ten years earlier, John Herschel also proved the theorem. The logical and historical details of the theorem may be found in the references.
Proof
We only need to prove the theorem for the 2-dimensional case, since we can then generalize it to n dimensions by applying the theorem sequentially to each pair of coordinates. Since rotating by 90 degrees preserves the joint distribution, both X1 and X2 have the same probability measure; let it be μ. If μ is a Dirac delta distribution at zero, then it is a Gaussian distribution, just a degenerate one. Now assume that it is not. By Lebesgue's decomposition theorem, we decompose it into a sum of a regular measure and an atomic measure: μ = μ_r + μ_a. We need to show that μ_a = 0, with a proof by contradiction. Suppose μ contains an atomic part; then there exists some x such that μ({x}) > 0. By independence of X1, X2, the conditional variable X2 given X1 = x is distributed the same way as X2. Suppose x = 0; then since we assumed μ is not concentrated at zero, P(X2 ≠ 0) > 0, and so the double ray {(0, y) : y ≠ 0} has nonzero probability. Now by rotational symmetry of the joint distribution, any rotation of the double ray also has the same nonzero probability, and since any two rotations are disjoint, their union has infinite probability, contradiction. (An atom at some x ≠ 0 is excluded similarly: by independence and the 90-degree rotational symmetry the single point (x, x) would have positive probability, and so would each of its infinitely many distinct rotations.) (As far as we can find, there is no literature about the case where μ is singularly continuous, so we will let that case go.) So now let μ have probability density function f, and the problem reduces to solving the functional equation f(x) f(y) = f(√(x² + y²)) f(0). References Sources External links Maxwell's theorem in a video by 3blue1brown Probability theorems James Clerk Maxwell
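A quick numerical illustration of the theorem's content (not part of the article): with independent standard normal components the law of each component is unchanged by a rotation, while with independent components drawn from another distribution it is not. The sketch assumes NumPy and SciPy; the sample size and the 45-degree angle are arbitrary choices.

```python
# Sketch: compare the marginal of the first component before and after a rotation.
# For independent normals the marginal is unchanged; for independent uniforms it is not.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # rotation by 45 degrees

def first_component_shift(sampler, n=100_000):
    X = sampler((n, 2))          # independent components
    Y = X @ R.T                  # rotated vectors
    return ks_2samp(X[:, 0], Y[:, 0]).statistic  # small value => same marginal law

print("normal :", first_component_shift(lambda size: rng.standard_normal(size)))
print("uniform:", first_component_shift(lambda size: rng.uniform(-1.0, 1.0, size)))
```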
Maxwell's theorem
Mathematics
534
39,457,694
https://en.wikipedia.org/wiki/Vector%20logic
Vector logic is an algebraic model of elementary logic based on matrix algebra. Vector logic assumes that the truth values map on vectors, and that the monadic and dyadic operations are executed by matrix operators. "Vector logic" has also been used to refer to the representation of classical propositional logic as a vector space, in which the unit vectors are propositional variables. Predicate logic can be represented as a vector space of the same type in which the axes represent the predicate letters and . In the vector space for propositional logic the origin represents the false, F, and the infinite periphery represents the true, T, whereas in the space for predicate logic the origin represents "nothing" and the periphery represents the flight from nothing, or "something". Overview Classic binary logic is represented by a small set of mathematical functions depending on one (monadic) or two (dyadic) variables. In the binary set, the value 1 corresponds to true and the value 0 to false. A two-valued vector logic requires a correspondence between the truth-values true (t) and false (f), and two q-dimensional normalized real-valued column vectors s and n, hence: and (where is an arbitrary natural number, and "normalized" means that the length of the vector is 1; usually s and n are orthogonal vectors). This correspondence generates a space of vector truth-values: V2 = {s,n}. The basic logical operations defined using this set of vectors lead to matrix operators. The operations of vector logic are based on the scalar product between q-dimensional column vectors: : the orthonormality between vectors s and n implies that if , and if , where . Monadic operators The monadic operators result from the application , and the associated matrices have q rows and q columns. The two basic monadic operators for this two-valued vector logic are the identity and the negation: Identity: A logical identity ID(p) is represented by matrix . This matrix operates as follows: Ip = p, p ∈ V2; due to the orthogonality of s with respect to n, we have , and similarly . It is important to note that this vector logic identity matrix is not generally an identity matrix in the sense of matrix algebra. Negation: A logical negation ¬p is represented by matrix Consequently, Ns = n and Nn = s. The involutory behavior of the logical negation, namely that ¬(¬p) equals p, corresponds with the fact that N2 = I. Dyadic operators The 16 two-valued dyadic operators correspond to functions of the type ; the dyadic matrices have q2 rows and q columns. The matrices that execute these dyadic operations are based on the properties of the Kronecker product. Two properties of this product are essential for the formalism of vector logic: Using these properties, expressions for dyadic logic functions can be obtained: Conjunction. The conjunction (p∧q) is executed by a matrix that acts on two vector truth-values: .This matrix reproduces the features of the classical conjunction truth-table in its formulation: and verifies and Disjunction. The disjunction (p∨q) is executed by the matrix resulting in and Implication. The implication corresponds in classical logic to the expression p → q ≡ ¬p ∨ q. The vector logic version of this equivalence leads to a matrix that represents this implication in vector logic: . The explicit expression for this implication is: and the properties of classical implication are satisfied: and Equivalence and Exclusive or. 
In vector logic the equivalence p≡q is represented by the following matrix: with and The Exclusive or is the negation of the equivalence, ¬(p≡q); it corresponds with the matrix given by with and NAND and NOR The matrices S and P correspond to the Sheffer (NAND) and the Peirce (NOR) operations, respectively: Numerical examples Here are numerical examples of some basic logical gates implemented as matrices for two different sets of 2-dimensional orthonormal vectors for s and n. Set 1: In this case the identity and negation operators are the identity and anti-diagonal identity matrices:, and the matrices for conjunction, disjunction and implication are respectively. Set 2: Here the identity operator is the identity matrix, but the negation operator is no longer the anti-diagonal identity matrix : The resulting matrices for conjunction, disjunction and implication are: respectively. De Morgan's law In the two-valued logic, the conjunction and the disjunction operations satisfy the De Morgan's law: p∧q≡¬(¬p∨¬q), and its dual: p∨q≡¬(¬p∧¬q)). For the two-valued vector logic this law is also verified: , where u and v are two logic vectors. The Kronecker product implies the following factorization: Then it can be proved that in the two-dimensional vector logic the De Morgan's law is a law involving operators, and not only a law concerning operations: Law of contraposition In the classical propositional calculus, the law of contraposition p → q ≡ ¬q → ¬p is proved because the equivalence holds for all the possible combinations of truth-values of p and q. Instead, in vector logic, the law of contraposition emerges from a chain of equalities within the rules of matrix algebra and Kronecker products, as shown in what follows: This result is based in the fact that D, the disjunction matrix, represents a commutative operation. Many-valued two-dimensional logic Many-valued logic was developed by many researchers, particularly by Jan Łukasiewicz and allows extending logical operations to truth-values that include uncertainties. In the case of two-valued vector logic, uncertainties in the truth values can be introduced using vectors with s and n weighted by probabilities. Let , with be this kind of "probabilistic" vectors. Here, the many-valued character of the logic is introduced a posteriori via the uncertainties introduced in the inputs. Scalar projections of vector outputs The outputs of this many-valued logic can be projected on scalar functions and generate a particular class of probabilistic logic with similarities with the many-valued logic of Reichenbach. Given two vectors and and a dyadic logical matrix , a scalar probabilistic logic is provided by the projection over vector s: Here are the main results of these projections: The associated negations are: If the scalar values belong to the set {0, , 1}, this many-valued scalar logic is for many of the operators almost identical to the 3-valued logic of Łukasiewicz. Also, it has been proved that when the monadic or dyadic operators act over probabilistic vectors belonging to this set, the output is also an element of this set. Square root of NOT This operator was originally defined for qubits in the framework of quantum computing. In vector logic, this operator can be extended for arbitrary orthonormal truth values. There are, in fact, two square roots of NOT: , and , with . and are complex conjugates: , and note that , and . Another interesting point is the analogy with the two square roots of -1. 
The positive root corresponds to , and the negative root corresponds to ; as a consequence, . History Early attempts to use linear algebra to represent logic operations can be traced back to Peirce and Copilowish, particularly in the use of logical matrices to interpret the calculus of relations. The approach was inspired by neural network models based on the use of high-dimensional matrices and vectors. Vector logic is a direct translation into a matrix–vector formalism of the classical Boolean polynomials. This kind of formalism has been applied to develop a fuzzy logic in terms of complex numbers. Other matrix and vector approaches to logical calculus have been developed in the framework of quantum physics, computer science and optics. The Indian biophysicist G.N. Ramachandran developed a formalism using algebraic matrices and vectors to represent many operations of classical Jain logic known as Syad and Saptbhangi; see Indian logic. It requires independent affirmative evidence for each assertion in a proposition, and does not assume binary complementation. Boolean polynomials George Boole established the treatment of logical operations as polynomials. For the case of monadic operators (such as identity or negation), the Boolean polynomials look as follows: The four different monadic operations result from the different binary values for the coefficients. The identity operation requires f(1) = 1 and f(0) = 0, and negation occurs if f(1) = 0 and f(0) = 1. For the 16 dyadic operators, the Boolean polynomials are of the form: The dyadic operations can be translated into this polynomial format when the coefficients f take the values indicated in the respective truth tables. For instance: the NAND operation requires that: and . These Boolean polynomials can be immediately extended to any number of variables, producing a large potential variety of logical operators. In vector logic, the matrix-vector structure of logical operators is an exact translation of these Boolean polynomials into the format of linear algebra, where x and 1−x correspond to vectors s and n respectively (the same for y and 1−y). In the example of NAND, f(1,1)=n and f(1,0)=f(0,1)=f(0,0)=s and the matrix version becomes: Extensions Vector logic can be extended to include many truth values since large-dimensional vector spaces allow the creation of many orthogonal truth values and the corresponding logical matrices. Logical modalities can be fully represented in this context, with recursive processes inspired by neural models. Some cognitive problems about logical computations can be analyzed using this formalism, in particular recursive decisions. Any logical expression of classical propositional calculus can be naturally represented by a tree structure. This fact is retained by vector logic, and has been partially used in neural models focused on the investigation of the branched structure of natural languages. Computation via reversible operations such as the Fredkin gate can be implemented in vector logic. Such an implementation provides explicit expressions for matrix operators that produce the input format and the output filtering necessary for obtaining computations. Elementary cellular automata can be analyzed using the operator structure of vector logic; this analysis leads to a spectral decomposition of the laws governing their dynamics. In addition, based on this formalism, a discrete differential and integral calculus has been developed.
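The two-valued construction described above can be made concrete with a short numerical sketch. This is an illustrative implementation, not code from the vector logic literature; it assumes the Set 1 basis s = (1,0)T, n = (0,1)T, builds the negation, conjunction, disjunction and implication operators from outer and Kronecker products, and checks De Morgan's law numerically.

```python
import numpy as np

# Orthonormal truth values (Set 1 in the text): s = true, n = false.
s = np.array([1.0, 0.0])
n = np.array([0.0, 1.0])

def outer(a, b):
    """Rank-one operator a b^T."""
    return np.outer(a, b)

# Monadic operators: identity and negation.
I = outer(s, s) + outer(n, n)
N = outer(n, s) + outer(s, n)

# Dyadic operators act on the Kronecker product u (x) v of two truth vectors.
kron = np.kron
C = outer(s, kron(s, s)) + outer(n, kron(s, n)) + outer(n, kron(n, s)) + outer(n, kron(n, n))  # AND
D = outer(s, kron(s, s)) + outer(s, kron(s, n)) + outer(s, kron(n, s)) + outer(n, kron(n, n))  # OR
L = outer(s, kron(s, s)) + outer(n, kron(s, n)) + outer(s, kron(n, s)) + outer(s, kron(n, n))  # IMPLIES

def apply2(M, u, v):
    """Apply a dyadic operator to the ordered pair (u, v)."""
    return M @ kron(u, v)

# Truth table of the conjunction, read off in the {s, n} basis.
for u, v in [(s, s), (s, n), (n, s), (n, n)]:
    print(apply2(C, u, v))

# De Morgan's law checked vector by vector: u AND v == NOT(NOT u OR NOT v).
for u, v in [(s, s), (s, n), (n, s), (n, n)]:
    assert np.allclose(apply2(C, u, v), N @ apply2(D, N @ u, N @ v))
```

With this basis, I and N come out as the identity and anti-diagonal identity matrices and C, D and L reproduce the Set 1 matrices described in the text.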
See also Algebraic logic Boolean algebra Propositional calculus Quantum logic Jonathan Westphal References Logic Boolean algebra
Vector logic
Mathematics
2,226
11,390,412
https://en.wikipedia.org/wiki/Electroblotting
Electroblotting is a method in molecular biology, biochemistry, and immunogenetics for transferring proteins or nucleic acids onto a PVDF or nitrocellulose membrane after gel electrophoresis. The protein or nucleic acid can then be further analyzed using probes such as specific antibodies, ligands like lectins, or stains. This method can be used with all polyacrylamide and agarose gels. An alternative technique for transferring proteins from a gel is capillary blotting. Development This technique was patented in 1989 by William J. Littlehales under the title "Electroblotting technique for transferring specimens from a polyacrylamide electrophoresis or like gel onto a membrane". Electroblotting procedure This technique relies upon an electric current and a transfer buffer solution to drive proteins or nucleic acids onto a membrane. Following electrophoresis, a standard tank or semi-dry blotting transfer system is set up. A stack is put together in the following order from cathode to anode: sponge | three sheets of filter paper soaked in transfer buffer | gel | PVDF or nitrocellulose membrane | three sheets of filter paper soaked in transfer buffer | sponge. The membrane must be located between the gel and the positively charged anode, as the current and sample will be moving in that direction. Once the stack is prepared, it is placed in the transfer system, and a current of suitable magnitude is applied for a suitable period of time according to the materials being used. Typically the electrophoresis gel is stained with Coomassie brilliant blue following the transfer to ensure that a sufficient quantity of material has been transferred. Because the proteins may retain or regain part of their structure during blotting, they may react with specific antibodies, giving rise to the term immunoblotting. Alternatively, the proteins may react with ligands like lectins, giving rise to the term affinity blotting. See also Western blotting SDS-PAGE References External links Protein (Western) Blotting - Introduction to Antibodies Membrane Transfer Detailed electroblotting to PVDF procedure - Protein Chemistry Laboratory Protein methods Molecular biology techniques Electrophoresis
Electroblotting
Chemistry,Biology
550
56,300,713
https://en.wikipedia.org/wiki/Caldanaerovirga
Caldanaerovirga is a xylanolytic, anaerobic and alkalithermophilic genus of bacteria from the family of Thermosediminibacterales with one known species (Caldanaerovirga acetigignens). See also List of Bacteria genera List of bacterial orders References Further reading Thermoanaerobacterales Monotypic bacteria genera Bacteria genera Thermophiles Anaerobes
Caldanaerovirga
Biology
92
77,321,191
https://en.wikipedia.org/wiki/Light%20scanning%20photomacrography
Light Scanning Photomacrography (LSP), also known as Scanning Light Photomacrography (SLP) or Deep-Field Photomacrography, is a photographic film technique that allows for high magnification light imaging with exceptional depth of field (DOF). This method overcomes the limitations of conventional macro photography, which typically only keeps a portion of the subject in acceptable focus at high magnifications. Historical background The principles of LSP were first documented in the early 1960s by Dan McLachlan Jr., who highlighted its capability for extreme focal depth in microscopy and in 1968 patented the process. The technique was revived and further developed in the 1980s by photographers such as Darwin Dale and Nile Root, a faculty member at the Rochester Institute of Technology. In the early 1990s, William Sharp and Charles Kazilek, both researchers at Arizona State University, also published articles describing their technique and system setup for capturing SLP images. Predecessor to stack image photography Light Scanning Photomacrography offered a powerful analog tool for high-detail imaging in the age of film photography. It provided a comprehensive depth of field, making it invaluable in scientific and biomedical photography. As technology and techniques continue to evolve, LSP has been replaced by digital image focus stacking. This technique uses a collection of images captured in series at different focal depths, which are then processed using computer software to create a single image with a greater focus depth than any single image. LSP technique and results LSP involves the use of a thin plane of light that scans across the subject, which is mounted on a stage moving perpendicular to the film plane. The technique utilizes traditional optics and is governed by the physical laws of depth of field. By moving the subject through a narrow band of illumination, the entire subject can be recorded in sharp focus from the nearest details to the farthest ones. This analog process produces sharp and detailed images by slowly recording the image on film as the specimen passes through the sheet of light that is thinner than the effective DOF. Because the image is captured at the same relative distance from the camera lens, the resulting images are axonometric rather than perspective projection, which is what the human eye sees and is typically captured by a film camera. Because all parts of an LSP image are captured at the same distance from the lens, relative measurements can be taken from an LSP photograph and can be used for comparison. Equipment and setup A typical LSP setup includes: A stage that can move the subject perpendicular to the film plane. Light sources, in some cases modified projectors, are used to project a thin plane of light. A camera mounted on a stable stand such as a tabletop copy stand. In 1991, Sharp and Kazilek described their SLP system that used three Kodak Ektagraphic slide projectors with zoom lenses to create a thin plane of light. The projectors each had a slide mount with two razor blades placed edge-to-edge to create a thin slit for the light to pass through. The image was captured using a Nikon FE-2 SLR camera mounted above the specimen. Kodachrome 25 slide film was used to record the image and to minimize film grain size and maximize image sharpness Commercial systems A commercial SLP instrument was produced by the Irvine Optical Corp. Their DYNAPHOT system was based on a photomacroscope and could capture images on 4x5 film. 
The instrument came with two or three illumination sources and a motorized specimen stage. The system advertised a 2X – 40X magnification range and the ability to capture images in black and white and color. Other systems have been developed by Nile Root and Theodore Clarke and reported higher magnification (up to 100X). LSP process Alignment and Focusing: The light sources are aligned and focused to project a thin, consistent plane of light across the subject. Stage Movement: The subject stage moves at a controlled speed, scanning through the plane of light. Image Capture: The camera shutter is set to a long exposure or can be opened and closed manually. As the subject moves through the illuminated plane, it is recorded on the film. This process is very much like painting an image onto the film using photons instead of paint. Applications LSP was particularly useful in biomedical photography, where it was used to document magnified subjects with increased depth of field over traditional macro and micro photography. It has been employed to capture detailed images of biological specimens, such as imaging small insects and their parts. SLP has been used to document shell collections for scientific documentation and research. Other applications include forensic science, mineralogy, and the imaging of fractured surfaces and parts Advantages and challenges of LSP imaging Advantages Exceptional depth of field: Subjects are rendered in sharp focus throughout. High magnification: Detailed images at significant magnification without sacrificing DOF. Analog precision: Provides a non-digital solution with accurate image representation. Versatility: Can be used for a range of subject sizes, from macro to non-macro scales. Challenges Technical complexity: Requires precise setup and alignment. Exposure time: Typically requires long exposure times due to the scanning process. Contrast control: The highly directional lighting can create harsh shadows and high contrast, which may need to be managed. Digital competition: Focus stacking has largely replaced LSP in the digital era due to convenience and flexibility. DIY contributions Enthusiasts and researchers have contributed to the development and accessibility of LSP by creating and sharing DIY guides. These contributions have enabled others to build their own LSP systems using readily available materials and components. Nile Root's publications provide detailed instructions and recommendations for constructing an LSP setup. These DIY systems have allowed a wider audience to explore and utilize the benefits of LSP imaging in various fields. See also Focus stacking References Image processing Microscopy Optical imaging Photographic techniques Science of photography Scientific techniques
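As noted in the historical overview above, LSP has largely been replaced by digital focus stacking. The sketch below is a generic illustration of that replacement technique, not the workflow of any particular commercial package; it assumes OpenCV and NumPy are available, that the input frames are already aligned, and that the file names are hypothetical. For each pixel it keeps the value from whichever frame in the focal series is locally sharpest.

```python
import cv2
import numpy as np

def focus_stack(frames):
    """Merge a focal series into one image by picking, per pixel,
    the frame with the highest local sharpness (absolute Laplacian response)."""
    sharpness = []
    for img in frames:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        lap = cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F)
        # Smooth the response so the per-pixel frame choice is locally stable.
        sharpness.append(cv2.GaussianBlur(np.abs(lap), (9, 9), 0))
    sharpness = np.stack(sharpness)          # shape: (n_frames, H, W)
    best = np.argmax(sharpness, axis=0)      # index of sharpest frame per pixel
    stack = np.stack(frames)                 # shape: (n_frames, H, W, 3)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# Hypothetical usage: frames must share the same size and be registered.
# frames = [cv2.imread(f"slice_{i:02d}.png") for i in range(20)]
# cv2.imwrite("stacked.png", focus_stack(frames))
```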
Light scanning photomacrography
Chemistry
1,192
16,796,388
https://en.wikipedia.org/wiki/Three-layer%20architecture
The Three-Layer Architecture is a hybrid reactive/deliberative robot architecture developed by R. James Firby that consists of three layers: a reactive feedback control mechanism, a reactive plan execution mechanism, and a mechanism for performing time-consuming deliberative computations. See also ATLANTIS architecture Servo, subsumption, and symbolic (SSS) architecture Distributed architecture for mobile navigation (DAMN) Autonomous robot architecture (AuRA) References Robot architectures
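A minimal sketch of how the three layers might be wired together is shown below. It is an illustrative skeleton with hypothetical class and method names, not Firby's implementation; it only shows the usual division of labor: a fast reactive controller, a sequencer that decides which behavior is active, and a slow deliberative planner running asynchronously.

```python
import threading
import queue
import time

class Controller:
    """Reactive feedback control: runs at a high rate, maps sensors to a command."""
    def step(self, behavior, sensors):
        return behavior(sensors)  # e.g. a wall-following or grasping control law

class Sequencer:
    """Reactive plan execution: selects which behavior is active right now."""
    def __init__(self):
        self.current_plan = []
    def next_behavior(self, sensors):
        if self.current_plan:
            return self.current_plan[0]
        return lambda s: {"velocity": 0.0}  # safe default: stop

class Deliberator(threading.Thread):
    """Deliberative layer: plans slowly in the background and hands plans down."""
    def __init__(self, plan_queue):
        super().__init__(daemon=True)
        self.plan_queue = plan_queue
    def run(self):
        while True:
            time.sleep(1.0)  # stands in for an expensive planning computation
            self.plan_queue.put([lambda s: {"velocity": 0.5}])

def control_loop(read_sensors, send_command, hz=50):
    plans = queue.Queue()
    Deliberator(plans).start()
    sequencer, controller = Sequencer(), Controller()
    while True:
        if not plans.empty():
            sequencer.current_plan = plans.get()  # adopt the newest plan
        sensors = read_sensors()
        command = controller.step(sequencer.next_behavior(sensors), sensors)
        send_command(command)
        time.sleep(1.0 / hz)
```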
Three-layer architecture
Engineering
92
62,288,906
https://en.wikipedia.org/wiki/Living%20DNA
Living DNA is a company that specialises in DNA testing and analysis, with its head office in the UK and facilities in the USA and Denmark. The service provides deep ancestry details from around the world, using a distinctive analysis process and linked DNA. It is one of the major DNA testing services in the world. The company conducts three types of DNA analyses: autosomal, Y-chromosome and mitochondrial. However, while the DNA test results provide information about the origins of a person, genealogy, i.e. finding relatives in historical time, is not yet part of the company's portfolio. History and partnership In 2016, Living DNA was co-founded by Tricia Nicholson and the husband-and-wife team David Nicholson and Hannah Morden-Nicholson, in Frome, Somerset, England. The company began after extensive research and work with a team of around 100 genealogists around the world. In 1999 Nicholson founded another company, DNA Worldwide, which he has run since. In July 2018, Living DNA announced and signed a partnership agreement with Findmypast, another British genealogy company. By working together, their mission was to provide extensive and detailed family roots and history. This partnership ended in 2023. In 2019, Living DNA was reported to provide, for each DNA sample tested, a recent (less than 80,000 years) ethnic breakdown for 80 regions in the world, with the UK broken down into 21 regions. They also provided insight into maternal and paternal (for males) heritage going back about 200,000 years, showing migration patterns out of Africa. DNA privacy concerns Research published in the scientific journal eLife by geneticist Michael Edge from the University of California uncovered security concerns with customers' DNA data held online by the smaller genealogy companies, including Living DNA. It was found that hackers using creative means could easily exploit these upload-based services. Biostatistician Sharon Browning from the University of Washington said that if consumers "care about their DNA's privacy, then they shouldn't upload [their DNA] to these databases." Critics and reviews Living DNA has received a positive review from PCWorld. Tech Radar commented that "...the vagueness of some of its results combined with its relatively high price mean it doesn’t stand out from the crowd." After getting DNA test results from three different companies to find out whether his "dad's family came from Russia", David Gewirtz wrote that to say "the results I got back from Ancestry and 23andMe were shocking and upsetting would be an understatement." However, "the results from Living DNA were substantially different and led to some fascinating insights that were actually really cool, rather than painful." Controversy surrounding key people Director Hannah Morden-Nicholson stepped down from the Frome Chamber of Commerce committee in early 2019 after being associated with a locally established "cult", Universal Medicine. This followed on from a BBC investigation into the "socially harmful" group. Co-director David Nicholson is also dedicated to the sect and its leader’s teachings, and ex-director and co-founder Tricia Nicholson declares a 'lifelong family friendship' with the sect's leader.
References External links Company listing at the International Society for Genetic Genealogy Wiki Ancestry DNA Testing Reviews for LivingDNA British genealogy websites Companies based in Somerset Genetic genealogy companies Online companies of the United Kingdom Applied genetics Biotechnology companies of the United Kingdom Biotechnology companies established in 2016 Biological databases Privately held companies of England 2016 establishments in England British companies established in 2016
Living DNA
Biology
727
41,426,709
https://en.wikipedia.org/wiki/Nafees%20Bin%20Zafar
Nafees Bin Zafar (born 1978) is a visual effects and computer graphics software engineer of Bangladeshi origin based in Los Angeles, United States. He is currently Principal Engineer at animation studio DreamWorks Animation. In 2008, he received an Academy Scientific and Technical Award in the Scientific and Engineering Award (Academy Plaque) category for the development of the fluid simulation system at Digital Domain, becoming the first person of Bangladeshi origin to win an Academy Award in any category. He also received an Academy Scientific and Technical Award in the Technical Achievement Award (Academy Certificate) category in 2015 for the separate development of two large-scale destruction simulation systems based on Bullet. Early life Zafar was born in Dhaka, Bangladesh, and moved to Charleston, South Carolina with his family when he was 11. He studied at Dhaka Residential Model College until grade 6. He attended the College of Charleston and graduated in software engineering. During that time, he studied 3D graphics using SGI computers at Virtual Reality South. He is the son of Zafar Bin Bashar, a partner at Marcum & Kliegman, and Nafeesa Zafar, who resides in Long Island, New York. He is a great-grandson of the late Bangladeshi poet Golam Mostofa and a grand-nephew of the Bangladeshi artist and puppeteer Mustafa Monwar. Career In February 2008, Zafar received an Academy Scientific and Technical Award for the development of the fluid simulation system at Digital Domain, which was used in the film Pirates of the Caribbean: At World's End. He received the Scientific and Engineering Award along with his colleagues at Digital Domain, thus becoming the first person of Bangladeshi origin to win an Academy Award. In February 2015, Zafar was again recognized by the Academy when he and his colleagues at Digital Domain received a Technical Achievement Award for their work on the Drop Destruction Toolkit, used to create visual effects in the film 2012. He is now Principal Engineer at DreamWorks Animation. Filmography Madagascar 3: Europe's Most Wanted (principal engineer) Puss in Boots (senior software engineer) Kung Fu Panda 2 (senior software engineer) Megamind (senior production engineer) Shrek Forever After (senior production engineer) Percy Jackson & the Lightning Thief (software engineer) The Seeker: The Dark Is Rising (visual effects: Digital Domain) Pirates of the Caribbean: At World's End (technical developer) Flags of Our Fathers (technical developer) Stealth (software engineer) The Croods (research and development principal engineer: DreamWorks Animation) See also DreamWorks Animation References External links Nafees Bin Zafar: Linkedin profile 1978 births Living people Academy Award for Technical Achievement winners College of Charleston alumni Software engineers People from Dhaka
Nafees Bin Zafar
Engineering
543
54,184,567
https://en.wikipedia.org/wiki/Cetadiol
Cetadiol, also known as androst-5-ene-3β,16α-diol, is a drug described as a "steroid tranquilizer" which was briefly investigated as a treatment for alcoholism in the 1950s. It is an androstane steroid and analogue of 5-androstenediol (androst-5-ene-3β,17β-diol) and 16α-hydroxy-DHEA (androst-5-ene-3β,16α-diol-17-one), but showed no androgenic or myotrophic activity in animal bioassays. The drug was reported in 1956 and studied until 1958. Chemistry See also Androstadienol (androsta-5,16-dien-3β-ol) Androstenol (5α-androst-16-en-3α-ol) 4-Androstadienol (PH94B; Aloradine) Cyclopregnol References Abandoned drugs Alcohol abuse Androstanes Anxiolytics Diols Drug rehabilitation Drugs with unknown mechanisms of action
Cetadiol
Chemistry
246
20,880
https://en.wikipedia.org/wiki/Mira
Mira (), designation Omicron Ceti (ο Ceti, abbreviated Omicron Cet, ο Cet), is a red-giant star estimated to be 200–300 light-years from the Sun in the constellation Cetus. ο Ceti is a binary stellar system, consisting of a variable red giant (Mira A) along with a white dwarf companion (Mira B). Mira A is a pulsating variable star and was the first non-supernova variable star discovered, with the possible exception of Algol. It is the prototype of the Mira variables. Nomenclature ο Ceti (Latinised to Omicron Ceti) is the star's Bayer designation. It was named Mira (Latin for 'wonderful' or 'astonishing') by Johannes Hevelius in his Historiola Mirae Stellae (1662). In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Mira for this star. Observation history Evidence that the variability of Mira was known in ancient China, Babylon or Greece is at best only circumstantial. What is certain is that the variability of Mira was recorded by the astronomer David Fabricius beginning on August 3, 1596. Observing what he thought was the planet Mercury (later identified as Jupiter), he needed a reference star for comparing positions and picked a previously unremarked third-magnitude star nearby. By August 21, however, it had increased in brightness by one magnitude, then by October had faded from view. Fabricius assumed it was a nova, but then saw it again on February 16, 1609. In 1638 Johannes Holwarda determined a period of the star's reappearances, eleven months; he is often credited with the discovery of Mira's variability. Johannes Hevelius was observing it at the same time and named it Mira in 1662, for it acted like no other known star. Ismail Bouillaud then estimated its period at 333 days, less than one day off the modern value of 332 days. Bouillaud's measurement may not have been erroneous: Mira is known to vary slightly in period, and may even be slowly changing over time. The star is estimated to be a six-billion-year-old red giant. There is considerable speculation as to whether Mira had been observed prior to Fabricius. Certainly Algol's history (known for certain as a variable only in 1667, but with legends and such dating back to antiquity showing that it had been observed with suspicion for millennia) suggests that Mira might have been known, too. Karl Manitius, a modern translator of Hipparchus' Commentary on Aratus, has suggested that certain lines from that second-century text may be about Mira. The other pre-telescopic Western catalogs of Ptolemy, al-Sufi, Ulugh Beg and Tycho Brahe turn up no mentions, even as a regular star. There are three observations from Chinese and Korean archives, in 1596, 1070 and the same year when Hipparchus would have made his observation (134 BC) that are suggestive. An estimate obtained in 1925 from interferometry by Francis G. Pease at the Mount Wilson Observatory gave Mira a diameter of 250-260 million miles (402 to 418 million km, or approximately ), making it the then-second largest star known and comparable to historical estimates of Betelgeuse, surpassed only by Antares. On the contrary, Otto Struve thought of Mira as a red supergiant with an approximate radius of , while modern consensus accepts Mira to be a highly evolved asymptotic giant branch star. 
Distance and background Information Pre-Hipparcos estimates centered on 220 light-years; while Hipparcos data from the 2007 reduction suggest a distance of 299 light-years, with a margin of error of 11%. The age of Mira is suspected to be about 6 billion years old. Its gaseous material is scattered, as much as one-thousandth as thin as the air around us. Mira is also among the coolest known bright stars of the red giant class, with a temperature ranging from 3,000 to 4,000 degrees Fahrenheit (1,600 to 2,200 degrees Celsius). As with other long-period variables, Mira's deep red color at minimum pales to a lighter orange as the star brightens. Within the next few million years, Mira will discard its outer layers and become a planetary nebula, leaving behind a white dwarf. Stellar system This binary star system consists of a red giant (Mira, designated Mira A) undergoing mass loss and a high-temperature white dwarf companion (Mira B) that is accreting mass from the primary. Such an arrangement of stars is known as a symbiotic system and this is the closest such symbiotic pair to the Sun. Examination of this system by the Chandra X-ray Observatory shows a direct mass exchange along a bridge of matter from the primary to the white dwarf. The two stars are currently separated by about 70 astronomical units. Component A Mira A is currently an asymptotic giant branch (AGB) star, in the thermally pulsing AGB phase. Each pulse lasts a decade or more, and an amount of time on the order of 10,000 years passes between each pulse. With every pulse cycle Mira increases in luminosity and the pulses grow stronger. This is also causing dynamic instability in Mira, resulting in dramatic changes in luminosity and size over shorter, irregular time periods. The overall shape of Mira A has been observed to change, exhibiting pronounced departures from symmetry. These appear to be caused by bright spots on the surface that evolve their shape on time scales of 3–14 months. Observations of Mira A in the ultraviolet band by the Hubble Space Telescope have shown a plume-like feature pointing toward the companion star. Variability Mira A is a variable star, specifically the prototypical Mira variable. The 6,000 to 7,000 known stars of this class are all red giants whose surfaces pulsate in such a way as to increase and decrease in brightness over periods ranging from about 80 to more than 1,000 days. In the particular case of Mira, its increases in brightness take it up to about magnitude 3.5 on average, placing it among the brighter stars in the Cetus constellation. Individual cycles vary too; well-attested maxima go as high as magnitude 2.0 in brightness and as low as 4.9, a range almost 15 times in brightness, and there are historical suggestions that the real spread may be three times this or more. Minima range much less, and have historically been between 8.6 and 10.1, a factor of four times in luminosity. The total swing in brightness from absolute maximum to absolute minimum (two events which did not occur on the same cycle) is 1,700 times. Mira emits the vast majority of its radiation in the infrared, and its variability in that band is only about two magnitudes. The shape of its light curve is of an increase over about 100 days, and the return to minimum taking twice as long. 
Contemporary approximate maxima for Mira: Oct 21–31, 1999 Sep 21–30, 2000 Aug 21–31, 2001 Jul 21–31, 2002 Jun 21–30, 2003 May 21–31, 2004 Apr 11–20, 2005 Mar 11–20, 2006 Feb 1–10, 2007 Jan 21–31, 2008 Dec 21–31, 2008 Nov 21–30, 2009 Oct 21–31, 2010 Sep 21–30, 2011 Aug 27, 2012 Jul 26, 2013 May 12, 2014 Apr 9, 2015 Mar 6, 2016 Jan 31, 2017 Dec 29, 2017 Nov 26, 2018 Oct 24, 2019 Sep 20, 2020 Aug 18, 2021 Jul 16, 2022 Jun 13, 2023 May 10, 2024 From northern temperate latitudes, Mira is generally not visible between late March and June due to its proximity to the Sun. This means that at times several years can pass without it appearing as a naked-eye object. The pulsations of Mira variables cause the star to expand and contract, but also to change its temperature. The temperature is highest slightly after the visual maximum, and lowest slightly before minimum. The photosphere, measured at the Rosseland radius, is smallest just before visual maximum and close to the time of maximum temperature. The largest size is reached slightly before the time of lowest temperature. The bolometric luminosity is proportional to the fourth power of the temperature and the square of the radius, but the radius varies by over 20% and the temperature by less than 10%. In Mira, the highest luminosity occurs close to the time when the star is hottest and smallest. The visual magnitude is determined both by the luminosity and by the proportion of the radiation that occurs at visual wavelengths. Only a small proportion of the radiation is emitted at visual wavelengths and this proportion is very strongly influenced by the temperature (Planck's law). Combined with the overall luminosity changes, this creates the very big visual magnitude variation with the maximum occurring when the temperature is high. Infrared VLTI measurements of Mira at phases 0.13, 0.18, 0.26, 0.40 and 0.47, show that the radius varies from at phase 0.13 just after maximum to at phase 0.40 approaching minimum. The temperature at phase 0.13 is and at phase 0.26 about halfway from maximum to minimum. The luminosity is calculated to be at phase 0.13 and at phase 0.26. The pulsations of Mira have the effect of expanding its photosphere by around 50% compared to a non-pulsating star. In the case of Mira, if it was not pulsating it is modelled to have a radius of only around . Mass loss Ultraviolet studies of Mira by NASA's Galaxy Evolution Explorer (GALEX) space telescope have revealed that it sheds a trail of material from the outer envelope, leaving a tail 13 light-years in length, formed over tens of thousands of years. It is thought that a hot bow wave of compressed plasma/gas is the cause of the tail; the bow wave is a result of the interaction of the stellar wind from Mira A with gas in interstellar space, through which Mira is moving at an extremely high speed of . The tail consists of material stripped from the head of the bow wave, which is also visible in ultraviolet observations. Mira's bow shock will eventually evolve into a planetary nebula, the form of which will be considerably affected by the motion through the interstellar medium (ISM). Mira’s tail offers a unique opportunity to study how stars like our sun die and ultimately seed new solar systems. As Mira hurls along, its tail drops off carbon, oxygen and other important elements needed for new stars, planets, and possibly even life to form. This tail material, visible now for the first time, has been shed over the past 30,000 years. 
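The point made above, that the visual output varies far more than the bolometric output because only a temperature-sensitive sliver of the spectrum falls in the visible band, can be illustrated numerically. The sketch below assumes simple black-body behaviour and two illustrative photospheric temperatures, roughly representative of Mira near minimum and maximum light; the exact values are assumptions for the example, not measurements.

```python
import numpy as np

# Physical constants (SI units).
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

def planck(wavelength, T):
    """Black-body spectral radiance B_lambda(T)."""
    return (2 * h * c**2 / wavelength**5) / np.expm1(h * c / (wavelength * k * T))

def band_emission(T, lo=390e-9, hi=700e-9, n=2000):
    """Approximate integral of B_lambda over the visible band (Riemann sum)."""
    lam = np.linspace(lo, hi, n)
    return np.sum(planck(lam, T)) * (lam[1] - lam[0])

# Assumed illustrative temperatures near minimum and maximum light.
T_cool, T_hot = 2200.0, 3200.0

visible_ratio = band_emission(T_hot) / band_emission(T_cool)
bolometric_ratio = (T_hot / T_cool) ** 4   # Stefan-Boltzmann law, radius held fixed

print(f"bolometric change (fixed radius): x{bolometric_ratio:.1f}")
print(f"visible-band change:              x{visible_ratio:.0f}")
```

Even with the radius held fixed, the visible-band output changes by a far larger factor than the total output, which is why the visual magnitude swings so much more than the infrared or bolometric brightness.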
Component B The companion star is away from the main star. It was resolved by the Hubble Space Telescope in 1995, when it was 70 astronomical units from the primary; and results were announced in 1997. The HST ultraviolet images and later X-ray images by the Chandra space telescope show a spiral of gas rising off Mira in the direction of Mira B. The companion's orbital period around Mira is approximately 400 years. In 2007, observations showed a protoplanetary disc around the companion, Mira B. This disc is being accreted from material in the solar wind from Mira and could eventually form new planets. These observations also hinted that the companion was a main-sequence star of around 0.7 solar mass and spectral type K, instead of a white dwarf as originally thought. However, in 2010 further research indicated that Mira B is, in fact, a white dwarf. References Further reading Robert Burnham Jr., Burnham's Celestial Handbook, Vol. 1, (New York: Dover Publications, Inc., 1978), 634. James Kaler, The Hundred Greatest Stars, (New York: Copernicus Books, 2002), 121. External links Speeding Bullet Star Leaves Enormous Streak Across Sky at Caltech Mira has tail nearly 13 light years in length (BBC) Astronomy Picture of the Day:1998-10-11, 2001-01-21, 2006-07-22, 2007-02-21, 2007-08-17 SEDS article A lightcurve of Mira from the BAV. Universe Today, That's Not a Comet, that's a Star OMICRON CETI (Mira) Winter 2006: Mira revisited Ceti, Omicron Binary stars Cetus Mira variables M-type giants Stars with proper names Ceti, 68 010826 0681 014386 Emission-line stars Durchmusterung objects
Mira
Astronomy
2,688
12,262,038
https://en.wikipedia.org/wiki/C5H8O2
{{DISPLAYTITLE:C5H8O2}} The molecular formula C5H8O2 may refer to: Acetylacetone Acetylpropionyl Allyl acetate Angelic acid Coffee furanone Cyclobutanecarboxylic acid Ethyl acrylate Glutaraldehyde Isopropenyl acetate Methyl methacrylate Tiglic acid Valerolactones δ-Valerolactone γ-Valerolactone Vinyl propionate
C5H8O2
Chemistry
107
18,782,017
https://en.wikipedia.org/wiki/Carangoides%20ciliarius
Carangoides ciliarius is a dubious species of marine fish in the jack and horse mackerel family, Carangidae. The validity of the species has been questioned by a number of authors, with many concluding it is a synonym of the similar Carangoides armatus, commonly known as the longfin trevally. However, this synonymy has not been accepted by all authorities, with Fishbase and ITIS both recognising it as a valid species. Like Carangoides armatus, the species is occasionally referred to as the 'longfin kingfish'. Taxonomy The species, as it is currently recognised, was scientifically described and named by the German naturalist Eduard Rüppell in 1830, based on the holotype specimen taken from Massawa in the Red Sea. Rüppell named the fish Citula ciliaria, placing the species in what was at the time a valid genus of jacks. As the classification of the carangids was reviewed, Citula was synonymised with Pseudocaranx, with C. ciliaria transferred to Carangoides, and the specific name changed from ciliaria to ciliarius, leading to the currently accepted combination. There is a possibility that Peter Forsskål described and named the species earlier, in 1775, which would make him the correct author under ICZN rules. He named a species Sciaena armata, but the description has been too vague to make any certain conclusions, and this name is considered a nomen dubium that cannot hold priority, and placed in synonymy with C. ciliarius. Georges Cuvier independently renamed the species as Caranx citula in 1833, also making reference to the name Caranx cirrhosus as a synonym of his new name. This name was apparently coined by Christian Gottfried Ehrenberg, although never properly published. These two names are considered to be junior synonyms under ICZN naming rules and are no longer valid. Synonymy with Carangoides armatus There has been extensive confusion in the ichthyological literature between C. ciliarius and C. armatus. Rüppell described both 'species' in the same volume, and a 1973 paper by Margret Smith concluded he merely described both a young and an old individual of the same species. She recommended that C. ciliarius be given priority due to the fact it appears first in the book. A similar mistake involving misidentification of age stages apparently occurred in a 1937 analysis of carangids by Yojiro Wakiya, who divided C. armatus into four separate species, one of them being C. ciliarius. The most recent investigation into this taxonomic problem occurred in 1980, when Williams and Venkataramani confirmed synonymy between C. armatus and C. ciliarius, but recommended the name Carangoides armatus be kept. Most modern publications now list C. ciliarius as a synonym of C. armatus, with the last major revision of Indo-Pacific carangids also reaffirming this. Nevertheless, two major taxonomic authorities, Fishbase and ITIS, list the species as valid based on an older version of California Academy of Sciences Catalog of Fishes, which now treats C. ciliarus as synonymous with Carangoides armatus. This name is occasionally used in non-scientific literature such as fishing publications, although the common name given to the fish, 'longfin kingfish', is also applied to Carangoides armatus. See also Longfin trevally, Carangoides armatus, for a description of the species appearance and distribution References External links Carangoides ciliarius at Fishbase Carangoides Controversial taxa Fish described in 1830
Carangoides ciliarius
Biology
762
42,033,343
https://en.wikipedia.org/wiki/LG%20G2%20Mini
LG G2 Mini is an Android smartphone developed by LG Electronics. It was unveiled at a Mobile World Congress on February 23, 2014. The G2 Mini is designed as a smaller version of its full-sized namesake, sharing a similar design but with a smaller display and other lower-end hardware specifications. It lacks a status LED and an ambient light sensor. See also LG G2 LG Optimus G LG Optimus G Pro Nexus 5 List of Android smartphones References Android (operating system) devices LG Electronics smartphones Mobile phones introduced in 2013 Discontinued smartphones Mobile phones with infrared transmitter
LG G2 Mini
Technology
125
63,786,428
https://en.wikipedia.org/wiki/Jacobus%20Kaper
Jacobus Martinus Kaper (born 12 September 1931) is a biochemist and virologist who worked at the Henry A. Wallace Beltsville Agricultural Research Center of the Agricultural Research Service of the United States. He has performed research on the cucumber mosaic virus. Kaper was born in Madjalengka, Dutch East Indies. He was elected corresponding member of the Royal Netherlands Academy of Arts and Sciences in 1980. References 1931 births Living people 20th-century Dutch East Indies people Members of the Royal Netherlands Academy of Arts and Sciences United States Department of Agriculture people American virologists
Jacobus Kaper
Chemistry
121
70,526,597
https://en.wikipedia.org/wiki/Cornell%20potential
In particle physics, the Cornell potential is an effective method to account for the confinement of quarks in quantum chromodynamics (QCD). It was developed by Estia J. Eichten, Kurt Gottfried, Toichiro Kinoshita, John Kogut, Kenneth Lane and Tung-Mow Yan at Cornell University in the 1970s to explain the masses of quarkonium states and account for the relation between the mass and angular momentum of the hadron (the so-called Regge trajectories). The potential has the form: where is the effective radius of the quarkonium state, is the QCD running coupling, is the QCD string tension and GeV is a constant. Initially, and were merely empirical parameters, but with the development of QCD they can now be calculated using perturbative QCD and lattice QCD, respectively. Short distance potential The potential consists of two parts. The first one dominates at short distances, typically for fm. It arises from the one-gluon exchange between the quark and its anti-quark, and is known as the Coulombic part of the potential, since it has the same form as the well-known Coulombic potential induced by the electromagnetic force (where is the electromagnetic coupling constant). The factor in QCD comes from the fact that quarks have different types of charges (colors) and is associated with any gluon emission from a quark. Specifically, this factor is called the color factor or Casimir factor and is , where is the number of color charges. The value for depends on the radius of the studied hadron. Its value ranges from 0.19 to 0.4. For precise determination of the short distance potential, the running of must be accounted for, resulting in a distance-dependent . Specifically, must be calculated in the so-called potential renormalization scheme (also denoted V-scheme) and, since quantum field theory calculations are usually done in momentum space, Fourier transformed to position space. Long distance potential The second term of the potential, , is the linear confinement term and folds in the non-perturbative QCD effects that result in color confinement. is interpreted as the tension of the QCD string that forms when the gluonic field lines collapse into a flux tube. Its value is GeV. controls the intercepts and slopes of the linear Regge trajectories. Domains of application The Cornell potential applies best to the case of static quarks (or very heavy quarks with non-relativistic motion), although relativistic improvements to the potential using speed-dependent terms are available. Likewise, the potential has been extended to include spin-dependent terms. Calculation of the quark-quark potential A test of validity for approaches that seek to explain color confinement is that they must produce, in the limit that quark motions are non-relativistic, a potential that agrees with the Cornell potential. A significant achievement of lattice QCD is to be able to compute from first principles the static quark-antiquark potential, with results confirming the empirical Cornell potential. Other approaches to the confinement problem also result in the Cornell potential, including the dual superconductor model, the Abelian Higgs model, and the center vortex models. More recently, calculations based on the AdS/CFT correspondence have reproduced the Cornell potential using the AdS/QCD correspondence or light front holography. See also Color confinement QCD vacuum References Quantum chromodynamics Mesons Quantum mechanical potentials
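The explicit expression for the potential is not reproduced in the text above. The form commonly quoted in the literature, consistent with the description of a Coulombic one-gluon-exchange term plus a linearly rising confinement term, is sketched below in LaTeX; the additive constant and the precise parameter values vary between fits and are not taken from the original Cornell paper here.

```latex
% Commonly quoted form of the Cornell potential (sketch):
% a short-distance Coulombic one-gluon-exchange term plus a linearly
% rising confinement term, with an additive constant fixed by the fit.
V(r) \;=\; -\,\frac{4}{3}\,\frac{\alpha_s(r)}{r} \;+\; \sigma\, r \;+\; \text{const},
\qquad
\frac{4}{3} \;=\; \frac{N_c^{2}-1}{2\,N_c}\quad\text{for } N_c = 3 .
```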
Cornell potential
Physics
736
961,611
https://en.wikipedia.org/wiki/Lip%20balm
Lip balm or lip salve is a wax-like substance applied to the lips to moisturize and relieve chapped or dry lips, angular cheilitis, stomatitis, or cold sores. Lip balm often contains beeswax or carnauba wax, camphor, cetyl alcohol, lanolin, paraffin, and petrolatum, among other ingredients. Some varieties contain dyes, flavor, fragrance, phenol, salicylic acid, and sunscreen. Overview The primary purpose of lip balm is to provide an occlusive layer on the lip surface to seal moisture in the lips and protect them from external exposure. Dry air, cold temperatures, and wind all have a drying effect on skin by drawing moisture away from the body. Lips are particularly vulnerable because their skin is so thin, and thus they are often the first to show signs of dryness. Occlusive materials like waxes and petroleum jelly prevent moisture loss and maintain lip comfort, while flavorings, colorants, sunscreens, and various medicaments can provide additional, specific benefits. Lip balms are produced from beeswax and natural candelilla and carnauba waxes. Lip balm can be applied to the lips with a finger, or directly from a lipstick-style tube. In 2022, the global lip balm market was valued at US$732.76 million. The market is predicted to grow at a rate of 9.28% within the next five years and is likely to reach US$1,247.74 million by 2027. Production Production of lip balms includes the following stages: Raw materials are checked for their quality (cosmetic products must comply with strict safety standards) The ingredients are dosed, melted, and mixed (this stage involves special equipment) This mixture is treated in a vacuum to remove bubbles The mixture is crystallized for about 48 hours The mixture is then remelted The mixture is cut into pieces which are shaped as required The lip balm is packaged into a casing History Early lip balms As early as 40 BC, the Egyptians made treatments for lip care from a mixture of beeswax, olive oil, and animal fat. United States In the 1800s, Lydia Maria Child recommended earwax as a treatment for cracked lips in her highly popular book. Child observed: "Those who are troubled with cracked lips have found this earwax remedy successful when others have failed. It is one of those sorts of cures, which are very likely to be laughed at; but I know of its having produced very beneficial results." Lip balm was first formally invented in the 1880s by physician Charles Brown Fleet, though its origins may be traced to earwax. Fleet later named his lip balm product "ChapStick". In 1872, chemist Robert Chesebrough discovered and sampled a new petroleum jelly, initially describing it as a "natural, waxy ingredient, rich in minerals from deep within the earth" which could be used as a solution for skin repair. He then distributed his product under the name "Wonder Jelly" before changing it shortly afterwards to "Vaseline". In the early 1880s, Charles Brown Fleet created ChapStick. However, due to a lack of sales, Fleet sold his formula and rights to ChapStick for $5 in 1912 to John Morton, who saw the marketing potential in the brand. After making the purchase, Morton commissioned Frank Wright, Jr. in 1936 to design the ChapStick logo for $15. In 1972, ChapStick tubes concealing hidden microphones were used during the Watergate scandal. In 1937, Alfred Woelbing created Carmex in Milwaukee to treat cold sores, though World War II slowed production and sales due to a lack of lanolin.
In 1980, Carmex changed its packaging to squeezable tubes. In 1973, Bonne Bell created the first flavored lip balm and marketed it as Lip Smackers. The company later collaborated with other brands on various flavored lip balms, including Dr. Pepper in 1975, The Wrigley Company in 2004, and The Coca-Cola Company in 2006. Bonne Bell also collaborated with Disney to produce lip balms featuring various princess characters in 2010. In 1991, Burt Shavitz and Roxanne Quimby created their first beeswax-based lip balm through their company, Burt's Bees. In 2020, it was reported that Burt's Bees had used 50 percent recycled material in the packaging of various products and that 100 percent of the products were recyclable. In 2011, Evolution of Smooth (commonly known as EOS) created a spherical lip balm and advertised its ingredients as 95% organic. Cannabis-infused lip balms With the gradual legalization of cannabis in the United States, some companies have produced lip balms containing doses of THC or CBD oil. The lip balms were infused with a low dosage of THC in order to prevent any psychoactive or related effects. Notable brands Burt's Bees Blistex Carmex ChapStick Labello Lip Smacker Lypsyl EOS Vaseline Aquaphor Nivea Dependency Addictive ingredients Some physicians have suggested that certain types of lip balm can be addictive or contain ingredients that actually cause drying, the accuracy of which has been debated by many professionals. Lip balm manufacturers sometimes state in their FAQs that there is nothing addictive in their products or that all ingredients are listed and approved by the FDA. Snopes found false the claim that Carmex contains irritant substances, such as ground glass, that necessitate reapplication. However, some experts such as dermatologist Dr. Cynthia Bailey state that some ingredients in lip balm directly cause sensitive lip skin, which may lead to addiction. Dermatology professor Marcia Driscoll adds to this argument by stating that aroma ingredients found in flavored or scented lip balms have the potential to irritate skin. Causes for Dependency According to a report, professor Brad Rohu states that it is natural for the lips to feel dry. Exposure to cold, dry, or windy environments can directly cause chapping of the lips, as can behaviors such as lip licking or mouth breathing. These factors may directly contribute to increased lip balm usage. According to dermatologist Amy Derick, those who have developed dependencies on lip balm have come to like how the lips feel after application. She also mentions that the variety of lip balm flavors may directly cause lip balm dependency, as a person may want to lick their lips to taste the flavor, which may consequently remove the lip balm coating from the lips. This may also leave saliva on the lips, which can dry up and make the lips feel even drier than they initially were. Effects on lip barrier Human lips have an inadequate capability for holding moisture as well as an imperfect lip barrier function. The Journal of the American Academy of Dermatology performed a study in order to determine whether consistent use of lip balm would enhance the overall quality of the lips. The study used 32 female participants between the ages of 20 and 40 years who had mildly to moderately dry lips without any history of health-related complications.
The participants underwent a procedure in which no lip treatment was provided on the first 3 days, then 2 weeks of consistent lip balm usage, and then a period of no treatment for 3 days. The study determined the quality of the lips based on the physical details and appearance throughout the study. The study showed a direct improvement of the physical details of the lips except for lip cracking during the second week of treatment and after the period of no treatment. The study also showed that hydration of the lips lasted for approximately 8 hours after usage and the lip balm improved the lip barrier function despite discontinued usage. The study concluded that lip balms assist the hydration of the lips which consequentially improves the lip barrier function and the quality. This study was completely funded by Burt's Bees, a lip balm company. Mineral oil In 2015, German consumer watchdog Stiftung Warentest analyzed cosmetics containing mineral oils. After developing a new detection method they found high concentrations of Mineral Oil Aromatic Hydrocarbons (MOAH) and even polyaromatics in products containing mineral oils with Vaseline products containing the most MOAH of all tested cosmetics (up to 9%). The European Food Safety Authority sees MOAH and polyaromatics as possibly carcinogenic. Based on the results, Stiftung Warentest warns not to use Vaseline or any product that is based on mineral oils for lip care. Lip balm market United States In 2019, a research report conducted by the Statista Research Department concluded that ChapStick was the leading lip balm brand in the United States with an approximate unit sale of 55.8 million. Carmex was the second leading brand with approximately 35.2 million units sold and Burt's Bees being the third leading brand with approximately 32.3 million units sold. Trends Beezin' Beezin' is a trend dating back to 2013 in which a person applies Burt's Bees brand lip balm onto the eyelids. The practice is done in order to feel a sensation of being high or drunk, and even to increase the desired effects of alcohol and other substances. In 2022, Beezin' became a viral trend on the social media platform TikTok. Some ingredients, including peppermint oil, are known to be eye irritants which can cause an unintentional inflammatory response which may require treatment and may also cause dermatitis on the eyelids. References External links Skin care Drug delivery devices Dosage forms Lips Cosmetics
Lip balm
Chemistry
2,040
44,907,097
https://en.wikipedia.org/wiki/6463%20aluminium%20alloy
The 6463 aluminium alloy is an aluminium alloy in the wrought aluminium-magnesium-silicon family (6000 or 6xxx series). It is related to 6063 aluminium alloy (Aluminum Association designations that only differ in the second digit are variations on the same alloy), but unlike 6063 it is generally not formed using any processes other than extrusion. It is commonly heat treated to produce tempers with a higher strength but lower ductility. Like 6063, it is often used in architectural applications. Alternate designations include AlMg0.7Si(B) and A96463. The alloy and its various tempers are covered by the following standards: ASTM B 221: Standard Specification for Aluminium and Aluminium-Alloy Extruded Bars, Rods, Wire, Profiles, and Tubes EN 573-3: Aluminium and aluminium alloys. Chemical composition and form of wrought products. Chemical composition and form of products EN 755-2: Aluminium and aluminium alloys. Extruded rod/bar, tube and profiles. Mechanical properties Chemical composition The alloy composition of 6463 aluminium is: Aluminium: 97.9 to 99.4% Copper: 0.2% max Iron: 0.15% max Magnesium: 0.45 to 0.9% Manganese: 0.05% max Silicon: 0.2 to 0.6% Zinc: 0.05% max Residuals: 0.15% max Properties Typical material properties for 6463 aluminium alloy include: Density: 2.69 g/cm3, or 168 lb/ft3. Young's modulus: 70 GPa, or 10 Msi. Ultimate tensile strength: 130 to 230 MPa, or 19 to 33 ksi. Yield strength: 68 to 190 MPa, or 9.9 to 28 ksi. Thermal expansion: 22.1 μm/m-K. References Aluminium alloy table Aluminium alloys Aluminium–magnesium–silicon alloys
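The composition limits listed above lend themselves to a simple programmatic check. The following Python sketch is illustrative only and is not drawn from ASTM B 221 or EN 573-3; the SPEC_6463 dictionary and the sample analysis values are assumptions made up for the example.

# Minimal sketch: check a measured 6463 composition against the ranges listed above.
# The spec table and the sample analysis below are illustrative, not taken from any standard.

SPEC_6463 = {            # weight percent (min, max)
    "Si": (0.20, 0.60),
    "Mg": (0.45, 0.90),
    "Fe": (0.00, 0.15),
    "Cu": (0.00, 0.20),
    "Mn": (0.00, 0.05),
    "Zn": (0.00, 0.05),
}

def check_composition(analysis: dict) -> list:
    """Return the elements whose measured values fall outside the listed ranges."""
    out_of_spec = []
    for element, (lo, hi) in SPEC_6463.items():
        value = analysis.get(element, 0.0)
        if not (lo <= value <= hi):
            out_of_spec.append((element, value, (lo, hi)))
    return out_of_spec

# Hypothetical spectrometer reading (wt%); aluminium makes up the balance.
sample = {"Si": 0.42, "Mg": 0.55, "Fe": 0.12, "Cu": 0.05, "Mn": 0.02, "Zn": 0.01}
print(check_composition(sample) or "within listed limits")

A full conformity check would also verify the residuals limit and the aluminium balance, which the sketch omits for brevity.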
6463 aluminium alloy
Chemistry
400
78,353,280
https://en.wikipedia.org/wiki/Julio%20M.%20Ottino
Julio M. Ottino is a chemical engineer known for his research in fluid dynamics, chaos and mixing, and complex systems. He is also an artist, author, and educator. He is currently the Distinguished Robert R. McCormick Institute Professor and Walter P. Murphy Professor of Chemical and Biological Engineering at Northwestern University and is also a professor of management and organizations in the Kellogg School of Management. He previously served as the dean of the McCormick School of Engineering and Applied Science at Northwestern University from 2005 to 2023. Early life and education Ottino was born in La Plata, Argentina. Growing up with twin interests in art and science, he received a degree in chemical engineering from the National University of La Plata in 1974. After this, while drafted as an officer in the Argentinean Navy, he mounted a solo art exhibit. Immediately after finishing a two-year term in the Navy, he got married and moved to the United States for graduate school in chemical engineering at the University of Minnesota, where he received his PhD in 1979. Career After his PhD, Ottino held faculty positions at the University of Massachusetts Amherst and visiting appointments at Caltech, Stanford, and the University of Minnesota before joining Northwestern in 1991. He was chair of Northwestern’s Department of Chemical Engineering from 1992 to 2000 and was founder and co-director of NICO, the Northwestern Institute for Complex Systems. In 2005, he became dean of Northwestern’s McCormick School of Engineering. As dean, he developed the whole-brain engineering approach to research and education, integrating left-brain analysis and right-brain creativity through design, entrepreneurship, and leadership and personal development. He created university-wide centers and initiatives, including the Segal Design Institute and the Farley Center for Entrepreneurship and Innovation. In education, he launched several new master's degree programs in analytics, artificial intelligence, robotics, and energy and sustainability. At the undergraduate level, he made the first-year Design Thinking and Communication course a centerpiece of the engineering education experience. He was instrumental in developing several cross-school initiatives, including the NUvention series of courses, which brought together university-wide multidisciplinary teams to create and launch startups, and the Bay Area Immersion program, which educates students at the intersection of design, technology, and digital media. He was also instrumental in developing education and programming at the intersection of engineering and the fine arts. With the Block Museum, he developed the Artist-at-Large and the Art + Engineering program. He partnered with the Art Institute of Chicago to facilitate the creation of the Center for Scientific Studies in the Arts and to create joint courses such as "Data as Art." During his tenure, applications to the engineering school quadrupled, and research funding doubled. In 2017, he was awarded the Bernard M. Gordon Prize for Innovation in Engineering and Technology Education from the National Academy of Engineering for developing and implementing whole-brain engineering. In 2022, his book The Nexus: Augmented Thinking for a Complex World – The New Convergence of Art, Technology, and Science, co-authored with Bruce Mau, was published by MIT Press. Research Ottino's experimental and theoretical work in chemical engineering connected the fields of chaos and fluid mixing. 
For the first 10 years of his career, Ottino's principal focus was on fluid mixing. He established the scientific basis of mixing and developed mathematical frameworks that showed flows can produce stretching and folding that creates chaotic motion and effective mixing. He has extended this foundational knowledge to applications including microfluidics, materials processing, and CO2 capture. Recently, Ottino turned his attention to the mixing and segregation of granular materials, exploiting the mathematics of piecewise isometries. His research has been featured in articles and on the covers of Nature, Science, Scientific American, the Proceedings of the National Academy of Sciences of the USA and other publications and has impacted fields such as complex systems, microfluidics, geophysical sciences, and nonlinear dynamics and chaos. He has directed more than 65 PhD theses and is the author of nearly 250 papers and three books. Awards and honors 43rd Annual Michelson Memorial Lecture (2024) Fellow, American Institute for Medical and Biological Engineering (2024) Member, National Academy of Sciences (2022) Founders Award, American Institute of Chemical Engineers (2018) Bernard M. Gordon Prize for Innovation in Engineering and Technology Education, National Academy of Engineering (2017) Fellow, American Institute of Chemical Engineers (2013) Fluid Dynamics Prize, American Physical Society (2008) "One Hundred Engineers of the Modern Era," American Institute of Chemical Engineers (2008) Member, American Academy of Arts and Sciences (2003) Ernest W. Thiele Award (AIChE, Chicago section) (2002) William H. Walker Award, American Institute of Chemical Engineers (2001) John S. Guggenheim Fellowship (2001) Member, National Academy of Engineering (1997) Fellow, American Association for the Advancement of Science (1996) Alpha Chi Sigma Award, American Institute of Chemical Engineers (1994) Fellow, American Physical Society, Division of Fluid Dynamics (1993) Presidential Young Investigator Award (NSF) (1984) Bibliography The Kinematics of Mixing: Stretching, Chaos, and Transport, Cambridge University Press, Cambridge, England 1989 (xiv, 364 pp., illus., + plates), reprinted 1990, 1997; 2004. Mathematical Foundations of Mixing: The Linked Twist Map as a Paradigm in Applications – Micro to Macro, Fluids to Solids, Cambridge University Press, Cambridge, England, 2006. Rob Sturman, Julio M. Ottino, and Stephen Wiggins. The Nexus: Augmented Thinking for a Complex World – The New Convergence of Art, Technology, and Science, MIT Press 2022. Julio Mario Ottino with Bruce Mau. External links www.juliomarioottino.com https://jmo-research.northwestern.edu References Living people Chemical engineers Year of birth missing (living people)
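The stretching-and-folding mechanism described in the research section above is often illustrated with simple chaotic maps. The following Python sketch uses the classic baker's map as a generic stand-in; it is not a reconstruction of Ottino's own flows or experiments, and the point cloud, iteration count and grid size are arbitrary choices for illustration.

# Illustrative sketch of stretching and folding with the baker's map on the unit square.
# This is a generic textbook example, not a model of any specific flow studied by Ottino.
import random

def bakers_map(x: float, y: float) -> tuple:
    """One iteration: stretch horizontally by 2, cut, and stack (fold)."""
    if x < 0.5:
        return 2.0 * x, y / 2.0
    return 2.0 * x - 1.0, y / 2.0 + 0.5

# Start with a small blob of points in one corner and iterate the map.
points = [(random.uniform(0.0, 0.1), random.uniform(0.0, 0.1)) for _ in range(5000)]
for _ in range(10):
    points = [bakers_map(x, y) for x, y in points]

# Crude measure of mixing: count how many cells of a 10x10 grid now contain points.
occupied = {(int(x * 10), int(y * 10)) for x, y in points}
print(f"occupied cells after 10 iterations: {len(occupied)} / 100")

After a handful of iterations the initially compact blob of points is spread over most of the unit square, which is the qualitative signature of chaotic mixing that repeated stretching and folding produces.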
Julio M. Ottino
Chemistry,Engineering
1,205
27,240,077
https://en.wikipedia.org/wiki/Hydracarbazine
Hydracarbazine is a pyridazine that has found use as an antihypertensive agent. References Antihypertensive agents Pyridazines Hydrazines Carboxamides
Hydracarbazine
Chemistry
44
1,195,987
https://en.wikipedia.org/wiki/Naltrexone
Naltrexone, sold under the brand name Revia among others, is a medication primarily used to manage alcohol use disorder or opioid use disorder by reducing cravings and the feelings of euphoria associated with substance use. It has also been found effective in the treatment of other addictions and may be used for them off-label. An opioid-dependent person should not receive naltrexone before detoxification. It is taken orally or by injection into a muscle. Effects begin within 30 minutes, though a decreased desire for opioids may take a few weeks to occur. Side effects may include trouble sleeping, anxiety, nausea, and headaches. In those still on opioids, opioid withdrawal may occur. Use is not recommended in people with liver failure. It is unclear if use is safe during pregnancy. Naltrexone is an opioid antagonist and works by blocking the effects of opioids, including both opioid drugs and opioids naturally produced in the brain. Naltrexone was first made in 1965 and was approved for medical use in the United States in 1984. Naltrexone, as naltrexone/bupropion (brand name Contrave), is also used to treat obesity. It is on the World Health Organization's List of Essential Medicines. In 2021, it was the 254th most commonly prescribed medication in the United States, with more than 1 million prescriptions. Medical uses Alcohol use disorder Naltrexone has been best studied as a treatment for alcoholism. It has been shown to decrease the quantity and frequency of ethanol consumption by reducing dopamine release in the brain after alcohol is consumed. It does not appear to change the percentage of people drinking. Its overall benefit has been described as "modest". Acamprosate may work better than naltrexone for eliminating alcohol abuse, while naltrexone may decrease the desire for alcohol to a greater extent. A method pioneered by scientist John David Sinclair (dubbed commercially the “Sinclair Method”) advocates for “pharmacological extinction” of problem drinking behavior by administering naltrexone alongside controlled alcohol consumption. In effect, he argues that naltrexone-induced opioid antagonism sufficiently disrupts the reflexive reward mechanisms inherent in drinking and, given enough repetition, will dissociate the positive associations formerly made with the consumption of alcohol. Proponents cite a success rate of 78% for the Sinclair Method. Opioid use disorder Long-acting injectable naltrexone (under the brand name Vivitrol) is an opioid antagonist, blocking the effects of heroin and other opioids, and decreases heroin use compared to a placebo. Unlike methadone and buprenorphine, it is not a controlled medication. It may decrease cravings for opioids after a number of weeks, and decreases the risk of overdose, at least during the period that naltrexone is still active, though concern about the risk of overdose for those stopping treatment remains. It is given once per month and has better compliance and effect for opioid use than the oral formulation. A drawback of injectable naltrexone is that it requires patients with opioid use disorder and current physiological dependence to be fully withdrawn before it is initiated, to avoid a precipitated opioid withdrawal that may be quite severe. In contrast, initiation of buprenorphine only requires delay of the first dose until the patient begins to manifest at least mild opioid withdrawal symptoms. 
Among patients able to successfully initiate injectable naltrexone, long-term remission rates were similar to those seen in clinical buprenorphine/naloxone administration. The consequences of relapse remain a concern when weighing the best course of treatment for opioid use disorder. Methadone and buprenorphine administration maintain greater drug tolerance, while naltrexone allows tolerance to fade, leading to higher rates of overdose in people who relapse and thus higher mortality. World Health Organization guidelines state that most patients should be advised to use opioid agonists (e.g., methadone or buprenorphine) rather than opioid antagonists like naltrexone, citing evidence of superiority in reducing mortality and retaining patients in care. A 2011 review found insufficient evidence to determine the effect of naltrexone taken orally on opioid dependence. While some do well with this formulation, it must be taken daily, and a person whose cravings become overwhelming can obtain opioid intoxication simply by skipping a dose. Due to this issue, the usefulness of oral naltrexone in opioid use disorder is limited by low retention in treatment. Naltrexone taken orally remains an ideal treatment for a small number of people with opioid use disorder, usually those with a stable social situation and motivation. With additional contingency management support, naltrexone may be effective in a broader population. Others Unlike varenicline (brand name Chantix), naltrexone is not useful for quitting smoking. Naltrexone has also been under investigation for reducing behavioral addictions such as gambling, NSSID (non-suicidal self-injury disorder), and kleptomania, as well as compulsive sexual behaviors in both offenders and non-offenders (e.g. compulsive porn viewing and masturbation). The results have been promising. In one study, the majority of sexual offenders reported a strong reduction in sexual urges and fantasies, which reverted to baseline once the medication was discontinued. Case reports have also shown cessation of gambling and other compulsive behaviors for as long as the medication was taken. When taken at much smaller doses, in a regimen known as low-dose naltrexone (LDN), naltrexone may reduce pain and help to address neurological symptoms. Some patients report that LDN helps reduce their symptoms of ME/CFS, multiple sclerosis (MS), fibromyalgia, or autoimmune diseases. Although its mechanism of action is unclear, some have speculated that it may act as an anti-inflammatory. LDN is also being considered as a potential treatment for long COVID. Available forms Naltrexone is available and most commonly used in the form of an oral tablet (50 mg). Vivitrol, a naltrexone formulation for depot injection containing 380 mg of the medication per vial, is also available. Additionally, naltrexone subcutaneous implants that are surgically implanted are available. While these are manufactured in Australia, they are not authorized for use within Australia, but only for export. By 2009, naltrexone implants had shown superior efficacy in the treatment of heroin dependence when compared to the oral form. Contraindications Naltrexone should not be used by persons with acute hepatitis or liver failure, or those with recent opioid use (typically within 7–10 days). Side effects The most common side effects reported with naltrexone are gastrointestinal complaints such as diarrhea and abdominal cramping. 
These adverse effects are analogous to the symptoms of opioid withdrawal, as the μ-opioid receptor blockade will increase gastrointestinal motility. The side effects of naltrexone by incidence are as follows: Greater than 10%: difficulty sleeping, anxiety, nervousness, abdominal pain/cramps, nausea and/or vomiting, low energy, joint/muscle pain, and headache. Less than 10%: loss of appetite, diarrhea, constipation, thirstiness, increased energy, feeling down, irritability, dizziness, skin rash, delayed ejaculation, erectile dysfunction, and chills. A variety of other adverse events have also been reported with less than 1% incidence. Opioid withdrawal Naltrexone should not be started until several (typically 7–10) days of abstinence from opioids have been achieved. This is due to the risk of acute opioid withdrawal if naltrexone is taken, as naltrexone will displace most opioids from their receptors. The time of abstinence may be shorter than 7 days, depending on the half-life of the specific opioid taken. Some physicians use a naloxone challenge to determine whether an individual has any opioids remaining. The challenge involves giving a test dose of naloxone and monitoring for opioid withdrawal. If withdrawal occurs, naltrexone should not be started. Adverse effects Whether naltrexone causes dysphoria, depression, anhedonia, or other aversive effects has been studied and reviewed. In early studies of normal and opioid-abstinent individuals, acute and short-term administration of naltrexone was reported to produce a variety of aversive effects including fatigue, loss of energy, sleepiness, mild dysphoria, depression, lightheadedness, faintness, confusion, nausea, gastrointestinal disturbances, sweating, and occasional derealization. However, these studies were small, often uncontrolled, and used subjective means of assessing side effects. Most subsequent longer-term studies of naltrexone for indications like alcohol or opioid dependence have not reported dysphoria or depression with naltrexone in most individuals. According to one source: Naltrexone itself produces little or no psychoactive effect in normal research volunteers even at high doses, which is remarkable given that the endogenous opioid system is important in normal hedonic functioning. Because endogenous opioids are involved in the brain reward system, it would be reasonable to hypothesize that naltrexone might produce anhedonic or dysphoric effects. Although some evidence from small, early trials suggested that patients with a history of opiate dependence might be susceptible to dysphoric effects in response to naltrexone (Crowley et al. 1985; Hollister et al. 1981), reports of such effects have been inconsistent. Most large clinical studies of recovering opioid-dependent individuals have not found naltrexone to have an adverse effect on mood (Greenstein et al. 1984; Malcolm et al. 1987; Miotto et al. 2002; Shufman et al. 1994). Some studies have actually found improvements in mood during the course of treatment with naltrexone (Miotto et al. 1997; Rawlins and Randall 1976). Based on available evidence, naltrexone seems to have minimal untoward effects in the aforementioned areas, at least with long-term therapy. It has been suggested that differences in findings between acute and longer-term studies of naltrexone treatment might be related to altered function in the opioid system with chronic administration of naltrexone. 
For example, marked upregulation of opioid receptors and hyper-sensitivity to opioids have been observed with naltrexone in preclinical studies. Another possibility is that the central opioid system may have low endogenous functionality in most individuals, becoming active only in the presence of exogenously administered opioid receptor agonists or with stimulation by endogenous opioids induced by pain or stress. A third possibility is that normal individuals may experience different side effects with naltrexone than people with addictive disease such as alcohol or opioid dependence, who may have altered opioid tone or responsiveness. It is notable in this regard that most studies of naltrexone have been in people with substance dependence. Naltrexone may also initially produce opioid withdrawal-like symptoms in a small subset of people not dependent on opioids: The side-effect profile [of naltrexone], at least on the recommended dose of 50 mg per day, is generally benign, although 5 to 10 percent of detoxified opioid addicts experience immediate, intolerable levels of withdrawal-like effects including agitation, anxiety, insomnia, light-headedness, sweating, dysphoria, and nausea. Most patients on naltrexone experience few or no symptoms after the first 1 to 2 weeks of treatment; for a substantial minority (20 to 30 percent) protracted discomfort is experienced. Persisting affective distress related to naltrexone may account for individuals taking the drug who drop out of treatment. Naltrexone has been reported to reduce feelings of social connection. The μ-opioid receptor has been found to play a major role in social reward in animals and the μ-opioid receptor knockout mouse is an animal model of autism. Studies on whether naltrexone can decrease the pleasurable effects of listening to music are conflicting. Besides humans, naltrexone has been found to produce aversive effects in rodents as assessed by conditioned place aversion. Liver damage Naltrexone has been reported to cause liver damage when given at doses higher than recommended. It carries an FDA boxed warning for this rare side effect. Due to these reports, some physicians may check liver function tests before starting naltrexone, and periodically thereafter. Concerns for liver toxicity initially arose from a study of nonaddicted obese patients receiving 300 mg of naltrexone. Subsequent studies have suggested limited or no toxicity in other patient populations and at typical recommended doses such as 50 to 100 mg/day. Overdose No toxic effects have been observed with naltrexone in doses of up to 800 mg/day in clinical studies. The largest reported overdose of naltrexone, which was 1,500 mg in a female patient and was equivalent to an entire bottle of medication (30 × 50 mg tablets), was uneventful. No deaths are known to have occurred with naltrexone overdose. Pharmacology Pharmacodynamics Opioid receptor blockade Naltrexone and its active metabolite 6β-naltrexol are competitive antagonists of the opioid receptors. Naltrexone is specifically an antagonist preferentially of the μ-opioid receptor (MOR), to a lesser extent of the κ-opioid receptor (KOR), and to a much lesser extent of the δ-opioid receptor (DOR). However, naltrexone is not actually a silent antagonist of these receptors but instead acts as a weak partial agonist, with Emax values of 14 to 29% at the MOR, 16 to 39% at the KOR, and 14 to 25% at the DOR in different studies. 
In accordance with its partial agonism, although naltrexone is described as a pure opioid receptor antagonist, it has shown some evidence of weak opioid effects in clinical and preclinical studies. By itself, naltrexone acts as an antagonist or weak partial agonist of the opioid receptors. In combination with agonists of the MOR such as morphine, however, naltrexone appears to become an inverse agonist of the MOR. Conversely, naltrexone remains a neutral antagonist (or weak partial agonist) of the KOR and DOR. In contrast to naltrexone, 6β-naltrexol is purely a neutral antagonist of the opioid receptors. The MOR inverse agonism of naltrexone, when it is co-present with MOR agonists, may in part underlie its ability to precipitate withdrawal in opioid-dependent individuals. This may be due to suppression of basal MOR signaling via inverse agonism. Occupancy of the opioid receptors in the brain by naltrexone has been studied using positron emission tomography (PET). Naltrexone at a dose of 50 mg/day has been found to occupy approximately 90 to 95% of brain MORs and 20 to 35% of brain DORs. Naltrexone at a dose of 100 mg/day has been found to achieve 87% and 92% brain occupancy of the KOR in different studies. Per simulation, a lower dose of naltrexone of 25 mg/day might be expected to achieve around 60% brain occupancy of the KOR but still close to 90% occupancy of the MOR. In a study of the duration of MOR blockade with naltrexone, a single 50 mg dose of the drug showed 91% blockade of brain [11C]carfentanil (a selective MOR ligand) binding at 48 hours (2 days), 80% blockade at 72 hours (3 days), 46% blockade at 120 hours (5 days), and 30% blockade at 168 hours (7 days). The half-time of brain MOR blockade by naltrexone in this study was 72 to 108 hours (3.0 to 4.5 days). Based on these findings, doses of naltrexone of even less than 50 mg/day would be expected to achieve virtually complete brain MOR occupancy. Blockade of brain MORs with naltrexone is much longer-lasting than with other opioid antagonists like naloxone (half-time of ~1.7 hours intranasally) or nalmefene (half-time of ~29 hours). The half-life of occupancy of the brain MOR and the duration of clinical effect of naltrexone are much longer than suggested by its plasma elimination half-life. A single 50 mg oral dose of naltrexone has been found to block brain MORs and opioid effects for at least 48 to 72 hours. The half-time of brain MOR blockade by naltrexone (72–108 hours) is much longer than the fast plasma clearance component of naltrexone and 6β-naltrexol (~4–12 hours) but was reported to correspond well to the longer terminal phase of plasma naltrexone clearance (96 hours). As an alternative possibility, the prolonged brain MOR occupancy by opioid antagonists like naltrexone and nalmefene may be due to slow dissociation from MORs consequent to their very high MOR affinity (<1.0 nM). Naltrexone blocks the effects of MOR agonists like morphine, heroin, and hydromorphone in humans via its MOR antagonism. Following a single 100 mg dose of naltrexone, the subjective and objective effects of heroin were blocked by 90% at 24 hours, with blockade then decreasing up to 72 hours. Similarly, 20 to 200 mg naltrexone dose-dependently antagonized the effects of heroin for up to 72 hours. Naltrexone also blocks the effects of KOR agonists like salvinorin A, pentazocine, and butorphanol in humans via its KOR antagonism. 
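As a rough consistency check on the occupancy figures quoted above, the decline can be approximated by a single-exponential model anchored to the 48-hour measurement. The Python sketch below assumes a half-time of 90 hours, the midpoint of the reported 72 to 108 hour range; it is an illustrative simplification, not a pharmacokinetic model from the cited study.

# Single-exponential approximation of brain MOR occupancy decline after a 50 mg oral dose.
# Anchored to the reported 91% occupancy at 48 h; the 90 h half-time is an assumed midpoint
# of the 72-108 h range quoted above. Illustrative only.

HALF_TIME_H = 90.0      # assumed midpoint of the reported 72-108 h half-time
REF_TIME_H = 48.0       # reference measurement time (hours)
REF_OCCUPANCY = 0.91    # reported occupancy at 48 h

def occupancy(t_hours: float) -> float:
    """Occupancy at time t assuming first-order decline from the 48 h reference point."""
    return REF_OCCUPANCY * 0.5 ** ((t_hours - REF_TIME_H) / HALF_TIME_H)

for t, reported in [(48, 0.91), (72, 0.80), (120, 0.46), (168, 0.30)]:
    print(f"t = {t:3d} h: model {occupancy(t):.0%}, reported {reported:.0%}")

The agreement is only approximate (within several percentage points), which is expected for a single-exponential simplification of the reported measurements.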
In addition to opioids, naltrexone has been found to block or reduce the rewarding and other effects of other euphoriant drugs including alcohol, nicotine, and amphetamines. The opioid receptors are involved in neuroendocrine regulation. MOR agonists produce increases in levels of prolactin and decreases in levels of luteinizing hormone (LH) and testosterone. Doses of naltrexone of 25 to 150 mg/day have been found to produce significant increases in levels of β-endorphin, cortisol, and LH, equivocal changes in levels of prolactin and testosterone, and no significant changes in levels of adrenocorticotrophic hormone (ACTH) or follicle-stimulating hormone (FSH). Naltrexone influences the hypothalamic–pituitary–adrenal axis (HPA axis) probably through interference with opioid receptor signaling by endorphins. Blockade of MORs is thought to be the mechanism of action of naltrexone in the management of opioid dependence—it reversibly blocks or attenuates the effects of opioids. It is also thought to be involved in the effectiveness of naltrexone in alcohol dependence by reducing the euphoric effects of alcohol. The role of KOR modulation by naltrexone in its effectiveness for alcohol dependence is unclear but this action may also be involved based on theory and animal studies. Other activities In addition to the opioid receptors, naltrexone binds to and acts as an antagonist of the opioid growth factor receptor (OGFR) and toll-like receptor 4 (TLR4) and interacts with high- and low-affinity binding sites in filamin A (FLNA). It is said that very low doses of naltrexone (<0.001–1 mg/day) interact with FLNA, low doses (1 to 5 mg/day) produce TLR4 antagonism, and standard clinical doses (50 to 100 mg/day) exert opioid receptor and OGFR antagonism. The interactions of naltrexone with FLNA and TLR4 are claimed to be involved in the therapeutic effects of low-dose naltrexone. Pharmacokinetics Absorption The absorption of naltrexone with oral administration is rapid and nearly complete (96%). The bioavailability of naltrexone with oral administration is 5 to 60% due to extensive first-pass metabolism. Peak concentrations of naltrexone are 19 to 44 μg/L after a single 100 mg oral dose and time to peak concentrations of naltrexone and 6β-naltrexol (metabolite) is within 1 hour. Linear increases in circulating naltrexone and 6β-naltrexol concentrations occur over an oral dose range of 50 to 200 mg. Naltrexone does not appear to be accumulated with repeated once-daily oral administration and there is no change in time to peak concentrations with repeated administration. Distribution The plasma protein binding of naltrexone is about 20% over a naltrexone concentration range of 0.1 to 500 μg/L. Its apparent volume of distribution at 100 mg orally is 16.1 L/kg after a single dose and 14.2 L/kg with repeated doses. Metabolism Naltrexone is metabolized in the liver mainly by dihydrodiol dehydrogenases into 6β-naltrexol (6β-hydroxynaltrexone). Levels of 6β-naltrexol are 10- to 30-fold higher than those of naltrexone with oral administration due to extensive first-pass metabolism. Conversely, 6β-naltrexol exposure is only about 2-fold higher than that of naltrexone with intramuscular injection of naltrexone in microspheres (brand name Vivitrol). 6β-Naltrexol is an opioid receptor antagonist similarly to naltrexone and shows a comparable binding profile to the opioid receptors. However, 6β-naltrexol is peripherally selective and crosses into the brain much less readily than does naltrexone. 
In any case, 6β-naltrexol does still show some central activity and may contribute significantly to the central actions of oral naltrexone. Other metabolites of naltrexone include 2-hydroxy-3-methoxy-6β-naltrexol and 2-hydroxy-3-methoxynaltrexone. Following their formation, the metabolites of naltrexone are further metabolized by conjugation with glucuronic acid to form glucuronides. Naltrexone is not metabolized by the cytochrome P450 system and has low potential for drug interactions. Elimination The elimination of naltrexone is biexponential and rapid over the first 24 hours followed by a third extremely slow decline after 24 hours. The fast elimination half-lives of naltrexone and its metabolite 6β-naltrexol are about 4 hours and 13 hours, respectively. In Contrave oral tablets, which also contain bupropion and are described as extended-release, the half-life of naltrexone is 5 hours. The slow terminal-phase elimination half-life of naltrexone is approximately 96 hours. As microspheres of naltrexone by intramuscular injection (Vivitrol), the elimination half-lives of naltrexone and 6β-naltrexol are both 5 to 10 days. Whereas oral naltrexone is administered daily, naltrexone in microspheres by intramuscular injection is suitable for administration once every 4 weeks or once per month. Naltrexone and its metabolites are excreted in urine. Pharmacogenetics Tentative evidence suggests that family history and presence of the Asn40Asp polymorphism predict naltrexone being effective. Chemistry Naltrexone, also known as N-cyclopropylmethylnoroxymorphone, is a derivative of oxymorphone (14-hydroxydihydromorphinone). It is specifically the derivative of oxymorphone in which the tertiary amine methyl substituent is replaced with methylcyclopropane. Analogues The closely related medication, methylnaltrexone (N-methylnaltrexone), is used to treat opioid-induced constipation but does not treat addiction as it does not cross the blood–brain barrier. Nalmefene (6-desoxy-6-methylenenaltrexone) is similar to naltrexone and is used for the same purposes as naltrexone. Naltrexone should not be confused with naloxone (N-allylnoroxymorphone), which is used in emergency cases of opioid overdose. Other opioid antagonists related to naltrexone include 6β-naltrexol (6β-hydroxynaltrexone), samidorphan (3-carboxamido-4-hydroxynaltrexone), β-funaltrexamine (naltrexone fumarate methyl ester), nalodeine (N-allylnorcodeine), nalorphine (N-allylnormorphine), and nalbuphine (N-cyclobutylmethyl-14-hydroxydihydronormorphine). History Naltrexone was first synthesized in 1963 by Metossian at Endo Laboratories, a small pharmaceutical company in New York City. It was characterized by Blumberg, Dayton, and Wolf in 1965 and was found to be an orally active, long-acting, and very potent opioid antagonist. The drug showed advantages over earlier opioid antagonists such as cyclazocine, nalorphine, and naloxone, including its oral activity, a long duration of action allowing for once-daily administration, and a lack of dysphoria, and was selected for further development. It was patented by Endo Laboratories in 1967 under the developmental code name EN-1639A and Endo Laboratories was acquired by DuPont in 1969. Clinical trials for opioid dependence began in 1973, and a developmental collaboration of DuPont with the National Institute on Drug Abuse for this indication started the next year in 1974. 
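The biexponential elimination described above can be written as the sum of a fast and a slow exponential term. The sketch below uses the 4-hour fast and 96-hour terminal half-lives quoted for oral naltrexone; the 30 ug/L peak and the 90/10 split between the two phases are assumed values chosen only to make the shape of the curve concrete, not published parameters.

# Illustrative biexponential plasma decay for oral naltrexone.
# Half-lives (4 h fast, 96 h terminal) are taken from the text above;
# the 90%/10% amplitude split and the 30 ug/L starting value are assumptions.
import math

PEAK_UG_L = 30.0          # assumed peak, within the 19-44 ug/L range quoted for a 100 mg dose
FAST_HALF_LIFE_H = 4.0
SLOW_HALF_LIFE_H = 96.0
FAST_FRACTION = 0.90      # assumed share of the peak cleared in the fast phase

def concentration(t_hours: float) -> float:
    """C(t) = A*exp(-k_fast*t) + B*exp(-k_slow*t), with rate constants from the half-lives."""
    k_fast = math.log(2) / FAST_HALF_LIFE_H
    k_slow = math.log(2) / SLOW_HALF_LIFE_H
    a = PEAK_UG_L * FAST_FRACTION
    b = PEAK_UG_L * (1.0 - FAST_FRACTION)
    return a * math.exp(-k_fast * t_hours) + b * math.exp(-k_slow * t_hours)

for t in (0, 4, 12, 24, 48, 96):
    print(f"t = {t:3d} h: ~{concentration(t):5.1f} ug/L")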
The drug was approved by the FDA for the oral treatment of opioid dependence in 1984, with the brand name Trexan, and for the oral treatment of alcohol dependence in 1995, when the brand name was changed by DuPont to Revia. A depot formulation for intramuscular injection was approved by the FDA under the brand name Vivitrol for alcohol dependence in 2006 and opioid dependence in 2010. Society and culture Generic names Naltrexone is the generic name of the drug, while naltrexone hydrochloride is the name of the hydrochloride salt form in which it is typically supplied. Brand names Naltrexone is or has been sold under a variety of brand names, including Adepend, Antaxone, Celupan, Depade, Destoxican, Nalorex, Narcoral, Nemexin, Nodict, Revia, Trexan, Vivitrex, and Vivitrol. It is also marketed in combination with bupropion (naltrexone/bupropion) as Contrave, and was marketed with morphine (morphine/naltrexone) as Embeda. A combination of naltrexone with buprenorphine (buprenorphine/naltrexone) has been developed, but has not been marketed. Controversies The FDA authorized use of injectable naltrexone (Vivitrol) for opioid addiction on the basis of a single study led by Evgeny Krupitsky at Bekhterev Research Psychoneurological Institute, St Petersburg State Pavlov Medical University, St Petersburg, Russia, a country where opioid agonists such as methadone and buprenorphine are not available. The study was a "double-blind, placebo-controlled, randomized", 24-week trial running "from July 3, 2008, through October 5, 2009" with "250 patients with opioid dependence disorder" at "13 clinical sites in Russia" on the use of injectable naltrexone (XR-NTX) for opioid dependence. The study was funded by Alkermes, the Boston-based biotech firm that produces and markets naltrexone in the United States. Critics charged that the study violated ethical guidelines since it compared the formulation of naltrexone not to the best available, evidence-based treatment (methadone or buprenorphine), but to a placebo. Further, the trial did not follow patients who dropped out to evaluate subsequent risk of fatal overdose, a major health concern. Subsequent trials in Norway and the US did compare injectable naltrexone to buprenorphine and found them to be similar in outcomes for patients willing to undergo the withdrawal symptoms required before naltrexone administration. Nearly 30% of patients in the US trial did not complete induction. In real-world settings, a review of more than 40,000 patient records found that while methadone and buprenorphine reduced the risk of fatal overdose, naltrexone administration showed no greater effect on overdose or subsequent emergency care than counseling alone. Despite these findings, naltrexone's manufacturer and some health authorities have promoted the medicine as superior to methadone and buprenorphine since it is not an opioid and does not induce dependence. The manufacturer has also marketed directly to law enforcement and criminal justice officials, spending millions of dollars on lobbying and providing thousands of free doses to jails and prisons. The technique has been successful, with the criminal justice system in 43 states now incorporating long-acting naltrexone. Many do this through Vivitrol courts that offer only this option, leading some to characterize this as "an offer that cannot be refused." The company's marketing techniques have led to a Congressional investigation and a warning from the FDA about failure to adequately state the risks of fatal overdose to patients receiving the medicine. 
In May 2017, United States Secretary of Health and Human Services Tom Price praised [Vivitrol] as the future of opioid addiction treatment after visiting the company's plant in Ohio. His remarks set off sharp criticism, with almost 700 experts in the field of substance use submitting a letter to Price cautioning him about Vivitrol's "marketing tactics" and warning him that his comments "ignore widely accepted science". The experts pointed out that Vivitrol's competitors, buprenorphine and methadone, are "less expensive", "more widely used", and have been "rigorously studied". Price had claimed that buprenorphine and methadone were "simply substitute[s]" for "illicit drugs", whereas according to the letter, "the substantial body of research evidence supporting these treatments is summarized in guidance from within your own agency, including the Substance Abuse and Mental Health Services Administration, the US Surgeon General, the National Institute on Drug Abuse, and the Centers for Disease Control and Prevention. Buprenorphine and methadone have been demonstrated to be highly effective in managing the core symptoms of opioid use disorder, reducing the risk of relapse and fatal overdose, and encouraging long-term recovery." Film One Little Pill is a 2014 documentary film about the use of naltrexone to treat alcohol use disorder. Four Good Days is a 2020 film about the four days a woman addicted to drugs must stay sober in order to receive a shot of naltrexone at a detox facility. Research Depersonalization Naltrexone is sometimes used in the treatment of dissociative symptoms such as depersonalization and derealization. Some studies suggest it might help. Other small, preliminary studies have also shown benefit. Blockade of the KOR by naltrexone and naloxone is thought to be responsible for their effectiveness in ameliorating depersonalization and derealization. Since these drugs are less efficacious in blocking the KOR relative to the MOR, higher doses than typically used seem to be necessary. Low-dose naltrexone Naltrexone has been used off-label at low doses for diseases not related to chemical dependency or intoxication, such as multiple sclerosis. Evidence for recommending low-dose naltrexone is lacking. This treatment has received attention on the Internet. In 2022, four studies (in a few hundred patients) were conducted on naltrexone for long COVID. Self-injury One study suggests that self-injurious behaviors present in persons with developmental disabilities (including autism) can sometimes be remedied with naltrexone. In these cases, the self-injury is believed to be done to release beta-endorphin, which binds to the same receptors as heroin and morphine. If the "rush" generated by self-injury is removed, the behavior may stop. Behavioral disorders Some indications exist that naltrexone might be beneficial in the treatment of impulse-control disorders such as kleptomania, compulsive gambling, or trichotillomania (compulsive hair pulling), but evidence of its effectiveness for gambling is conflicting. A 2008 case study reported successful use of naltrexone in suppressing and treating an internet pornography addiction. Interferon alpha Naltrexone is effective in suppressing the cytokine-mediated adverse neuropsychiatric effects of interferon alpha therapy. Critical addiction studies Some historians and sociologists have suggested that the meanings and uses attributed to anti-craving medicine, such as naltrexone, are context-dependent. 
Studies have suggested the use of naltrexone in drug courts or healthcare rehabs is a form of "post-social control," or "post-disciplinary control," whereby control strategies for managing offenders and addicts shift from imprisonment and supervision toward more direct control over biological processes. Sexual addiction Small studies have shown a reduction of sexual addiction and problematic sexual behaviours from naltrexone. References Alcohol and health Cyclopropyl compounds Delta-opioid receptor antagonists 4,5-Epoxymorphinans Ethers GABAA receptor negative allosteric modulators Hallucinogen antidotes Hepatotoxins Kappa-opioid receptor agonists Kappa-opioid receptor antagonists Ketones Mu-opioid receptor antagonists Hydroxyarenes Wikipedia medicine articles ready to translate World Health Organization essential medicines
Naltrexone
Chemistry
7,687
17,669,599
https://en.wikipedia.org/wiki/Sociology%20of%20the%20Internet
The sociology of the Internet (or the social psychology of the internet) involves the application of sociological or social psychological theory and method to the Internet as a source of information and communication. The overlapping field of digital sociology focuses on understanding the use of digital media as part of everyday life, and how these various technologies contribute to patterns of human behavior, social relationships, and concepts of the self. Sociologists are concerned with the social implications of the technology; new social networks, virtual communities and ways of interaction that have arisen, as well as issues related to cyber crime. The Internet—the newest in a series of major information breakthroughs—is of interest for sociologists in various ways: as a tool for research, for example, in using online questionnaires instead of paper ones, as a discussion platform, and as a research topic. The sociology of the Internet in the stricter sense concerns the analysis of online communities (e.g. as found in newsgroups), virtual communities and virtual worlds, organizational change catalyzed through new media such as the Internet, and social change at-large in the transformation from industrial to informational society (or to information society). Online communities can be studied statistically through network analysis and at the same time interpreted qualitatively, such as through virtual ethnography. Social change can be studied through statistical demographics or through the interpretation of changing messages and symbols in online media studies. Emergence of the discipline The Internet is a relatively new phenomenon. As Robert Darnton wrote, it is a revolutionary change that "took place yesterday, or the day before, depending on how you measure it." The Internet developed from the ARPANET, dating back to 1969; as a term it was coined in 1974. The World Wide Web as we know it was shaped in the mid-1990s, when graphical interfaces and services like email became popular and reached wider (non-scientific and non-military) audiences and commerce. Internet Explorer was first released in 1995; Netscape a year earlier. Google was founded in 1998. Wikipedia was founded in 2001. Facebook, MySpace, and YouTube followed in the mid-2000s. Web 2.0 is still emerging. The amount of information available on the net and the number of Internet users worldwide have continued to grow rapidly. The term 'digital sociology' is now increasingly used to denote new directions in sociological research into digital technologies since Web 2.0. Digital sociology The first scholarly article to have the term digital sociology in the title appeared in 2009. The author reflects on the ways in which digital technologies may influence both sociological research and teaching. In 2010, Richard Neal described 'digital sociology' in terms of bridging the growing academic focus with the increasing interest from global business. It was not until 2013 that the first purely academic book tackling the subject of 'digital sociology' was published. The first sole-authored book entitled Digital Sociology was published in 2015, and the first academic conference on "Digital Sociology" was held in New York, NY in the same year. 
Although the term digital sociology has not yet fully entered the cultural lexicon, sociologists have engaged in research related to the Internet since its inception. These sociologists have addressed many social issues relating to online communities, cyberspace and cyber-identities. This and similar research has attracted many different names, such as cyber-sociology, the sociology of the internet, the sociology of online communities, the sociology of social media, or the sociology of cyberculture. Digital sociology differs from these terms in that it is wider in its scope, addressing not only the Internet or cyberculture but also the impact of the other digital media and devices that have emerged since the first decade of the twenty-first century. Since the Internet has become more pervasive and linked with everyday life, references to the 'cyber' in the social sciences seem now to have been replaced by the 'digital'. 'Digital sociology' is related to other sub-disciplines such as digital humanities and digital anthropology. It is beginning to supersede and incorporate the other titles above, as well as including the newest Web 2.0 digital technologies into its purview, such as wearable technology, augmented reality, smart objects, the Internet of Things and big data. Research trends According to DiMaggio et al. (1999), research tends to focus on the Internet's implications in five domains: inequality (the issues of digital divide) public and social capital (the issues of time displacement) political participation (the issues of public sphere, deliberative democracy and civil society) organizations and other economic institutions participatory culture and cultural diversity Early on, there were predictions that the Internet would change everything (or nothing); over time, however, a consensus emerged that the Internet, at least in the current phase of development, complements rather than displaces previously implemented media. This has meant a rethinking of the 1990s ideas of "convergence of new and old media". Further, the Internet offers a rare opportunity to study changes caused by the newly emerged—and likely, still evolving—information and communication technology (ICT). Social impact The Internet has created social network services, forums of social interaction and social relations, such as Facebook, MySpace, Meetup, and CouchSurfing, which facilitate both online and offline interaction. Though virtual communities were once thought to be composed of strictly virtual social ties, researchers find that even social ties formed in virtual spaces are often maintained both online and offline. There are ongoing debates about the impact of the Internet on strong and weak ties, whether the Internet is creating more or less social capital, the Internet's role in trends towards social isolation, and whether it creates a more or less diverse social environment. It is often said the Internet is a new frontier, and there is a line of argument to the effect that social interaction, cooperation and conflict among users resemble the anarchistic and violent American frontier of the early 19th century. In March 2014, researchers from the Benedictine University at Mesa in Arizona studied how online interactions affect face-to-face meetings. The study, titled "Face to Face Versus Facebook: Does Exposure to Social Networking Web Sites Augment or Attenuate Physiological Arousal Among the Socially Anxious," was published in Cyberpsychology, Behavior, and Social Networking. 
They monitored 26 female students with electrodes to measure physiological indicators of social anxiety. Prior to meeting people, the students were shown pictures of the subject they were expected to meet. Researchers found that meeting someone face-to-face after looking at their photos increases arousal, which the study linked to an increase in social anxiety. These findings confirm previous studies that found that socially anxious people prefer online interactions. The study also recognized that the stimulated arousal can be associated with positive emotions and could lead to positive feelings. Recent research has taken the Internet of Things within its purview, as global networks of interconnected everyday objects are said to be the next step in technological advancement. Certainly, global space- and earth-based networks are expanding coverage of the IoT at a fast pace. This has a wide variety of consequences, with current applications in the health, agriculture, traffic and retail fields. Companies such as Samsung and Sigfox have invested heavily in these networks, and their social impact will have to be measured accordingly, with some sociologists suggesting the formation of socio-technical networks of humans and technical systems. Issues of privacy, right to information, legislation and content creation will come under public scrutiny in light of these technological changes. Digital Sociology and Data Emotions Digital sociology is connected with data and data emotions. Data emotions arise when people use digital technologies that can affect their decision-making or emotions. Social media platforms collect user data while also affecting users' emotional state of mind, which can produce either solidarity or social engagement among users. Social media platforms such as Instagram and Twitter can evoke emotions of love, affection, and empathy. Viral challenges such as the 2014 Ice Bucket Challenge and viral memes have brought people together through mass participation, displaying cultural knowledge and understanding of self. Mass participation in viral events prompts users to spread information (data) to one another, affecting their psychological state of mind and emotions. The link between digital sociology and data emotions is formed through the integration of technological devices within everyday life and activities. The impact on children Researchers have investigated the use of technology (as opposed to the Internet) by children and how it can be used excessively, where it can cause medical and psychological issues. Children's use of technological devices can become addictive and can lead them to experience negative effects such as depression, attention problems, loneliness, anxiety, aggression and solitude. Obesity is another possible result of children's technology use, as children may prefer to use their devices rather than engage in any form of physical activity. Parents can take control and implement restrictions on their children's use of technological devices, which will decrease the negative effects technology can have when it is prioritized and help limit excessive use. Children can also use technology to enhance their learning skills, for example by using online programs to improve the way they learn how to read or do math. The resources technology provides may enhance children's skills, but children should be cautious about what they get themselves into, since cyberbullying may occur. 
Cyberbullying can have academic and psychological effects on children who are targeted by people who bully them through the Internet. When technology is introduced to children, they are not forced to accept it; instead, children are permitted to have input into whether or not they decide to use their technological devices. The routines of children have changed due to the increasing popularity of internet-connected devices, with social policy researcher Janet Heaton concluding that, "while the children's health and quality of life benefited from the technology, the time demands of the care routines and lack of compatibility with other social and institutional timeframes had some negative implications". Children's frequent use of technology commonly leads to decreased time available to pursue meaningful friendships, hobbies and potential career options. While technology can have negative impacts on the lives of children, it can also be used as a valuable learning tool that can encourage cognitive, linguistic and social development. In a 2010 study by the University of New Hampshire, children who used technological devices exhibited greater improvements in problem-solving, intelligence, language skills and structural knowledge in comparison to those children who did not incorporate the use of technology in their learning. A 1999 paper concluded that "studies did find improvements in student scores on tests closely related to material covered in computer-assisted instructional packages", which demonstrates how technology can have positive influences on children by improving their learning capabilities. Problems have also arisen between children and their parents when parents limit what children can use their technological devices for, specifically what they can and cannot watch on their devices, which can leave children frustrated. Political organization and censorship The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States became famous for its ability to generate donations via the Internet, and the 2008 campaign of Barack Obama became even more so. Increasingly, social movements and other organizations use the Internet to carry out both traditional activism and new forms of Internet activism. Some governments are also getting online. Some countries, such as Cuba, Iran, North Korea, Myanmar, the People's Republic of China, and Saudi Arabia, use filtering and censoring software to restrict what people in their countries can access on the Internet. In the United Kingdom, authorities also use software to locate and arrest individuals they perceive as a threat. Other countries, including the United States, have enacted laws making the possession or distribution of certain material, such as child pornography, illegal, but do not use filtering software. In some countries, Internet service providers have agreed to restrict access to sites listed by police. Economics While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet, such as maps and location-aware services, may serve to reinforce economic inequality and the digital divide. Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality. 
Philanthropy The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites such as Donors Choose and Global Giving now allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. Kiva raises funds for local intermediary microfinance organizations, which post stories and updates on behalf of the borrowers. Lenders can contribute as little as $25 to loans of their choice, and receive their money back as borrowers repay. Kiva falls short of being a pure peer-to-peer charity, in that loans are disbursed before being funded by lenders, and borrowers do not communicate with lenders themselves. However, the recent spread of cheap Internet access in developing countries has made genuine peer-to-peer connections increasingly feasible. In 2009, the US-based nonprofit Zidisha tapped into this trend to offer the first peer-to-peer microlending platform to link lenders and borrowers across international borders without local intermediaries. Inspired by interactive websites such as Facebook and eBay, Zidisha's microlending platform facilitates direct dialogue between lenders and borrowers and provides a performance rating system for borrowers. Web users worldwide can fund loans for as little as a dollar. Leisure The Internet has been a major source of leisure since before the World Wide Web, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much of the main traffic. Today, many Internet forums have sections devoted to games and funny videos; short cartoons in the form of Flash movies are also popular. Over 6 million people use blogs or message boards as a means of communication and for the sharing of ideas. The pornography and gambling industries have both taken full advantage of the World Wide Web, and often provide a significant source of advertising revenue for other websites. Although governments have made attempts to censor Internet pornography, Internet service providers have told governments that these plans are not feasible. Also, many governments have attempted to put restrictions on both industries' use of the Internet; this has generally failed to stop their widespread popularity. One area of leisure on the Internet is online gaming. This form of leisure creates communities, bringing people of all ages and origins to enjoy the fast-paced world of multiplayer games. These range from MMORPGs to first-person shooters, and from role-playing video games to online gambling. This has revolutionized the way many people interact and spend their free time on the Internet. While online gaming has been around since the 1970s, modern modes of online gaming began with services such as GameSpy and MPlayer, to which players of games would typically subscribe. Non-subscribers were limited to certain types of gameplay or certain games. Many use the Internet to access and download music, movies and other works for their enjoyment and relaxation. As discussed above, there are paid and unpaid sources for all of these, using centralized servers and distributed peer-to-peer technologies. 
Discretion is needed as some of these sources take more care over the original artists' rights and over copyright laws than others. Many use the World Wide Web to access news, weather and sports reports, to plan and book holidays and to find out more about their random ideas and casual interests. People use chat, messaging and e-mail to make and stay in touch with friends worldwide, sometimes in the same way as some previously had pen pals. Social networking websites like MySpace, Facebook and many others like them also put and keep people in contact for their enjoyment. The Internet has seen a growing number of Web desktops, where users can access their files, folders, and settings via the Internet. Cyberslacking has become a serious drain on corporate resources; the average UK employee spends 57 minutes a day surfing the Web at work, according to a study by Peninsula Business Services. Subfields Four aspects of digital sociology have been identified by Lupton (2012): Professional digital practice: using digital media tools for professional purposes: to build networks, construct an e-profile, publicise and share research and instruct students. Sociological analyses of digital use: researching the ways in which people's use of digital media configures their sense of selves, their embodiment and their social relations. Digital data analysis: using digital data for social research, either quantitative or qualitative. Critical digital sociology: undertaking reflexive and critical analysis of digital media informed by social and cultural theory. Professional digital practice Although they have been reluctant to use social and other digital media for professional academics purposes, sociologists are slowly beginning to adopt them for teaching and research. An increasing number of sociological blogs are beginning to appear and more sociologists are joining Twitter, for example. Some are writing about the best ways for sociologists to employ social media as part of academic practice and the importance of self-archiving and making sociological research open access, as well as writing for Wikipedia. Sociological analyses of digital media use Digital sociologists have begun to write about the use of wearable technologies as part of quantifying the body and the social dimensions of big data and the algorithms that are used to interpret these data. Others have directed attention at the role of digital technologies as part of the surveillance of people's activities, via such technologies as CCTV cameras and customer loyalty schemes as well as the mass surveillance of the Internet that is being conducted by secret services such as the NSA. The 'digital divide', or the differences in access to digital technologies experienced by certain social groups such as the socioeconomically disadvantaged, those of lower education levels, women and the elderly, has preoccupied many researchers in the social scientific study of digital media. However several sociologists have pointed out that while it is important to acknowledge and identify the structural inequalities inherent in differentials in digital technology use, this concept is rather simplistic and fails to incorporate the complexities of access to and knowledge about digital technologies. There is a growing interest in the ways in which social media contributes to the development of intimate relationships and concepts of the self. 
One of the best-known sociologists who has written about social relationships, selfhood and digital technologies is Sherry Turkle. In her most recent book, Turkle addresses the topic of social media. She argues that relationships conducted via these platforms are not as authentic as those encounters that take place "in real life". Visual media allows the viewer to be a more passive consumer of information. Viewers are more likely to develop online personas that differ from their personas in the real world. This contrast between the digital world (or 'cyberspace') and the 'real world', however, has been critiqued as 'digital dualism', a concept similar to the 'aura of the digital'. Other sociologists have argued that relationships conducted through digital media are inextricably part of the 'real world'. Augmented reality is an interactive experience in which reality is altered in some way by the use of digital media but not replaced. The use of social media for social activism has also provided a focus for digital sociology. For example, numerous sociological articles, and at least one book, have appeared on the use of such social media platforms as Twitter, YouTube and Facebook as a means of conveying messages about activist causes and organizing political movements. Research has also been done on the use of technology by racial minorities and other groups. These "digital practice" studies explore the ways in which the practices that groups adopt when using new technologies mitigate or reproduce social inequalities. Digital data analysis Digital sociologists use varied approaches to investigating people's use of digital media, both qualitative and quantitative. These include ethnographic research, interviews and surveys with users of technologies, and also the analysis of the data produced from people's interactions with technologies: for example, their posts on social media platforms such as Facebook, Reddit, 4chan, Tumblr and Twitter, or their consuming habits on online shopping platforms. Techniques such as data scraping, social network analysis, time series analysis and textual analysis are employed to analyze both the data produced as a byproduct of users' interactions with digital media and the data that they create themselves. As an example of content analysis, in 2008 Yukihiko Yoshida conducted a study called "Leni Riefenstahl and German expressionism: research in Visual Cultural Studies using the trans-disciplinary semantic spaces of specialized dictionaries." The study took databases of images tagged with connotative and denotative keywords (a search engine) and found that Riefenstahl's imagery had the same qualities as imagery tagged "degenerate" in the title of the exhibition "Degenerate Art", held in Germany in 1937. The emergence of social media has provided sociologists with a new way of studying social phenomena. Social media networks, such as Facebook and Twitter, are increasingly being mined for research. For example, Twitter data is easily available to researchers through the Twitter API. Twitter provides researchers with demographic data, time and location data, and connections between users. From these data, researchers gain insight into user moods and how they communicate with one another. Furthermore, social networks can be graphed and visualized. Using large data sets, like those obtained from Twitter, can be challenging. First of all, researchers have to figure out how to store this data effectively in a database.
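To make the storage step concrete, the sketch below shows one way a researcher might keep a collection of social media posts in a non-relational document database of the kind discussed next. It is a minimal illustration only, not a recommended research pipeline: the connection string, database and collection names, field names and example documents are all hypothetical, and it assumes the pymongo client library and a running MongoDB server.

```python
from pymongo import MongoClient

# Hypothetical connection details; adjust for a real MongoDB deployment.
client = MongoClient("mongodb://localhost:27017")
posts = client["social_research"]["posts"]

# Example documents; the field names are invented for illustration.
posts.insert_many([
    {"user": "user_a", "text": "Debate tonight #election",
     "hashtags": ["#election"], "created_at": "2013-04-09T19:02:00Z"},
    {"user": "user_b", "text": "Watching the debate",
     "hashtags": [], "created_at": "2013-04-09T19:05:00Z"},
])

# An index on the timestamp supports later time-series queries.
posts.create_index("created_at")

# Count how many stored posts mention a given hashtag.
n = posts.count_documents({"hashtags": "#election"})
print(f"posts tagged #election: {n}")
```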
Several tools commonly used in Big Data analytics are at their disposal. Since large data sets can be unwieldy and contain numerous types of data (i.e. photos, videos, GIF images), researchers have the option of storing their data in non-relational databases, such as MongoDB and Hadoop. Processing and querying this data is an additional challenge. However, there are several options available to researchers. One common option is to use a querying language, such as Hive, in conjunction with Hadoop to analyze large data sets. The Internet and social media have allowed sociologists to study how controversial topics are discussed over time—otherwise known as Issue Mapping. Sociologists can search social networking sites (i.e. Facebook or Twitter) for posts related to a hotly-debated topic, then parse through and analyze the text. Sociologists can then use a number of easily accessible tools to visualize this data, such as MentionMapp or Twitter Streamgraph. MentionMapp shows how popular a hashtag is and Twitter Streamgraph depicts how often certain words are paired together and how their relationship changes over time. Digital surveillance Digital surveillance occurs when digital devices record people's daily activities, collecting and storing personal data, and invading privacy. With the advancement of new technologies, the act of monitoring and watching people online has increased between the years of 2010 to 2020. The invasion of privacy and recording people without consent leads to people doubting the usage of technologies which are supposed to secure and protect personal information. The storage of data and intrusiveness in digital surveillance affects human behavior. The psychological implications of digital surveillance can cause people to have concern, worry, or fear about feeling monitored all the time. Digital data is stored within security technologies, apps, social media platforms, and other technological devices that can be used in various ways for various reasons. Data collected from people using the internet can be subject to being monitored and viewed by private and public companies, friends, and other known or unknown entities. Critical digital sociology This aspect of digital sociology is perhaps what makes it distinctive from other approaches to studying the digital world. In adopting a critical reflexive approach, sociologists are able to address the implications of the digital for sociological practice itself. It has been argued that digital sociology offers a way of addressing the changing relations between social relations and the analysis of these relations, putting into question what social research is, and indeed, what sociology is now as social relations and society have become in many respects mediated via digital technologies. How should sociology respond to the emergent forms of both 'small data' and 'big data' that are collected in vast amounts as part of people's interactions with digital technologies and the development of data industries using these data to conduct their own social research? Does this suggest that a "coming crisis in empirical sociology" might be on the horizon? How are the identities and work practices of sociologists themselves becoming implicated within and disciplined by digital technologies such as citation metrics? These questions are central to critical digital sociology, which reflects upon the role of sociology itself in the analysis of digital technologies as well as the impact of digital technologies upon sociology. 
To these four aspects add the following subfields of digital sociology: Public digital sociology Public sociology using digital media is a form of public sociology that involves publishing sociological materials in online accessible spaces and subsequent interaction with publics in these spaces. This has been referred to as "e-public sociology". Social media has changed the ways the public sociology was perceived and given rise to digital evolution in this field. The vast open platform of communication has provided opportunities for sociologists to come out from the notion of small group sociology or publics to a vast audience. Blogging was the initial social media platform being utilized by sociologists. Sociologists like Eszter Hargittai, Chris Bertram, and Kieran Healy were few amongst those who started using blogging for sociology. New discussion groups about sociology and related philosophy were the consequences of social media impact. The vast number of comments and discussions thus became a part of understanding sociology. One of such famous groups was Crooked Timber. Getting feedback on such social sites is faster and impactful. Disintermediation, visibility, and measurement are the major effects of e-public sociology. Other social media tools like Twitter and Facebook also became the tools for a sociologist. "Public Sociology in the Age of Social Media". Digital transformation of sociological theory Information and communication technology as well as the proliferation of digital data are revolutionizing sociological research. Whereas there is already much methodological innovation in digital humanities and computational social sciences, theory development in the social sciences and humanities still consists mainly of print theories of computer cultures or societies. These analogue theories of the digital transformation, however, fail to account for how profoundly the digital transformation of the social sciences and humanities is changing the epistemic core of these fields. Digital methods constitute more than providers of ever-bigger digital datasets for testing of analogue theories, but also require new forms of digital theorising. The ambition of research programmes on the digital transformation of social theory is therefore to translate analogue into digital social theories so as to complement traditional analogue social theories of the digital transformation by digital theories of digital societies. See also Anthropology of cyberspace Computational social science Cyber-dissident Digital anthropology Digital humanities Digital Revolution Internet culture Internet vigilantism Slacktivism Social informatics Social web Sociology of science and technology Software studies Technology and society Tribe (internet) Virtual volunteering References Further reading John A. Bargh and Katelyn Y. A. McKenna, The Internet and Social Life, Annual Review of Psychology, Vol. 55: 573-560 (Volume publication date February 2004), Allison Cavanagh, Sociology in the Age of the Internet, McGraw-Hill International, 2007, Christine Hine, Virtual Methods: Issues in Social Research on the Internet, Berg Publishers, 2005, Rob Kling, The Internet for Sociologists, Contemporary Sociology, Vol. 26, No. 4 (Jul., 1997), pp. 434–758 Joan Ferrante-Wallace, Joan Ferrante, Sociology.net: Sociology on the Internet, Thomson Wadsworth, 1996, Daniel A. Menchik and Xiaoli Tian. (2008) "Putting Social Context into Text: The Semiotics of Email Interaction." The American Journal of Sociology. 114:2 pp. 332–70. Carla G. 
Surratt, "The Internet and Social Change", McFarland, 2001, D. R. Wilson, Researching Sociology on the Internet, Thomson/Wadsworth, 2004, Cottom, T.M. Why is Digital Sociology''. https://tressiemc.com/uncategorized/why-is-digital-sociology External links What is Internet Sociology and Why Does it Matter? Internet Sociology in Germany Website of Germany's first Internet Sociologist Stephan G. Humer, established in 1999 Sociology and the Internet (A short introduction, originally put-together for delegates to the ATSS 2001 Conference.) Peculiarities of Cyberspace — Building Blocks for an Internet Sociology (Articles the social structure and dynamic of internetcommunities. Presented by dr Albert Benschop, University of Amsterdam.) Communication and Information Technologies Section of the American Sociological Association The Impact of the Internet on Sociology: The Importance of the Communication and Information Technologies Section of the American Sociological Association Sociology and the Internet (course) Sociology of the Internet (link collection) Internet sociologist The Sociology of the Internet Digital Sociology Culture Digitally blog Cyborgology blog Digital Sociology storify Internet culture Social networks Social influence Internet
Sociology of the Internet
Technology
6,036
176,695
https://en.wikipedia.org/wiki/Audio%20feedback
Audio feedback (also known as acoustic feedback, simply as feedback) is a positive feedback situation that may occur when an acoustic path exists between an audio output (for example, a loudspeaker) and its audio input (for example, a microphone or guitar pickup). In this example, a signal received by the microphone is amplified and passed out of the loudspeaker. The sound from the loudspeaker can then be received by the microphone again, amplified further, and then passed out through the loudspeaker again. The frequency of the resulting howl is determined by resonance frequencies in the microphone, amplifier, and loudspeaker, the acoustics of the room, the directional pick-up and emission patterns of the microphone and loudspeaker, and the distance between them. The principles of audio feedback were first discovered by Danish scientist Søren Absalon Larsen, hence it is also known as the Larsen effect. Feedback is almost always considered undesirable when it occurs with a singer's or public speaker's microphone at an event using a sound reinforcement system or PA system. Audio engineers typically use directional microphones with cardioid pickup patterns and various electronic devices, such as equalizers and, since the 1990s, automatic feedback suppressors, to prevent feedback, which detracts from the audience's enjoyment of the event and may damage equipment or hearing. Since the 1960s, electric guitar players in rock music bands using loud guitar amplifiers, speaker cabinets and distortion effects have intentionally created guitar feedback to create different sounds including long sustained tones that cannot be produced using standard playing techniques. The sound of guitar feedback is considered to be a desirable musical effect in heavy metal music, hardcore punk and grunge. Jimi Hendrix was an innovator in the intentional use of guitar feedback in his guitar solos to create unique musical sounds. History and theory The conditions for feedback follow the Barkhausen stability criterion, namely that, with sufficiently high gain, a stable oscillation can (and usually will) occur in a feedback loop whose frequency is such that the phase delay is an integer multiple of 360 degrees and the gain at that frequency is equal to 1. If the small-signal gain is greater than 1 for some frequency then the system will start to oscillate at that frequency because noise at that frequency will be amplified. Sound will be produced without anyone actually playing. The sound level will increase until the output starts clipping, reducing the loop gain to exactly unity. This is the principle upon which electronic oscillators are based; in that case, although the feedback loop is purely electronic, the principle is the same. If the gain is large but slightly less than 1, then ringing will be introduced, but only when at least some input sound is already being sent through the system. Early academic work on acoustical feedback was done by Dr. C. Paul Boner. Boner was responsible for establishing basic theories of acoustic feedback, room-ring modes, and room-sound system equalizing techniques. Boner reasoned that when feedback happened, it did so at one precise frequency. He also reasoned that it could be stopped by inserting a very narrow notch filter at that frequency in the loudspeaker's signal chain. He worked with Gifford White, founder of White Instruments to hand craft notch filters for specific feedback frequencies in specific rooms. 
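The loop-gain condition described above can be illustrated with a toy numerical model. The sketch below is an assumption-laden simplification, not a model of any particular sound system: it collapses the room, microphone and amplifier into a single delayed feedback path with gain g, and the gain and delay values are arbitrary. For g slightly above 1 the recirculating signal grows until something (in practice, clipping) limits it, while for g below 1 an impulse only rings and dies away.

```python
import numpy as np

def feedback_response(gain, delay=50, n=2000):
    """Simulate y[i] = x[i] + gain * y[i - delay] for an impulse input."""
    x = np.zeros(n)
    x[0] = 1.0                      # a single impulse excites the loop
    y = np.zeros(n)
    for i in range(n):
        fed_back = gain * y[i - delay] if i >= delay else 0.0
        y[i] = x[i] + fed_back
    return y

for g in (0.8, 1.05):
    peak = np.max(np.abs(feedback_response(g)))
    print(f"loop gain {g}: peak level after 2000 samples = {peak:.3f}")
# gain 0.8  -> the impulse recirculates but decays (peak stays at the input level)
# gain 1.05 -> the recirculating signal grows on every pass (howl-like build-up)
```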
Distance To maximize gain before feedback, the amount of sound energy that is fed back to the microphones must be reduced as much as is practical. As sound pressure falls off with 1/r with respect to the distance r in free space, or up to a distance known as the reverberation distance in closed spaces (and the energy density with 1/r²), it is important to keep the microphones at a large enough distance from the speaker systems. As well, microphones should not be positioned in front of speakers, and individuals using mics should be asked to avoid pointing the microphone at speaker enclosures. Directivity Additionally, the loudspeakers and microphones should have non-uniform directivity and should stay out of the maximum sensitivity of each other, ideally in a direction of cancellation. Public address speakers often achieve directivity in the mid and treble region (and good efficiency) via horn systems. Sometimes the woofers have a cardioid characteristic. Professional setups circumvent feedback by placing the main speakers away from the band or artist and then having several smaller speakers, known as monitors, pointing back at each band member, but in the opposite direction to that in which the microphones are pointing. This takes advantage of microphones with a cardioid pickup pattern, which are common in sound reinforcement applications. This configuration reduces the opportunities for feedback and allows independent control of the sound pressure levels for the audience and the performers. Frequency response Almost always, the natural frequency response of a sound reinforcement system is not ideally flat, which leads to acoustical feedback at the frequency with the highest loop gain; this may be a resonance with much higher than the average gain over all frequencies. It is therefore helpful to apply some form of equalization to reduce the gain at this frequency. Feedback can be reduced manually by ringing out a sound system prior to a performance. The sound engineer can increase the level of a microphone until feedback occurs. The engineer can then attenuate the relevant frequency on an equalizer, preventing feedback at that frequency but allowing sufficient volume at other frequencies. Many professional sound engineers can identify feedback frequencies by ear, but others use a real-time analyzer to identify the ringing frequency. To avoid feedback, automatic feedback suppressors can be used. Some of these work by shifting the frequency slightly, with the upshift resulting in a chirping sound instead of the howling sound of unaddressed feedback. Other devices use sharp notch filters to filter out offending frequencies. Adaptive algorithms are often used to automatically tune these notch filters. Deliberate uses To intentionally create feedback, an electric guitar player needs a guitar amplifier with very high gain (amplification) or must bring the guitar near the speaker. The guitarist then allows the strings to vibrate freely and brings the guitar close to the loudspeaker of the guitar amp. The use of distortion effects units adds additional gain and facilitates the creation of intentional feedback. Early examples in popular music A deliberate use of acoustic feedback was pioneered by blues and rock and roll guitarists such as Willie Johnson, Johnny Watson and Link Wray. According to AllMusic's Richie Unterberger, the very first use of feedback on a commercial rock record is the introduction of the song "I Feel Fine" by the Beatles, recorded in 1964.
Jay Hodgson agrees that this feedback created by John Lennon leaning a semi-acoustic guitar against an amplifier was the first chart-topper to showcase feedback distortion. The Who's 1965 hits "Anyway, Anyhow, Anywhere" and "My Generation" featured feedback manipulation by Pete Townshend, with an extended solo in the former and the shaking of his guitar in front of the amplifier to create a throbbing noise in the latter. Canned Heat's "Fried Hockey Boogie" also featured guitar feedback produced by Henry Vestine during his solo to create a highly amplified distorted boogie style of feedback. In 1963, the teenage Brian May and his father custom-built his signature guitar Red Special, which was purposely designed to feed back. Feedback was used extensively after 1965 by the Monks, Jefferson Airplane, the Velvet Underground and the Grateful Dead, who included in many of their live shows a segment named Feedback, a several-minute long feedback-driven improvisation. Feedback has since become a striking characteristic of rock music, as electric guitar players such as Jeff Beck, Pete Townshend, Dave Davies, Steve Marriott and Jimi Hendrix deliberately induced feedback by holding their guitars close to the amplifier's speaker. An example of feedback can be heard on Hendrix's performance of "Can You See Me?" at the Monterey Pop Festival. The entire guitar solo was created using amplifier feedback. Jazz guitarist Gábor Szabó was one of the earliest jazz musicians to use controlled feedback in his music, which is prominent on his live album The Sorcerer (1967). Szabó's method included the use of a flat-top acoustic guitar with a magnetic pickup. Lou Reed created his album Metal Machine Music (1975) entirely from loops of feedback played at various speeds. Introductions, transitions, and fade-outs In addition to "I Feel Fine", feedback was used on the introduction to songs including Jimi Hendrix's "Foxy Lady", the Beatles' "It's All Too Much", Hendrix's "Crosstown Traffic", David Bowie's "Little Wonder", the Strokes's "New York City Cops", Ben Folds Five's "Fair", Midnight Juggernauts's "Road to Recovery", Nirvana's "Radio Friendly Unit Shifter", the Jesus and Mary Chain's "Tumbledown" and "Catchfire", the Stone Roses's "Waterfall", Porno for Pyros's "Tahitian Moon", Tool's "Stinkfist", and the Cure's "Prayer For Rain". Examples of feedback combined with a quick volume swell used as a transition include Weezer's "My Name Is Jonas" and "Say It Ain't So"; The Strokes' "Reptilia", "New York City Cops", and "Juicebox"; Dream Theater's "As I Am"; as well as numerous tracks by Meshuggah and Tool. Cacophonous feedback fade-outs ending a song are most often used to generate rather than relieve tension, often cross-faded too after a thematic and musical release. Examples include Modwheelmood's remix of Nine Inch Nail's "The Great Destroyer"; and the Jesus and Mary Chain's "Teenage Lust", "Tumbledown", "Catchfire", "Sundown", and "Frequency". Examples in modern classical music Though closed circuit feedback was a prominent feature in many early experimental electronic music compositions, intentional acoustic feedback as sound material gained more prominence with compositions such as John Cage's Variations II (1961) performed by David Tudor and Robert Ashley's The Wolfman (1964). Steve Reich makes extensive use of audio feedback in his work Pendulum Music (1968) by swinging a series of microphones back and forth in front of their corresponding amplifiers. 
Hugh Davies and Alvin Lucier both use feedback in their works. Roland Kayn based much of his compositional oeuvre, which he termed "cybernetic music," on audio systems incorporating feedback. More recent examples can be found in the work of, for example, Lara Stanic, Paul Craenen, Anne Wellmer, Adam Basanta, Lesley Flanigan, Ronald Boersen and Erfan Abdi. Pitched feedback Pitched melodies may be created entirely from feedback by changing the angle between a guitar and amplifier after establishing a feedback loop. Examples include Tool's "Jambi", Robert Fripp's guitar on David Bowie's "Heroes" (album version), and Jimi Hendrix's "Third Stone from the Sun" and his live performance of "Wild Thing" at the Monterey Pop Festival. Regarding Fripp's work on "Heroes": Contemporary uses Audio feedback became a signature feature of many underground rock bands during the 1980s. American noise-rockers Sonic Youth melded the rock-feedback tradition with a compositional and classical approach (notably covering Reich's "Pendulum Music"), and guitarist/producer Steve Albini's group Big Black also worked controlled feedback into the makeup of their songs. With the alternative rock movement of the 1990s, feedback again saw a surge in popular usage by suddenly mainstream acts like Nirvana, the Red Hot Chili Peppers, Rage Against the Machine and the Smashing Pumpkins. The use of the "no-input-mixer" method for sound generation by feeding a mixing console back into itself has been adopted in experimental electronic and noise music by practitioners such as Toshimaru Nakamura. Devices The principle of feedback is used in many guitar sustain devices. Examples include handheld devices like the EBow, built-in guitar pickups that increase the instrument's sonic sustain, and sonic transducers mounted on the head of a guitar. Intended closed-circuit feedback can also be created by an effects unit, such as a delay pedal or effect fed back into a mixing console. The feedback can be controlled by using the fader to determine a volume level. The Boss DF-2 Super Feedbacker and Distortion pedal is an electronic effect unit that helps electric guitarists create feedback effects. The halldorophone is an electro-acoustic string instrument specifically made to work with string based feedback. See also Circuit bending Comb filter Echo suppression and cancellation Video feedback References External links Audio effects Audio electronics Rock music Guitar performance techniques Feedback he:היזון חוזר (מוזיקה)
Audio feedback
Engineering
2,638
30,818,571
https://en.wikipedia.org/wiki/Wagner%27s%20gene%20network%20model
Wagner's gene network model is a computational model of artificial gene networks, which explicitly models the developmental and evolutionary process of genetic regulatory networks. A population with multiple organisms can be created and evolved from generation to generation. It was first developed by Andreas Wagner in 1996 and has been investigated by other groups to study the evolution of gene networks, gene expression, robustness, plasticity and epistasis. Assumptions The model and its variants have a number of simplifying assumptions. Three of them are listed below. The organisms are modeled as gene regulatory networks. The models assume that gene expression is regulated exclusively at the transcriptional level; The product of a gene can regulate the expression of (be a regulator of) that source gene or other genes. The models assume that a gene can only produce one active transcriptional regulator; The effects of one regulator are independent of effects of other regulators on the same target gene. Genotype The model represents individuals as networks of interacting transcriptional regulators. Each individual expresses N genes encoding transcription factors. The product of each gene can regulate the expression level of itself and/or the other genes through cis-regulatory elements. The interactions among genes constitute a gene network that is represented by an N × N regulatory matrix R in the model. The elements r_ij in matrix R represent the interaction strength. Positive values within the matrix represent the activation of the target gene, while negative ones represent repression. Matrix elements with value 0 indicate the absence of interactions between two genes. Phenotype The phenotype of each individual is modeled as the gene expression pattern at time t. It is represented by a state vector S(t) = (s_1(t), s_2(t), ..., s_N(t)) in this model, whose element s_i(t) denotes the expression state of gene i at time t. In the original Wagner model, s_i(t) ∈ {−1, 1}, where 1 represents that the gene is expressed while −1 implies the gene is not expressed. The expression pattern can only be ON or OFF. A continuous expression pattern between −1 (or 0) and 1 is also implemented in some other variants. Development The development process is modeled as the development of gene expression states. The gene expression pattern at time t = 0 is defined as the initial expression state S(0). The interactions among genes change the expression states during the development process. This process is modeled by the following difference equations: s_i(t + τ) = σ[h_i(t)] = σ[ Σ_j r_ij · s_j(t) ], where s_i(t + τ) represents the expression state of gene i at time t + τ. It is determined by a filter function σ. h_i(t) represents the weighted sum of regulatory effects (r_ij) of all genes on gene i at time t. In the original Wagner model, the filter function is a step function: σ(x) = −1 for x < 0, σ(x) = 0 for x = 0, and σ(x) = 1 for x > 0. In other variants, the filter function is implemented as a sigmoidal function, for example σ(x) = 2/(1 + e^(−ax)) − 1. In this way, the expression states acquire a continuous distribution. The gene expression will reach the final state if it reaches a stable pattern. Evolution Evolutionary simulations are performed through a reproduction–mutation–selection life cycle. Populations are fixed at a constant size and they will not go extinct. Non-overlapping generations are employed. In a typical evolutionary simulation, a single random viable individual that can produce a stable gene expression pattern is chosen as the founder. Cloned individuals are generated to create a population of identical individuals.
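Before continuing with the reproduction–mutation–selection cycle, the development dynamics described above can be sketched numerically. The code below is not Wagner's original implementation: it uses the step-function variant, the network size, matrix sparsity and iteration limit are arbitrary illustrative choices, and the stability test is simply a check for a fixed point of the update rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def develop(R, s0, max_steps=100):
    """Iterate s(t + tau) = sign(R @ s(t)) until a fixed point or the step limit."""
    s = s0.copy()
    for _ in range(max_steps):
        s_next = np.sign(R @ s)
        s_next[s_next == 0] = 1          # arbitrary tie-breaking choice for sigma(0)
        if np.array_equal(s_next, s):    # stable expression pattern reached
            return s, True
        s = s_next
    return s, False                      # no fixed point found: developmentally unstable

N = 10                                                      # number of genes (illustrative)
R = rng.normal(size=(N, N)) * (rng.random((N, N)) < 0.4)    # sparse regulatory matrix
s0 = rng.choice([-1, 1], size=N)                            # initial expression state S(0)

final_state, stable = develop(R, s0)
print("stable:", stable, "final pattern:", final_state)
```

In the selection step described below, an individual whose development returns an unstable (oscillating) pattern would be discarded.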
According to the asexual or sexual reproductive mode, offspring are produced by randomly choosing (with replacement) parent individual(s) from the current generation. Mutations can be acquired with probability μ and survive with probability equal to their fitness. This process is repeated until N individuals are produced that go on to found the following generation. Fitness Fitness in this model is the probability that an individual survives to reproduce. In the simplest implementation of the model, developmentally stable genotypes survive (i.e. their fitness is 1) and developmentally unstable ones do not (i.e. their fitness is 0). Mutation Mutations are modeled as changes in gene regulation, i.e., changes of the elements in the regulatory matrix R. Reproduction Both sexual and asexual reproduction are implemented. Asexual reproduction is implemented as producing the offspring's genome (the gene network) by directly copying the parent's genome. Sexual reproduction is implemented as the recombination of the two parents' genomes. Selection An organism is considered viable if it reaches a stable gene expression pattern. An organism with an oscillating expression pattern is discarded and cannot enter the next generation. References External links Andreas Wagner Lab Webpage Gene expression Networks Systems biology
Wagner's gene network model
Chemistry,Biology
856
5,906,939
https://en.wikipedia.org/wiki/Shinobi%20vs.%20Dragon%20Ninja
"Shinobi vs. Dragon Ninja" is a song by Welsh rock band Lostprophets. The song was released in 2001 as the first single from the band's debut studio album, The Fake Sound of Progress. It was the only charting single on the Billboard charts from the album, and was still on the band's tour setlist until they broke up in 2013. Writing and production The band wrote the song in under an hour. According to frontman Ian Watkins, it is a song about nostalgia for when the band were growing up together in their home town of Pontypridd, South Wales, and was originally inspired by the Shinobi arcade game they used to play at the Park View Café in Pontypridd. The song's name is derived from the video games Shinobi and Bad Dudes vs. DragonNinja. Release and reception "Shinobi vs. Dragon Ninja" was released in the summer of 2001 and became the most successful song from The Fake Sound of Progress on the American rock charts. It appeared on Billboard magazine's Modern Rock Tracks chart at 33. In the United Kingdom the single peaked at 41 on the UK Singles Chart in 2001 and stayed on the charts for two weeks. After the release of the follow-up single "The Fake Sound of Progress" in 2002 "Shinobi vs. Dragon Ninja" re-charted and peaked at 161. Music video The video for this single is one of few Lostprophets videos actually filmed in the UK. It features the band performing to a crowd of people on the roof of a disused multi-storey car park in Edmonton, North London. The video received significant airplay on MTV2. The music video was directed by Mike Piscitelli, who would direct the music video for "The Fake Sound of Progress," the follow-up single to "Shinobi vs. Dragon Ninja." The shooting for the music video started on 5 October on a Friday at an undisclosed location. Any fan that wanted to participate in the music video could simply email their name, age, gender and contact number to the band. Once the email was received, the band chose who they wanted to include in the music video. An alternate version of the music video exists in a completely different setting, showing the band performing live. This version was also shot in black and white. Track listing Personnel Ian Watkins – lead vocals Lee Gaze – lead guitar Mike Lewis – rhythm guitar Stuart Richardson – bass guitar Mike Chiplin – drums, percussion Jamie Oliver – synth, turntables, samples In popular culture The song was featured on the soundtrack of ATV Offroad Fury 2. Chart positions References Lostprophets songs 2001 debut singles Works about video games Songs about nostalgia 2000 songs Nu metal songs
Shinobi vs. Dragon Ninja
Technology
562
15,582,305
https://en.wikipedia.org/wiki/HR%205553
HR 5553 is a binary star system located thirty-eight light-years away from the Sun, in the northern constellation Boötes. It has the variable star designation DE Boötis, and is classified as an RS Canum Venaticorum variable that ranges in apparent visual magnitude from 5.97 down to 6.04, which is bright enough to be dimly visible to the naked eye. The system is drifting closer to the Sun with a radial velocity of −30 km/s, and is expected to come as close as in 210,000 years. Orbital elements for this single-lined spectroscopic binary was first calculated in 1981 using radial velocity measurements from David Dunlap Observatory combined with older measurements from Mount Wilson Observatory and Dominion Astrophysical Observatory. The two stars orbit each other with a period of 125 days and a large eccentricity of 0.51. Marcel Golay listed the star as a suspected variable star in 1973. Gregory W. Henry et al. confirmed that it is a variable star in 1995. It was given its variable star designation, DE Boötis, in 1997. The primary, designated component A, is a K-type main sequence star with a stellar classification of K0 V. It is around one billion years old and is spinning with a projected rotational velocity of 4 km/s. The star has 84% of the mass of the Sun and 86% of the Sun's radius. It is radiating 50% of the luminosity of the Sun from its photosphere at an effective temperature of 5,313 K. Component B has an estimated 45% of the mass of the Sun. An infrared excess has been detected around this system, most likely indicating the presence of a circumstellar disk at a radius of 34.2 AU. The temperature of this dust is 40 K. The estimated mass of the dust is 0.0002 times the mass of the Earth. It is aligned to within 10° of the plane of the binary system. References External links NStars: 1453+1909 K-type main-sequence stars Bootis, DE RS Canum Venaticorum variables Binary stars Spectroscopic binaries Boötes Durchmusterung objects 0567 131511 072848 5553 Boötis, DE
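As a rough cross-check of the parameters quoted above (not an independent measurement), the Stefan–Boltzmann relation L/L☉ = (R/R☉)² · (T/T☉)⁴ can be evaluated for the primary. The sketch below assumes the IAU nominal solar effective temperature of 5772 K; the radius ratio and effective temperature are the values given in the text.

```python
# Luminosity implied by the quoted radius and effective temperature of HR 5553 A.
T_SUN = 5772.0          # K, nominal solar effective temperature (assumed)
r_ratio = 0.86          # R / R_sun, from the text
t_eff = 5313.0          # K, from the text

l_ratio = r_ratio**2 * (t_eff / T_SUN)**4
print(f"L/L_sun ~ {l_ratio:.2f}")   # ~0.53, consistent with the quoted ~50% of solar luminosity
```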
HR 5553
Astronomy
464
26,218,190
https://en.wikipedia.org/wiki/Polyaspartic%20esters
Polyaspartic ester chemistry was first introduced in the early 1990s making it a relatively new technology. The patents were issued to Bayer in Germany and Miles Corporation in the United States. It utilizes the aza-Michael addition reaction. These products are then used in coatings, adhesives, sealants and elastomers. Pure polyurea reacts extremely quickly making them almost unusable without plural component spray equipment. Polyaspartic technology utilizes a partially blocked amine to react more slowly with the isocyanates and thus produce a modified polyurea. The amine/diamine or even triamine functional coreactant for aliphatic polyisocyanate is typically reacted with a maleate. Polyaspartic esters (PAE) initially found use in conventional solvent-borne two-component polyurethane coatings. Chemistry To manufacture a polyaspartic ester, an amine is reacted with dialkyl maleate by the aza-Michael reaction. Diethyl maleate is the usual maleate used. This converts the primary amines to secondary amines and also introduces bulky groups to the molecule which causes steric hindrance, slowing the reaction down. As the resulting aspartic molecule is now much bigger, less of the isocyanate is needed on a weight for weight basis. The isocyanate is often the most expensive part of the system especially if an aliphatic isocyanate oligomer is used and so may result in an overall lower system cost per applied film thickness. Isocyanates are known pulmonary sensitizers and hence oligomeric forms are often used with polyaspartic technology as these are much less volatile. Uses Eventually, the advantages of using polyaspartic esters as the main component of the co-reactant for reaction with an aliphatic polyisocyanate in low to zero volatile organic compound (VOC) coatings were realized. The rate of reaction of polyaspartic esters can be manipulated, thus extending the pot life and controlling the cure rate of aliphatic coatings. This allows formulators to create high solids coatings systems which are user-friendly with longer working times and still maintain a fast-cure. Traditional aliphatic polyurea formulations required high-pressure, temperature-controlled plural component spray systems to be applied due to fast initial reaction rates. Aliphatic polyaspartics can be formulated with slower reaction rates to accommodate batch-mixing and application by roller-applied methods or spray-applied through conventional single components paint sprayers without the use of solvent. As with aliphatic polyurethane or acrylic coatings, polyaspartic coatings made with aliphatic isocyanates and derivatives are UV and light stable and have a low yellowing tendency. When coating concrete, polyaspartics can be installed in both clear and pigmented form. Additionally, broadcast media such as quartz and/or vinyl paint chips can be incorporating, as well as metallic pigments. Polymer science Once the aspartic ester is formed, it is basically a sterically hindered diamine and thus in polymer science terms is a Chain extender rather than a chain terminator. Chain extenders (f = 2) and cross linkers (f ≥ 3) are low molecular weight amine terminated compounds that play an important role in polyurea compounds, coatings, elastomers and adhesives. However, the isocyanate component is often an oligomer that is trifunctional and so the crosslinking comes from that part of the cured polymer. 
Producers Major producers of polyaspartic esters and polyaspartic coatings are: Cargill BASF Corporation Laticrete International Rust-Oleum Sherwin-Williams See also Diethyl maleate Isocyanate Polyurea Polyurethane References External websites Covestro (formerly Bayer Material Science) Arnette Polymers LLC Pflaumer Aspartic amines Cargill Polyaspartate product line TSE Industries Plastics Polyurethanes Synthetic resins Polymers
Polyaspartic esters
Physics,Chemistry,Materials_science
852
872,023
https://en.wikipedia.org/wiki/Complementary%20experiments
In physics, two experimental techniques are often called complementary if they investigate the same subject in two different ways such that two different (ideally non-overlapping) properties or aspects can be investigated. For example, X-ray scattering and neutron scattering experiments are often said to be complementary because the former reveals information about the electron density of the atoms in the target but gives no information about the nuclei (because they are too small to affect the X-rays significantly), while the latter allows one to investigate the nuclei of the atoms but cannot tell one anything about their electron hulls (because the neutrons, being neutral, do not interact with the charged electrons). Scattering experiments are sometimes also called complementary when they investigate the same physical property of a system from two complementary view points in the sense of Bohr. For example, time-resolved and energy-resolved experiments are said to be complementary. The former uses a pulse which is well-defined in time. The latter uses a monochromatic pulse well defined in energy (its frequency is well known). See also Complement (disambiguation) References Experimental physics
Complementary experiments
Physics
225
996,278
https://en.wikipedia.org/wiki/Molecular%20geometry
Molecular geometry is the three-dimensional arrangement of the atoms that constitute a molecule. It includes the general shape of the molecule as well as bond lengths, bond angles, torsional angles and any other geometrical parameters that determine the position of each atom. Molecular geometry influences several properties of a substance including its reactivity, polarity, phase of matter, color, magnetism and biological activity. The angles between bonds that an atom forms depend only weakly on the rest of molecule, i.e. they can be understood as approximately local and hence transferable properties. Determination The molecular geometry can be determined by various spectroscopic methods and diffraction methods. IR, microwave and Raman spectroscopy can give information about the molecule geometry from the details of the vibrational and rotational absorbance detected by these techniques. X-ray crystallography, neutron diffraction and electron diffraction can give molecular structure for crystalline solids based on the distance between nuclei and concentration of electron density. Gas electron diffraction can be used for small molecules in the gas phase. NMR and FRET methods can be used to determine complementary information including relative distances, dihedral angles, angles, and connectivity. Molecular geometries are best determined at low temperature because at higher temperatures the molecular structure is averaged over more accessible geometries (see next section). Larger molecules often exist in multiple stable geometries (conformational isomerism) that are close in energy on the potential energy surface. Geometries can also be computed by ab initio quantum chemistry methods to high accuracy. The molecular geometry can be different as a solid, in solution, and as a gas. The position of each atom is determined by the nature of the chemical bonds by which it is connected to its neighboring atoms. The molecular geometry can be described by the positions of these atoms in space, evoking bond lengths of two joined atoms, bond angles of three connected atoms, and torsion angles (dihedral angles) of three consecutive bonds. Influence of thermal excitation Since the motions of the atoms in a molecule are determined by quantum mechanics, "motion" must be defined in a quantum mechanical way. The overall (external) quantum mechanical motions translation and rotation hardly change the geometry of the molecule. (To some extent rotation influences the geometry via Coriolis forces and centrifugal distortion, but this is negligible for the present discussion.) In addition to translation and rotation, a third type of motion is molecular vibration, which corresponds to internal motions of the atoms such as bond stretching and bond angle variation. The molecular vibrations are harmonic (at least to good approximation), and the atoms oscillate about their equilibrium positions, even at the absolute zero of temperature. At absolute zero all atoms are in their vibrational ground state and show zero point quantum mechanical motion, so that the wavefunction of a single vibrational mode is not a sharp peak, but approximately a Gaussian function (the wavefunction for n = 0 depicted in the article on the quantum harmonic oscillator). At higher temperatures the vibrational modes may be thermally excited (in a classical interpretation one expresses this by stating that "the molecules will vibrate faster"), but they oscillate still around the recognizable geometry of the molecule. 
To get a feeling for the probability that a vibration of a molecule may be thermally excited, we inspect the Boltzmann factor β = exp(−ΔE / kT), where ΔE is the excitation energy of the vibrational mode, k the Boltzmann constant and T the absolute temperature. At 298 K (25 °C), typical values for the Boltzmann factor β are: β = 0.089 for ΔE = 500 cm−1; β = 0.008 for ΔE = 1000 cm−1; β = 0.0007 for ΔE = 1500 cm−1. (The reciprocal centimeter is an energy unit that is commonly used in infrared spectroscopy; 1 cm−1 corresponds to about 1.24 × 10−4 eV, or roughly 0.012 kJ/mol.) When the excitation energy is 500 cm−1, about 8.9 percent of the molecules are thermally excited at room temperature. To put this in perspective: the lowest excitation vibrational energy in water is the bending mode (about 1600 cm−1). Thus, at room temperature less than 0.07 percent of all the molecules of a given amount of water will vibrate faster than at absolute zero. As stated above, rotation hardly influences the molecular geometry. But, as a quantum mechanical motion, it is thermally excited at relatively (as compared to vibration) low temperatures. From a classical point of view it can be stated that at higher temperatures more molecules will rotate faster, which implies that they have higher angular velocity and angular momentum. In quantum mechanical language: more eigenstates of higher angular momentum become thermally populated with rising temperatures. Typical rotational excitation energies are on the order of a few cm−1. The results of many spectroscopic experiments are broadened because they involve an averaging over rotational states. It is often difficult to extract geometries from spectra at high temperatures, because the number of rotational states probed in the experimental averaging increases with increasing temperature. Thus, many spectroscopic observations can only be expected to yield reliable molecular geometries at temperatures close to absolute zero, because at higher temperatures too many higher rotational states are thermally populated. Bonding Molecules, by definition, are most often held together with covalent bonds involving single, double, and/or triple bonds, where a "bond" is a shared pair of electrons (the other method of bonding between atoms is called ionic bonding and involves a positive cation and a negative anion). Molecular geometries can be specified in terms of 'bond lengths', 'bond angles' and 'torsional angles'. The bond length is defined to be the average distance between the nuclei of two atoms bonded together in any given molecule. A bond angle is the angle formed between three atoms across at least two bonds. For four atoms bonded together in a chain, the torsional angle is the angle between the plane formed by the first three atoms and the plane formed by the last three atoms. There exists a mathematical relationship among the bond angles for one central atom and four peripheral atoms (labeled 1 through 4), expressed by the determinant constraint det[cos θij] = 0, where θij is the angle between the bonds to peripheral atoms i and j (i, j = 1, ..., 4). This constraint removes one degree of freedom from the choices of (originally) six free bond angles to leave only five choices of bond angles. (The angles θ11, θ22, θ33, and θ44 are always zero, and this relationship can be modified for a different number of peripheral atoms by expanding/contracting the square matrix.) Molecular geometry is determined by the quantum mechanical behavior of the electrons. Using the valence bond approximation this can be understood by the type of bonds between the atoms that make up the molecule.
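The Boltzmann factors quoted in the thermal-excitation discussion above are easy to reproduce numerically. The short sketch below is a worked check of the numbers in the text, not new data; it expresses the Boltzmann constant in reciprocal centimeters per kelvin (k/hc ≈ 0.695 cm−1/K) so that ΔE can be entered directly in cm−1.

```python
import math

K_B_CM = 0.695  # Boltzmann constant in cm^-1 per kelvin (k_B / hc)
T = 298.0       # room temperature in kelvin

for delta_e in (500.0, 1000.0, 1500.0):        # vibrational excitation energies in cm^-1
    beta = math.exp(-delta_e / (K_B_CM * T))
    print(f"dE = {delta_e:6.0f} cm^-1  ->  beta = {beta:.4f}")
# Output is approximately 0.089, 0.008 and 0.0007, matching the values quoted in the text.
```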
When atoms interact to form a chemical bond, the atomic orbitals of each atom are said to combine in a process called orbital hybridisation. The two most common types of bonds are sigma bonds (usually formed by hybrid orbitals) and pi bonds (formed by unhybridized p orbitals for atoms of main group elements). The geometry can also be understood by molecular orbital theory where the electrons are delocalised. An understanding of the wavelike behavior of electrons in atoms and molecules is the subject of quantum chemistry. Isomers Isomers are types of molecules that share a chemical formula but have different geometries, resulting in different properties: A pure substance is composed of only one type of isomer of a molecule (all have the same geometrical structure). Structural isomers have the same chemical formula but different physical arrangements, often forming alternate molecular geometries with very different properties. The atoms are not bonded (connected) together in the same order. Functional isomers are special kinds of structural isomers, where certain groups of atoms exhibit a special kind of behavior, such as an ether or an alcohol. Stereoisomers may have many similar physicochemical properties (melting point, boiling point) and at the same time very different biochemical activities. This is because they exhibit a handedness that is commonly found in living systems. One manifestation of this chirality or handedness is that they have the ability to rotate polarized light in different directions. Protein folding concerns the complex geometries and different isomers that proteins can take. Types of molecular structure A bond angle is the geometric angle between two adjacent bonds. Some common shapes of simple molecules include: Linear: In a linear model, atoms are connected in a straight line. The bond angles are set at 180°. For example, carbon dioxide and nitric oxide have a linear molecular shape. Trigonal planar: Molecules with the trigonal planar shape are somewhat triangular and in one plane (flat). Consequently, the bond angles are set at 120°. For example, boron trifluoride. Angular: Angular molecules (also called bent or V-shaped) have a non-linear shape. For example, water (H2O), which has an angle of about 105°. A water molecule has two pairs of bonded electrons and two unshared lone pairs. Tetrahedral: Tetra- signifies four, and -hedral relates to a face of a solid, so "tetrahedral" literally means "having four faces". This shape is found when there are four bonds all on one central atom, with no extra unshared electron pairs. In accordance with VSEPR (valence-shell electron pair repulsion) theory, the bond angles between the electron bonds are arccos(−1/3) ≈ 109.47° (a numerical check of this value follows the list below). For example, methane (CH4) is a tetrahedral molecule. Octahedral: Octa- signifies eight, and -hedral relates to a face of a solid, so "octahedral" means "having eight faces". The bond angle is 90 degrees. For example, sulfur hexafluoride (SF6) is an octahedral molecule. Trigonal pyramidal: A trigonal pyramidal molecule has a pyramid-like shape with a triangular base. Unlike the linear and trigonal planar shapes but similar to the tetrahedral orientation, pyramidal shapes require three dimensions in order to fully separate the electrons. Here, there are only three pairs of bonded electrons, leaving one unshared lone pair. Lone pair – bond pair repulsions change the bond angle from the tetrahedral angle to a slightly lower value. For example, ammonia (NH3).
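The ideal tetrahedral angle quoted in the list above, and the determinant constraint on the four bond angles mentioned in the previous section, can both be verified numerically. The sketch below builds four ideal tetrahedral bond directions (the particular vertex coordinates are just one convenient choice), recovers arccos(−1/3) ≈ 109.47°, and confirms that the determinant of the matrix of angle cosines vanishes to rounding error, since four bond vectors can only span three dimensions.

```python
import numpy as np

# Four unit vectors pointing to the corners of a regular tetrahedron.
dirs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], dtype=float)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# Matrix of cos(theta_ij) between every pair of bond directions.
cosines = dirs @ dirs.T

angle = np.degrees(np.arccos(cosines[0, 1]))
print(f"ideal tetrahedral angle: {angle:.2f} degrees")            # ~109.47
print(f"det of cosine matrix:    {np.linalg.det(cosines):.2e}")   # ~0 (vectors span only 3D)
```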
VSEPR table The bond angles in the table below are ideal angles from the simple VSEPR theory (pronounced "Vesper Theory"), followed by the actual angle for the example given in the following column where this differs. For many cases, such as trigonal pyramidal and bent, the actual angle for the example differs from the ideal angle, and examples differ by different amounts. For example, the angle in H2S (92°) differs from the tetrahedral angle by much more than the angle for H2O (104.48°) does. The greater the number of lone pairs contained in a molecule, the smaller the angles between the atoms of that molecule. The VSEPR theory predicts that lone pairs repel each other, thus pushing the different atoms away from them. In art Molecule art is a relatively obscure form of abstract art in which molecular geometry, most often in the form of a skeletal structure, serves as the subject matter. 3D representations Line or stick – atomic nuclei are not represented, just the bonds as sticks or lines. As in 2D molecular structures of this type, atoms are implied at each vertex. Electron density plot – shows the electron density determined either crystallographically or using quantum mechanics rather than distinct atoms or bonds. Ball and stick – atomic nuclei are represented by spheres (balls) and the bonds as sticks. Spacefilling models or CPK models (also an atomic coloring scheme in representations) – the molecule is represented by overlapping spheres representing the atoms. Cartoon – a representation used for proteins where loops, beta sheets, and alpha helices are represented diagrammatically and no atoms or bonds are explicitly represented (e.g. the protein backbone is represented as a smooth pipe). See also Jemmis mno rules Lewis structure Molecular design software Molecular graphics Molecular mechanics Molecular modelling Molecular symmetry Molecule editor Polyhedral skeletal electron pair theory Quantum chemistry Ribbon diagram Styx rule (for boranes) Topology (chemistry) References External links Molecular Geometry & Polarity Tutorial 3D visualization of molecules to determine polarity. Molecular Geometry using Crystals 3D structure visualization of molecules using Crystallography.
Molecular geometry
Physics,Chemistry
2,577
29,662,478
https://en.wikipedia.org/wiki/Normalized%20Google%20distance
The normalized Google distance (NGD) is a semantic similarity measure derived from the number of hits returned by the Google search engine for a given set of keywords. Keywords with the same or similar meanings in a natural language sense tend to be "close" in units of normalized Google distance, while words with dissimilar meanings tend to be farther apart. Specifically, the NGD between two search terms x and y is NGD(x, y) = [max{log f(x), log f(y)} − log f(x, y)] / [log N − min{log f(x), log f(y)}], where N is the total number of web pages searched by Google multiplied by the average number of singleton search terms occurring on pages; f(x) and f(y) are the number of hits for search terms x and y, respectively; and f(x, y) is the number of web pages on which both x and y occur. If NGD(x, y) = 0, then x and y are viewed as alike as possible, but if NGD(x, y) ≥ 1, then x and y are very different. If the two search terms x and y never occur together on the same web page, but do occur separately, the NGD between them is infinite. If both terms always occur together, their NGD is zero. Example: On 9 April 2013, googling for "Shakespeare" gave 130,000,000 hits; googling for "Macbeth" gave 26,000,000 hits; and googling for "Shakespeare Macbeth" gave 20,800,000 hits. The number of pages indexed by Google was estimated by the number of hits of the search term "the", which was 25,270,000,000 hits. Assuming there are about 1,000 search terms on the average page, this gives N ≈ 25,270,000,000 × 1,000 = 2.527 × 10^13. Hence NGD(Shakespeare, Macbeth) ≈ 0.13. "Shakespeare" and "Macbeth" are very much alike according to the relative semantics supplied by Google. Introduction The normalized Google distance is derived from the earlier normalized compression distance. Namely, objects can be given literally, like the literal four-letter genome of a mouse, or the literal text of Macbeth by Shakespeare. The similarity of these objects is given by the NCD. For simplicity we take it that all meaning of the object is represented by the literal object itself. Objects can also be given by name, like 'the four-letter genome of a mouse,' or 'the text of Macbeth by Shakespeare.' There are also objects that cannot be given literally, but only by name, and that acquire their meaning from their contexts in background common knowledge in humankind, like "home" or "red". The similarity between names for objects is given by the NGD. Google distribution and Google code The probabilities of Google search terms, conceived as the frequencies of page counts returned by Google divided by the number of pages indexed by Google (multiplied by the average number of search terms in those pages), approximate the actual relative frequencies of those search terms as actually used in society. Based on this premise, the relations represented by the normalized Google distance approximately capture the assumed true semantic relations governing the search terms. In the NGD, the World Wide Web and Google are used. Other text corpora include Wikipedia, the King James version of the Bible or the Oxford English Dictionary, together with appropriate search engines. Properties The following properties have been proved: The NGD is roughly in between 0 and ∞. It can be slightly negative. For example, "red red" gives about 20% more hits on Google on the World Wide Web than "red". (In mid-2013 there were 4,260,000,000 hits for "red" and 5,500,000,000 hits for "red red". Presently, "red red" returns far fewer results than "red".) If NGD(x, y) ≥ 1, then we view x and y as very dissimilar. The NGD is not a metric. The NGD is zero for x and y that are not equal, provided x and y always occur together on the same web page.
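The Shakespeare–Macbeth example above can be reproduced directly from the definition. The sketch below implements the NGD formula as given here; the hit counts are the ones quoted in the text, and the choice of natural logarithms is immaterial because the ratio is independent of the logarithm base.

```python
from math import log

def ngd(f_x, f_y, f_xy, n):
    """Normalized Google distance from hit counts f(x), f(y), f(x, y) and corpus size N."""
    num = max(log(f_x), log(f_y)) - log(f_xy)
    den = log(n) - min(log(f_x), log(f_y))
    return num / den

# Hit counts from the example in the text (9 April 2013).
f_shakespeare = 130_000_000
f_macbeth = 26_000_000
f_both = 20_800_000
N = 25_270_000_000 * 1_000   # pages indexed, times ~1,000 search terms per page

print(f"NGD(Shakespeare, Macbeth) = {ngd(f_shakespeare, f_macbeth, f_both, N):.2f}")
# prints about 0.13, i.e. the two terms are semantically close
```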
From the NGD formula we see that it is symmetric. The triangle property is not satisfied by the NGD. However, these results are theoretical. It is hard to come up with practical examples on the World Wide Web, using Google, that violate the triangle property. Applications Applications to colors versus numbers, primes versus non-primes, and so on have been reported, as well as a randomized massive experiment using WordNet categories. In the primes versus non-primes case and the WordNet experiment, the NGD method is augmented with a support vector machine classifier. The experiments consist of 25 positive examples and 25 negative ones. The WordNet experiment consisted of 100 random WordNet categories. The NGD method had a success rate of 87.25%. The mean is 0.8725 while the standard deviation was 0.1169. These rates are in approximate agreement with the WordNet categories, which represent the knowledge of the researchers with PhDs who entered them. It is rare to see agreement less than 75%. References Further reading (Includes comparison of NGD to other algorithms.) (the use of NGD for term clustering) Computational linguistics Statistical distance
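As a worked illustration of the NGD formula and the 2013 Shakespeare/Macbeth example given above, here is a minimal Python sketch. The hit counts are the ones quoted in the article; the function and variable names are invented for illustration only.

```python
import math

def ngd(fx: float, fy: float, fxy: float, n: float) -> float:
    """Normalized Google distance from raw hit counts.

    fx, fy -- number of hits for the two search terms
    fxy    -- number of hits for both terms together
    n      -- total indexed pages times average search terms per page
    """
    log_fx, log_fy, log_fxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(log_fx, log_fy) - log_fxy) / (math.log(n) - min(log_fx, log_fy))

# Hit counts from the 9 April 2013 example in the article.
f_shakespeare = 130_000_000
f_macbeth = 26_000_000
f_both = 20_800_000
N = 25_270_000_000 * 1_000  # pages indexed times ~1,000 terms per page

print(round(ngd(f_shakespeare, f_macbeth, f_both, N), 2))  # prints roughly 0.13
```

Because the formula is a ratio of differences of logarithms, the choice of logarithm base cancels out, so math.log (natural log) gives the same result as base-2 or base-10 logs.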
Normalized Google distance
Physics,Technology
1,013
488,743
https://en.wikipedia.org/wiki/Carbon%20monoxide%20poisoning
Carbon monoxide poisoning typically occurs from breathing in carbon monoxide (CO) at excessive levels. Symptoms are often described as "flu-like" and commonly include headache, dizziness, weakness, vomiting, chest pain, and confusion. Large exposures can result in loss of consciousness, arrhythmias, seizures, or death. The classically described "cherry red skin" rarely occurs. Long-term complications may include chronic fatigue, trouble with memory, and movement problems. CO is a colorless and odorless gas which is initially non-irritating. It is produced during incomplete burning of organic matter. This can occur from motor vehicles, heaters, or cooking equipment that run on carbon-based fuels. Carbon monoxide primarily causes adverse effects by combining with hemoglobin to form carboxyhemoglobin (symbol COHb or HbCO) preventing the blood from carrying oxygen and expelling carbon dioxide as carbaminohemoglobin. Additionally, many other hemoproteins such as myoglobin, Cytochrome P450, and mitochondrial cytochrome oxidase are affected, along with other metallic and non-metallic cellular targets. Diagnosis is typically based on a HbCO level of more than 3% among nonsmokers and more than 10% among smokers. The biological threshold for carboxyhemoglobin tolerance is typically accepted to be 15% COHb, meaning toxicity is consistently observed at levels in excess of this concentration. The FDA has previously set a threshold of 14% COHb in certain clinical trials evaluating the therapeutic potential of carbon monoxide. In general, 30% COHb is considered severe carbon monoxide poisoning. The highest reported non-fatal carboxyhemoglobin level was 73% COHb. Efforts to prevent poisoning include carbon monoxide detectors, proper venting of gas appliances, keeping chimneys clean, and keeping exhaust systems of vehicles in good repair. Treatment of poisoning generally consists of giving 100% oxygen along with supportive care. This procedure is often carried out until symptoms are absent and the HbCO level is less than 3%/10%. Carbon monoxide poisoning is relatively common, resulting in more than 20,000 emergency room visits a year in the United States. It is the most common type of fatal poisoning in many countries. In the United States, non-fire related cases result in more than 400 deaths a year. Poisonings occur more often in the winter, particularly from the use of portable generators during power outages. The toxic effects of CO have been known since ancient history. The discovery that hemoglobin is affected by CO emerged with an investigation by James Watt and Thomas Beddoes into the therapeutic potential of hydrocarbonate in 1793, and later confirmed by Claude Bernard between 1846 and 1857. Background Carbon monoxide is not toxic to all forms of life, and the toxicity is a classical dose-dependent example of hormesis. Small amounts of carbon monoxide are naturally produced through many enzymatic and non-enzymatic reactions across phylogenetic kingdoms where it can serve as an important neurotransmitter (subcategorized as a gasotransmitter) and a potential therapeutic agent. In the case of prokaryotes, some bacteria produce, consume and respond to carbon monoxide whereas certain other microbes are susceptible to its toxicity. Currently, there are no known adverse effects on photosynthesizing plants. 
The harmful effects of carbon monoxide are generally considered to result from its tight binding to the prosthetic heme moiety of hemoproteins, which interferes with cellular operations; for example, carbon monoxide binds with hemoglobin to form carboxyhemoglobin, which affects gas exchange and cellular respiration. Inhaling excessive concentrations of the gas can lead to hypoxic injury, nervous system damage, and even death. As research pioneered by Esther Killick showed, different species and different people across diverse demographics may have different carbon monoxide tolerance levels. The carbon monoxide tolerance level for any person is altered by several factors, including genetics (hemoglobin mutations), behavior such as activity level, rate of ventilation, a pre-existing cerebral or cardiovascular disease, cardiac output, anemia, sickle cell disease and other hematological disorders, geography and barometric pressure, and metabolic rate. Physiology Carbon monoxide is produced naturally by many physiologically relevant enzymatic and non-enzymatic reactions, best exemplified by heme oxygenase catalyzing the biotransformation of heme (an iron protoporphyrin) into biliverdin and eventually bilirubin. Aside from physiological signaling, most carbon monoxide is stored as carboxyhemoglobin at non-toxic levels below 3% HbCO. Therapeutics Small amounts of CO are beneficial, and enzymes exist that produce it at times of oxidative stress. A variety of drugs are being developed to introduce small amounts of CO; these drugs are commonly called carbon monoxide-releasing molecules. Historically, the therapeutic potential of factitious airs, notably carbon monoxide as hydrocarbonate, was investigated by Thomas Beddoes, James Watt, Tiberius Cavallo, James Lind, Humphry Davy, and others in many labs such as the Pneumatic Institution. Signs and symptoms On average, exposure to 100 ppm or greater is dangerous to human health. The WHO-recommended limit for indoor CO exposure averaged over 24 hours is 4 mg/m3. Acute exposure should not exceed 10 mg/m3 over 8 hours, 35 mg/m3 over one hour, and 100 mg/m3 over 15 minutes. Acute poisoning The main manifestations of carbon monoxide poisoning develop in the organ systems most dependent on oxygen use, the central nervous system and the heart. The initial symptoms of acute carbon monoxide poisoning include headache, nausea, malaise, and fatigue. These symptoms are often mistaken for a virus such as influenza or other illnesses such as food poisoning or gastroenteritis. Headache is the most common symptom of acute carbon monoxide poisoning; it is often described as dull, frontal, and continuous. Increasing exposure produces cardiac abnormalities including fast heart rate, low blood pressure, and cardiac arrhythmia; central nervous system symptoms include delirium, hallucinations, dizziness, unsteady gait, confusion, seizures, central nervous system depression, unconsciousness, respiratory arrest, and death. Less common symptoms of acute carbon monoxide poisoning include myocardial ischemia, atrial fibrillation, pneumonia, pulmonary edema, high blood sugar, lactic acidosis, muscle necrosis, acute kidney failure, skin lesions, and visual and auditory problems. Carbon monoxide exposure may lead to a significantly shorter life span due to heart damage. One of the major concerns following acute carbon monoxide poisoning is the risk of severe delayed neurological manifestations. 
Problems may include difficulty with higher intellectual functions, short-term memory loss, dementia, amnesia, psychosis, irritability, a strange gait, speech disturbances, Parkinson's disease-like syndromes, cortical blindness, and a depressed mood. Depression may occur in those who did not have pre-existing depression. These delayed neurological sequelae may occur in up to 50% of poisoned people after 2 to 40 days. It is difficult to predict who will develop delayed sequelae; however, advanced age, loss of consciousness while poisoned, and initial neurological abnormalities may increase the chance of developing delayed symptoms. Chronic poisoning Chronic exposure to relatively low levels of carbon monoxide may cause persistent headaches, lightheadedness, depression, confusion, memory loss, nausea, hearing disorders and vomiting. It is unknown whether low-level chronic exposure may cause permanent neurological damage. Typically, upon removal from exposure to carbon monoxide, symptoms usually resolve themselves, unless there has been an episode of severe acute poisoning. However, one case noted permanent memory loss and learning problems after a three-year exposure to relatively low levels of carbon monoxide from a faulty furnace. Chronic exposure may worsen cardiovascular symptoms in some people. Chronic carbon monoxide exposure might increase the risk of developing atherosclerosis. Long-term exposures to carbon monoxide present the greatest risk to persons with coronary heart disease and in females who are pregnant. In experimental animals, carbon monoxide appears to worsen noise-induced hearing loss at noise exposure conditions that would have limited effects on hearing otherwise. In humans, hearing loss has been reported following carbon monoxide poisoning. Unlike the findings in animal studies, noise exposure was not a necessary factor for the auditory problems to occur. Fatal poisoning One classic sign of carbon monoxide poisoning is more often seen in the dead rather than the living – people have been described as looking red-cheeked and healthy. However, since this "cherry-red" appearance is more common in the dead, it is not considered a useful diagnostic sign in clinical medicine. In autopsy examinations, the appearance of carbon monoxide poisoning is notable because unembalmed dead persons are normally bluish and pale, whereas dead carbon-monoxide poisoned people may appear unusually lifelike in coloration. The colorant effect of carbon monoxide in such postmortem circumstances is thus analogous to its use as a red colorant in the commercial meat-packing industry. Epidemiology The true number of cases of carbon monoxide poisoning is unknown, since many non-lethal exposures go undetected. From the available data, carbon monoxide poisoning is the most common cause of injury and death due to poisoning worldwide. Poisoning is typically more common during the winter months. This is due to increased domestic use of gas furnaces, gas or kerosene space heaters, and kitchen stoves during the winter months, which if faulty and/or used without adequate ventilation, may produce excessive carbon monoxide. Carbon monoxide detection and poisoning also increases during power outages, when electric heating and cooking appliances become inoperative and residents may temporarily resort to fuel-burning space heaters, stoves, and grills (some of which are safe only for outdoor use but nonetheless are errantly burned indoors). 
It has been estimated that more than 40,000 people per year seek medical attention for carbon monoxide poisoning in the United States. 95% of carbon monoxide poisoning deaths in Australia are due to gas space heaters. In many industrialized countries, carbon monoxide is the cause of more than 50% of fatal poisonings. In the United States, approximately 200 people die each year from carbon monoxide poisoning associated with home fuel-burning heating equipment. Carbon monoxide poisoning contributes to the approximately 5,613 smoke inhalation deaths each year in the United States. The CDC reports, "Each year, more than 500 Americans die from unintentional carbon monoxide poisoning, and more than 2,000 commit suicide by intentionally poisoning themselves." For the 10-year period from 1979 to 1988, 56,133 deaths from carbon monoxide poisoning occurred in the United States, with 25,889 of those being suicides, leaving 30,244 unintentional deaths. A report from New Zealand showed that 206 people died from carbon monoxide poisoning in the years of 2001 and 2002. In total carbon monoxide poisoning was responsible for 43.9% of deaths by poisoning in that country. In South Korea, 1,950 people had been poisoned by carbon monoxide with 254 deaths from 2001 through 2003. A report from Jerusalem showed 3.53 per 100,000 people were poisoned annually from 2001 through 2006. In Hubei, China, 218 deaths from poisoning were reported over a 10-year period with 16.5% being from carbon monoxide exposure. Causes Carbon monoxide is a product of combustion of organic matter under conditions of restricted oxygen supply, which prevents complete oxidation to carbon dioxide (CO2). Sources of carbon monoxide include cigarette smoke, house fires, faulty furnaces, heaters, wood-burning stoves, internal combustion vehicle exhaust, electrical generators, propane-fueled equipment such as portable stoves, and gasoline-powered tools such as leaf blowers, lawn mowers, high-pressure washers, concrete cutting saws, power trowels, and welders. Exposure typically occurs when equipment is used in buildings or semi-enclosed spaces. Riding in the back of pickup trucks has led to poisoning in children. Idling automobiles with the exhaust pipe blocked by snow has led to the poisoning of car occupants. Any perforation between the exhaust manifold and shroud can result in exhaust gases reaching the cabin. Generators and propulsion engines on boats, notably houseboats, have resulted in fatal carbon monoxide exposures. Poisoning may also occur following the use of a self-contained underwater breathing apparatus (SCUBA) due to faulty diving air compressors. In caves carbon monoxide can build up in enclosed chambers due to the presence of decomposing organic matter. In coal mines incomplete combustion may occur during explosions resulting in the production of afterdamp. The gas is up to 3% CO and may be fatal after just a single breath. Following an explosion in a colliery, adjacent interconnected mines may become dangerous due to the afterdamp leaking from mine to mine. Such an incident followed the Trimdon Grange explosion which killed men in the Kelloe mine. Another source of poisoning is exposure to the organic solvent dichloromethane, also known as methylene chloride, found in some paint strippers, as the metabolism of dichloromethane produces carbon monoxide. In November 2019, an EPA ban on dichloromethane in paint strippers for consumer use took effect in the United States. 
Prevention Detectors Prevention remains a vital public health issue, requiring public education on the safe operation of appliances, heaters, fireplaces, and internal-combustion engines, as well as increased emphasis on the installation of carbon monoxide detectors. Carbon monoxide is tasteless, odourless, and colourless, and therefore can not be detected by visual cues or smell. The United States Consumer Product Safety Commission has stated, "carbon monoxide detectors are as important to home safety as smoke detectors are," and recommends each home have at least one carbon monoxide detector, and preferably one on each level of the building. These devices, which are relatively inexpensive and widely available, are either battery- or AC-powered, with or without battery backup. In buildings, carbon monoxide detectors are usually installed around heaters and other equipment. If a relatively high level of carbon monoxide is detected, the device sounds an alarm, giving people the chance to evacuate and ventilate the building. Unlike smoke detectors, carbon monoxide detectors do not need to be placed near ceiling level. The use of carbon monoxide detectors has been standardized in many areas. In the US, NFPA 720–2009, the carbon monoxide detector guidelines published by the National Fire Protection Association, mandates the placement of carbon monoxide detectors/alarms on every level of the residence, including the basement, in addition to outside sleeping areas. In new homes, AC-powered detectors must have battery backup and be interconnected to ensure early warning of occupants at all levels. NFPA 720-2009 is the first national carbon monoxide standard to address devices in non-residential buildings. These guidelines, which now pertain to schools, healthcare centers, nursing homes, and other non-residential buildings, include three main points: 1. A secondary power supply (battery backup) must operate all carbon monoxide notification appliances for at least 12 hours, 2. Detectors must be on the ceiling in the same room as permanently installed fuel-burning appliances, and 3. Detectors must be located on every habitable level and in every HVAC zone of the building. Gas organizations will often recommend getting gas appliances serviced at least once a year. Legal requirements The NFPA standard is not necessarily enforced by law. As of April 2006, the US state of Massachusetts requires detectors to be present in all residences with potential CO sources, regardless of building age and whether they are owner-occupied or rented. This is enforced by municipal inspectors and was inspired by the death of 7-year-old Nicole Garofalo in 2005 due to snow blocking a home heating vent. Other jurisdictions may have no requirement or only mandate detectors for new construction or at time of sale. World Health Organization recommendations The following guideline values (ppm values rounded) and periods of time-weighted average exposures have been determined in such a way that the carboxyhemoglobin (COHb) level of 2.5% is not exceeded, even when a normal subject engages in light or moderate exercise: 100 mg/m3 (87 ppm) for 15 min 60 mg/m3 (52 ppm) for 30 min 30 mg/m3 (26 ppm) for 1 h 10 mg/m3 (9 ppm) for 8 h 7 mg/m3 (6 ppm) for 24 h (for indoor air quality, so as not to exceed 2% COHb for chronic exposure) Diagnosis As many symptoms of carbon monoxide poisoning also occur with many other types of poisonings and infections (such as the flu), the diagnosis is often difficult. 
A history of potential carbon monoxide exposure, such as being exposed to a residential fire, may suggest poisoning, but the diagnosis is confirmed by measuring the levels of carbon monoxide in the blood. This can be determined by measuring the amount of carboxyhemoglobin compared to the amount of hemoglobin in the blood. The ratio of carboxyhemoglobin to hemoglobin molecules in an average person may be up to 5%, although cigarette smokers who smoke two packs per day may have levels up to 9%. In symptomatic poisoned people they are often in the 10–30% range, while persons who die may have postmortem blood levels of 30–90%. As people may continue to experience significant symptoms of CO poisoning long after their blood carboxyhemoglobin concentration has returned to normal, presenting to examination with a normal carboxyhemoglobin level (which may happen in late states of poisoning) does not rule out poisoning. Measuring Carbon monoxide may be quantitated in blood using spectrophotometric methods or chromatographic techniques in order to confirm a diagnosis of poisoning in a person or to assist in the forensic investigation of a case of fatal exposure. A CO-oximeter can be used to determine carboxyhemoglobin levels. Pulse CO-oximeters estimate carboxyhemoglobin with a non-invasive finger clip similar to a pulse oximeter. These devices function by passing various wavelengths of light through the fingertip and measuring the light absorption of the different types of hemoglobin in the capillaries. The use of a regular pulse oximeter is not effective in the diagnosis of carbon monoxide poisoning as these devices may be unable to distinguish carboxyhemoglobin from oxyhemoglobin. Breath CO monitoring offers an alternative to pulse CO-oximetry. Carboxyhemoglobin levels have been shown to have a strong correlation with breath CO concentration. However, many of these devices require the user to inhale deeply and hold their breath to allow the CO in the blood to escape into the lung before the measurement can be made. As this is not possible in people who are unresponsive, these devices may not appropriate for use in on-scene emergency care detection of CO poisoning. Differential diagnosis There are many conditions to be considered in the differential diagnosis of carbon monoxide poisoning. The earliest symptoms, especially from low level exposures, are often non-specific and readily confused with other illnesses, typically flu-like viral syndromes, depression, chronic fatigue syndrome, chest pain, and migraine or other headaches. Carbon monoxide has been called a "great mimicker" due to the presentation of poisoning being diverse and nonspecific. Other conditions included in the differential diagnosis include acute respiratory distress syndrome, altitude sickness, lactic acidosis, diabetic ketoacidosis, meningitis, methemoglobinemia, or opioid or toxic alcohol poisoning. Treatment Initial treatment for carbon monoxide poisoning is to immediately remove the person from the exposure without endangering further people. Those who are unconscious may require CPR on site. Administering oxygen via non-rebreather mask shortens the half-life of carbon monoxide from 320 minutes, when breathing normal air, to only 80 minutes. Oxygen hastens the dissociation of carbon monoxide from carboxyhemoglobin, thus turning it back into hemoglobin. Due to the possible severe effects in the baby, pregnant women are treated with oxygen for longer periods of time than non-pregnant people. 
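To make the half-life figures above concrete, the sketch below models carboxyhemoglobin washout as simple first-order (exponential) decay using the half-lives stated in the article (about 320 minutes on room air, about 80 minutes on 100% oxygen via non-rebreather mask). This is a deliberate simplification for illustration only; the starting COHb level and time points are hypothetical, not clinical guidance.

```python
import math

def cohb_after(initial_percent: float, minutes: float, half_life_min: float) -> float:
    """COHb level after a given time, assuming first-order elimination."""
    return initial_percent * 0.5 ** (minutes / half_life_min)

HALF_LIFE_MIN = {"room air": 320.0, "100% oxygen (non-rebreather)": 80.0}

start = 25.0  # hypothetical starting COHb level, in percent
for label, t_half in HALF_LIFE_MIN.items():
    # time for the level to fall below the 3% threshold mentioned in the article
    t_below_3 = t_half * math.log2(start / 3.0)
    print(f"{label}: {cohb_after(start, 120, t_half):.1f}% after 2 h; "
          f"below 3% after about {t_below_3 / 60:.1f} h")
```

Under these assumptions, the same hypothetical 25% starting level takes roughly four times longer to clear on room air than on 100% oxygen, which is the point of the half-life comparison in the text.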
Hyperbaric oxygen Hyperbaric oxygen is also used in the treatment of carbon monoxide poisoning, as it may hasten dissociation of CO from carboxyhemoglobin and cytochrome oxidase to a greater extent than normal oxygen. Hyperbaric oxygen at three times atmospheric pressure reduces the half life of carbon monoxide to 23 minutes, compared to 80 minutes for oxygen at regular atmospheric pressure. It may also enhance oxygen transport to the tissues by plasma, partially bypassing the normal transfer through hemoglobin. However, it is controversial whether hyperbaric oxygen actually offers any extra benefits over normal high flow oxygen, in terms of increased survival or improved long-term outcomes. There have been randomized controlled trials in which the two treatment options have been compared; of the six performed, four found hyperbaric oxygen improved outcome and two found no benefit for hyperbaric oxygen. Some of these trials have been criticized for apparent flaws in their implementation. A review of all the literature concluded that the role of hyperbaric oxygen is unclear and the available evidence neither confirms nor denies a medically meaningful benefit. The authors suggested a large, well designed, externally audited, multicentre trial to compare normal oxygen with hyperbaric oxygen. While hyperbaric oxygen therapy is used for severe poisonings, the benefit over standard oxygen delivery is unclear. Other Further treatment for other complications such as seizure, hypotension, cardiac abnormalities, pulmonary edema, and acidosis may be required. Hypotension requires treatment with intravenous fluids; vasopressors may be required to treat myocardial depression. Cardiac dysrhythmias are treated with standard advanced cardiac life support protocols. If severe, metabolic acidosis is treated with sodium bicarbonate. Treatment with sodium bicarbonate is controversial as acidosis may increase tissue oxygen availability. Treatment of acidosis may only need to consist of oxygen therapy. The delayed development of neuropsychiatric impairment is one of the most serious complications of carbon monoxide poisoning. Brain damage is confirmed following MRI or CAT scans. Extensive follow up and supportive treatment is often required for delayed neurological damage. Outcomes are often difficult to predict following poisoning, especially people who have symptoms of cardiac arrest, coma, metabolic acidosis, or have high carboxyhemoglobin levels. One study reported that approximately 30% of people with severe carbon monoxide poisoning will have a fatal outcome. It has been reported that electroconvulsive therapy (ECT) may increase the likelihood of delayed neuropsychiatric sequelae (DNS) after carbon monoxide (CO) poisoning. A device that also provides some carbon dioxide to stimulate faster breathing (sold under the brand name ClearMate) may also be used. Pathophysiology The precise mechanisms by which the effects of carbon monoxide are induced upon bodily systems are complex and not yet fully understood. Known mechanisms include carbon monoxide binding to hemoglobin, myoglobin and mitochondrial cytochrome c oxidase and restricting oxygen supply, and carbon monoxide causing brain lipid peroxidation. Hemoglobin Carbon monoxide has a higher diffusion coefficient compared to oxygen, and the main enzyme in the human body that produces carbon monoxide is heme oxygenase, which is located in nearly all cells and platelets. Most endogenously produced CO is stored bound to hemoglobin as carboxyhemoglobin. 
The simplistic understanding for the mechanism of carbon monoxide toxicity is based on excess carboxyhemoglobin decreasing the oxygen-delivery capacity of the blood to tissues throughout the body. In humans, the affinity between hemoglobin and carbon monoxide is approximately 240 times stronger than the affinity between hemoglobin and oxygen. However, certain mutations such as the Hb-Kirklareli mutation has a relative 80,000 times greater affinity for carbon monoxide than oxygen resulting in systemic carboxyhemoglobin reaching a sustained level of 16% COHb. Hemoglobin is a tetramer with four prosthetic heme groups to serve as oxygen binding sites. The average red blood cell contains 250 million hemoglobin molecules, therefore 1 billion heme sites capable of binding gas. The binding of carbon monoxide at any one of these sites increases the oxygen affinity of the remaining three sites, which causes the hemoglobin molecule to retain oxygen that would otherwise be delivered to the tissue; therefore carbon monoxide binding at any site may be as dangerous as carbon monoxide binding to all sites. Delivery of oxygen is largely driven by the Bohr effect and Haldane effect. To provide a simplified synopsis of the molecular mechanism of systemic gas exchange in layman's terms, upon inhalation of air it was widely thought oxygen binding to any of the heme sites triggers a conformational change in the globin/protein unit of hemoglobin which then enables the binding of additional oxygen to each of the other vacant heme sites. Upon arrival to the cell/tissues, oxygen release into the tissue is driven by "acidification" of the local pH (meaning a relatively higher concentration of 'acidic' protons/hydrogen ions) caused by an increase in the biotransformation of carbon dioxide waste into carbonic acid via carbonic anhydrase. In other words, oxygenated arterial blood arrives at cells in the "hemoglobin R-state" which has deprotonated/unionized amino acid residues (regarding nitrogen/amines) due to the less-acidic arterial pH environment (arterial blood averages pH 7.407 whereas venous blood is slightly more acidic at pH 7.371). The "T-state" of hemoglobin is deoxygenated in venous blood partially due to protonation/ionization caused by the acidic environment hence causing a conformation unsuited for oxygen-binding (in other words, oxygen is 'ejected' upon arrival to the cell because acid "attacks" the amines of hemoglobin causing ionization/protonation of the amine residues resulting in a conformation change unsuited for retaining oxygen). Furthermore, the mechanism for formation of carbaminohemoglobin generates additional 'acidic' hydrogen ions that may further stabilize the protonated/ionized deoxygenated hemoglobin. Upon return of venous blood into the lung and subsequent exhalation of carbon dioxide, the blood is "de-acidified" (see also: hyperventilation) allowing for the deprotonation/unionization of hemoglobin to then re-enable oxygen-binding as part of the transition to arterial blood (note this process is complex due to involvement of chemoreceptors and other physiological functionalities). Carbon monoxide is not 'ejected' due to acid, therefore carbon monoxide poisoning disturbs this physiological process hence the venous blood of poisoning patients is bright red akin to arterial blood since the carbonyl/carbon monoxide is retained. 
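As a rough back-of-the-envelope illustration of the roughly 240-fold affinity ratio quoted above, the sketch below uses a simple Haldane-type equilibrium relation, COHb/O2Hb ≈ M × (pCO/pO2). This is a deliberately simplified model: it ignores ventilation, endogenous CO production, and the difference between ambient and alveolar oxygen pressure, and the CO concentrations chosen are hypothetical.

```python
M = 240        # approximate affinity ratio of hemoglobin for CO versus O2 (from the text)
P_O2 = 0.21    # oxygen fraction of ambient air, in atmospheres (simplifying assumption)

def equilibrium_cohb_fraction(p_co_ppm: float) -> float:
    """Equilibrium COHb fraction from the relation COHb/O2Hb = M * pCO/pO2."""
    ratio = M * (p_co_ppm * 1e-6) / P_O2
    return ratio / (1.0 + ratio)

for ppm in (10, 35, 100, 400):  # hypothetical ambient CO levels
    print(f"{ppm:4d} ppm CO -> ~{equilibrium_cohb_fraction(ppm) * 100:.1f}% COHb at equilibrium")
```

Even this crude estimate shows why small ambient CO concentrations matter: a few hundred parts per million, left long enough to equilibrate, drives COHb into the severely toxic range.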
Hemoglobin is dark in deoxygenated venous blood, but it has a bright red color when carrying blood in oxygenated arterial blood and when converted into carboxyhemoglobin in both arterial and venous blood, so poisoned cadavers and even commercial meats treated with carbon monoxide acquire an unnatural lively reddish hue. At toxic concentrations, carbon monoxide as carboxyhemoglobin significantly interferes with respiration and gas exchange by simultaneously inhibiting acquisition and delivery of oxygen to cells and preventing formation of carbaminohemoglobin which accounts for approximately 30% of carbon dioxide exportation. Therefore, a patient with carbon monoxide poisoning may experience severe hypoxia and acidosis (potentially both respiratory acidosis and metabolic acidosis) in addition to the toxicities of excess carbon monoxide inhibiting numerous hemoproteins, metallic and non-metallic targets which affect cellular machinery. Myoglobin Carbon monoxide also binds to the hemeprotein myoglobin. It has a high affinity for myoglobin, about 60 times greater than that of oxygen. Carbon monoxide bound to myoglobin may impair its ability to utilize oxygen. This causes reduced cardiac output and hypotension, which may result in brain ischemia. A delayed return of symptoms have been reported. This results following a recurrence of increased carboxyhemoglobin levels; this effect may be due to a late release of carbon monoxide from myoglobin, which subsequently binds to hemoglobin. Cytochrome oxidase Another mechanism involves effects on the mitochondrial respiratory enzyme chain that is responsible for effective tissue utilization of oxygen. Carbon monoxide binds to cytochrome oxidase with less affinity than oxygen, so it is possible that it requires significant intracellular hypoxia before binding. This binding interferes with aerobic metabolism and efficient adenosine triphosphate synthesis. Cells respond by switching to anaerobic metabolism, causing anoxia, lactic acidosis, and eventual cell death. The rate of dissociation between carbon monoxide and cytochrome oxidase is slow, causing a relatively prolonged impairment of oxidative metabolism. Central nervous system effects The mechanism that is thought to have a significant influence on delayed effects involves formed blood cells and chemical mediators, which cause brain lipid peroxidation (degradation of unsaturated fatty acids). Carbon monoxide causes endothelial cell and platelet release of nitric oxide, and the formation of oxygen free radicals including peroxynitrite. In the brain this causes further mitochondrial dysfunction, capillary leakage, leukocyte sequestration, and apoptosis. The result of these effects is lipid peroxidation, which causes delayed reversible demyelination of white matter in the central nervous system known as Grinker myelinopathy, which can lead to edema and necrosis within the brain. This brain damage occurs mainly during the recovery period. This may result in cognitive defects, especially affecting memory and learning, and movement disorders. These disorders are typically related to damage to the cerebral white matter and basal ganglia. Hallmark pathological changes following poisoning are bilateral necrosis of the white matter, globus pallidus, cerebellum, hippocampus and the cerebral cortex. Pregnancy Carbon monoxide poisoning in pregnant women may cause severe adverse fetal effects. Poisoning causes fetal tissue hypoxia by decreasing the release of maternal oxygen to the fetus. 
Carbon monoxide also crosses the placenta and combines with fetal hemoglobin, causing more direct fetal tissue hypoxia. Additionally, fetal hemoglobin has a 10 to 15% higher affinity for carbon monoxide than adult hemoglobin, causing more severe poisoning in the fetus than in the adult. Elimination of carbon monoxide is slower in the fetus, leading to an accumulation of the toxic chemical. The level of fetal morbidity and mortality in acute carbon monoxide poisoning is significant, so despite mild maternal poisoning or following maternal recovery, severe fetal poisoning or death may still occur. History Humans have maintained a complex relationship with carbon monoxide since first learning to control fire circa 800,000 BC. Primitive cavemen probably discovered the toxicity of carbon monoxide upon introducing fire into their dwellings. The early development of metallurgy and smelting technologies emerging circa 6,000 BC through the Bronze Age likewise plagued humankind with carbon monoxide exposure. Apart from the toxicity of carbon monoxide, indigenous Native Americans may have experienced the neuroactive properties of carbon monoxide through shamanistic fireside rituals. Early civilizations developed mythological tales to explain the origin of fire, such as Vulcan, Pkharmat, and Prometheus from Greek mythology who shared fire with humans. Aristotle (384–322 BC) first recorded that burning coals produced toxic fumes. Greek physician Galen (129–199 AD) speculated that there was a change in the composition of the air that caused harm when inhaled, and symptoms of CO poisoning appeared in Cassius Iatrosophista's Quaestiones Medicae et Problemata Naturalia circa 130 AD. Julian the Apostate, Caelius Aurelianus, and several others similarly documented early knowledge of the toxicity symptoms of carbon monoxide poisoning as caused by coal fumes in the ancient era. Documented cases by Livy and Cicero allude to carbon monoxide being used as a method of suicide in ancient Rome. Emperor Lucius Verus used smoke to execute prisoners. Many deaths have been linked to carbon monoxide poisoning including Emperor Jovian, Empress Fausta, and Seneca. The most high-profile death by carbon monoxide poisoning may possibly have been Cleopatra or Edgar Allan Poe. In the fifteenth century, coal miners believed sudden death was caused by evil spirits; carbon monoxide poisoning has been linked to supernatural and paranormal experiences, witchcraft, etc. throughout the following centuries including in the modern present day exemplified by Carrie Poppy's investigations. Georg Ernst Stahl mentioned carbonarii halitus in 1697 in reference to toxic vapors thought to be carbon monoxide. Friedrich Hoffmann conducted the first modern scientific investigation into carbon monoxide poisoning from coal in 1716, notably rejecting villagers attributing death to demonic superstition. Herman Boerhaave conducted the first scientific experiments on the effect of carbon monoxide (coal fumes) on animals in the 1730s. Joseph Priestley is credited with first synthesizing carbon monoxide in 1772 which he had called heavy inflammable air, and Carl Wilhelm Scheele isolated carbon monoxide from coal in 1773 suggesting it to be the toxic entity. The dose-dependent risk of carbon monoxide poisoning as hydrocarbonate was investigated in the late 1790s by Thomas Beddoes, James Watt, Tiberius Cavallo, James Lind, Humphry Davy, and many others in the context of inhalation of factitious airs, much of which occurred at the Pneumatic Institution. 
William Cruickshank discovered carbon monoxide as a molecule containing one carbon and one oxygen atom in 1800, thereby initiating the modern era of research exclusively focused on carbon monoxide. The mechanism for toxicity was first suggested by James Watt in 1793, followed by Adrien Chenot in 1854 and finally demonstrated by Claude Bernard after 1846 as published in 1857 and also independently published by Felix Hoppe-Seyler in the same year. The first controlled clinical trial studying the toxicity of carbon monoxide occurred in 1973. Historical detection Carbon monoxide poisoning has plagued coal miners for many centuries. In the context of mining, carbon monoxide is widely known as whitedamp. John Scott Haldane identified carbon monoxide as the lethal constituent of afterdamp, the gas created by combustion, after examining many bodies of miners killed in pit explosions. By 1911, Haldane introduced the use of small animals for miners to detect dangerous levels of carbon monoxide underground, either white mice or canaries which have little tolerance for carbon monoxide thereby offering an early warning, i.e. canary in a coal mine. The canary in British pits was replaced in 1986 by the electronic gas detector. The first qualitative analytical method to detect carboxyhemoglobin emerged in 1858 with a colorimetric method developed by Felix Hoppe-Seyler, and the first quantitative analysis method emerged in 1880 with Josef von Fodor. Historical treatment The use of oxygen emerged with anecdotal reports such as Humphry Davy having been treated with oxygen in 1799 upon inhaling three quarts of hydrocarbonate (water gas). Samuel Witter developed an oxygen inhalation protocol in response to carbon monoxide poisoning in 1814. Similarly, an oxygen inhalation protocol was recommend for malaria (literally translated to "bad air") in 1830 based on malaria symptoms aligning with carbon monoxide poisoning. Other oxygen protocols emerged in the late 1800s. The use of hyperbaric oxygen in rats following poisoning was studied by Haldane in 1895 while its use in humans began in the 1960s. Incidents The worst accidental mass poisoning from carbon monoxide was the Balvano train disaster which occurred on 3 March 1944 in Italy, when a freight train with many illegal passengers stalled in a tunnel, leading to the death of over 500 people. Over 50 people are suspected to have died from smoke inhalation as a result of the Branch Davidian Massacre during the Waco siege in 1993. On 14 December 2024 12 individuals died by carbon monoxide poisoning in Gudauri (Georgia) as electric generators using fuel oil were placed in a closed area near their rooms. Weaponization In ancient history, Hannibal executed Roman prisoners with coal fumes during the Second Punic War. The extermination of stray dogs by a carbon monoxide gas chamber was described in 1874. In 1884, an article appeared in Scientific American describing the use of a carbon monoxide gas chamber for slaughterhouse operations as well as euthanizing a variety of animals. As part of the Holocaust during World War II, the Nazis used gas vans at Chelmno extermination camp and elsewhere to murder an estimated 700,000 or more people by carbon monoxide poisoning. This method was also used in the gas chambers of several death camps such as Treblinka, Sobibor, and Belzec. Gassing with carbon monoxide started in Action T4. 
The gas was supplied by IG Farben in pressurized cylinders and fed by tubes into the gas chambers built at various mental hospitals, such as Hartheim Euthanasia Centre. Exhaust fumes from tank engines, for example, were used to supply the gas to the chambers. References External links Centers for Disease Control and Prevention (CDC) – Carbon Monoxide – NIOSH Workplace Safety and Health Topic International Programme on Chemical Safety (1999). Carbon Monoxide, Environmental Health Criteria 213, Geneva: WHO Poisoning Accidents Industrial hygiene Medical emergencies Natural gas safety Suicide by poison Toxic effects of substances chiefly nonmedicinal as to source Wikipedia medicine articles ready to translate Wikipedia emergency medicine articles ready to translate
Carbon monoxide poisoning
Chemistry,Environmental_science
8,008
11,322,757
https://en.wikipedia.org/wiki/Corticium%20theae
Corticium theae is a fungus that is a plant pathogen. References Further reading External links Index Fungorum USDA ARS Fungal Database Fungal plant pathogens and diseases Corticiales Fungus species
Corticium theae
Biology
42
76,172,865
https://en.wikipedia.org/wiki/Advanced%20Micro-Fabrication%20Equipment
Advanced Micro-Fabrication Equipment (AMEC) is a partially state-owned, publicly listed Chinese company that manufactures semiconductor chip production equipment. It is currently one of the largest semiconductor equipment manufacturers in China. History AMEC was co-founded in May 2004 by Gerald Yin, who had previously worked at Applied Materials and Lam Research. Yin, who was 60 at the time and thinking of retirement, was invited by a middle school classmate, by then a government official in China, to come home. Yin took a group of 15 employees with him to Shanghai to establish AMEC. Early investors in AMEC included Qualcomm and Samsung. Within three years, AMEC developed its own etching machine. In 2007, Applied Materials filed a lawsuit against AMEC claiming that Yin and several other employees at the firm were all its former staff and were involved in patent problems. In 2010, the two companies reached a settlement in which they agreed to jointly own the patent family. In 2009, Lam Research filed a patent lawsuit against AMEC in Taiwan, but it was dismissed by the court. In December 2010, AMEC launched a counterclaim against Lam Research in Shanghai alleging infringement of trade secrets related to plasma etching equipment. In March 2017, the court sided in favour of AMEC, but Lam Research appealed the case. On 30 June 2023, the Shanghai High People's Court issued a final judgement in favour of AMEC. It was speculated that AMEC's victory would allow it to clear intellectual property barriers that Lam Research had erected to prevent it from entering the Taiwan market. In 2017, Veeco filed a patent lawsuit against AMEC, but in January 2018 Chinese customs detained Veeco's imported metalorganic vapour-phase epitaxy (MOCVD) equipment on the grounds that it infringed AMEC's patents. The incident was seen as crucial leverage for AMEC. In February 2018, both sides reached a settlement agreement regarding the patent lawsuit. On 22 July 2019, AMEC held its initial public offering, becoming a publicly listed company on the Shanghai Stock Exchange STAR Market. The largest shareholders of AMEC include Shanghai Venture Capital Co., Hong Kong Exchanges and Clearing, and the China Integrated Circuit Industry Investment Fund. In July 2022, AMEC reported that six of its executives, including Yin, planned to sell no more than 2.44 million shares over the following three months, citing personal needs for cash. On 7 October that year, the United States New Export Controls on Advanced Computing and Semiconductors to China came into effect, and on the first trading day afterwards shares of AMEC dropped more than 19%. On that day AMEC announced that Yin had sold 549,500 of the 6.43 million shares he owned for 74.4 million yuan (US$10.5 million) by the end of September. Prior to that announcement, Yin had never sold any of his shares after the company went public. Despite speculation of insider trading, there was no evidence to suggest that AMEC's executives knew about the US government's upcoming export controls when they sold their shares. In September 2023, Yin stated that US import restrictions would have a negligible effect on AMEC's capacity to operate: 80% of the imported equipment that was now restricted could be replaced by domestic alternatives by the end of the year, and by the second half of 2024 AMEC could resume full operational capacity, aided by China's drive to achieve semiconductor self-sufficiency. 
On 31 January 2024, the United States Department of Defense (DOD) added AMEC to the Chinese Military Companies (CMC) list in accordance with Section 1260H of the National Defense Authorization Act for Fiscal Year 2021. It had previously been added on 14 January 2021 before being removed from the list on 3 June 2021. AMEC stated it had no connections to the People's Liberation Army and that the listing carried no specific sanctions that would have a material impact on its operations. In August 2024, AMEC announced it had filed a lawsuit against the DOD for adding it to the blacklist. In December 2024, AMEC was removed from the list again. At the start of February 2024, AMEC announced a share repurchase plan valued at 300-500 million yuan (US$41.6-69.4 million). This was seen as a sign of the company's strong confidence in China's chip-making industry. Products AMEC's Primo Nanova etching machines cover applications for 14 nm, 7 nm and 5 nm processes. It is one of the five suppliers of etching equipment for TSMC's 7 nm production lines. See also Semiconductor industry in China Applied Materials Lam Research References External links 2004 establishments in China 2019 initial public offerings Companies based in Shanghai Companies listed on the Shanghai Stock Exchange Electronics companies established in 2004 Equipment semiconductor companies Government-owned companies of China Semiconductor companies of China Companies in the CSI 100 Index
Advanced Micro-Fabrication Equipment
Engineering
995
1,889,482
https://en.wikipedia.org/wiki/Sunshower
A sunshower, or sun shower, is a meteorological phenomenon in which rain falls while the Sun is seen shining. A sunshower is usually a result of winds associated with a rain storm sometimes miles away, blowing the airborne raindrops into an area where there are no clouds. Sometimes a sunshower is created when a single rain shower cloud passes overhead, and the Sun's angle keeps the sunlight from being obstructed by overhead clouds. Sunshower conditions often lead to the appearance of a rainbow, if the sun is at a sufficiently low angle. Names Although the term "sunshower" is used in the United States, Canada, Australia, New Zealand, Ireland and the UK, it is rarely found in dictionaries. The phenomenon has a wide range of sometimes remarkably similar folkloric names in cultures around the world. A common theme is that of clever animals and tricksters like the devil or witches getting married, although many variations of this theme exist. The Americas In Mexico, two phrases are common: In northern Mexico, it is said that "a doe is giving birth" (está pariendo una venada), whereas in southern Mexico, it is said that "two elders are getting married" (se están casando los viejitos). In Argentina is referred as "The Monkey at the Wedding of Mirtha Legrand" (El Mono del Casamiento de Mirtha Legrand), the phrase "A hag is getting married" (Se casa una vieja) is also common. In the Southern United States, a sunshower is said to occur when "the devil is beating his wife." A regional variation from Tennessee is "the devil is kissing his wife". Ketchikan, Alaska, is one of the most consistently rainy places on Earth, and has a meter in town to measure "Liquid Sunshine". This makes it an Alaskan Panhandle colloquialism. Asia In Bihar it is called a "siyaar ke biyaah" ("jackals' wedding") with children singing about it. In Garo, it is called "peru bia ka'enga", which means "fox's/jackal's marriage". In several parts of Japan, such as Kantō region, Chūbu region, Kansai region, Chūgoku region, Shikoku, and Kyushu, sunshowers are called "kitsune no yomeiri" (狐の嫁入り, "the fox's wedding") In Korea, it is called "the day of the fox's marriage" (여우 시집가는 날) or "the day of the tiger's marriage"(호랑이 장가가는 날) In Maharashtra region of India, in Marathi, it is called "Kolhyache Lagna" (कोल्ह्याचे लग्न), which means "marriage of a fox". In Malayalam, it is called "the fox's wedding" (). In Oriya, it is called "the fox's wedding" (). In some Pahari languages of Himachal Pradesh, they say Takri: 𑚌𑚮𑚛𑚖𑚯𑚣𑚭𑚫 𑚤𑚭 𑚠𑚶𑚣𑚭𑚩, ISO: gidaḍīyām̐ rā byāh, meaning "Female Foxes' Wedding". In the Philippines, it is said the tikbalang is marrying. Europe In Belgium, Flanders and The Netherlands: The traditional belief is that of "Duiveltjeskermis" or "Devil's fair". In France, it is either "" "the devil beats his wife and marries his daughter", or "" "the devil beats his wife to have crêpes", and both were inspired from Plutarch's poem in Eusebius' Praeparatio Evangelica, where Zeus, angry with Hera, made her believe that he was marrying Daedale when in fact it was a wooden statue. Hera, jealous, provoked a heavy downpour on the wedding day but at the same time realised the trick. In order to redeem herself, she turned her cries into laughter, reconciled herself with Zeus, and happily took the lead of the wedding party, instituting the festival of Daedala in memory of the event. 
In Galicia, the traditional belief is that the vixen or the fox is getting married: casa a raposa / casa o raposo; sometimes the wolf and the vixen: Estanse casando o lobo coa raposa. A wide range of expressions are attested in German-speaking countries, many of them historically, e.g. "There's a feast day in hell" (Oldenburg), "marriage [in hell]" (East Frisia), "funfair [in hell]" (Westphalia, Rhineland), the latter one attested already in 1630. Others are "They're baking in hell", "The devil is making pancakes" (Oldenburg), "Frau Holle hosts a funfair" (Lower Rhineland), "There's a marriage among the heathens/gypsies" (Switzerland), "The devil's dancing with his grandmother" (Winsen district, Lower Saxony), "The devil is marrying" (Schleswig-Holstein), "The devil is endowing his daughters" (Mecklenburg). Often, the phenomenon is interpreted as a struggle between rain and sunshine. "The devil is beating his wife/grandmother/mother-in-law" (Bavaria, Austria, Lunenburg), "The deviless gets beaten" (Eger country, Bohemia), "The devil is stabbing his wife with a sword" (Celle, Lower Saxony), "The devil has hanged his mother" (Moselle). The versions referring to the devil's wife (instead of grandmother etc.) are the older ones. Praetorius (Blockes Berges Verrichtung, Leipzig 1668) mentions „Der Teufel schlägt seine Mutter, daß sie öl gibt“ (The devil is beating his mother so she will give oil). In Schleswig-Holstein and Oldenburg, there is also: "The devil is bleaching his grandmother", as this usually involved repeated dampening of cloth in the sun – quite fitting for the weather phenomenon. Otherwise, idioms refer to witches. "The witches are dancing", "The old witch is making pancakes" (Schleswig-Holstein), "The witches are making butter" (Silesia), "The witches are being buried at the end of the world" (North Frisia). Although later on witches are often depicted as the devil's mistresses, not a single idiom about sunshowers shows them as such. Around the Baltic Sea, there are also references to sunshowers and "whore's children", i.e. illegitimate children: "Now a whore's child has been sired/baptised" (Mecklenburg). Similar expressions could be found in Finland. Furthermore, there are humorous versions like: "A lieutenant is paying his debts" (Rhineland), "A nobleman goes to heaven" (Lunenburg), "A tailor goes to heaven" (Schleswig-Holstein, Upper Saxony), "The devil gets a lawyer's soul" (Oldenburg). Completely different in origin are "The wolf has fever/bellyache" or "Now the wolves are pissing" (Mecklenburg). In Russian, it is called , "mushroom rain", as such conditions are traditionally believed to be favorable to growing mushrooms. Also, it is called , "blind rain", because it doesn't see that it shouldn't be raining. Africa In South Africa, it is often referred to as a "monkey's wedding". In Afrikaans the idiom Jakkals trou met Wolf se vrou is used to refer to a sunshower. It translates to "Jackel marries Wolf's wife". It's a common belief in Nigeria that an elephant or lion is giving birth. See also April shower References Further reading Blust, Robert (1998) The Fox's Wedding. Manuscript, University of Hawaii. Evgen'jeva, A. P., ed. (1985-) Slovar' russkogo jazyka v 4 tomakh, 3rd edition. Moscow. Kuusi, Matti (1957) Regen bei Sonnenschein: Zur Weltgeschichte einer Redensart. "Folklore Fellows Communications" n. 171, Helsinki 1957 (it appeared translated into Italian in the journal Quaderni di Semantica 13 (1992) and 14 (1993)). Hoffmann-Krayer, E. 
(1930–31) Handwörterbuch des deutschen Aberglaubens. Berlin and Leipzig: Walter de Gruyter. External links Languagehat Word-detective.com Theidioms.com The Wyandot Nation of Kansas – Myth on the origin of sunshowers Precipitation Weather lore
Sunshower
Physics
1,891
638,899
https://en.wikipedia.org/wiki/Vertex%20%28graph%20theory%29
In discrete mathematics, and more specifically in graph theory, a vertex (plural vertices) or node is the fundamental unit of which graphs are formed: an undirected graph consists of a set of vertices and a set of edges (unordered pairs of vertices), while a directed graph consists of a set of vertices and a set of arcs (ordered pairs of vertices). In a diagram of a graph, a vertex is usually represented by a circle with a label, and an edge is represented by a line or arrow extending from one vertex to another. From the point of view of graph theory, vertices are treated as featureless and indivisible objects, although they may have additional structure depending on the application from which the graph arises; for instance, a semantic network is a graph in which the vertices represent concepts or classes of objects. The two vertices forming an edge are said to be the endpoints of this edge, and the edge is said to be incident to the vertices. A vertex w is said to be adjacent to another vertex v if the graph contains an edge (v,w). The neighborhood of a vertex v is an induced subgraph of the graph, formed by all vertices adjacent to v. Types of vertices The degree of a vertex, denoted 𝛿(v) in a graph is the number of edges incident to it. An isolated vertex is a vertex with degree zero; that is, a vertex that is not an endpoint of any edge (the example image illustrates one isolated vertex). A leaf vertex (also pendant vertex) is a vertex with degree one. In a directed graph, one can distinguish the outdegree (number of outgoing edges), denoted 𝛿 +(v), from the indegree (number of incoming edges), denoted 𝛿−(v); a source vertex is a vertex with indegree zero, while a sink vertex is a vertex with outdegree zero. A simplicial vertex is one whose neighbors form a clique: every two neighbors are adjacent. A universal vertex is a vertex that is adjacent to every other vertex in the graph. A cut vertex is a vertex the removal of which would disconnect the remaining graph; a vertex separator is a collection of vertices the removal of which would disconnect the remaining graph into small pieces. A k-vertex-connected graph is a graph in which removing fewer than k vertices always leaves the remaining graph connected. An independent set is a set of vertices no two of which are adjacent, and a vertex cover is a set of vertices that includes at least one endpoint of each edge in the graph. The vertex space of a graph is a vector space having a set of basis vectors corresponding with the graph's vertices. A graph is vertex-transitive if it has symmetries that map any vertex to any other vertex. In the context of graph enumeration and graph isomorphism it is important to distinguish between labeled vertices and unlabeled vertices. A labeled vertex is a vertex that is associated with extra information that enables it to be distinguished from other labeled vertices; two graphs can be considered isomorphic only if the correspondence between their vertices pairs up vertices with equal labels. An unlabeled vertex is one that can be substituted for any other vertex based only on its adjacencies in the graph and not based on any additional information. Vertices in graphs are analogous to, but not the same as, vertices of polyhedra: the skeleton of a polyhedron forms a graph, the vertices of which are the vertices of the polyhedron, but polyhedron vertices have additional structure (their geometric location) that is not assumed to be present in graph theory. 
The vertex figure of a vertex in a polyhedron is analogous to the neighborhood of a vertex in a graph. See also Node (computer science) Graph theory Glossary of graph theory References Berge, Claude, Théorie des graphes et ses applications. Collection Universitaire de Mathématiques, II Dunod, Paris 1958, viii+277 pp. (English edition, Wiley 1961; Methuen & Co, New York 1962; Russian, Moscow 1961; Spanish, Mexico 1962; Roumanian, Bucharest 1969; Chinese, Shanghai 1963; Second printing of the 1962 first English edition. Dover, New York 2001) External links Graph theory
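As a small illustration of the vertex notions defined above (indegree, outdegree, isolated vertices, sources, and sinks), here is a minimal Python sketch using plain adjacency data and only the standard library; the example graph is invented for illustration.

```python
from collections import defaultdict

# A small directed graph: arcs are ordered pairs of vertices.
arcs = [("a", "b"), ("a", "c"), ("b", "c"), ("d", "a")]
vertices = {"a", "b", "c", "d", "e"}  # "e" has no incident arcs, so it is isolated

indeg = defaultdict(int)   # number of incoming arcs per vertex
outdeg = defaultdict(int)  # number of outgoing arcs per vertex
for u, v in arcs:
    outdeg[u] += 1
    indeg[v] += 1

for v in sorted(vertices):
    print(v, "indegree:", indeg[v], "outdegree:", outdeg[v])

# Following the article's definitions: a source has indegree zero,
# a sink has outdegree zero, and an isolated vertex has degree zero.
isolated = sorted(v for v in vertices if indeg[v] == 0 and outdeg[v] == 0)
sources = sorted(v for v in vertices if indeg[v] == 0)
sinks = sorted(v for v in vertices if outdeg[v] == 0)
print("isolated:", isolated, "sources:", sources, "sinks:", sinks)
```

Note that under these definitions an isolated vertex such as "e" counts as both a source and a sink, since it has indegree and outdegree zero.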
https://en.wikipedia.org/wiki/Henry%20adsorption%20constant
The Henry adsorption constant is the constant appearing in the linear adsorption isotherm, which formally resembles Henry's law; therefore, it is also called Henry's adsorption isotherm. It is named after British chemist William Henry. This is the simplest adsorption isotherm, in that the amount of the surface adsorbate is proportional to the partial pressure of the adsorptive gas:

X = K_H · P

where X is the surface coverage, P is the partial pressure, and K_H is Henry's adsorption constant. For solutions, concentrations or activities are used instead of partial pressures.

The linear isotherm can be used to describe the initial part of many practical isotherms. It is typically taken as valid at low surface coverages, with the adsorption energy independent of the coverage (lack of inhomogeneities on the surface). The Henry adsorption constant can be defined as the ratio of the surface number density to the number density of the free phase.

Application at a permeable wall

If a solid body is modeled by a constant field and the structure of the field is such that it has a penetrable core, then

Here is the position of the dividing surface, is the external force field simulating a solid, is the field value deep in the solid, , is the Boltzmann constant, and is the temperature. Introducing "the surface of zero adsorption", where and, we get

and the problem of determining is reduced to the calculation of . Taking into account that for the Henry adsorption constant we have

where is the number density inside the solid, we arrive at the parametric dependence

where

Application at a static membrane

If a static membrane is modeled by a constant field and the structure of the field is such that it has a penetrable core and vanishes when , then

We see that in this case the sign and value of depend on the potential and the temperature only.

Application at an impermeable wall

If a solid body is modeled by a constant hard-core field, then

or

where

Here

For the hard solid potential

where is the position of the potential discontinuity. So, in this case

Choice of the dividing surface

The choice of the dividing surface is, strictly speaking, arbitrary; however, it is very desirable to take into account the type of external potential . Otherwise, these expressions are at odds with generally accepted concepts and common sense.

First, must lie close to the transition layer (i.e., the region where the number density varies); otherwise it would mean attributing the bulk properties of one of the phases to the surface.

Second, in the case of weak adsorption, for example when the potential is close to stepwise, it is logical to choose close to . (In some cases one chooses , where is the particle radius, excluding the "dead" volume.)

In the case of pronounced adsorption it is advisable to choose close to the right border of the transition region. In this case all particles from the transition layer are attributed to the solid, and is always positive. Trying to put in this case would lead to a strong shift of into the solid body domain, which is clearly unphysical. Conversely, if (fluid on the left), it is advisable to choose lying on the left side of the transition layer. In this case the surface particles are once again attributed to the solid, and is again positive. Thus, except in the case of a static membrane, the "negative adsorption" can always be avoided for one-component systems.
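The linear isotherm above lends itself to a one-line estimate of K_H from low-pressure data. The following Python sketch fits the slope of a line through the origin by least squares; the data points and units are invented for illustration, not taken from any measurement.

    import numpy as np

    # Estimating the Henry adsorption constant K_H from low-pressure
    # isotherm data, where X = K_H * P is assumed to hold.
    P = np.array([0.5, 1.0, 1.5, 2.0])          # partial pressures (arbitrary units)
    X = np.array([0.011, 0.019, 0.032, 0.041])  # surface coverages (invented data)

    # Least-squares slope through the origin: K_H = sum(P*X) / sum(P*P)
    K_H = (P @ X) / (P @ P)
    print(f"K_H ~ {K_H:.4f} coverage per unit pressure")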
See also
Freundlich equation
Langmuir adsorption model
Brunauer–Emmett–Teller (BET) theory
https://en.wikipedia.org/wiki/Carpaine
Carpaine is one of the major alkaloid components of papaya leaves and has been studied for its cardiovascular effects. Carpaine extracted from Carica papaya trees has been reported to have diverse biological properties, such as anti-malarial, anti-inflammatory, anti-oxidant, and vasodilatory effects. In particular, carpaine possessed significant anti-plasmodial activity in vitro (IC50 of 0.2 μM) and high selectivity towards the parasites.

Circulatory effects of carpaine were studied in male Wistar rats weighing 314 ± 13 g under pentobarbital (30 mg/kg) anesthesia. Increasing dosages of carpaine from 0.5 mg/kg to 2.0 mg/kg resulted in a progressive decrease in systolic, diastolic, and mean arterial blood pressure. Selective autonomic nervous blockade with atropine sulfate (1 mg/kg) or propranolol hydrochloride (8 mg/kg) did not alter the circulatory response to carpaine. Carpaine at 2 mg/kg reduced cardiac output, stroke volume, stroke work, and cardiac power, but the calculated total peripheral resistance remained unchanged. It was concluded from these results that carpaine affects the myocardium directly. The effects of carpaine may be related to its macrocyclic dilactone structure, a possible cation-chelating structure.

History

After the first isolation of carpaine by Greshoff in 1890, Merck & Company assigned an empirical formula to it, which was soon corrected by van Rijn. In the 1930s, Barger and his colleagues investigated various degradation products of carpaine and were able to obtain a series of chemical structures for it. Then in 1953, Rapoport and his colleagues at the University of California obtained a new form of the carpaine chemical structure: they found that the nitrogen-containing ring had a piperidine structure instead of the pyrrolidine structure previously assumed, and they located the position of the lactone ring between atoms numbered 3 and 6 on the piperidine nucleus. Later work from Govindachari and Narasimhan and from Tichy and Sicher further confirmed this structural formula. However, Spiteller-Friedmann and Spiteller used mass spectrometry to discover that the molecular weight of carpaine is closer to 478 g/mol, twice the weight of the originally assigned formula. The new finding proved that carpaine consists of two identical halves, which form a 26-membered cyclic diester, or dilactone; the configuration was finally determined by Coke and Rice in 1965.

Isolation of Carpaine

Carpaine occurs in papaya leaves in concentrations as high as 0.4%, which is enough to make it available commercially at very reasonable cost. One possible extraction route was accomplished by first drying the leaves in an electric blast drying oven and milling them to a fine powder. The powdered plant material was macerated with a mixed ethanol/water solution for 24 hours at room temperature. The extract was then dissolved in an aqueous mixture, filtered, and extracted with petroleum ether to remove fatty material. The acid fraction was adjusted to pH 8.0–9.0 and extracted with chloroform. Finally, the chloroform fractions were pooled and evaporated, and the whole operation was repeated so that the crude alkaloid carpaine was obtained.

Another reported extraction route found that mechanical blending of the leaves prior to extraction significantly enhances the yield of carpaine. After the leaves were blended with water and freeze-dried, the samples were soaked in ethanol.
This mixture was then concentrated and purified using an acid-base method followed by chloroform extraction to isolate the carpaine. Finally, the purity and structure were analyzed using NMR and LC-MS.

Potential Medical Uses

Dengue Fever Treatment

Recent research highlights the possible efficacy of carpaine in managing the symptoms and severe complications associated with dengue fever. Carpaine is the major active compound in papaya leaf extract contributing to its anti-thrombocytopenic activity (raising the platelet count in a patient's blood). For example, a treatment used for a 45-year-old male patient in Pakistan diagnosed with dengue fever involved administering 25 mL of the extract twice daily for five consecutive days. The treatment showed significant improvement in hematological parameters, with a substantial increase in platelet and blood cell counts and in neutrophil levels.

Cardioprotective Effects

In the setting of ischemia-reperfusion injury (IRI), studies have shown that carpaine provided significant protection, helping cells recover from hydrogen peroxide treatment by activating a key pathway that promotes cell cycle progression and prevents cell death under stressful conditions. Carpaine treatment further demonstrates cardioprotective effects by improving mitochondrial membrane potential and reducing the overproduction of reactive oxygen species.

Anti-inflammatory Properties

Studies have shown carpaine's ability to modulate the body's inflammatory response by inhibiting the production of pro-inflammatory cytokines, such as tumor necrosis factor-alpha (TNF-α) and interleukin-6 (IL-6), which could be beneficial in treating chronic diseases, including rheumatoid arthritis and asthma.

Anti-oxidant Properties

The enhanced anti-oxidant activity of papaya leaves demonstrated in studies is due to their high concentration of polyphenols, which are known for combating the oxidative stress that can lead to cellular damage and various chronic diseases. The anti-oxidant capacity was measured using the DPPH (1,1-diphenyl-2-picrylhydrazyl) assay, in which blended young papaya leaves exhibited significantly lower IC50 values (IC50 = 293 μg/mL per 100 mg), indicating stronger anti-oxidant potency than old leaves (IC50 = 382 μg/mL per 100 mg).
https://en.wikipedia.org/wiki/Somerset%20Space%20Walk
The Somerset Space Walk is a sculpture trail model of the Solar System, located in Somerset, England. The model uses the towpath of the Bridgwater and Taunton Canal to display the Sun and its planets at their proportionally correct sizes and distances apart. Unusually for a Solar System model, there are two sets of planets, so that the diameter of each orbit is represented.

Aware of the inadequacies of printed pictures of the Solar System, the inventor Pip Youngman designed the Space Walk as a way of challenging people's perceptions of space and conveying the vastness of the Solar System. The model is built to a scale of 1:530,000,000, meaning that one millimetre on the model equates to 530 kilometres. The Sun is sited at Higher Maunsel Lock, and one set of planets is installed in each direction along the canal, towards Taunton and towards Bridgwater; the distance between the Sun and each model of Pluto being . For less hardy walkers, the inner planets are within of the Sun, and near to the Maunsel Canal Centre (and tea shop) at Lower Maunsel Lock, where a more detailed leaflet about the model is available.

The Space Walk was opened on 9 August 1997 by British astronomer Heather Couper. In 2007, a project team from Somerset County Council refurbished some of the models.

Background

The Walk is a joint venture between the Taunton Solar Model Group and British Waterways, with support from Somerset County Council, Taunton Deane Borough Council and the Somerset Waterways Development Trust. The Taunton Solar Model Group comprised Pip Youngman; Trevor Hill, a local physics teacher who had been awarded the title of "Institute of Physics (IOP) Physics Teacher of the Year"; and David Applegate who, during his time as Mayor of Taunton, had expressed a wish to see some kind of science initiative in the area. Youngman came up with the idea for the Space Walk, and Hill assisted by calculating the respective positions and sizes of the planets.

Funding for the project came from the Committee on the Public Understanding of Science (COPUS); the initial advertising leaflet was paid for by the Particle Physics and Astronomy Research Council (PPARC); and there was also a small grant from Sustrans, who fund art installations along cyclepaths, to deal with maintenance requirements in the years before Somerset County Council took on that responsibility. In order to apply for the COPUS funding Youngman needed two sponsors, so he wrote to Arthur C. Clarke (a local boy himself, then living in Sri Lanka) and Patrick Moore, who both wrote warm letters in support. Arthur C. Clarke's brother Fred read out his letter at the opening ceremony. ReadyMix Concrete supplied the concrete for the plinths, and Avimo (now part of Thales Group), a local defence contractor, supplied the steel for the models.
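The 1:530,000,000 scale can be checked with a few lines of Python; the astronomical diameters and the Sun–Pluto distance used below are rounded textbook values, not figures from the installation itself.

    # Converting real solar-system dimensions to Space Walk model sizes.
    SCALE = 530_000_000          # 1 model metre = 530,000,000 real metres

    real_metres = {
        "Sun diameter": 1.3914e9,        # approximate
        "Earth diameter": 1.2742e7,      # approximate
        "Sun-Pluto distance": 5.906e12,  # approximate semi-major axis
    }

    for name, metres in real_metres.items():
        print(f"{name}: {metres / SCALE:.3f} m on the model")
    # Sun ~ 2.6 m across, Earth ~ 24 mm, and each Pluto
    # roughly 11 km of towpath away from the Sun.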
Individual models

The model of the Sun is a -wide 14-ton concrete sphere, with a vertical segment removed to give two vertical faces upon which explanatory plates are mounted. The solid sphere was cast by Pip Youngman and Trevor Hill in the grounds of what was the SWEB storage yard adjacent to the Obridge Viaduct in Taunton. Originally 'natural' in colour (matching the other models), it was painted yellow as part of the refurbishment, making it much more visible.

Each of the smallest planet models is contained within a round-topped concrete plinth about high. The stainless steel model is held inside a circular hole through the side of the plinth; hence the model of the planet may be viewed by looking through the hole. The plinths were created by Youngman using fibreglass moulds which he had also made. The models of the largest gas giants, Saturn and Jupiter, are moulded as part of the top face of the concrete pillars. Originally concrete-coloured, they have been painted as part of the refurbishment.

Each pillar doubles up as a milepost: the distance to Bridgwater and Taunton is cast in the concrete at ground level, below a depiction of the British Waterways 'bridge' logo, although the sculptures are sited according to the spacings needed for the model, and not at kilometre increments for the convenience of boaters. On each pillar is a plaque containing a short inscription describing the planet. The Earth inscription reads:

Nearest star

The installation does not include a model of the Solar System's nearest star for comparison. On the same scale as the other models, the nearest star (Proxima Centauri, which is about one-seventh the size of the Sun) would need to be a red ball in diameter sited away (roughly twice the circumference of Earth).

Pip Youngman

The Space Walk's designer, Philip Robert Vassar Youngman (born 26 August 1924, Hunstanton, Norfolk; died 23 May 2007, Taunton, Somerset), known as 'Pip', was a designer and inventor of mechanical apparatus. Around 1969, Youngman was approached by the Open University to adapt a mechanical calculator he had designed, originally prototyped in Lego, into a product suitable for school use. The result was the "Ball Operated Binary Calculator And Tutor" (BOBCAT), a mechanical model for teaching binary arithmetic and the inner workings of the computer, using ball bearings for binary data bits and plastic levers for the calculating logic.

Location

The trail can be walked either from Taunton's Brewhouse Theatre to Maunsel Lock (Pluto to the Sun) or from Bridgwater's Morrison's Supermarket to Maunsel Lock (also Pluto to the Sun), or of course vice versa. The locations of the end and middle points (with postcodes) are:

Pluto in Taunton – TA1 1JL
Maunsel Canal Centre – TA7 0DH
Pluto in Bridgwater – TA6 3RF

Model gallery

The models of the Solar System, in order:

See also
Solar System model
Sweden Solar System
Nine Views

External links
– with map of canal showing approximate locations of models
Maunsel Lock Canal Centre
Somerset Space Walk at HiddenSomerset.com – includes travel directions and pictures of the Sun and Earth models
Photos of the first seven models and the Sun – before refurbishing
Pip Youngman's projects
A collection of multiple images of all "space" milestones on the Geograph.org.uk website
https://en.wikipedia.org/wiki/Bandh%20Baretha
Bandh Baretha is a freshwater man-made wetland and wildlife sanctuary covering an area of 10 square kilometers. It is located approximately 50 kilometers south of Bharatpur city, in the Bayana tehsil of Bharatpur, India. The sanctuary serves as a significant winter resort for migratory birds and plays a crucial role in storing drinking water for the region.

The sanctuary is situated near the small river Kakund, which enters the south-western border of Bayana tehsil from the Karauli side; here the river's waters are held in the Baretha reservoir. During years of low rainfall the population of water birds increases, making it a large, permanent, and legally protected wetland. Bandh Baretha is home to a diverse avian population, with a total of 67 water bird species recorded, including six globally threatened species. It is an essential refuge for birds, especially when adverse conditions prevail in the nearby Keoladeo National Park wetlands. The aquatic vegetation in the sanctuary is similar to that found in Keoladeo National Park, further highlighting its ecological significance.
https://en.wikipedia.org/wiki/Nested%20polymerase%20chain%20reaction
Nested polymerase chain reaction (nested PCR) is a modification of the polymerase chain reaction intended to reduce non-specific binding in products due to the amplification of unexpected primer binding sites.

Polymerase chain reaction

The polymerase chain reaction itself is the process used to amplify DNA samples, via a temperature-mediated DNA polymerase. The products can be used for sequencing or analysis, and this process is a key part of many genetics research laboratories, along with uses in DNA fingerprinting for forensics and other human genetic cases. Conventional PCR requires primers complementary to the termini of the target DNA. The amount of product from the PCR increases with the number of temperature cycles that the reaction is subjected to. A commonly occurring problem is primers binding to incorrect regions of the DNA, giving unexpected products; this problem becomes more likely with an increased number of PCR cycles.

Primers

Nested polymerase chain reaction involves two sets of primers, used in two successive runs of polymerase chain reaction, with the second set intended to amplify a secondary target within the product of the first run. This allows a low number of cycles in the first round, limiting non-specific products. The second, nested primer set should amplify only the intended product from the first round of amplification, not the non-specific products. This makes it possible to run more total cycles while minimizing non-specific products, which is useful for rare templates or for PCR with a high background.

Processes

The target DNA undergoes the first run of polymerase chain reaction with the first (outer) set of primers. The selection of alternative and similar primer binding sites gives a selection of products, only one of which contains the intended sequence. The product from the first reaction then undergoes a second run with the second (nested) set of primers. It is very unlikely that any of the unwanted PCR products contain binding sites for both of the new primers, ensuring that the product from the second PCR has little contamination from unwanted products.
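The two-round logic can be illustrated with a deliberately simplified Python sketch: "amplification" is modeled as extracting the substring between two primer sites, strand complementarity is ignored, and all sequences and primers are invented for the example.

    def amplify(template, fwd, rev):
        # Return the region bounded by the two primer sites, if both bind.
        i = template.find(fwd)
        j = template.find(rev, i + len(fwd)) if i != -1 else -1
        return template[i:j + len(rev)] if j != -1 else None

    target  = "AAGGTTCC" + "GATTACA" + "CCAATTGG"   # intended locus
    offsite = "AAGGTTCC" + "TTTTTTT" + "CCAATTGG"   # spurious first-round product

    outer = ("AAGGTTCC", "CCAATTGG")  # first-round primers bind both templates
    inner = ("GAT", "ACA")            # nested primers bind only the true product

    for template in (target, offsite):
        first = amplify(template, *outer)
        second = amplify(first, *inner) if first else None
        print(first, "->", second)
    # The off-target first-round product yields nothing in round two,
    # while the true product yields the intended 'GATTACA' fragment.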
https://en.wikipedia.org/wiki/Tetrahymena
Tetrahymena is a genus of free-living ciliates, examples of unicellular eukaryotes. The genus Tetrahymena is the most widely studied member of its phylum. It can produce, store and react with different types of hormones, and Tetrahymena cells can recognize both related and hostile cells. They can also switch from commensalistic to pathogenic modes of survival. They are common in freshwater lakes, ponds, and streams. The Tetrahymena species used as model organisms in biomedical research are T. thermophila and T. pyriformis.

T. thermophila: a model organism in experimental biology

As a ciliated protozoan, Tetrahymena thermophila exhibits nuclear dimorphism: it carries two types of cell nuclei, a bigger, non-germline macronucleus and a small, germline micronucleus, in each cell at the same time, and these two carry out different functions with distinct cytological and biological properties. This unique versatility allows scientists to use Tetrahymena to identify several key factors regarding gene expression and genome integrity. In addition, Tetrahymena possesses hundreds of cilia and complicated microtubule structures, making it an optimal model to illustrate the diversity and functions of microtubule arrays.

Because Tetrahymena can be grown in large quantities in the laboratory with ease, it has been a great source for biochemical analysis for years, specifically for enzymatic activities and purification of sub-cellular components. In addition, with the advancement of genetic techniques it has become an excellent model for studying gene function in vivo. The recent sequencing of the macronucleus genome should ensure that Tetrahymena will continue to be used as a model system. Tetrahymena thermophila exists in seven different sexes (mating types) that can reproduce in 21 different combinations, and a single Tetrahymena cannot reproduce sexually with itself. Each organism "decides" which sex it will become during mating, through a stochastic process.

Studies on Tetrahymena have contributed to several scientific milestones, including:

First cell which showed synchronized division, which led to the first insights into the existence of mechanisms which control the cell cycle.
Identification and purification of the first cytoskeleton-based motor protein, dynein.
Aid in the discovery of lysosomes and peroxisomes.
Early molecular identification of somatic genome rearrangement.
Discovery of the molecular structure of telomeres, the telomerase enzyme, the templating role of telomerase RNA and their roles in cellular senescence and chromosome healing (for which a Nobel Prize was won).
Nobel Prize–winning co-discovery (1989, in Chemistry) of catalytic RNA (ribozyme).
Discovery of the function of histone acetylation.
Demonstration of the roles of posttranslational modifications such as acetylation and glycylation on tubulins, and discovery of the enzymes responsible for some of these modifications (glutamylation).
Crystal structure of the 40S ribosome in complex with its initiation factor eIF1.
First demonstration that two of the "universal" stop codons, UAA and UAG, code for the amino acid glutamine in some eukaryotes, leaving UGA as the only termination codon in these organisms.

Life cycle

The life cycle of T. thermophila consists of an alternation between asexual and sexual stages. In nutrient-rich media during vegetative growth, cells reproduce asexually by binary fission.
This type of cell division occurs by a sequence of morphogenetic events that results in the development of duplicate sets of cell structures, one for each daughter cell. Only under starvation conditions will cells commit to sexual conjugation, pairing and fusing with a cell of the opposite mating type. Tetrahymena has seven mating types, each of which can mate with any of the other six without preference, but not with its own.

Typical of ciliates, T. thermophila differentiates its genome into two functionally distinct types of nuclei, each specifically used during a different stage of the life cycle. The diploid germline micronucleus is transcriptionally silent and only plays a role during the sexual stages of life. The germline nucleus contains 5 pairs of chromosomes, which encode the heritable information passed down from one sexual generation to the next. During sexual conjugation, haploid micronuclear meiotic products from both parental cells fuse, leading to the creation of a new micro- and macronucleus in progeny cells. Sexual conjugation occurs when cells that have been starved for at least 2 hours in a nutrient-depleted medium encounter a cell of a complementary mating type. After a brief period of co-stimulation (about 1 hour), starved cells begin to pair at their anterior ends to form a specialized region of membrane called the conjugation junction. It is at this junctional zone that several hundred fusion pores form, allowing for the mutual exchange of protein, RNA and eventually a meiotic product of the micronucleus. The whole process takes about 12 hours at 30 °C, but even longer at cooler temperatures.

The larger, polyploid macronucleus is transcriptionally active, meaning its genes are actively expressed, and so it controls somatic cell functions during vegetative growth. The polyploid nature of the macronucleus refers to the fact that it contains approximately 200–300 autonomously replicating linear DNA minichromosomes. These minichromosomes have their own telomeres and are derived via site-specific fragmentation of the five original micronuclear chromosomes during sexual development. In T. thermophila each of these minichromosomes encodes multiple genes and exists at a copy number of approximately 45–50 within the macronucleus. The exception is the minichromosome encoding the rDNA, which is massively upregulated, existing at a copy number of approximately 10,000 within the macronucleus. Because the macronucleus divides amitotically during binary fission, these minichromosomes are unequally divided between the clonal daughter cells. Through natural or artificial selection, this method of DNA partitioning in the somatic genome can lead to clonal cell lines with different macronuclear phenotypes fixed for a particular trait, in a process called phenotypic assortment. In this way, the polyploid genome can fine-tune its adaptation to environmental conditions through gain of beneficial mutations on any given minichromosome whose replication is then selected for, or conversely, loss of a minichromosome which accrues a negative mutation. However, the macronucleus is only propagated from one cell to the next during the asexual, vegetative stage of the life cycle, and so it is never directly inherited by sexual progeny.
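A rough feel for phenotypic assortment can be had from a small stochastic Python sketch, assuming the roughly 45-copy minichromosome pool described above; the allele labels and division count are illustrative only.

    import random

    PLOIDY = 45  # approximate macronuclear copy number per minichromosome

    def divide(pool):
        # Replicate every copy, then partition the doubled pool
        # randomly (amitotically) between two daughter cells.
        doubled = pool * 2
        random.shuffle(doubled)
        return doubled[:PLOIDY], doubled[PLOIDY:]

    line = ["A"] * 23 + ["B"] * 22  # start with a mixed allele pool
    for generation in range(500):
        line, _ = divide(line)      # follow one clonal daughter line
        if len(set(line)) == 1:
            print(f"fixed for allele {line[0]} after {generation + 1} divisions")
            break
    else:
        print("still mixed after 500 divisions")

Because the partitioning is random, repeated runs fix for either allele, mimicking how clonal lines drift to distinct macronuclear phenotypes.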
Only beneficial mutations that occur in the germline micronucleus of T. thermophila are passed down between generations, but these mutations would never be selected for environmentally in the parental cells, because they are not expressed.

Behavior

Free-swimming cells of Tetrahymena are attracted to certain chemicals by chemokinesis. The major chemo-attractants are peptides and/or proteins. A 2016 study found that cultured Tetrahymena have the capacity to 'learn' the shape and size of their swimming space. Cells confined in a droplet of water for a short time were, upon release, found to repeat the circular swimming trajectories 'learned' in the droplet. The diameter and duration of these swimming paths reflected the size of the droplet and the time allowed for adaptation.

DNA repair

It is common among protists that the sexual cycle is inducible by stressful conditions such as starvation. Such conditions often cause DNA damage. A central feature of meiosis is homologous recombination between non-sister chromosomes. In T. thermophila this process of meiotic recombination may be beneficial for repairing DNA damage caused by starvation. Exposure of T. thermophila to UV light resulted in a greater than 100-fold increase in Rad51 gene expression. Treatment with the DNA-alkylating agent methyl methanesulfonate also resulted in substantially elevated Rad51 protein levels. These findings suggest that ciliates such as T. thermophila employ a Rad51-dependent recombinational pathway to repair damaged DNA.

The Rad51 recombinase of T. thermophila is a homolog of the Escherichia coli RecA recombinase. In T. thermophila, Rad51 participates in homologous recombination during mitosis, meiosis and the repair of double-strand breaks. During conjugation, Rad51 is necessary for the completion of meiosis. Meiosis in T. thermophila appears to employ a Mus81-dependent pathway that does not use a synaptonemal complex; such a pathway is considered secondary in most other model eukaryotes. This pathway includes the Mus81 resolvase and the Sgs1 helicase. The Sgs1 helicase appears to promote the non-crossover outcome of meiotic recombinational repair of DNA, a pathway that generates little genetic variation.

Phenotypic and genotypic plasticity

Many species of Tetrahymena are known to display unique response mechanisms to stress and various environmental pressures. The unique genomic architecture of the ciliates (presence of a MIC, high ploidy, large number of chromosomes, etc.) allows for differential gene expression as well as increased genomic flexibility. The following is a non-exhaustive list of examples of phenotypic and genotypic plasticity in the Tetrahymena genus.

Inducible trophic polymorphisms

T. vorax is known for its inducible trophic polymorphisms, an ecologically offensive tactic that allows it to change its feeding strategy and diet by altering its morphology. Normally, T. vorax is a bacterivorous microstome around 60 μm in length. However, it has the ability to switch into a carnivorous macrostome around 200 μm in length that can feed on larger competitors. If T. vorax cells are too nutrient-starved to undertake the transformation, they have also been recorded as transforming into a third, "tailed"-microstome morph, thought to be a defense mechanism in response to cannibalistic pressure. While T. vorax is the best-studied Tetrahymena exhibiting inducible trophic polymorphisms, several lesser-known species are able to undertake the transformation as well, including T. paulina and T. paravorax. However, only T. vorax has been recorded as having both a macrostome and a tailed-microstome form.
These morphological switches are triggered by an abundance of stomatin in the environment, a mixture of metabolic compounds released by competitor species such as Paramecium, Colpidium, and other Tetrahymena. Specifically, chromatographic analysis has revealed that ferrous iron, hypoxanthine, and uracil are the chemicals in stomatin responsible for triggering the morphological change. Many researchers cite "starvation conditions" as inducing the transformation because, in nature, the compound inducers are in highest concentration after microstomal ciliates have grazed down bacterial populations and ciliate populations are high. When the chemical inducers are in high concentration, T. vorax cells transform at higher rates, allowing them to prey on their former trophic competitors.

The exact genetic and structural mechanisms that underlie T. vorax transformation are unknown. However, some progress has been made in identifying candidate genes. Researchers from the University of Alabama have used cDNA subtraction to remove DNA actively transcribed in both microstome and macrostome T. vorax cells, leaving only differentially transcribed cDNA molecules. While nine differentiation-specific genes were found, the most frequently expressed candidate gene was identified as a novel sequence, SUBII-TG. The sequenced region of SUBII-TG was 912 bp long and consists of three largely identical 105 bp open reading frames. A northern blot analysis revealed that low levels of transcription are detected in microstome cells, while high levels of transcription occur in macrostome cells. Furthermore, when the researchers limited SUBII-TG expression in the presence of stomatin (using antisense oligonucleotide methods), a 55% reduction in SUBII-TG mRNA correlated with a 51% decrease in transformation, supporting the notion that the gene is at least partially responsible for controlling the transformation in T. vorax. However, very little is known about the SUBII-TG gene: researchers were only able to sequence a portion of the entire open reading frame, and other candidate genes have not been investigated thoroughly. mRNA and amino acid sequencing indicate that ubiquitin may play a crucial role in allowing the transformation to take place as well; however, no known genes in the ubiquitin family have been identified in T. vorax. Finally, the genetic mechanisms of the "tailed"-microstome morph are completely unknown.

Metal resistance, gene and genome amplification

Other related species exhibit their own unique responses to various stressors. In T. thermophila, chromosome amplification and gene expansion are inducible responses to common metal pollutants such as cadmium, copper, and lead. Strains of T. thermophila that were exposed to large quantities of Cd2+ over time were found to have a 5-fold increase in copies of MTT1 and MTT3 (metallothionein genes that code for cadmium- and lead-binding proteins) as well as of CNBDP, an unrelated gene that lies just upstream of MTT1 on the same chromosome. The fact that a non-metallothionein gene at the same locus as MTT1 and MTT3 increased in copy number indicates that the entire chromosome had been amplified, as opposed to just specific genes. Tetrahymena species are 45-ploid in their macronucleus, meaning that wild-type T. thermophila normally contains 45 copies of each chromosome. While the actual number of unique chromosomes is unknown, the number is thought to be around 187 in the MAC and 5 in the MIC.
Thus, the Cd2+-adapted strain contained 225 copies of the specific chromosome in question. This resulted in a nearly 28-fold increase in the detected expression level of MTT1, and a slightly smaller increase for MTT3. When researchers grew a sample of the T. thermophila population in normal growth medium (lacking Cd2+) for one month, the copy numbers of the MTT1, MTT3, and CNBDP genes decreased to an average of three copies per chromosome set (135C). By seven months in normal growth medium, the T. thermophila cells were found to have returned to just the wild-type copy number (45C). When researchers returned cells from the same colony to Cd2+ medium, within a week the MTT1, MTT3, and CNBDP genes increased to three copies once again (135C). Thus, the authors argue that chromosome amplification is an inducible and reversible mechanism in the Tetrahymena genetic response to metal stress.

Researchers also used gene-knockdown experiments, in which the copy number of another metallothionein gene on a different chromosome, MTT5, was dramatically reduced. Within a week, the new strain was found to have developed four novel genes from at least one duplication of MTT1. However, chromosome duplication had not taken place, as indicated by the wild-type ploidy and the normal quantity of other genes on the same chromosomes. Rather, researchers believe that the duplication resulted from homologous recombination events, producing transcriptionally active, upregulated genes that carry repeats of MTT1.

Enhanced motility and dispersal

T. thermophila also undergoes phenotypic changes when faced with limited resource availability. Cells are capable of changing their shape and size, along with their swimming behavior, in response to starvation. The more motile cells that arise in response to starvation are known as dispersers, or disperser cells. While rates and levels of phenotypic change differ between strains, disperser cells form in nearly all strains of T. thermophila faced with starvation. Dispersers and non-dispersing cells both become dramatically thinner and smaller, increasing their basal body and cilia density, which allows them to swim between two and three times faster than normal cells. Some strains of T. thermophila have also been found to develop a single, non-beating, enlarged cilium that assists the cell in steering or directing movement. While this trait has been shown to correlate with faster dispersal and to be reversible in Tetrahymena cells, little is known about the genetic or cellular mechanisms that allow for its development. Furthermore, other studies show that when genetically variable populations of T. thermophila were starved, disperser cells actually increased in cell length, despite still becoming thinner. More research is needed to determine the genetic mechanisms that underlie disperser formation.

Species in genus

Species in this genus include:
Tetrahymena americanis
Tetrahymena asiatica
Tetrahymena australis
Tetrahymena bergeri
Tetrahymena borealis
Tetrahymena canadensis
Tetrahymena capricornis
Tetrahymena caudata
Tetrahymena chironomi
Tetrahymena corlissi
Tetrahymena cosmopolitanis
Tetrahymena dimorpha
Tetrahymena edaphoni
Tetrahymena elliotti
Tetrahymena empidokyrea
Tetrahymena farahensis
Tetrahymena farleyi
Tetrahymena furgasoni
Tetrahymena glochidiophila
Tetrahymena hegewischi
Tetrahymena hyperangularis
Tetrahymena leucophrys
Tetrahymena limacis
Tetrahymena lwoffi
Tetrahymena malaccensis
Tetrahymena mimbres
Tetrahymena mobilis
Tetrahymena nanneyi
Tetrahymena nipissingi
Tetrahymena paravorax
Tetrahymena patula
Tetrahymena pigmentosa
Tetrahymena pyriformis
Tetrahymena rostrata
Tetrahymena rotunda
Tetrahymena setifera
Tetrahymena setigera
Tetrahymena setosa
Tetrahymena shanghaiensis
Tetrahymena sialidos
Tetrahymena silvana
Tetrahymena skappus
Tetrahymena sonneborni
Tetrahymena stegomyiae
Tetrahymena thermophila
Tetrahymena tropicalis
Tetrahymena vorax

In education

Cornell University offers a National Institutes of Health (NIH) funded program, through the Science Education Partnership Award (SEPA) Program, called Advancing Secondary Science Education thru Tetrahymena (ASSET). The group develops stand-alone labs or lessons using Tetrahymena as training modules that teachers can use in classes.

External links
Tetrahymena Stock Center at Cornell University
ASSET: Advancing Secondary Science Education thru Tetrahymena
Tetrahymena Genome Database
Biogeography and Biodiversity of Tetrahymena
Tetrahymena thermophila Genome Project at The Institute for Genomic Research
Tetrahymena thermophila Genome Sequence Synopsis
Tetrahymena thermophila genome paper
Tetrahymena experiments on the Journal of Visualized Experiments (JoVE) website
Microbial Digital Specimen Archives: Tetrahymena image gallery
All Creatures Great and Small: Elizabeth Blackburn
https://en.wikipedia.org/wiki/Bromsulfthalein
Bromsulfthalein (also known as bromsulphthalein, bromosulfophthalein, and BSP) is a phthalein dye used in liver function tests. Determining the rate at which the dye is removed from the bloodstream gives a measure of liver function. The liver clears BSP by conjugating it to glutathione, a major cellular antioxidant.
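As an illustration of how such a clearance test can be scored, the Python sketch below assumes simple first-order elimination; the rate constants and the 45-minute sampling time are illustrative assumptions, not clinical reference values.

    import math

    # If dye elimination is first-order, the fraction remaining in the
    # blood after t minutes is exp(-k * t) for rate constant k.
    def percent_retained(k_per_min, t_min):
        return 100 * math.exp(-k_per_min * t_min)

    # A faster-clearing liver (larger k) leaves less dye at sampling time.
    for k in (0.10, 0.03):  # per-minute rate constants (assumed values)
        print(f"k={k}: {percent_retained(k, 45):.1f}% retained at 45 min")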
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORD31
In molecular biology, small nucleolar RNA SNORD31 (U31) is a member of the C/D class of snoRNAs, which contain the C (UGAUGA) and D (CUGA) box motifs. U31 is encoded within the U22 snoRNA host gene (UHG) in mammals and is thought to act as a 2'-O-ribose methylation guide for ribosomal RNA.
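A C/D-class snoRNA can be screened for its box motifs with a simple pattern search. In the Python sketch below, the RNA sequence is invented for illustration; real SNORD31 coordinates would come from a sequence database.

    import re

    rna = "AUGAUGAUCCGUAAACGGUAGCUAAUGCGGAUCCUGAUU"  # invented sequence

    c_box = re.search("UGAUGA", rna)    # C box, expected near the 5' end
    d_box = None
    for m in re.finditer("CUGA", rna):  # D box: take the match nearest the 3' end
        d_box = m

    print("C box at position", c_box.start() if c_box else None)
    print("D box at position", d_box.start() if d_box else None)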
https://en.wikipedia.org/wiki/TA-CD
TA-CD is a vaccine developed by the Xenova Group and designed to negate the effects of cocaine, making it suitable for use in the treatment of addiction. It is created by combining norcocaine with inactivated cholera toxin.

It works in much the same way as a regular vaccine: a large protein molecule attaches to cocaine and stimulates a response from antibodies, which destroy the molecule. This also prevents the cocaine from crossing the blood–brain barrier, negating the euphoric high and rewarding effect of cocaine caused by stimulation of dopamine release in the mesolimbic reward pathway. The vaccine does not affect the user's "desire" for cocaine, only the physical effects of the drug.

Results

Phase III clinical trials completed in 2014 showed no significant difference between users given placebo and users given TA-CD. Patients in the high-antibody group had a lower dropout rate and fewer positive cocaine urine results in the last two weeks of the trial, but the effect was not significant versus the low-antibody or placebo groups. At other points of the study, however, high-antibody users had more positive urine results, most likely because some users tried to overcome the antibodies by taking larger amounts of cocaine. The vaccine has no effect on the underlying neurobiological causes of addiction, which is a possible explanation for the clinical trial's failure.

See also
Cocaine haptens – structures which elicit antibodies against cocaine
TA-NIC – a similar nicotine vaccine

External links
Would you vaccinate your child against cocaine?
A thermostable bacterial cocaine esterase rapidly eliminates cocaine from brain in nonhuman primates. Translational Psychiatry (2014) 4, e407; doi:10.1038/tp.2014.48
https://en.wikipedia.org/wiki/Character%20encoding
Character encoding is the process of assigning numbers to graphical characters, especially the written characters of human language, allowing them to be stored, transmitted, and transformed using computers. The numerical values that make up a character encoding are known as code points and collectively comprise a code space, a code page, or a character map.

Early character encodings that originated with optical or electrical telegraphy and in early computers could only represent a subset of the characters used in written languages, sometimes restricted to upper-case letters, numerals and some punctuation only. Over time, character encodings capable of representing more characters were created, such as ASCII, the ISO/IEC 8859 encodings, various computer vendor encodings, and Unicode encodings such as UTF-8 and UTF-16. The most popular character encoding on the World Wide Web is UTF-8, used in 98.2% of surveyed web sites as of May 2024. In application programs and operating system tasks, both UTF-8 and UTF-16 are popular options.

History

The history of character codes illustrates the evolving need for machine-mediated character-based symbolic information over a distance, using once-novel electrical means. The earliest codes were based upon manual and hand-written encoding and cyphering systems, such as Bacon's cipher, Braille, international maritime signal flags, and the 4-digit encoding of Chinese characters for a Chinese telegraph code (Hans Schjellerup, 1869). With the adoption of electrical and electro-mechanical techniques these earliest codes were adapted to the new capabilities and limitations of the early machines.

The earliest well-known electrically transmitted character code, Morse code, introduced in the 1840s, used a system of four "symbols" (short signal, long signal, short space, long space) to generate codes of variable length. Though some commercial use of Morse code was via machinery, it was often used as a manual code, generated by hand on a telegraph key and decipherable by ear, and it persists in amateur radio and aeronautical use. Most codes are of fixed per-character length or variable-length sequences of fixed-length codes (e.g. Unicode).

Common examples of character encoding systems include Morse code, the Baudot code, the American Standard Code for Information Interchange (ASCII) and Unicode. Unicode, a well-defined and extensible encoding system, has replaced most earlier character encodings, but the path of code development to the present is fairly well known.

The Baudot code, a five-bit encoding, was created by Émile Baudot in 1870, patented in 1874, modified by Donald Murray in 1901, and standardized by CCITT as International Telegraph Alphabet No. 2 (ITA2) in 1930. The name "baudot" has been erroneously applied to ITA2 and its many variants. ITA2 suffered from many shortcomings and was often improved by many equipment manufacturers, sometimes creating compatibility issues.

Herman Hollerith invented punch card data encoding in the late 19th century to analyze census data. Initially, each hole position represented a different data element, but later, numeric information was encoded by numbering the lower rows 0 to 9, with a punch in a column representing its row number. Later, alphabetic data was encoded by allowing more than one punch per column. Electromechanical tabulating machines represented data internally by the timing of pulses relative to the motion of the cards through the machine.
When IBM went to electronic processing, starting with the IBM 603 Electronic Multiplier, it used a variety of binary encoding schemes that were tied to the punch card code. IBM used several binary-coded decimal (BCD) six-bit character encoding schemes, starting as early as 1953 in its 702 and 704 computers, and in its later 7000 series and 1400 series, as well as in associated peripherals. Since the punched card code then in use only allowed digits, upper-case English letters and a few special characters, six bits were sufficient. These BCD encodings extended existing simple four-bit numeric encoding to include alphabetic and special characters, mapping them easily to punch-card encoding, which was already in widespread use. IBM's codes were used primarily with IBM equipment; other computer vendors of the era had their own character codes, often six-bit, but usually had the ability to read tapes produced on IBM equipment. IBM's BCD encodings were the precursors of its Extended Binary-Coded Decimal Interchange Code (usually abbreviated as EBCDIC), an eight-bit encoding scheme developed in 1963 for the IBM System/360 that featured a larger character set, including lower-case letters.

In 1959 the U.S. military defined its Fieldata code, a six- or seven-bit code introduced by the U.S. Army Signal Corps. While Fieldata addressed many of the then-modern issues (e.g. letter and digit codes arranged for machine collation), it fell short of its goals and was short-lived. In 1963 the first ASCII code was released (X3.4-1963) by the ASCII committee (which contained at least one member of the Fieldata committee, W. F. Leubbert), and it addressed most of the shortcomings of Fieldata using a simpler seven-bit code. Many of the changes were subtle, such as collatable character sets within certain numeric ranges. ASCII63 was a success, widely adopted by industry, and with the follow-up issue of the 1967 ASCII code (which added lower-case letters and fixed some "control code" issues) ASCII67 was adopted fairly widely. ASCII67's American-centric nature was somewhat addressed in the European ECMA-6 standard. Eight-bit extended ASCII encodings, such as various vendor extensions and the ISO/IEC 8859 series, supported all ASCII characters as well as additional non-ASCII characters.

In trying to develop universally interchangeable character encodings, researchers in the 1980s faced a dilemma: on the one hand, it seemed necessary to add more bits to accommodate additional characters, but on the other hand, for the users of the relatively small character set of the Latin alphabet (who still constituted the majority of computer users), those additional bits were a colossal waste of then-scarce and expensive computing resources (as they would always be zeroed out for such users). In 1985, the average personal computer user's hard disk drive could store only about 10 megabytes, and it cost approximately US$250 on the wholesale market (and much more if purchased separately at retail), so it was very important at the time to make every bit count.

The compromise solution that was eventually found was to break the assumption (dating back to telegraph codes) that each character should always directly correspond to a particular sequence of bits. Instead, characters would first be mapped to a universal intermediate representation in the form of abstract numbers called code points.
Code points would then be represented in a variety of ways and with various default numbers of bits per character (code units) depending on context. To encode code points higher than the length of the code unit, such as above 256 for eight-bit units, the solution was to implement variable-length encodings, where an escape sequence would signal that subsequent bits should be parsed as a higher code point.

Terminology

Informally, the terms "character encoding", "character map", "character set" and "code page" are often used interchangeably. Historically, the same standard would specify a repertoire of characters and how they were to be encoded into a stream of code units — usually with a single character per code unit. However, due to the emergence of more sophisticated character encodings, the distinction between these terms has become important.

A character is a minimal unit of text that has semantic value.
A character set is a collection of elements used to represent text. For example, the Latin alphabet and Greek alphabet are both character sets.
A coded character set is a character set mapped to a set of unique numbers. For historical reasons, this is also often referred to as a code page.
A character repertoire is the set of characters that can be represented by a particular coded character set. The repertoire may be closed, meaning that no additions are allowed without creating a new standard (as is the case with ASCII and most of the ISO-8859 series); or it may be open, allowing additions (as is the case with Unicode and, to a limited extent, the Windows code pages).
A code point is a value or position of a character in a coded character set.
A code space is the range of numerical values spanned by a coded character set.
A code unit is the minimum bit combination that can represent a character in a character encoding (in computer science terms, it is the word size of the character encoding). For example, common code units include 7-bit, 8-bit, 16-bit, and 32-bit. In some encodings, some characters are encoded using multiple code units; such an encoding is referred to as a variable-width encoding.

Code pages

"Code page" is a historical name for a coded character set. Originally, a code page referred to a specific page number in the IBM standard character set manual, which would define a particular character encoding. Other vendors, including Microsoft, SAP, and Oracle Corporation, also published their own sets of code pages; the most well-known code page suites are "Windows" (based on Windows-1252) and "IBM"/"DOS" (based on code page 437). Despite no longer referring to specific page numbers in a standard, many character encodings are still referred to by their code page number; likewise, the term "code page" is often still used to refer to character encodings in general. The term "code page" is not used in Unix or Linux, where "charmap" is preferred, usually in the larger context of locales. IBM's Character Data Representation Architecture (CDRA) designates entities with coded character set identifiers (CCSIDs), each of which is variously called a "charset", "character set", "code page", or "CHARMAP".

Code units

The code unit size is equivalent to the bit measurement for the particular encoding:

A code unit in ASCII consists of 7 bits.
A code unit in UTF-8, EBCDIC and GB 18030 consists of 8 bits.
A code unit in UTF-16 consists of 16 bits.
A code unit in UTF-32 consists of 32 bits.
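In Python, which exposes encoded text as byte strings, the differing code unit sizes can be observed directly; the sample text below is arbitrary.

    text = "A€"  # U+0041 and U+20AC

    print(len(text.encode("utf-8")))           # 4 eight-bit code units (1 + 3)
    print(len(text.encode("utf-16-le")) // 2)  # 2 sixteen-bit code units
    print(len(text.encode("utf-32-le")) // 4)  # 2 thirty-two-bit code units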
Code points

A code point is represented by a sequence of code units. The mapping is defined by the encoding. Thus, the number of code units required to represent a code point depends on the encoding:

UTF-8: code points map to a sequence of one, two, three or four code units.
UTF-16: any code point with a scalar value less than U+10000 is encoded with a single code unit, while code points with a value of U+10000 or higher require two code units each. These pairs of code units have a unique term in UTF-16: "Unicode surrogate pairs".
UTF-32: the 32-bit code unit is large enough that every code point is represented as a single code unit.
GB 18030: multiple code units per code point are common, because of the small code units. Code points are mapped to one, two, or four code units.

Characters

Exactly what constitutes a character varies between character encodings. For example, for letters with diacritics there are two distinct approaches that can be taken to encode them: they can be encoded either as a single unified character (known as a precomposed character), or as separate characters that combine into a single glyph. The former simplifies the text-handling system, but the latter allows any letter/diacritic combination to be used in text. Ligatures pose similar problems. Exactly how to handle glyph variants is a choice that must be made when constructing a particular character encoding. Some writing systems, such as Arabic and Hebrew, need to accommodate things like graphemes that are joined in different ways in different contexts, but represent the same semantic character.

Unicode encoding model

Unicode and its parallel standard, the ISO/IEC 10646 Universal Character Set, together constitute a unified standard for character encoding. Rather than mapping characters directly to bytes, Unicode separately defines a coded character set that maps characters to unique natural numbers (code points), how those code points are mapped to a series of fixed-size natural numbers (code units), and finally how those units are encoded as a stream of octets (bytes). The purpose of this decomposition is to establish a universal set of characters that can be encoded in a variety of ways. To describe this model precisely, Unicode uses its own set of terminology:

An abstract character repertoire (ACR) is the full set of abstract characters that a system supports. Unicode has an open repertoire, meaning that new characters will be added to the repertoire over time.
A coded character set (CCS) is a function that maps characters to code points (each code point represents one character). For example, in a given repertoire, the capital letter "A" in the Latin alphabet might be represented by the code point 65, the character "B" by 66, and so on. Multiple coded character sets may share the same character repertoire; for example ISO/IEC 8859-1 and IBM code pages 037 and 500 all cover the same repertoire but map it to different code points.
A character encoding form (CEF) is the mapping of code points to code units to facilitate storage in a system that represents numbers as bit sequences of fixed length (i.e. practically any computer system). For example, a system that stores numeric information in 16-bit units can only directly represent code points 0 to 65,535 in each unit, but larger code points (say, 65,536 to 1.4 million) could be represented by using multiple 16-bit units. This correspondence is defined by a CEF.
A character encoding scheme (CES) is the mapping of code units to a sequence of octets to facilitate storage on an octet-based file system or transmission over an octet-based network. Simple character encoding schemes include UTF-8, UTF-16BE, UTF-32BE, UTF-16LE, and UTF-32LE; compound character encoding schemes, such as UTF-16, UTF-32 and ISO/IEC 2022, switch between several simple schemes by using a byte order mark or escape sequences; compressing schemes try to minimize the number of bytes used per code unit (such as SCSU and BOCU). Although UTF-32BE and UTF-32LE are simpler CESes, most systems working with Unicode use either UTF-8, which is backward compatible with fixed-length ASCII and maps Unicode code points to variable-length sequences of octets, or UTF-16BE, which is backward compatible with fixed-length UCS-2BE and maps Unicode code points to variable-length sequences of 16-bit words. See comparison of Unicode encodings for a detailed discussion.

Finally, there may be a higher-level protocol which supplies additional information to select the particular variant of a Unicode character, particularly where there are regional variants that have been 'unified' in Unicode as the same character. An example is the XML attribute xml:lang. The Unicode model uses the term "character map" for other systems which directly assign a sequence of characters to a sequence of bytes, covering all of the CCS, CEF and CES layers.

Unicode code points

In Unicode, a character can be referred to as 'U+' followed by its code point value in hexadecimal. The range of valid code points (the codespace) for the Unicode standard is U+0000 to U+10FFFF, inclusive, divided into 17 planes, identified by the numbers 0 to 16. Characters in the range U+0000 to U+FFFF are in plane 0, called the Basic Multilingual Plane (BMP); this plane contains the most commonly used characters. Characters in the range U+10000 to U+10FFFF in the other planes are called supplementary characters.

Example

Consider a string of the letters "ab̲c𐐀" — that is, a string containing a Unicode combining character () as well as a supplementary character (). This string has several Unicode representations which are logically equivalent, yet each is suited to a different set of circumstances or range of requirements:

Four composed characters: , , ,
Five graphemes: , , , ,
Five Unicode code points: , , , ,
Five UTF-32 code units (32-bit integer values): , , , ,
Six UTF-16 code units (16-bit integers): , , , , ,
Nine UTF-8 code units (8-bit values, or bytes): , , , , , , , ,

Note in particular that 𐐀 is represented with either one 32-bit value (UTF-32), two 16-bit values (UTF-16), or four 8-bit values (UTF-8). Although each of those forms uses the same total number of bits (32) to represent the glyph, it is not obvious how the actual numeric byte values are related.
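The counts above can be reproduced in a few lines of Python, which stores strings as sequences of code points and exposes each encoding form via encode():

    s = "ab\u0332c\U00010400"  # 'a', 'b', combining low line, 'c', U+10400

    print(len(s))                           # 5 code points
    print(len(s.encode("utf-32-le")) // 4)  # 5 UTF-32 code units
    print(len(s.encode("utf-16-le")) // 2)  # 6 UTF-16 code units (surrogate pair)
    print(len(s.encode("utf-8")))           # 9 UTF-8 code units (bytes)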
iconv – a program and standardized API to convert encodings luit – a program that converts encoding of input and output to programs running interactively International Components for Unicode – A set of C and Java libraries to perform charset conversion. uconv can be used from ICU4C. Windows: Encoding.Convert – .NET API MultiByteToWideChar/WideCharToMultiByte – to convert from ANSI to Unicode & Unicode to ANSI Common character encodings The most used character encoding on the web is UTF-8, used in 98.2% of surveyed web sites, as of May 2024. In application programs and operating system tasks, both UTF-8 and UTF-16 are popular options. ISO 646 ASCII EBCDIC ISO 8859: ISO 8859-1 Western Europe ISO 8859-2 Western and Central Europe ISO 8859-3 Western Europe and South European (Turkish, Maltese plus Esperanto) ISO 8859-4 Western Europe and Baltic countries (Lithuania, Estonia, Latvia and Lapp) ISO 8859-5 Cyrillic alphabet ISO 8859-6 Arabic ISO 8859-7 Greek ISO 8859-8 Hebrew ISO 8859-9 Western Europe with amended Turkish character set ISO 8859-10 Western Europe with rationalised character set for Nordic languages, including complete Icelandic set ISO 8859-11 Thai ISO 8859-13 Baltic languages plus Polish ISO 8859-14 Celtic languages (Irish Gaelic, Scottish, Welsh) ISO 8859-15 Added the Euro sign and other rationalisations to ISO 8859-1 ISO 8859-16 Central, Eastern and Southern European languages (Albanian, Bosnian, Croatian, Hungarian, Polish, Romanian, Serbian and Slovenian, but also French, German, Italian and Irish Gaelic) CP437, CP720, CP737, CP850, CP852, CP855, CP857, CP858, CP860, CP861, CP862, CP863, CP865, CP866, CP869, CP872 MS-Windows character sets: Windows-1250 for Central European languages that use Latin script, (Polish, Czech, Slovak, Hungarian, Slovene, Serbian, Croatian, Bosnian, Romanian and Albanian) Windows-1251 for Cyrillic alphabets Windows-1252 for Western languages Windows-1253 for Greek Windows-1254 for Turkish Windows-1255 for Hebrew Windows-1256 for Arabic Windows-1257 for Baltic languages Windows-1258 for Vietnamese Mac OS Roman KOI8-R, KOI8-U, KOI7 MIK ISCII TSCII VISCII JIS X 0208 is a widely deployed standard for Japanese character encoding that has several encoding forms. Shift JIS (Microsoft Code page 932 is a dialect of Shift_JIS) EUC-JP ISO-2022-JP JIS X 0213 is an extended version of JIS X 0208. Shift_JIS-2004 EUC-JIS-2004 ISO-2022-JP-2004 Chinese Guobiao GB 2312 GBK (Microsoft Code page 936) GB 18030 Taiwan Big5 (a more famous variant is Microsoft Code page 950) Hong Kong HKSCS Korean KS X 1001 is a Korean double-byte character encoding standard EUC-KR ISO-2022-KR Unicode (and subsets thereof, such as the 16-bit 'Basic Multilingual Plane') UTF-8 UTF-16 UTF-32 ANSEL or ISO/IEC 6937 See also Percent-encoding Alt code Character encodings in HTML :Category:Character encoding – articles related to character encoding in general :Category:Character sets – articles detailing specific character encodings Hexadecimal representations Mojibake – character set mismap Mojikyō – a system ("glyph set") that includes over 100,000 Chinese character drawings, modern and ancient, popular and obscure Presentation layer TRON, part of the TRON project, is an encoding system that does not use Han Unification; instead, it uses "control codes" to switch between 16-bit "planes" of characters. 
Universal Character Set characters Charset sniffing – used in some applications when character encoding metadata is not available References Further reading External links Character sets registered by Internet Assigned Numbers Authority (IANA) Characters and encodings, by Jukka Korpela Unicode Technical Report #17: Character Encoding Model Decimal, Hexadecimal Character Codes in HTML Unicode – Encoding converter The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) by Joel Spolsky (Oct 10, 2003) Encoding
Character encoding
Technology
4,729
431,529
https://en.wikipedia.org/wiki/Supersolid
In condensed matter physics, a supersolid is a spatially ordered (i.e. solid) material with superfluid properties. In the case of helium-4, it has been conjectured since the 1960s that it might be possible to create a supersolid. Starting from 2017, a definitive proof for the existence of this state was provided by several experiments using atomic Bose–Einstein condensates. The general conditions required for supersolidity to emerge in a certain substance are a topic of ongoing research. Background A supersolid is a special quantum state of matter where particles form a rigid, spatially ordered structure, but also flow with zero viscosity. This is in contradiction to the intuition that flow, and in particular superfluid flow with zero viscosity, is a property exclusive to the fluid state, e.g., superconducting electron and neutron fluids, gases with Bose–Einstein condensates, or unconventional liquids such as helium-4 or helium-3 at sufficiently low temperature. For more than 50 years it was thus unclear whether the supersolid state can exist. Experiments using helium While several experiments yielded negative results, in the 1980s, John Goodkind discovered the first anomaly in a solid by using ultrasound. Inspired by his observation, in 2004 Eun-Seong Kim and Moses Chan at Pennsylvania State University saw phenomena which were interpreted as supersolid behavior. Specifically, they observed a non-classical rotational moment of inertia of a torsional oscillator. This observation could not be explained by classical models but was consistent with superfluid-like behavior of a small percentage of the helium atoms contained within the oscillator. This observation triggered a large number of follow-up studies to reveal the role played by crystal defects or helium-3 impurities. Further experimentation has cast some doubt on the existence of a true supersolid in helium. Most importantly, it was shown that the observed phenomena could be largely explained due to changes in the elastic properties of the helium. In 2012, Chan repeated his original experiments with a new apparatus that was designed to eliminate any such contributions. In this experiment, Chan and his coauthors found no evidence of supersolidity. Experiments using ultracold quantum gases In 2017, two research groups from ETH Zurich and from MIT reported on the creation of an ultracold quantum gas with supersolid properties. The Zurich group placed a Bose–Einstein condensate inside two optical resonators, which enhanced the atomic interactions until they started to spontaneously crystallize and form a solid that maintains the inherent superfluidity of Bose–Einstein condensates. This setting realises a special form of a supersolid, the so-called lattice supersolid, where atoms are pinned to the sites of an externally imposed lattice structure. The MIT group exposed a Bose–Einstein condensate in a double-well potential to light beams that created an effective spin–orbit coupling. The interference between the atoms on the two spin–orbit coupled lattice sites gave rise to a characteristic density modulation. In 2019, three groups from Stuttgart, Florence, and Innsbruck observed supersolid properties in dipolar Bose–Einstein condensates formed from lanthanide atoms. In these systems, supersolidity emerges directly from the atomic interactions, without the need for an external optical lattice. This facilitated also the direct observation of superfluid flow and hence the definitive proof for the existence of the supersolid state of matter. 
In 2021, confocal cavity quantum electrodynamics with a Bose–Einstein condensate was used to create a supersolid that possesses a key property of solids, vibration. That is, a supersolid was created that possesses lattice phonons with a Goldstone mode dispersion exhibiting a 16 cm/s speed of sound. In 2021, dysprosium was used to create a 2-dimensional supersolid quantum gas, in 2022, the same team created a supersolid disk in a round trap and in 2024 they reported the observation of quantum vortices in the supersolid phase Theory In most theories of this state, it is supposed that vacancies – empty sites normally occupied by particles in an ideal crystal – lead to supersolidity. These vacancies are caused by zero-point energy, which also causes them to move from site to site as waves. Because vacancies are bosons, if such clouds of vacancies can exist at very low temperatures, then a Bose–Einstein condensation of vacancies could occur at temperatures less than a few tenths of a Kelvin. A coherent flow of vacancies is equivalent to a "superflow" (frictionless flow) of particles in the opposite direction. Despite the presence of the gas of vacancies, the ordered structure of a crystal is maintained, although with less than one particle on each lattice site on average. Alternatively, a supersolid can also emerge from a superfluid. In this situation, which is realised in the experiments with atomic Bose–Einstein condensates, the spatially ordered structure is a modulation on top of the superfluid density distribution. See also Superfluid film Superglass References External links Nature story on a supersolid experiment APS Physics Magazine on a vibrating supersolid experiment Penn State: What is a Supersolid? Condensed matter physics Phases of matter Liquid helium
Supersolid
Physics,Chemistry,Materials_science,Engineering
1,110
2,114,924
https://en.wikipedia.org/wiki/Spring%20Drive
Spring Drive is a name given to a series of watch movements produced by Epson in Shiojiri. The concept of using a mainspring to power a quartz timing package was first conceived in 1977 by Yoshikazu Akahane (赤羽 好和) at Suwa Seikosha (now a part of Epson after a 1985 merger). Specified to one second accuracy per day, the movement uses a conventional gear train as in traditional mechanical watches, but rather than an escapement and balance wheel, instead features Seiko's Tri-synchro Regulator system in which power delivery to the watch hands is regulated based on a reference quartz signal. Commercially released in 1999, the movement is found in watches distributed by the Seiko Watch Corporation, including its Credor, Grand Seiko, Presage, and Prospex brands. Mechanics The Spring Drive uses a conventional mainspring and barrel along with automatic and/or stem winding to store energy, just as in a mechanical watch. However, the escapement and balance wheel in mechanical watches is replaced by Seiko's Tri-synchro Regulator system, a phase-locked loop wherein a rotor, which Seiko refers to as a "glide wheel", is powered by the mainspring barrel via a stator. The glide wheel in turn powers a reference quartz crystal and accompanying integrated circuit which controls an electromagnetic brake which then regulates the rotational speed of the glide wheel itself. The glide wheel is intended to rotate eight times per second; the rotational speed is sampled once every rotation and a variable braking force is continuously applied to maintain that target frequency. As the glide wheel directly powers the seconds hand of the watch, this results in a true continuously sweeping second hand – in contrast to the beats per time motion resulting from the back-and-forth movement of traditional mechanical watches or the tick of typical quartz watches. The movement is specified to ±15 seconds per month. History The design was first conceived by Yoshikazu Akahane at Suwa Seikosha in 1977 and patents were applied for it in 1982; in total, no fewer than 230 patents have been applied worldwide for this movement. Initial development was hindered by the high energy consumption of the reference quartz crystal and integrated circuit making a watch with a then-target 48-hour power reserve impossible; another attempt in 1993 was also unsuccessful for the same reason. It was not until a third attempt in 1997, using a quartz crystal and integrated circuit with energy consumption approximately one one-hundredth that used in the initial attempt in 1982, that a Spring Drive watch with sufficient power reserve was deemed feasible. Over 600 prototypes were produced during development. The Spring Drive movement was announced publicly in 1997 and presented at the 1998 Basel Watch Fair. In 1999, the first production models were made available in Japan as limited edition, manual-wind watches in both the Credor and Seiko brands. The first non-limited model was released in Japan in 2002. The 1st spring drive automatic-wind movement of Grand Seiko was released in September 2004, the reference number is SBGA001. The first automatic-wind Spring Drive model was released in 2005, and coincided with the introduction of the Spring Drive movement to markets outside of Japan. Calibers Early models, manual wind and 48h power reserve: 7R68 : 30 jewels, date. 7R78 : 30 jewels, date. 7R88 : 30 jewels, date. 7R99 : 32 jewels. Current calibers with standard features. 
Time accuracy: monthly rate within ±15 sec (equivalent to a daily rate of ±1 sec) and power reserve (72h) indicator. 5R64 : 32 jewels, date, small seconds hand. 5R65 : 30 jewels, date. 5R66 : 30 jewels, date, GMT. 5R67 : 30 jewels, Moon Phase indicator. 5R77 : 30 jewels, Moon Phase indicator. 5R85 : 49 jewels, date, Chronograph, Izul. 5R86 : 50 jewels, date, GMT, Chronograph, Spacewalk. 7R06 : 88 jewels, manual winding, Sonnerie. 7R08 : 44 jewels, manual winding, Eichi I. 7R11 : 112 jewels, manual winding, Minute Repeater. 7R14 : 41 jewels, manual winding, Eichi II. 9R01 : 56 jewels, manual winding. Power reserve 8 days (192h). 9R15 : 30 jewels, date. Monthly rate within ±10 sec (±0.5 sec per day). 9R31 : 30 jewels, manual wind 9R65 : 30 jewels, date. 9R66 : 30 jewels, date, GMT. 9R84 : 41 jewels, date, Chronograph. 9R86 : 50 jewels, date, GMT, Chronograph. 9R96 : 50 jewels, date, GMT, Chronograph. Current calibers with higher power reserve and higher accuracy. Time accuracy: monthly rate within ±10 sec (equivalent to a daily rate of ±0.5 sec) and power reserve (5 days) indicator. 9RA5 : 38 jewels, date. 9RA2 : 38 jewels, date. Rear power reserve indicator. Notes and references Watches Epson Seiko Timekeeping components Japanese inventions de:Seiko#Spring Drive
Spring Drive
Technology
1,108
1,129,272
https://en.wikipedia.org/wiki/Orders%20of%20magnitude%20%28speed%29
To help compare different orders of magnitude, the following list describes various speed levels between approximately 2.2×10⁻¹⁸ m/s and 3.0×10⁸ m/s (the speed of light). Values in bold are exact. List of orders of magnitude for speed See also Typical projectile speeds - also showing the corresponding kinetic energy per unit mass Neutron temperature References Units of velocity Physical quantities Speed
Orders of magnitude (speed)
Physics,Mathematics
76
40,346,741
https://en.wikipedia.org/wiki/Harvard-MIT%20Data%20Center
The Harvard-MIT Data Center (HMDC) provides multi-disciplinary information technology support for social science research and education at Harvard and MIT. Established in the early 1960s the HMDC was meant to be the original data center for political and social science at Harvard University, and over time it has evolved into an information technology service provider that transcends many educational fields. Services The HMDC offers the following services: Powerful, usable research computing tools Cluster computing power Application and server hosting On-site computer labs Statistical workshops and classes User friendly desktop support History In the early 1960s the HMDC, originally known as the Government Data Center, was established as part of a national movement for all universities to collect, consolidate, and share social science research data. This movement eventually became known as the Interuniversity Consortium for Political and Social Research (ICPSR), the largest collection of social science data in the world. In the early days associates of the Government Data Center were responsible for managing the distribution of ICPSR tapes housed in Harvard's Office of Information Technology. In 1987 all holdings within the facility were transferred to the Faculty of Arts and Sciences Department of Government (located in Harvard's Littauer building) and in recognition of the widespread use of the facility's data by Harvard scholars the name was changed to the Harvard Data Center. At this time some of the earliest local computer networks, which contained statistical software and computing resources, were established; in addition, associates began transitioning the facility's holdings from tape to more modern media. In the early 1990s associates of the Harvard Data Center played a major role in a National Science Foundation (NSF) grant that established a research training program in political economy for various educational institutions. Later on, in 1996, facility associates entered into an agreement with MIT to extend services to MIT users, thus changing the name to the Harvard-MIT Data Center (HMDC). In 1999 HMDC associates were awarded a multimillion-dollar grant by the NSF and five other funding agencies to create an open-source, digital library for quantitative research data; associates of the facility continue to receive additional grants and funding support from vendors, such as the NSF and the Library of Congress, to continue their research and development projects. In 2005, after the facility was transferred into Harvard's new Center for Government and International Studies complex, the HMDC became a founding member of the Institute for Quantitative Social Science (IQSS), and in 2007 HMDC associates launched their new online data center, the Dataverse Network repository. Today, the HMDC continues to serve the social science community by providing technology support for research, education, and administration. External links "HMDC" Data centers Harvard University Massachusetts Institute of Technology
Harvard-MIT Data Center
Technology
555
383,652
https://en.wikipedia.org/wiki/Telluric%20current
A telluric current (), or Earth current, is an electric current that flows underground or through the sea, resulting from natural and human-induced causes. These currents have extremely low frequency and traverse large areas near or at Earth's surface. Earth's crust and mantle are host to telluric currents, with around 32 mechanisms generating them, primarily geomagnetically induced currents caused by changes in Earth's magnetic field due to solar wind interactions with the magnetosphere or solar radiation's effects on the ionosphere. These currents exhibit diurnal patterns, flowing towards the Sun during the day and towards the geomagnetic poles at night. Both telluric and magnetotelluric methods exploit these currents for subsurface exploration, aiding in activities like geothermal and mineral exploration, petroleum prospecting, fault zone mapping, groundwater assessment, and the study of tectonic plate boundaries. The phenomenon has also captured the imagination of authors, finding its way into fiction. In Umberto Eco's Foucault's Pendulum, the search for a mystic center of the Earth connects to telluric currents, while Thomas Pynchon's Mason & Dixon incorporates them as enigmatic communication conduits alongside Hollow Earth theories. Description Telluric currents are phenomena observed in the Earth's crust and mantle. In September 1862, an experiment to specifically address Earth currents was carried out in the Munich Alps (Lamont, 1862). Including minor processes, there are at least 32 different mechanisms which cause telluric currents. The strongest are primarily geomagnetically induced currents, which are induced by changes in the outer part of the Earth's magnetic field, which are usually caused by interactions between the solar wind and the magnetosphere or solar radiation effects on the ionosphere. Telluric currents flow in the surface layers of the Earth. The electric potential on the Earth's surface can be measured at different points, enabling the calculation of the magnitudes and directions of the telluric currents and hence the Earth's conductance. These currents are known to have diurnal characteristics wherein the general direction of flow is towards the Sun. Telluric currents continuously move between the sunlit and shadowed sides of the Earth, toward the equator on the side of the Earth facing the Sun (that is, during the day), and toward the poles on the night side of the planet. Both telluric and magnetotelluric methods are used for exploring the structure beneath the Earth's surface (such as in industrial prospecting). For mineral exploration the targets are any subsurface structures with a distinguishable resistance in comparison to its surroundings. Uses include geothermal exploration, mining exploration, petroleum exploration, mapping of fault zones, ground water exploration and monitoring, investigation of magma chambers, and investigation of boundaries of tectonic plates. Earth batteries tap a useful low voltage current from telluric currents and were used for telegraph systems as far back as the 1840s. In industrial prospecting activity that uses the telluric current method, electrodes are properly located on the ground to sense the voltage difference between locations caused by the oscillatory telluric currents. It is recognized that a low frequency window (LFW) exists when telluric currents pass through the Earth's substrata. In the frequencies of the LFW, the Earth acts as a conductor. 
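As a rough illustration of how such measurements are used in practice, the sketch below applies the standard Cagniard apparent-resistivity estimate from magnetotellurics, which combines a measured telluric electric field with the accompanying magnetic field at a given frequency. The numbers fed into it are hypothetical example values chosen only for the demonstration, not measurements reported here:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, in H/m

def apparent_resistivity(e_field, b_field, frequency):
    """Cagniard apparent resistivity (ohm-metres) estimated from the
    amplitudes of orthogonal horizontal fields: telluric E in V/m and
    magnetic B in tesla, at a single frequency in Hz."""
    omega = 2 * math.pi * frequency
    h_field = b_field / MU0          # convert B (tesla) to H (A/m)
    impedance = e_field / h_field    # surface impedance magnitude |Z| = |E/H|
    return impedance ** 2 / (MU0 * omega)

# Hypothetical low-frequency telluric signal (0.01 Hz, i.e. a 100 s period)
rho_a = apparent_resistivity(e_field=1e-6, b_field=5e-9, frequency=0.01)
print(f"apparent resistivity ~ {rho_a:.1f} ohm-m")
```

Lower frequencies penetrate deeper into the ground, which is one reason the low frequency window mentioned above matters for exploration at depth.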
In fiction The main plot of the 1988 novel Foucault's Pendulum by Umberto Eco revolves around conspiracy theorists who believe that they are searching for the (Latin for "The Navel of the World"), the mystic "Center of The Earth" which is supposed to be a certain point from where a person could control the energies and shapes of the Earth, thus reforming it at will. The novel takes this even further by suggesting that (in the view of the conspiratorialists) monuments like the Eiffel Tower are nothing more than giant antennas related to these energies. Telluric currents, along what are effectively ley lines, are discovered to be a means of mysterious communication in Thomas Pynchon's 1997 novel Mason & Dixon and are associated with the book's Chinese-Jesuit subplot. As with Eco, cited above, Pynchon also reflects upon Hollow Earth theories in this work. See also References Further reading External links MTNet, Scientists engaged in the study of the Earth using electromagnetic methods, principally the magnetotelluric technique (magnetotellurics). Geophysics Enochian magic
Telluric current
Physics
928
32,518,704
https://en.wikipedia.org/wiki/Braided%20vector%20space
In mathematics, a braided vector space is a vector space $V$ together with an additional structure map $\tau\colon V\otimes V\to V\otimes V$ symbolizing the interchanging of two vector tensor copies, such that the Yang–Baxter equation is fulfilled: $(\tau\otimes\mathrm{id}_V)(\mathrm{id}_V\otimes\tau)(\tau\otimes\mathrm{id}_V)=(\mathrm{id}_V\otimes\tau)(\tau\otimes\mathrm{id}_V)(\mathrm{id}_V\otimes\tau)$. Hence, if the braiding is drawn in tensor diagrams as an overcrossing, the corresponding composed morphism is unchanged when a Reidemeister move is applied to the tensor diagram, and thus such diagrams present a representation of the braid group. As a first example, every vector space is braided via the trivial braiding $\tau(x\otimes y)=y\otimes x$ (simply flipping the tensor factors). A superspace has a braiding with a negative sign when braiding two odd vectors: $\tau(x\otimes y)=(-1)^{|x||y|}\,y\otimes x$ for homogeneous $x,y$. More generally, a diagonal braiding means that for a basis $x_1,\ldots,x_n$ we have $\tau(x_i\otimes x_j)=q_{ij}\,x_j\otimes x_i$ for some nonzero scalars $q_{ij}$. A good source for braided vector spaces is any braided monoidal category, which comes with braidings between any two objects $V,W$; the most important examples are the modules over quasitriangular Hopf algebras and Yetter–Drinfeld modules over finite groups (such as the examples above). If $V$ additionally possesses an algebra structure inside the braided category (a "braided algebra"), one has a braided commutator $[x,y]_\tau:=\mu\bigl((\mathrm{id}-\tau)(x\otimes y)\bigr)$, where $\mu$ denotes the multiplication; for the trivial braiding this is the ordinary commutator $xy-yx$, and for two odd vectors in a superspace it is the anticommutator $xy+yx$. Examples of such braided algebras (and even Hopf algebras) are the Nichols algebras, which are by definition generated by a given braided vector space. They appear as the quantum Borel part of quantum groups and often (e.g. when finite or over an abelian group) possess an arithmetic root system, multiple Dynkin diagrams and a PBW basis made up of braided commutators, just like the ones in semisimple Lie algebras. References Hopf algebras Quantum groups
Braided vector space
Mathematics
330
598,460
https://en.wikipedia.org/wiki/Vampire%20Hunter%20D
is a series of novels written by Japanese author Hideyuki Kikuchi and illustrated by Yoshitaka Amano since 1983. As of February 2024, 55 novels have been published in the main series, with some novels comprising as many as four volumes. They have sold over 17 million copies worldwide, making Vampire Hunter D one of the best-selling book series in history. The series has also spawned anime, audio drama, manga, comic adaptations, a video game, as well as a short story collection, art books, and a supplemental guide book. Premise Vampire hunter D wanders through a far-future post-nuclear war Earth that combines elements of pulp genres: western, science fiction, horror and Lovecraftian horror, dark fantasy, folklore, and occult science. The planet, once terrified by the elegant but cruel vampires known as , ancient demons, mutants, and their technological creations, is now slowly returning to a semblance of order and human control—thanks in part to the decadence that brought about the downfall of the vampire race, to the continued stubbornness of frontier dwellers, and to the rise of a caste of independent hunters-for-hire who eliminate supernatural threats. Some time in 1999, a nuclear war occurred. The Nobility were vampires that planned for a possible nuclear war and sequestered all that was needed to rebuild civilization in their shelters. They use their science combined with magic to restore the world in their image. Nearly all magical creatures on Earth are genetically engineered by the Nobility to possess seemingly supernatural abilities, regeneration, and biological immortality, with a very small number being demons, gods, aliens, and extradimensional beings who survived the holocaust. Despite their technology being advanced enough to create a blood substitute as food, they still prefer to feed on humans. As such, they created a civilization where vampires and humans coexisted, eventually developing the planet into parklands and cities. The society eventually stagnates when vampire technology perfects scientific prophecy, which determines they are at their zenith of existence and doomed to fall, overthrown by humans. The human race was transformed at this time, with fear for the vampires being woven into their genetics, and the inability to remember vampire weaknesses such as garlic and crucifixes. Unlike vampires from traditional lore, the Nobility can reproduce sexually, although their offspring will permanently cease aging after reaching physical maturity, having inherited their vampire parent's immortality. Main characters D D is a dhampir, the half-breed child of a vampire father and human mother (actually genetically engineered by the ruler and first of vampires called Sacred Ancestor using his own DNA and that of humans in an experiment to create a vampire without vampires' typical weaknesses like the sun and having to drink blood), the ideal vampire hunter. He is renowned for his consummate skill and unearthly grace but feared and despised for his mixed lineage: born of both races but belonging to neither. Often underestimated by his opponents, D possesses surprising power and resourcefulness, having most of the strengths of the Nobility and only mild levels of their common weaknesses. It has been seen in both movies that his power is not only physical, but extends into the magical realm as well. His supernatural powers make him one of the strongest beings in the world, if not the second strongest (second only to his father). 
However, D prefers his physical abilities, only using his magic in times of great need. Unlike most dhampirs, D is able to live as a "normal" human; however, he is marked by his unearthly beauty and exceptionally potent aura and thus rarely accepted by human settlements. In terms of weaknesses, he is randomly susceptible to sun-sickness, a severe type of sunstroke, about once every five years (far less than most dhampirs). D also recovers from it at a rate far greater than other dhampirs. Usually, it takes several days to recover from sunlight syndrome, longer if the dhampir is exceedingly powerful, but D recovered in a few hours (around 1–6 hours approximately) despite being one of the strongest dhampirs alive. Otherwise, D does not appear to suffer from other vampiric weaknesses usual to dhampirs, being able to physically restrain opponents with his aura and having godlike reflexes surpassing even those of Nobles. His symbiotic left hand states, in Vampire Hunter D: Bloodlust: "The Herarchy of us became impatient with the heartless David and impaled The Lord on 'The Sword'". Speculation on whether "The Sword" is D's sword or not is debatable. It is important to note here, however, that the movie differs sharply from the book it takes its story from, Vampire Hunter D: Demon Deathchase, and future entries in the novel series do not differentiate between Dracula, The Vampire King, The Sacred Ancestor and D's father, proposing that they are one and the same. D rides a cybernetic horse with mechanical legs and other enhancements, wields a crescent longsword which looks similar to Yoshitaka Amano's scimitar sword design found in many of his works of art, but the sword has a hefty length, similar to that of a Japanese nodachi. D always wears a mystical blue pendant; it prevents many of the automatic defenses (such as laser fields and small nuclear blasters) created by the Nobility in past millennia from working properly and allows him to enter their sealed castles. He also uses wooden needles in the novels and game, which he can throw with super speed. He protects his milk-white face from the noonday sun with long black hair, flowing black clothing and cape, and the shadow of a wide-brimmed hat. D is described as appearing as a youth between 17 and 18 years old, though D's age is unknown (although in the novel Pale Fallen Angel parts I and II, it is made known that he is at least 5,000 years old, and later it is revealed that he is over 10,000 years old). His beauty is mesmerizing, often unintentionally wooing women and sometimes flustering men. Very little is known of D's past, and the identity of his parents is ambiguous, but his innate power suggests that his father is an extraordinarily powerful vampire. Regarding D's birth, some Nobles whisper dark rumors about their vampire progenitor, the Sacred Ancestor known as Count Dracula, bedding a human woman called "Mina the Fair" (implied to be Mina Harker). Dracula conducted bizarre crossbreeding experiments involving himself and countless human women or even other vampires. The only successful product of the experiments was D. D, wanting nothing to do with his father save for killing him, refusing to go by his true name. Instead, he shortens it to the first letter. In Twin Shadowed Knight, D has a twin who goes unnamed. The twin states that he and D were born from the same woman in exactly the same conditions. 
Left Hand D is the host for a sentient symbiote, Left Hand, a wisecracking homunculus with a human face residing in his left palm, who can suck in massive amounts of matter through a wind void or vacuum tunnel. Left Hand enjoys needling the poker-faced D, but only appears as needed, rarely witnessed or heard by anyone other than D, yet aware of many of D's thoughts and actions. At all other times, D's left hand appears normal. Besides providing a contrast to D's reserved demeanor, Left Hand is incredibly useful, possessing many mysterious powers such as psychometry, inducing sleep, hypnotizing others, determining the medical condition of a victim, absorbing matter, energy, and even souls and magic, healing and reviving D and himself, and the ability to size up the supernatural powers or prowess of an enemy, even beyond D's keen senses. After absorbing four elements, Left Hand can use them to generate powerful attacks, regenerate D and himself, and to transform D into full vampire state rivaling Sacred Ancestor. Left Hand can store absorbed objects in a pocket dimension inside his stomach, and later spit them out. In the first and second novels, Left Hand can also revive D when his physical condition is suffering, by consuming the four elements and converting the resulting energy into life force. This ability even saved D from the usually fatal for vampires stake through the heart he received from Rei-Ginsei in the first novel. Left Hand has its own mind and will, and acts as D's guide and sole permanent companion, providing a reservoir of knowledge pertaining to the lost Noble culture. So far, Left Hand's origins are unknown, and it is unclear how they came to be joined. However, some of its nature is revealed in the third book, which features a similar creature; it is implied he was one of the Barbarois (mutant/demon hybrids) who served in the personal retinue of the Sacred Ancestor, and was experimented on by him to increase his powers over other Barbarois. Sacred Ancestor The Sacred Ancestor's role in the novels is very mixed, appearing both as the bane and savior to isolated towns, and deified as a legendary god-king to the vampires, many of whom have never even met him in person. D quotes the Sacred Ancestor's precepts ("Transient guests are we"—implied to refer to the Nobility) in the first novel. The Sacred Ancestor appears both as a lawgiver honored for his intelligence, who showed some interest in preserving humans, and as a ruthless scientist (in the second novel), conducting hybrid breeding experiments with humans in order to perpetuate his own dwindling species. D appears to have encountered his alleged father on at least one occasion, as when at times D reaches a place where the imprint of the Sacred Ancestor's power remains, D remembers the Sacred Ancestor telling him that "You are my only success." Like D, the Sacred Ancestor is portrayed as a mysterious and handsome young wanderer who deals with both life and death. However, in the English dub of the anime, D states that the Sacred Ancestor respected humanity and did not feed on innocent people. Production In the postscript of the first Vampire Hunter D novel, Hideyuki Kikuchi cited Hammer Films' Horror of Dracula (1958) as his first inspiration in horror. He also praised horror manga artist Osamu Kishimoto for his distinct style, that he described as "a gothic mood in the Western tradition". Kikuchi is famous for writing his manuscripts by hand. 
Having written the Vampire Hunter D series for 40 years, the author has previously admitted that he does not remember all of its mythology, only that of a couple of volumes, and therefore one might find some contradictions in it. Kikuchi explained that the two points he is always careful of are, not to "mix up the characteristics of D", and "not to lose sight of the purpose of the journey". When asked if the Sacred Ancestor will ever be given a larger role, Kikuchi said the character "will come out someday" and "When that happens, please expect that to be your warning that Vampire Hunter D is speeding toward an ending". The author has previous said that he has "a final conflict for D" in mind, but that final conflict will not be the entire ending of the series. Publication history Beginning in January 1983, Kikuchi has written 41 Vampire Hunter D novels spanning 53 volumes as of April 2023. All of the publications in the series were published by Asahi Sonorama until the branch went out of business in September 2007. The release of D – Throng of Heretics in October 2007 under the Asahi Bunko – Sonorama Selection label marked the transition to the new publisher, Asahi Shimbun Publishing, a division of Asahi Sonorama's parent company. From December 2007 through January 2008, Asahi Shimbun Publishing reprinted the complete Vampire Hunter catalogue under the Sonorama Selection label. On May 11, 2005, the first official English translation was released under DH Press, translated by Kevin Leahy. As of 2020, 24 novels have been published in English, spanning 29 volumes. In 2021, Dark Horse began releasing the series in an omnibus format. The first volume, featuring the first three novels, was released on October 27, 2021. In December 2021, Dark Horse Comics in partnership with GraphicAudio began publishing dramatized audiobook adaptations of the Vampire Hunter D novel series featuring a full English voice cast, soundtrack, and sound effects. In January 2011, Hideyuki Kikuchi published the first spinoff set in the Vampire Hunter universe, a series of prequels titled , illustrated by Ayami Kojima, artist and character designer for the Castlevania series of video games. It takes place over 5,000 years before Vampire Hunter D and focuses on expanding the history of the Nobility, following the exploits of the vampire warrior Lord Greylancer. In 2013, Viz Media's Haikasoru imprint released the first official English translation of the prequel series, retitled Noble V: Greylancer, translated by Takami Nieda with newly commissioned cover artwork by Vincent Chong. Adaptations 1985 animated film Vampire Hunter D remains a cult classic among English-speaking audiences. Billed by the Japanese producers as a "dark future science-fiction romance" Vampire Hunter D is set in the year 12,090 AD, in a post-nuclear holocaust world where vampires, mutants and demons "slither through a world of darkness" (in the words of the film's opening introduction). 1988–1990 audio dramas Asashi Sonorama created audio drama adaptations of three of the novels, in five parts: Raiser of Gales "D" (January 1988) (the book it was based on was published May 1984) D – Demon Deathchase (June 1988) D – Mysterious Journey to the North Sea I: To the North Sea (March 1990) D – Mysterious Journey to the North Sea II: Summer at Last (May 1990) D – Mysterious Journey to the North Sea III: When Winter Comes Again (June 1990). Most of the voice cast for the original OVA reprised their roles. 
Originally released on cassette tape, in 2005 they were re-released as a special edition, five-disc Vampire Hunter D Audio Drama Box, including a small supplemental booklet with a new short story by Kikuchi and an "art cloth" with an illustration by Amano. 1999 video game A video game based on Vampire Hunter D Bloodlust was also made for the PlayStation game console, titled Vampire Hunter D. It is a survival horror game, but also similar to a standard adventure title. The player can see D from different pre-rendered angles throughout the game, and allow D to attack enemies with his sword. D can also use magic, Left Hand's abilities, and items. The story of the game is similar to that of Vampire Hunter D Bloodlust, although it takes place entirely within the castle as D fights all the enemies. Only two of the Barbarois mutants appear as enemies. There are three endings, one of which is similar to the end of the anime. 2000 animated film The second film, Vampire Hunter D: Bloodlust garnered respect for its advanced animation techniques, detailed art style and character designs, voice acting originally recorded in English (English voice casting/direction by Jack Fletcher), and its sophisticated orchestral soundtrack composed, arranged and conducted by Marco D'Ambrosio. Its art style closely mirrored that of the illustrator and original character designer of the first movie, Yoshitaka Amano. The storyline features a larger cast than the first film. The second Vampire Hunter D movie (known as Vampire Hunter D: Bloodlust outside of Japan) is based on the third of Hideyuki Kikuchi's Vampire Hunter D novels (Demon Deathchase in English). Unlike the first film, which was released in 1985, this movie is rated NC-16 in Singapore, M in Australia, 15 in the UK, R13 in New Zealand and R for violence/gore in the USA (except for the Blu-ray release, which is unrated). 2007 manga adaptation In November 2007, the first volume of Saiko Takaki's manga adaptation of Hideyuki Kikuchi's series was published simultaneously in the U.S., Japan, and Europe. The project, overseen by Digital Manga Publishing and Hideyuki Kikuchi, aimed to adapt the entire catalogue of Vampire Hunter D novels into a manga form, however it had concluded after the eighth volume 2022 comic book series On June 30, 2016, a Kickstarter crowdfunding campaign for a five-issue Vampire Hunter D comic book series titled Vampire Hunter D: Message from Mars was announced. Published by Stranger Comics with supervision from series creator Hideyuki Kikuchi and support from the creative teams at Unified Pictures and Digital Frontier, Message from Mars is an adaptation of the 2005 short story Message from Cecile and acts as a prequel to the then-in-development animated series. The series is written by Brandon M. Easton and illustrated by Michael Broussard, with visual development by Christopher Shy. The campaign's stretch goals also include an official Vampire Hunter D Pathfinder Roleplaying Game supplement written by F. Wesley Schneider. The campaign reached its $25,000 funding goal on July 1, 2016, and its initial $50,000 stretch goal on July 7, 2016. The campaign concluded on August 9, 2016, with 1,736 backers pledging a total of $107,025, reaching four out of five stretch goals. Following the first issue the series was placed on temporary hiatus due to a serious medical emergency in Broussard's family, resuming production in early 2018 with new artists Ryan Benjamin and Richard Friend. 
As of December 2020 all production work for the five-issue run is complete, but publication plans were placed on an indefinite hiatus due to the ongoing COVID-19 pandemic. In December 2021 the completion of the project was announced via Kickstarter update, with limited publication planned to begin in 2022 following a recovery from pandemic conditions, to be followed by a wide retail and digital release in the future. The graphic novel Kickstarter campaign was launched on January 26, 2022, offering a limited hardcover collector's edition, variant cover single issue editions, and a digital edition. The campaign achieved its initial $30,000 funding goal within 90 minutes, and surpassed $100,000 within the first day, concluding on February 19, 2020, with 4,095 backers pledging a total of $445,205. Development of animated series In June 2015, a new animated series tentatively titled Vampire Hunter D: Resurrection was announced, produced by Unified Pictures and Digital Frontier. The series would be produced by Kurt Rauer and Scott McLean, and directed by Yoichi Mori, with Bloodlust director Yoshiaki Kawajiri acting as supervising director and series creator Hideyuki Kikuchi providing editorial supervision. The series was currently in pre-production, and is developed as an hour-long serial drama with the intent of being broadcast on a major American cable network or on-demand provider, with Japanese distribution to follow. As of June 2016 the series is still in pre-production, with plans to begin shipping the project to distributors by the end of the year. Given the abundance of source material, the current plan is to produce as many as seven seasons, without revisiting the source material that was adapted into the first two films. In February 2018 it was announced that the pilot episode would be written by Brandon M. Easton, writer for the Message from Mars comic book series. The first draft of the pilot was completed in October 2018. Pre-production on the series was put on hold in early 2020 as a result of the COVID-19 pandemic. Other media In July 2008, Devils Due Publishing announced that it had acquired the rights to publish an English-language Vampire Hunter D comic book mini-series titled Vampire Hunter D: American Wasteland, to be written by Jimmy Palmiotti and pencilled by Tim Seeley, however the project was cancelled in 2009. Intended to infuse the standard Vampire Hunter D formula and mythos with more Western sensibilities, it would have told an original story about D departing the Frontier to embark on a journey to a new land still ruled by the vampiric Nobility. In 2010, it was reported in Japanese horror magazine Rue Morgue that Hideyuki Kikuchi was in talks with one of the producers for Capcom's Resident Evil video game series to develop a live-action Vampire Hunter D adaptation. Reception By 2008, the Vampire Hunter D novels had sold over 17 million copies worldwide, making it one of the best-selling book series in history. Theron Martin of Anime News Network called the first novel "a competent vampire-hunting story with enough strong points to balance out its weaknesses" and gave it a B rating. He praised the setting as a wholly credible world ruled by vampires and grounded in science fiction, rather than fantasy or the supernatural. However, he called the plotting "fairly rudimentary" and a standard tale of a hero and heroine struggling against colorful opposition coming from different directions, where even the few twists are hardly unique. 
While Martin praised the characters of Doris and Rei-Ginsei, he criticized D and Count Magnus Lee as having weak characterizations. His colleague Rebecca Silverman also strongly praised the world and setting of the first three Vampire Hunter D novels, finding it to clearly be a post-apocalyptic "version of our reality" that in many ways is just as much a character as D. She wrote that the books are "practically dripping with atmosphere" as the story's descriptions are florid and detailed. In her four out of five stars review, Silverman did note that the series could sometimes feel melodramatic and criticized most of the heroines as outdated. Reviewing the first three novels for Anime UK News, Ian Wolf gave the series an 8/10 rating and wrote "Vampire Hunter D is an entertaining read, mixing elements of many different genres to create something very different from what is often available." He noted that D's Left Hand adds some comedy to the overall dark tone of the series. See also Vampire literature Vampire film Notes References External links Dark Horse Digital Manga Publishing Haikasoru Asashi Sonorama – Japanese publisher of the Vampire Hunter D series books and audio dramas. Hideyuki Kikuichi Official Fan Club (Japanese) The Vampire Hunter D Archives Book series introduced in 1983 Japanese serial novels Biopunk Transhumanism in fiction Demons in popular culture Novels about magic Fiction about artificial intelligence Fiction about wormholes Mutants in fiction Fiction about cyborgs Fiction about robots Fiction about genetic engineering Fiction about nanotechnology Science fiction novel series Science fiction horror Science fantasy Dystopian fiction Lovecraftian horror Horror fiction Dark fantasy Gothic fiction Fantasy novel series Fictional half-vampires Fictional vampire hunters Fictional bounty hunters Fiction about the Solar System Fiction about immortality Fiction set in the 7th millennium or beyond Apocalyptic fiction Post-apocalyptic fiction Post-apocalyptic literature Vampire novels Novels adapted into comics Japanese novels adapted into films Novels adapted into video games Retrofuturism Anime productions suspended due to the COVID-19 pandemic Comic books suspended due to the COVID-19 pandemic Science fantasy novels Horror novels
Vampire Hunter D
Materials_science,Engineering,Biology
4,791
26,069,810
https://en.wikipedia.org/wiki/Samsung%20T559%20Comeback
The Samsung Comeback (SGH-T559) is a mobile phone announced by T-Mobile on 22 July 2009 and released in Q3 2009. The phone has a full QWERTY keyboard, and is primarily a messaging phone, though it does have internet capability. Overview The Comeback has a 2.0-megapixel camera, an HTML browser with Flash Lite, 3G support, stereo Bluetooth, and a music player that supports MP3, AAC/AAC+, WMA, MPEG4, WAV, MIDI, and RealAudio formats. It has an SAR of 1.35 watts per kilogram. The phone comes in White/Cherry Red or Grey/Plum colors. The Alphanumeric keypad glows orange on both color choices. References Samsung Comeback (retrieved 16-02-2010) Samsung Comeback phone details from T-Mobile (retrieved 16-02-2010) Samsung Comeback review at CNET (retrieved 16-02-2010) Samsung mobile phones Mobile phones introduced in 2009
Samsung T559 Comeback
Technology
211
38,231,808
https://en.wikipedia.org/wiki/Finite%20lattice%20representation%20problem
In mathematics, the finite lattice representation problem, or finite congruence lattice problem, asks whether every finite lattice is isomorphic to the congruence lattice of some finite algebra. Background A lattice is called algebraic if it is complete and compactly generated. In 1963, Grätzer and Schmidt proved that every algebraic lattice is isomorphic to the congruence lattice of some algebra. Thus there is essentially no restriction on the shape of a congruence lattice of an algebra. The finite lattice representation problem asks whether the same is true for finite lattices and finite algebras. That is, does every finite lattice occur as the congruence lattice of a finite algebra? In 1980, Pálfy and Pudlák proved that this problem is equivalent to the problem of deciding whether every finite lattice occurs as an interval in the subgroup lattice of a finite group. For an overview of the group theoretic approach to the problem, see Pálfy (1993) and Pálfy (2001). This problem should not be confused with the congruence lattice problem. Significance This is among the oldest unsolved problems in universal algebra. Until it is answered, the theory of finite algebras is incomplete since, given a finite algebra, it is unknown whether there are, a priori, any restrictions on the shape of its congruence lattice. References Further reading External links Finite Congruence Lattice Problem Algebraic structures Lattice theory Mathematical problems Unsolved problems in mathematics
Finite lattice representation problem
Mathematics
295
37,435,693
https://en.wikipedia.org/wiki/Temporal%20lobe%20necrosis
Temporal lobe necrosis is a late-stage and serious complication usually occurring in persons who have undergone radiation treatment for nasopharyngeal carcinoma (NPC). It is rather rare and occurs in 4-30% of patients who receive radiation treatment for NPC. Many patients who experience temporal lobe necrosis are asymptomatic. This demonstrates a need for consistent imaging follow up, such as MRI and/or PET/CT, to help with the potential management of it. Those who are symptomatic usually suffer from "vague" symptoms including headaches, dizziness, intracranial pressure, personality changes, seizures, and short-term memory loss. The rarity of this disease has led to difficulty in finding optimal treatments, however, most treatments include one or some of the following: steroids, hyperbaric oxygen, surgery, and decadron. References External links Necrosis Brain disorders
Temporal lobe necrosis
Biology
191
3,124,816
https://en.wikipedia.org/wiki/Hedge%20%28linguistics%29
In linguistics (particularly sub-fields like applied linguistics and pragmatics), a hedge is a word or phrase used in a sentence to express ambiguity, probability, caution, or indecisiveness about the remainder of the sentence, rather than full accuracy, certainty, confidence, or decisiveness. Hedges can also allow speakers and writers to introduce (or occasionally even eliminate) ambiguity in meaning and typicality as a category member. Hedging in category membership is used in reference to the prototype theory, to signify the extent to which items are typical or atypical members of different categories. Hedges might be used in writing, to downplay a harsh critique or a generalization, or in speaking, to lessen the impact of an utterance due to politeness constraints between a speaker and addressee. Typically, hedges are adjectives or adverbs, but can also consist of clauses such as one use of tag questions. In some cases, a hedge could be regarded as a form of euphemism. Linguists consider hedges to be tools of epistemic modality; allowing speakers and writers to signal a level of caution in making an assertion. Hedges are also used to distinguish items into multiple categories, where items can be in a certain category to an extent. Types of hedges Hedges may take the form of many different parts of speech, for example: There might just be a few insignificant problems we need to address. (adjective) The party was somewhat spoiled by the return of the parents. (adverb) I'm not an expert but you might want to try restarting your computer. (clause) That's false, isn't it? (tag question clause) Using hedges Hedges are often used in everyday speech, and they can serve many different purposes. Below are a few ways to use hedges with examples to clarify these different functions. Category membership A very common use of hedges can be found in signaling typicality of category membership. Different hedges can signal prototypical membership in a category, meaning that member has most of the characteristics that are exemplary of the category. For example; A robin is a bird par excellence. This signifies that a robin has all of the typical characteristics of a bird, i.e. feathers, small, lives in a nest, etc. Loosely speaking, a bat is a bird. This sentence displays that a bat could technically be called a bird, but the hedge loosely speaking signifies that a bat has fringe membership in the category "bird". Epistemic hedges In some cases, "I don't know" functions as a prepositioned hedge—a forward-looking stance marker displaying that the speaker is not fully committed to what follows in their turn of talk. Hedges may intentionally or unintentionally be employed in both spoken and written language since they are crucially important in communication. Hedges help speakers and writers indicate more precisely how the cooperative principle (expectations of quantity, quality, manner, and relevance) is observed in assessments. For example, All I know is smoking is harmful to your health. Here, it can be observed that information conveyed by the speaker is limited by adding all I know. By so saying, the speaker wants to inform that they are not only making an assertion but observing the maxim of quantity as well. They told me that they are married. If the speaker were to say simply They are married and did not know for sure if that were the case, they might violate the maxim of quality, since they were saying something that they do not know to be true or false. 
By prefacing the remark with They told me that, the speaker wants to confirm that they are observing the conversational maxim of quality. I am not sure if all of these are clear to you, but this is what I know. The above example shows that hedges are good indications the speakers are not only conscious of the maxim of manner, but they are also trying to observe them. By the way, you like this car? By using by the way, what has been said by the speakers is not relevant to the moment in which the conversation takes place. Such a hedge can be found in the middle of speakers' conversation as the speaker wants to switch to another topic that is different from the previous one. Therefore, by the way functions as a hedge indicating that the speaker wants to drift into another topic or to stop the previous topic. Hedges in non-English languages Hedges are used as a tool of communication and are found in all of the world's languages. Examples of hedges in languages besides English are as follow: genre (French) Il était, grand (He was, , tall.) eigentlich (German) După câte am înţeles (Romanian) sora dumneavoastră crede că omul nu poate iubi decât o singură dată în viaţă. (, your sister thinks that man can only love once in his life.) When this phrase has full syntactic complementation, speakers emphasize their lack of knowledge or display reluctance to answer. However, without an object complement, speakers display uncertainty about the truth of the following proposition or about its sufficiency as an answer. Hedges in fuzzy language Hedges are generally used to either add or take away fuzziness or obscurity in a given situation, often through the use of modal auxiliaries or approximates. Fuzzy language refers to the strategic manipulation of hedges so as to deliberately introduce ambiguity into a statement. Hedges can also be used to express sarcasm as a way of making sentences more vague in written form. Sapphire works really hard. In this sentence, the word really can make the sentence fuzzy depending on the tone of the sentence. It could be serious (where Sapphire really is hard-working and deserves a raise or promotion) or sarcastic (where Sapphire is not contributing to the work). Lillian sure nailed her phonetics exam. In this sentence, sure is used sarcastically to create vagueness. Evasive hedging Hedging can be used as an evasive tool. For example, when expectations are not met or when people want to avoid answering a question. This is seen below: A: What did you think of Steve?B: As far as I can tell, he seems like a good guy. A: What did you think about Erica's presentation?B: I mean, it wasn't the best. Hedges and politeness Hedges can also be used to politely respond negatively to commands and requests by others. A: Are you coming to my ceremony tonight? B: I might, I'll have to see. A: Did you like that book? B: Personally, it wasn't my favorite, but it isn't bad I suppose. Incorrect usage of hedges There are cases in which particular hedges cannot be used or are considered strange given the context. Loosely speaking, my computer is also my television. *Loosely speaking, my computer is an electronic device. In the first sentence, 'loosely speaking' is used correctly, as it precedes a somewhat inaccurate, perhaps interpretive picture of the computer's identity. In the second sentence, 'loosely speaking' is used when the phrase 'broadly speaking' would be more apt: the description itself is accurate, but more general in nature. 
Hedging strategies Source: Indetermination – serves to augment the uncertainty of a statement or response Depersonalisation – circumvents the use of direct reference of a specific subject, creating fuzziness around who referent of the sentence is Subjectivisation – to use verbs regarding the action of thought to express subjectivity about a claim (such as to suppose, think, or guess) Limitation – narrowing the category membership of a subject in order to add clarity See also Polite fiction Euphemism Epistemic modality References Further reading External links Hedged Assertion Parts of speech Ambiguity Euphemisms Pragmatics
Hedge (linguistics)
Technology
1,614
60,667,578
https://en.wikipedia.org/wiki/Virtually%20imaged%20phased%20array
A virtually imaged phased array (VIPA) is an angular dispersive device that, like a prism or a diffraction grating, splits light into its spectral components. The device works almost independently of polarization. In contrast to prisms or regular diffraction gratings, the VIPA has a much higher angular dispersion but has a smaller free spectral range. This aspect is similar to that of an Echelle grating, since it also uses high diffraction orders. To overcome this disadvantage, the VIPA can be combined with a diffraction grating. The VIPA is a compact spectral disperser with high wavelength resolving power. Basic mechanism In a virtually imaged phased array, the phased array is the optical analogue of a phased array antenna at radio frequencies. Unlike a diffraction grating which can be interpreted as a real phased array, in a virtually imaged phased array the phased array is created in a virtual image. More specifically, the optical phased array is virtually formed with multiple virtual images of a light source. This is the fundamental difference from an Echelle grating, where a similar phased array is formed in the real space. The virtual images of a light source in the VIPA are automatically aligned exactly at a constant interval, which is critical for optical interference. This is an advantage of the VIPA over an Echelle grating. When the output light is observed, the virtually imaged phased array works as if light were emitted from a real phased array. History and applications VIPA was proposed and named by Shirasaki in 1996. Prior to the publication in the paper, a preliminary presentation was given by Shirasaki at a conference. This presentation was reported in Laser Focus World. The details of this new approach to producing angular dispersion were described in the patent. Since then, in the first ten years, the VIPA was of particular interest in the field of optical fiber communication technology. The VIPA was first applied to optical wavelength division multiplexing (WDM) and a wavelength demultiplexer was demonstrated for a channel spacing of 0.8 nm, which was a standard channel spacing at the time. Later, a much smaller channel separation of 24 pm and a 3 dB bandwidth of 6 pm were achieved by Weiner in 2005 at 1550 nm wavelength range. For another application, by utilizing the wavelength-dependent length of the light path due to the angular dispersion of the VIPA, the compensation of chromatic dispersion of fibers was studied and demonstrated (Shirasaki, 1997). The compensation was further developed for tunable systems by using adjustable mirrors or a spatial light modulator (Weiner, 2006). Using the VIPA, compensation of polarization mode dispersion was also achieved (Weiner, 2008). Furthermore, pulse shaping using the combination of a VIPA for high-resolution wavelength splitting/recombining and a SLM was demonstrated (Weiner, 2010). A drawback of the VIPA is its limited free spectral range due to the high diffraction order. To expand the functional wavelength range, Shirasaki combined a VIPA with a regular diffraction grating in 1997 to provide a broadband two-dimensional spectral disperser. This configuration can be a high performance substitute for diffraction gratings in many grating applications. 
After the mid 2000s, the two-dimensional VIPA disperser has been used in various fields and devices, such as high-resolution WDM (Weiner, 2004), a laser frequency comb (Diddams, 2007), a spectrometer (Nugent-Glandorf, 2012), astrophysical instruments (Le Coarer, 2017, Bourdarot, 2018, Delboulbé, 2022, and Stacey, 2024), Brillouin spectroscopy in biomechanics (Scarcelli, 2008, Rosa, 2018, and Margueritat, 2020), other Brillouin spectroscopy (Loubeyre, 2022 and Wu, 2023), beam scanning (Ford, 2008), microscopy (Jalali, 2009), tomography imaging (Ellerbee, 2014), metrology (Bhattacharya, 2015), fiber laser (Xu, 2020), LiDAR (Fu, 2021), and surface measurement (Zhu, 2022). Structure and operational principle The main component of a VIPA is a glass plate whose normal is slightly tilted with respect to the input light. One side (light input side) of the glass plate is coated with a 100% reflective mirror and the other side (light output side) is coated with a highly reflective but partially transmissive mirror. The side with the 100% reflective mirror has an anti-reflection coated light entrance area, through which a light beam enters the glass plate. The input light is line-focused to a line (focal line) on the partially transmissive mirror on the light output side. A typical line-focusing lens is a cylindrical lens, which is also part of the VIPA. The light beam is diverging after the beam waist located at the line-focused position. After the light enters the glass plate through the light entrance area, the light is reflected at the partially transmissive mirror and the 100% reflective mirror, and thus the light travels back and forth between the partially transmissive mirror and the 100% reflective mirror. It is noted that the glass plate is tilted as a result of its slight rotation where the axis of rotation is the focal line. This rotation/tilt prevents the light from leaving the glass plate out of the light entrance area. Therefore, in order for the optical system to work as a VIPA, there is a critical minimum angle of tilt that allows the light entering through the light entrance area to return only to the 100% reflective mirror. Below this angle, the function of the VIPA is severely impaired. If the tilting angle were zero, the reflected light from the partially transmissive mirror would travel exactly in reverse and exit the glass plate through the light entrance area without being reflected by the 100% reflective mirror. In the figure, refraction at the surfaces of the glass plate was ignored for simplicity. When the light beam is reflected each time at the partially transmissive mirror, a small portion of the light power passes through the mirror and travels away from the glass plate. For a light beam passing through the mirror after multiple reflections, the position of the line-focus can be seen in the virtual image when observed from the light output side. Therefore, this light beam travels as if it originated at a virtual light source located at the position of the line-focus and diverged from the virtual light source. The positions of the virtual light sources for all the transmitted light beams automatically align along the normal to the glass plate with a constant spacing, that is, a number of virtual light sources are superimposed to create an optical phased array. Due to the interference of all the light beams, the phased array emits a collimated light beam in one direction, which is at a wavelength dependent angle, and therefore, an angular dispersion is produced. 
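The trade-off between plate thickness, tilt, and mirror reflectivity can be estimated with textbook etalon relations, since in this respect the VIPA behaves like a tilted solid Fabry-Perot etalon read out through its virtual images. The Python sketch below is only an illustration, not taken from any cited design: the thickness, refractive index, tilt and reflectivity values are assumptions chosen for a 1550 nm device, and the finesse expression is the usual Fabry-Perot approximation.

```python
# Minimal sketch (not from the article): free spectral range and a rough
# resolving-power estimate for a VIPA, treating it as a tilted solid etalon.
# Values for thickness, index, tilt and reflectivity are illustrative only.
import math

c = 299_792_458.0          # speed of light, m/s
wavelength = 1550e-9       # operating wavelength, m
n = 1.45                   # refractive index of the glass plate
t = 1.5e-3                 # plate thickness, m
theta_in = math.radians(2) # internal propagation angle (small tilt)
R = 0.96                   # reflectivity of the partially transmissive mirror

# The round-trip optical path sets the free spectral range, as for an etalon.
fsr_hz = c / (2 * n * t * math.cos(theta_in))
fsr_nm = wavelength**2 / (2 * n * t * math.cos(theta_in)) * 1e9

# Fabry-Perot-style estimate: finesse ~ pi*sqrt(R)/(1-R); resolving power ~ order x finesse.
finesse = math.pi * math.sqrt(R) / (1 - R)
order = 2 * n * t * math.cos(theta_in) / wavelength
resolving_power = order * finesse

print(f"FSR ~ {fsr_hz/1e9:.1f} GHz ({fsr_nm*1000:.1f} pm at 1550 nm)")
print(f"Finesse ~ {finesse:.0f}, diffraction order ~ {order:.0f}")
print(f"Resolving power lambda/dlambda ~ {resolving_power:.2e}")
```

With these assumed numbers the free spectral range comes out near 69 GHz (about 0.55 nm at 1550 nm) and the resolving power in the 10^5 range, which is the order of magnitude of the demonstrations mentioned above.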
Wavelength resolution Similarly to the resolving power of a diffraction grating, which is determined by the number of the illuminated grating elements and the order of diffraction, the resolving power of a VIPA is determined by the reflectivity of the back surface of the VIPA and the thickness of the glass plate. For a fixed thickness, a high reflectivity causes light to stay longer in the VIPA. This creates more virtual sources of light and thus increases the resolving power. On the other hand, with a lower reflectivity, the light in the VIPA is quickly lost, meaning fewer virtual sources of light are superimposed. This results in lower resolving power. For large angular dispersion with high resolving power, the dimensions of the VIPA should be accurately controlled. Fine tuning of the VIPA characteristics was demonstrated by developing an elastomer-based structure (Metz, 2013). A constant reflectivity of the partially transmissive mirror in the VIPA produces a Lorentzian power distribution when the output light is imaged onto a screen, which has a negative effect on the wavelength selectivity. This can be improved by providing the partially transmissive mirror with a linearly decreasing reflectivity. This leads to a Gaussian-like power distribution on a screen and improves the wavelength selectivity or the resolving power. Spectral dispersion law An analytical calculation of the VIPA was first performed by Vega and Weiner in 2003 based on the theory of plane waves and an improved model based on the Fresnel diffraction theory was developed by Xiao and Weiner in 2004. Commercialization of the VIPA VIPA devices have been commercialized by LightMachinery as spectral disperser devices or components with various customized design parameters. References Spectroscopy Optical components Interferometry
Virtually imaged phased array
Physics,Chemistry,Materials_science,Technology,Engineering
1,824
74,025,450
https://en.wikipedia.org/wiki/Trevor%20Lawley
Trevor Lawley FMedSci is a Faculty member and Senior Group Leader in the Host-Microbiota Interactions Lab at the Wellcome Sanger Institute (WSI). He is also co-founder and Chief Scientific Officer of the biotech company Microbiotica. During his career, Lawley has pioneered the application of high throughput genomic and culturing approaches to characterise enteric pathogens and investigate the microbiomes contained on and within host organisms, during periods of health and disease. Education and career Lawley received his bachelor's degree in Biology in 1997 from Acadia University. He then studied for a PhD at the University of Alberta, in the laboratories of Diane Taylor and Laura Frost, where he studied the mechanisms that pathogenic bacteria use to disseminate antibiotic resistance genes. After his PhD, Trevor was awarded a Canadian Institutes of Health Research post-doctoral fellowship to work in the Laboratories of Stanley Falkow and Denise Monack at Stanford University, where he studied the impact of antibiotic treatment on Salmonella disease and transmission. In 2007 Lawley received a Royal Society of London Award to start a research programme on Clostridioides difficile disease and transmission at the Wellcome Sanger Institute. In 2010, he was appointed as a Career Development Fellow in the Sanger Institute Faculty and was promoted to Faculty Group Leader in 2014 and a Senior Group Leader in 2021. Lawley chairs the Wellcome Sanger Institute International Fellows programme, which focuses on empowering scientists from Low- and Middle-Income Countries through access to cutting-edge genomic technologies and training. In December 2016, Lawley, together with Gordon Dougan and Mike Romanos, co-founded the biotech company Microbiotica through £12M seed funding from Cambridge Innovation Capital, IP Group and Seventure. Microbiotica develops Live Biotherapeutic Products, biomarkers and microbiome-based technologies focused on autoimmune diseases and cancers. In 2018, Microbiotica entered into a collaboration with Genentech to discover, develop and commercialise inflammatory bowel disease (IBD)biomarkers, targets and medicines. In 2020, Microbiotica entered into a partnership with Cambridge University Hospitals and Cancer Research UK to discover and develop biomarkers and medicines for cancer immunotherapy patients with melanoma, renal cell carcinoma or lung cancer. In March 2022, Microbiotica secured Series B funding to perform two phase 1 clinical studies for treatment of patients with melanoma (MELODY-1, NCT06540391) or ulcerative colitis (COMPOSER-1, NCT06582264). The two clinical trials started in 2024 and clinical readouts are expected in Q2 2026. Research Lawley leads the Host-Microbiota Interactions Lab at the Sanger Institute, which explores the relationship between humans and the bacteria and viruses that collectively form their microbiome. Lawley and his team use a range of methods and tools, including large scale metagenomic analysis, genetics, mouse and cellular models, state-of-the art microbial culturing, transcriptomics, proteomics and machine learning, to investigate the microbial communities associated with human health and a range of developmental disorders, diseases and poorly understood syndromes. They have a particular interest the roles of the microbiome in infectious disease, autoimmune disease, cancer and childhood developmental disorders. 
Their work has pioneered concepts, analytical tools and methodologies that, through data- and hypothesis-driven approaches, have led to foundational discoveries and enabled translation of medicines and diagnostics from the human microbiome. Lawley and his team overturned the "great plate count anomaly dogma", long held in microbial ecology, by demonstrating the majority of the human gut microbiota is culturable. This breakthrough involved bacterial culturing, biobanking and genome sequencing at scale, leading to the development of fast, affordable, yet sophisticated, high-resolution gut microbiome analysis.. A major outcome of biobanking pure cultures has been experimental testing of biological hypotheses derived from metagenomic analysis, which moves the field towards studies of causation and enables the realisation of microbiome derived medicines. They focus on several key areas including: Characterising the evolution of taxonomic and functional diversity in the human microbiome. Investigating the early-life microbiome, and how birth-mode and clinical interventions, impact microbiome acquisition and assembly. Understanding host-microbiome interactions and drivers of inflammatory disease and cancer. Studying the fundamental biology, transmission and pathogenesis of C. difficile. Lawley’s group has authored over 100 papers. Their work is regularly covered in the scientific and popular press. Awards and honours Lawley was elected as a Fellow of the Academy of Medical Sciences in 2023. In 2015 Lawley received the Peggy Lillis Foundation “Innovator Award” for Pioneering Work on Live Biotherapeutics. References Year of birth missing (living people) Living people Canadian microbiologists Wellcome Trust Biotechnologists Acadia University alumni University of Alberta alumni Stanford University staff Fellows of the Academy of Medical Sciences (United Kingdom) Canadian geneticists
Trevor Lawley
Biology
1,076
621,774
https://en.wikipedia.org/wiki/Low-dimensional%20topology
In mathematics, low-dimensional topology is the branch of topology that studies manifolds, or more generally topological spaces, of four or fewer dimensions. Representative topics are the structure theory of 3-manifolds and 4-manifolds, knot theory, and braid groups. This can be regarded as a part of geometric topology. It may also be used to refer to the study of topological spaces of dimension 1, though this is more typically considered part of continuum theory. History A number of advances starting in the 1960s had the effect of emphasising low dimensions in topology. The solution by Stephen Smale, in 1961, of the Poincaré conjecture in five or more dimensions made dimensions three and four seem the hardest; and indeed they required new methods, while the freedom of higher dimensions meant that questions could be reduced to computational methods available in surgery theory. Thurston's geometrization conjecture, formulated in the late 1970s, offered a framework that suggested geometry and topology were closely intertwined in low dimensions, and Thurston's proof of geometrization for Haken manifolds utilized a variety of tools from previously only weakly linked areas of mathematics. Vaughan Jones' discovery of the Jones polynomial in the early 1980s not only led knot theory in new directions but gave rise to still mysterious connections between low-dimensional topology and mathematical physics. In 2002, Grigori Perelman announced a proof of the three-dimensional Poincaré conjecture, using Richard S. Hamilton's Ricci flow, an idea belonging to the field of geometric analysis. Overall, this progress has led to better integration of the field into the rest of mathematics. Two dimensions A surface is a two-dimensional, topological manifold. The most familiar examples are those that arise as the boundaries of solid objects in ordinary three-dimensional Euclidean space R3—for example, the surface of a ball. On the other hand, there are surfaces, such as the Klein bottle, that cannot be embedded in three-dimensional Euclidean space without introducing singularities or self-intersections. Classification of surfaces The classification theorem of closed surfaces states that any connected closed surface is homeomorphic to some member of one of these three families: the sphere; the connected sum of g tori, for g ≥ 1; the connected sum of k real projective planes, for k ≥ 1. The surfaces in the first two families are orientable. It is convenient to combine the two families by regarding the sphere as the connected sum of 0 tori. The number g of tori involved is called the genus of the surface. The sphere and the torus have Euler characteristics 2 and 0, respectively, and in general the Euler characteristic of the connected sum of g tori is 2 − 2g. The surfaces in the third family are nonorientable. The Euler characteristic of the real projective plane is 1, and in general the Euler characteristic of the connected sum of k of them is 2 − k. Teichmüller space In mathematics, the Teichmüller space TX of a (real) topological surface X is a space that parameterizes complex structures on X up to the action of homeomorphisms that are isotopic to the identity homeomorphism. Each point in TX may be regarded as an isomorphism class of 'marked' Riemann surfaces where a 'marking' is an isotopy class of homeomorphisms from X to X. The Teichmüller space is the universal covering orbifold of the (Riemann) moduli space. Teichmüller space has a canonical complex manifold structure and a wealth of natural metrics. 
The underlying topological space of Teichmüller space was studied by Fricke, and the Teichmüller metric on it was introduced by . Uniformization theorem In mathematics, the uniformization theorem says that every simply connected Riemann surface is conformally equivalent to one of the three domains: the open unit disk, the complex plane, or the Riemann sphere. In particular it admits a Riemannian metric of constant curvature. This classifies Riemannian surfaces as elliptic (positively curved—rather, admitting a constant positively curved metric), parabolic (flat), and hyperbolic (negatively curved) according to their universal cover. The uniformization theorem is a generalization of the Riemann mapping theorem from proper simply connected open subsets of the plane to arbitrary simply connected Riemann surfaces. Three dimensions A topological space X is a 3-manifold if every point in X has a neighbourhood that is homeomorphic to Euclidean 3-space. The topological, piecewise-linear, and smooth categories are all equivalent in three dimensions, so little distinction is made in whether we are dealing with say, topological 3-manifolds, or smooth 3-manifolds. Phenomena in three dimensions can be strikingly different from phenomena in other dimensions, and so there is a prevalence of very specialized techniques that do not generalize to dimensions greater than three. This special role has led to the discovery of close connections to a diversity of other fields, such as knot theory, geometric group theory, hyperbolic geometry, number theory, Teichmüller theory, topological quantum field theory, gauge theory, Floer homology, and partial differential equations. 3-manifold theory is considered a part of low-dimensional topology or geometric topology. Knot and braid theory Knot theory is the study of mathematical knots. While inspired by knots that appear in daily life in shoelaces and rope, a mathematician's knot differs in that the ends are joined together so that it cannot be undone. In mathematical language, a knot is an embedding of a circle in 3-dimensional Euclidean space, R3 (since we're using topology, a circle isn't bound to the classical geometric concept, but to all of its homeomorphisms). Two mathematical knots are equivalent if one can be transformed into the other via a deformation of R3 upon itself (known as an ambient isotopy); these transformations correspond to manipulations of a knotted string that do not involve cutting the string or passing the string through itself. Knot complements are frequently-studied 3-manifolds. The knot complement of a tame knot K is the three-dimensional space surrounding the knot. To make this precise, suppose that K is a knot in a three-manifold M (most often, M is the 3-sphere). Let N be a tubular neighborhood of K; so N is a solid torus. The knot complement is then the complement of N, A related topic is braid theory. Braid theory is an abstract geometric theory studying the everyday braid concept, and some generalizations. The idea is that braids can be organized into groups, in which the group operation is 'do the first braid on a set of strings, and then follow it with a second on the twisted strings'. Such groups may be described by explicit presentations, as was shown by . For an elementary treatment along these lines, see the article on braid groups. Braid groups may also be given a deeper mathematical interpretation: as the fundamental group of certain configuration spaces. 
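The group operation described above is easy to experiment with. The short Python sketch below is an illustration rather than anything from the literature cited here: it tracks each braid generator only through the permutation it induces on strand endpoints (the image of the braid group in the symmetric group, which forgets over- and under-crossings) and checks that the defining braid relations already hold at that level.

```python
# Illustrative sketch: braid generators sigma_i viewed through the permutations
# they induce on strand endpoints (the braid group's image in the symmetric group).
# This forgets over/under crossing data but still satisfies the braid relations.

def sigma(i, n):
    """Permutation (as a tuple) induced by the braid generator sigma_i on n strands."""
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]   # strands i and i+1 swap endpoints
    return tuple(p)

def compose(p, q):
    """Do braid p first, then braid q, on the level of endpoint permutations."""
    return tuple(q[p[k]] for k in range(len(p)))

n = 4
s1, s2, s3 = (sigma(i, n) for i in (1, 2, 3))

# Braid relations: sigma_i sigma_{i+1} sigma_i = sigma_{i+1} sigma_i sigma_{i+1},
# and generators acting on distant strands commute.
assert compose(compose(s1, s2), s1) == compose(compose(s2, s1), s2)
assert compose(s1, s3) == compose(s3, s1)
print("braid relations hold for the permutation images")
```

Distinguishing braids that induce the same endpoint permutation (for example, a crossing from its inverse) requires finer invariants, such as the Jones polynomial mentioned above.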
Hyperbolic 3-manifolds A hyperbolic 3-manifold is a 3-manifold equipped with a complete Riemannian metric of constant sectional curvature -1. In other words, it is the quotient of three-dimensional hyperbolic space by a subgroup of hyperbolic isometries acting freely and properly discontinuously. See also Kleinian model. Its thick-thin decomposition has a thin part consisting of tubular neighborhoods of closed geodesics and/or ends that are the product of a Euclidean surface and the closed half-ray. The manifold is of finite volume if and only if its thick part is compact. In this case, the ends are of the form torus cross the closed half-ray and are called cusps. Knot complements are the most commonly studied cusped manifolds. Poincaré conjecture and geometrization Thurston's geometrization conjecture states that certain three-dimensional topological spaces each have a unique geometric structure that can be associated with them. It is an analogue of the uniformization theorem for two-dimensional surfaces, which states that every simply-connected Riemann surface can be given one of three geometries (Euclidean, spherical, or hyperbolic). In three dimensions, it is not always possible to assign a single geometry to a whole topological space. Instead, the geometrization conjecture states that every closed 3-manifold can be decomposed in a canonical way into pieces that each have one of eight types of geometric structure. The conjecture was proposed by , and implies several other conjectures, such as the Poincaré conjecture and Thurston's elliptization conjecture. Four dimensions A 4-manifold is a 4-dimensional topological manifold. A smooth 4-manifold is a 4-manifold with a smooth structure. In dimension four, in marked contrast with lower dimensions, topological and smooth manifolds are quite different. There exist some topological 4-manifolds that admit no smooth structure and even if there exists a smooth structure it need not be unique (i.e. there are smooth 4-manifolds that are homeomorphic but not diffeomorphic). 4-manifolds are of importance in physics because, in General Relativity, spacetime is modeled as a pseudo-Riemannian 4-manifold. Exotic R4 An exotic R4 is a differentiable manifold that is homeomorphic but not diffeomorphic to the Euclidean space R4. The first examples were found in the early 1980s by Michael Freedman, by using the contrast between Freedman's theorems about topological 4-manifolds, and Simon Donaldson's theorems about smooth 4-manifolds. There is a continuum of non-diffeomorphic differentiable structures of R4, as was shown first by Clifford Taubes. Prior to this construction, non-diffeomorphic smooth structures on spheres—exotic spheres—were already known to exist, although the question of the existence of such structures for the particular case of the 4-sphere remained open (and still remains open to this day). For any positive integer n other than 4, there are no exotic smooth structures on Rn; in other words, if n ≠ 4 then any smooth manifold homeomorphic to Rn is diffeomorphic to Rn. Other special phenomena in four dimensions There are several fundamental theorems about manifolds that can be proved by low-dimensional methods in dimensions at most 3, and by completely different high-dimensional methods in dimension at least 5, but which are false in four dimensions. 
Here are some examples: In dimensions other than 4, the Kirby–Siebenmann invariant provides the obstruction to the existence of a PL structure; in other words a compact topological manifold has a PL structure if and only if its Kirby–Siebenmann invariant in H4(M,Z/2Z) vanishes. In dimension 3 and lower, every topological manifold admits an essentially unique PL structure. In dimension 4 there are many examples with vanishing Kirby–Siebenmann invariant but no PL structure. In any dimension other than 4, a compact topological manifold has only a finite number of essentially distinct PL or smooth structures. In dimension 4, compact manifolds can have a countable infinite number of non-diffeomorphic smooth structures. Four is the only dimension n for which Rn can have an exotic smooth structure. R4 has an uncountable number of exotic smooth structures; see exotic R4. The solution to the smooth Poincaré conjecture is known in all dimensions other than 4 (it is usually false in dimensions at least 7; see exotic sphere). The Poincaré conjecture for PL manifolds has been proved for all dimensions other than 4, but it is not known whether it is true in 4 dimensions (it is equivalent to the smooth Poincaré conjecture in 4 dimensions). The smooth h-cobordism theorem holds for cobordisms provided that neither the cobordism nor its boundary has dimension 4. It can fail if the boundary of the cobordism has dimension 4 (as shown by Donaldson). If the cobordism has dimension 4, then it is unknown whether the h-cobordism theorem holds. A topological manifold of dimension not equal to 4 has a handlebody decomposition. Manifolds of dimension 4 have a handlebody decomposition if and only if they are smoothable. There are compact 4-dimensional topological manifolds that are not homeomorphic to any simplicial complex. In dimension at least 5 the existence of topological manifolds not homeomorphic to a simplicial complex was an open problem. In 2013, Ciprian Manolescu posted a preprint on the ArXiv showing that there are manifolds in each dimension greater than or equal to 5, that are not homeomorphic to a simplicial complex. A few typical theorems that distinguish low-dimensional topology There are several theorems that in effect state that many of the most basic tools used to study high-dimensional manifolds do not apply to low-dimensional manifolds, such as: Steenrod's theorem states that an orientable 3-manifold has a trivial tangent bundle. Stated another way, the only characteristic class of a 3-manifold is the obstruction to orientability. Any closed 3-manifold is the boundary of a 4-manifold. This theorem is due independently to several people: it follows from the Dehn–Lickorish theorem via a Heegaard splitting of the 3-manifold. It also follows from René Thom's computation of the cobordism ring of closed manifolds. The existence of exotic smooth structures on R4. This was originally observed by Michael Freedman, based on the work of Simon Donaldson and Andrew Casson. It has since been elaborated by Freedman, Robert Gompf, Clifford Taubes and Laurence Taylor to show there exists a continuum of non-diffeomorphic smooth structures on R4. Meanwhile, Rn is known to have exactly one smooth structure up to diffeomorphism provided n ≠ 4. See also List of geometric topology topics References External links Rob Kirby's Problems in Low-Dimensional Topologygzipped postscript file (1.4 MB) Mark Brittenham's links to low dimensional topologylists of homepages, conferences, etc. Geometric topology
Low-dimensional topology
Mathematics
2,908
74,933,406
https://en.wikipedia.org/wiki/Emilia%20Morosan
Emilia Morosan (also Moroșan, born 1976) is a Romanian-American condensed matter physicist whose research involves the synthesis of quantum materials, including quantum criticality and unconventional superconductors. She is also known for her discovery of super-strong titanium gold alloys. She is a professor at Rice University. Education and career Morosan was born in 1976 in Suceava, and studied physics at Alexandru Ioan Cuza University in Iași, Romania, earning a bachelor's degree in 1999. She completed a Ph.D. in physics and astronomy at Iowa State University in 2005. Her doctoral dissertation, Field-induced magnetic phase transitions and correlated electronic states in the hexagonal RAgGe and RPtIn series, was supervised by Paul C. Canfield. After postdoctoral research in chemistry at Princeton University, Morosan joined Rice University in 2007, as an assistant professor with a joint appointment in the Department of Chemistry and the Department of Physics and Astronomy, her primary affiliation. She was promoted to associate professor in 2013, adding another affiliation in the Department of Materials Science and Nanoengineering. Her promotion to full professor in 2015 added a fourth affiliation, in the Department of Electrical and Computer Engineering. She is also a member of the Rice Center for Quantum Materials. Recognition Morosan was a 2009 recipient of the Presidential Early Career Awards for Scientists and Engineers. She was named as a Fellow of the American Physical Society (APS) in 2018, after a nomination from the APS Division of Condensed Matter Physics, "for experimental contributions to the understanding of correlated magnetic and superconducting materials, through the synthesis and study of unconventional magnetic systems, heavy fermion compounds and superconductors". She is also an Alexander von Humboldt fellow and a National Academy of Sciences Kavli Frontiers Fellow. References External links Morosan Research Group 1976 births Living people People from Suceava Romanian emigrants to the United States Romanian physicists Romanian women physicists American physicists American women physicists Condensed matter physicists Alexandru Ioan Cuza University alumni Iowa State University alumni Rice University faculty Fellows of the American Physical Society Recipients of the Presidential Early Career Award for Scientists and Engineers
Emilia Morosan
Physics,Materials_science
447
62,223,323
https://en.wikipedia.org/wiki/Square%20root%20of%206
The square root of 6 is the positive real number that, when multiplied by itself, gives the natural number 6. It is more precisely called the principal square root of 6, to distinguish it from the negative number with the same property. This number appears in numerous geometric and number-theoretic contexts. It can be denoted in surd form as √6 and in exponent form as 6^(1/2). It is an irrational algebraic number. Its decimal expansion begins 2.449489742783178…, which can be rounded up to 2.45 to within about 99.98% accuracy (about 1 part in 4800); that is, it differs from the correct value by about 0.0005. It takes two more digits (2.4495) to reduce the error by about half. The approximation 218/89 (≈ 2.449438...) is nearly ten times better: despite having a denominator of only 89, it differs from the correct value by less than 0.0001, or less than one part in 47,000. Since 6 is the product of 2 and 3, the square root of 6 is the geometric mean of 2 and 3, and is the product of the square root of 2 and the square root of 3, both of which are irrational algebraic numbers. NASA has published more than a million decimal digits of the square root of six. Rational approximations The square root of 6 can be expressed as the simple continued fraction [2; 2, 4, 2, 4, 2, 4, …]. The successive partial evaluations of the continued fraction, which are called its convergents, approach √6: 2/1, 5/2, 22/9, 49/20, 218/89, 485/198, … Their numerators are 2, 5, 22, 49, 218, 485, 2158, 4801, 21362, 47525, 211462, …, and their denominators are 1, 2, 9, 20, 89, 198, 881, 1960, 8721, 19402, 86329, …. Each convergent is a best rational approximation of √6; in other words, it is closer to √6 than any rational with a smaller denominator. Decimal equivalents improve linearly, at a rate of nearly one digit per convergent: 2, 2.5, 2.4444…, 2.45, 2.449438…, 2.4494949…, … The convergents, expressed as x/y, satisfy alternately the Pell's equations x² − 6y² = −2 and x² − 6y² = 1. When √6 is approximated with the Babylonian method, starting with x0 = 2 and using xn+1 = (xn + 6/xn)/2, the nth approximant xn is equal to the 2^n-th convergent of the continued fraction: 2, 5/2, 49/20, 4801/1960, … The Babylonian method is equivalent to Newton's method for root finding applied to the polynomial f(x) = x² − 6. The Newton's method update, xn+1 = xn − f(xn)/f′(xn), is equal to (xn + 6/xn)/2 when f(x) = x² − 6. The method therefore converges quadratically. (A short numerical check of these statements is given below.) Geometry In plane geometry, the square root of 6 can be constructed via a sequence of dynamic rectangles, as illustrated here. In solid geometry, the square root of 6 appears as the longest distance between corners (vertices) of the double cube, as illustrated above. The square roots of all lower natural numbers appear as the distances between other vertex pairs in the double cube (including the vertices of the included two cubes). The edge length of a cube with total surface area of 1 is 1/√6, the reciprocal square root of 6. The edge lengths of a regular tetrahedron (a), a regular octahedron (b), and a cube (c) of equal total surface areas satisfy ab = √6·c². The edge length of a regular octahedron is the square root of 6 times the radius of an inscribed sphere (that is, the distance from the center of the solid to the center of each face). The square root of 6 appears in various other geometry contexts, such as the side length for the square enclosing an equilateral triangle of side 2 (see figure). Trigonometry The square root of 6, with the square root of 2 added or subtracted, appears in several exact trigonometric values for angles at multiples of 15 degrees (multiples of π/12 radians). For example, sin 15° = cos 75° = (√6 − √2)/4 and cos 15° = sin 75° = (√6 + √2)/4. 
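The numerical check promised above is sketched here in Python; it is illustrative only, and the sole inputs are the continued-fraction terms [2; 2, 4, 2, 4, …] and the iteration quoted in the text.

```python
# Illustrative check (not part of the article): convergents of sqrt(6) from its
# continued fraction [2; 2, 4, 2, 4, ...], the alternating Pell equations, and
# the Babylonian iteration landing on every 2^n-th convergent.
from fractions import Fraction

def convergents(terms):
    """Yield the convergents p/q of a simple continued fraction given its terms."""
    p_prev, p = 1, terms[0]
    q_prev, q = 0, 1
    yield Fraction(p, q)
    for a in terms[1:]:
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        yield Fraction(p, q)

terms = [2] + [2, 4] * 8                  # [2; 2, 4, 2, 4, ...]
convs = list(convergents(terms))

# x^2 - 6*y^2 alternates between -2 and +1 along the convergents x/y.
print([c.numerator**2 - 6 * c.denominator**2 for c in convs[:6]])  # [-2, 1, -2, 1, -2, 1]

# Babylonian method x_{n+1} = (x_n + 6/x_n)/2 starting from 2 hits convergents 1, 2, 4, 8, ...
x = Fraction(2)
for n in range(1, 4):
    x = (x + 6 / x) / 2
    assert x == convs[2**n - 1], (n, x)
print("x_3 =", x)                          # 4801/1960, the 8th convergent
```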
In culture Villard de Honnecourt's 13th century construction of a Gothic "fifth-point arch" with circular arcs of radius 5 has a height of twice the square root of 6, as illustrated here. See also Square root Square root of 2 Square root of 3 Square root of 5 Square root of 7 References Mathematical constants Quadratic irrational numbers
Square root of 6
Mathematics
917
69,366,677
https://en.wikipedia.org/wiki/Dayok
Dayok is a Philippine condiment originating from the islands of Visayas and Mindanao in the Philippines. It is made from fish entrails (usually from yellowfin tuna), excluding the heart and the bile sac. It is fermented with salt, and sometimes rice wine (pangasi) and various herbs. It has a sharp umami and salty flavor very similar to patis (fish sauce) and bagoong. They are sold in sealed glass bottles. See also Bekasang, a similar Indonesian preparation Shiokara, a similar Japanese preparation Bagoong Shrimp paste Fish sauce References External links Fermented fish Fish sauces Philippine condiments Fermented foods
Dayok
Biology
140
256,738
https://en.wikipedia.org/wiki/Tensor%20field
In mathematics and physics, a tensor field is a function assigning a tensor to each point of a region of a mathematical space (typically a Euclidean space or manifold) or of the physical space. Tensor fields are used in differential geometry, algebraic geometry, general relativity, in the analysis of stress and strain in material objects, and in numerous applications in the physical sciences. As a tensor is a generalization of a scalar (a pure number representing a value, for example speed) and a vector (a magnitude and a direction, like velocity), a tensor field is a generalization of a scalar field and a vector field that assigns, respectively, a scalar or vector to each point of space. If a tensor T is defined on a set of vector fields over a module M, we call T a tensor field on M. Many mathematical structures called "tensors" are also tensor fields. For example, the Riemann curvature tensor is a tensor field as it associates a tensor to each point of a Riemannian manifold, which is a topological space. Definition Let M be a manifold, for instance the Euclidean plane Rn. Given a vector bundle V over M with dual bundle V*, a tensor field of type (p, q) on M is a smooth section of the bundle V⊗p ⊗ (V*)⊗q. Equivalently, it is a collection of elements Tx ∈ Vx⊗p ⊗ (Vx*)⊗q for all points x ∈ M, arranging into a smooth map T : M → V⊗p ⊗ (V*)⊗q. Elements Tx are called tensors. Often we take V = TM to be the tangent bundle of M. Geometric introduction Intuitively, a vector field is best visualized as an "arrow" attached to each point of a region, with variable length and direction. One example of a vector field on a curved space is a weather map showing horizontal wind velocity at each point of the Earth's surface. Now consider more complicated fields. For example, if the manifold is Riemannian, then it has a metric field g, such that given any two vectors v, w at a point x, their inner product is gx(v, w). The field g could be given in matrix form, but it depends on a choice of coordinates. It could instead be given as an ellipsoid of radius 1 at each point, which is coordinate-free. Applied to the Earth's surface, this is Tissot's indicatrix. In general, we want to specify tensor fields in a coordinate-independent way: It should exist independently of latitude and longitude, or whatever particular "cartographic projection" we are using to introduce numerical coordinates. Via coordinate transitions Following the classical approach, the concept of a tensor relies on a concept of a reference frame (or coordinate system), which may be fixed (relative to some background reference frame), but in general may be allowed to vary within some class of transformations of these coordinate systems. For example, coordinates belonging to the n-dimensional real coordinate space may be subjected to arbitrary affine transformations x′^i = A^i_j x^j + b^i (with n-dimensional indices, summation implied). A covariant vector, or covector, is a system of functions w_i that transforms under this affine transformation by the rule w′_i = w_j (A^−1)^j_i. The list of Cartesian coordinate basis vectors e_i transforms as a covector, since under the affine transformation e′_i = e_j (A^−1)^j_i. A contravariant vector is a system of functions v^i of the coordinates that, under such an affine transformation, undergoes the transformation v′^i = A^i_j v^j. This is precisely the requirement needed to ensure that the quantity w_i v^i is an invariant object that does not depend on the coordinate system chosen. More generally, a tensor of valence (p,q) has p downstairs indices and q upstairs indices, with the transformation law being one factor of A^−1 for each downstairs index and one factor of A for each upstairs index: T′_{i1…ip}^{j1…jq} = (A^−1)^{k1}_{i1} ⋯ (A^−1)^{kp}_{ip} A^{j1}_{l1} ⋯ A^{jq}_{lq} T_{k1…kp}^{l1…lq}. The concept of a tensor field may be obtained by specializing the allowed coordinate transformations to be smooth (or differentiable, analytic, etc.). 
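As a concrete check of the affine transformation rules just stated, the following sketch (an illustration, not part of the article) pushes the components of a vector, a covector and a valence-(1,1) tensor through a random invertible change of coordinates with NumPy and confirms that the contractions are unchanged.

```python
# Illustrative check of the transformation rules above: under x' = A x, contravariant
# components transform with A, covariant components with the inverse of A, and the
# contraction w_i v^i is invariant. A here is just a random invertible matrix.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
while abs(np.linalg.det(A)) < 1e-6:       # make sure the change of coordinates is invertible
    A = rng.normal(size=(3, 3))
A_inv = np.linalg.inv(A)

v = rng.normal(size=3)                    # contravariant components v^j
w = rng.normal(size=3)                    # covariant components w_j

v_new = A @ v                             # v'^i = A^i_j v^j
w_new = w @ A_inv                         # w'_i = w_j (A^-1)^j_i

# The pairing w_i v^i does not depend on the coordinate system.
assert np.isclose(w @ v, w_new @ v_new)

# A valence-(1,1) tensor picks up one factor of A and one of A^-1;
# its trace (a scalar) is likewise unchanged.
T = rng.normal(size=(3, 3))
T_new = A @ T @ A_inv
assert np.isclose(np.trace(T), np.trace(T_new))
print("contractions are invariant under the change of coordinates")
```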
A covector field is a function of the coordinates that transforms by the Jacobian of the transition functions (in the given class). Likewise, a contravariant vector field transforms by the inverse Jacobian. Tensor bundles A tensor bundle is a fiber bundle where the fiber is a tensor product of any number of copies of the tangent space and/or cotangent space of the base space, which is a manifold. As such, the fiber is a vector space and the tensor bundle is a special kind of vector bundle. The vector bundle is a natural idea of "vector space depending continuously (or smoothly) on parameters" – the parameters being the points of a manifold M. For example, a vector space of one dimension depending on an angle could look like a Möbius strip or alternatively like a cylinder. Given a vector bundle V over M, the corresponding field concept is called a section of the bundle: for m varying over M, a choice of vector vm in Vm, where Vm is the vector space "at" m. Since the tensor product concept is independent of any choice of basis, taking the tensor product of two vector bundles on M is routine. Starting with the tangent bundle (the bundle of tangent spaces) the whole apparatus explained at component-free treatment of tensors carries over in a routine way – again independently of coordinates, as mentioned in the introduction. We therefore can give a definition of tensor field, namely as a section of some tensor bundle. (There are vector bundles that are not tensor bundles: the Möbius band for instance.) This is then guaranteed geometric content, since everything has been done in an intrinsic way. More precisely, a tensor field assigns to any given point of the manifold a tensor in the space where V is the tangent space at that point and V∗ is the cotangent space. See also tangent bundle and cotangent bundle. Given two tensor bundles E → M and F → M, a linear map A: Γ(E) → Γ(F) from the space of sections of E to sections of F can be considered itself as a tensor section of if and only if it satisfies A(fs) = fA(s), for each section s in Γ(E) and each smooth function f on M. Thus a tensor section is not only a linear map on the vector space of sections, but a C∞(M)-linear map on the module of sections. This property is used to check, for example, that even though the Lie derivative and covariant derivative are not tensors, the torsion and curvature tensors built from them are. Notation The notation for tensor fields can sometimes be confusingly similar to the notation for tensor spaces. Thus, the tangent bundle TM = T(M) might sometimes be written as to emphasize that the tangent bundle is the range space of the (1,0) tensor fields (i.e., vector fields) on the manifold M. This should not be confused with the very similar looking notation ; in the latter case, we just have one tensor space, whereas in the former, we have a tensor space defined for each point in the manifold M. Curly (script) letters are sometimes used to denote the set of infinitely-differentiable tensor fields on M. Thus, are the sections of the (m,n) tensor bundle on M that are infinitely-differentiable. A tensor field is an element of this set. Tensor fields as multilinear forms There is another more abstract (but often useful) way of characterizing tensor fields on a manifold M, which makes tensor fields into honest tensors (i.e. single multilinear mappings), though of a different type (although this is not usually why one often says "tensor" when one really means "tensor field"). 
First, we may consider the set of all smooth (C∞) vector fields on M (see the section on notation above) as a single space — a module over the ring of smooth functions, C∞(M), by pointwise scalar multiplication. The notions of multilinearity and tensor products extend easily to the case of modules over any commutative ring. As a motivating example, consider the space of smooth covector fields (1-forms), also a module over the smooth functions. These act on smooth vector fields to yield smooth functions by pointwise evaluation, namely, given a covector field ω and a vector field X, we define (ω(X))(x) = ω(x)(X(x)) for each point x of M. Because of the pointwise nature of everything involved, the action of ω on X is a C∞(M)-linear map, that is, ω(fX)(p) = f(p)·ω(X)(p) for any p in M and smooth function f, so that ω(fX) = f·ω(X). Thus we can regard covector fields not just as sections of the cotangent bundle, but also linear mappings of vector fields into functions. By the double-dual construction, vector fields can similarly be expressed as mappings of covector fields into functions (namely, we could start "natively" with covector fields and work up from there). In a complete parallel to the construction of ordinary single tensors (not tensor fields!) on M as multilinear maps on vectors and covectors, we can regard general (k,l) tensor fields on M as C∞(M)-multilinear maps defined on k copies of the space of covector fields and l copies of the space of vector fields into C∞(M). Now, given any arbitrary mapping T from a product of k copies of the space of covector fields and l copies of the space of vector fields into C∞(M), it turns out that it arises from a tensor field on M if and only if it is multilinear over C∞(M). Namely, the C∞(M)-module of tensor fields of type (k, l) over M is canonically isomorphic to the C∞(M)-module of C∞(M)-multilinear forms on k copies of the space of covector fields and l copies of the space of vector fields. This kind of multilinearity implicitly expresses the fact that we're really dealing with a pointwise-defined object, i.e. a tensor field, as opposed to a function which, even when evaluated at a single point, depends on all the values of vector fields and 1-forms simultaneously. A frequent example application of this general rule is showing that the Levi-Civita connection, which is a mapping (X, Y) ↦ ∇X Y of smooth vector fields taking a pair of vector fields to a vector field, does not define a tensor field on M. This is because it is only R-linear in Y (in place of full C∞(M)-linearity, it satisfies the Leibniz rule, ∇X(fY) = (Xf)Y + f∇X Y). Nevertheless, it must be stressed that even though it is not a tensor field, it still qualifies as a geometric object with a component-free interpretation. 
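The C∞(M)-linearity criterion can be checked concretely in coordinates. The SymPy sketch below works on a single chart of R² rather than an abstract manifold and is only an illustration: pairing a covector field with a vector field is linear over smooth functions, while a derivative-based operator picks up an extra Leibniz term and therefore does not define a tensor field.

```python
# Illustrative coordinate check of C-infinity(M)-linearity on a chart of R^2.
# omega(X) = sum_i omega_i * X^i is linear over smooth functions f,
# whereas D(X) = d(X^1)/dx is not: it produces an extra Leibniz term.
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)                                    # an arbitrary smooth function
omega = [sp.Function('w1')(x, y), sp.Function('w2')(x, y)]    # covector field components
X = [sp.Function('X1')(x, y), sp.Function('X2')(x, y)]        # vector field components

def pair(w, V):
    """omega(X): pointwise pairing of a covector field with a vector field."""
    return sum(wi * Vi for wi, Vi in zip(w, V))

# C-infinity-linearity: omega(f X) equals f * omega(X) identically.
assert sp.expand(pair(omega, [f * Xi for Xi in X]) - f * pair(omega, X)) == 0

# A derivative-based operator is only R-linear: D(f X) = f D(X) + (df/dx) X^1.
D = lambda V: sp.diff(V[0], x)
leftover = sp.simplify(D([f * Xi for Xi in X]) - f * D(X))
print(leftover)        # the leftover Leibniz term, X^1 * df/dx
```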
If W is the tensor product bundle of V with L, then W is a bundle of vector spaces of just the same dimension as V. This allows one to define the concept of tensor density, a 'twisted' type of tensor field. A tensor density is the special case where L is the bundle of densities on a manifold, namely the determinant bundle of the cotangent bundle. (To be strictly accurate, one should also apply the absolute value to the transition functions – this makes little difference for an orientable manifold.) For a more traditional explanation see the tensor density article. One feature of the bundle of densities (again assuming orientability) L is that Ls is well-defined for real number values of s; this can be read from the transition functions, which take strictly positive real values. This means for example that we can take a half-density, the case where s = . In general we can take sections of W, the tensor product of V with Ls, and consider tensor density fields with weight s. Half-densities are applied in areas such as defining integral operators on manifolds, and geometric quantization. The flat case When M is a Euclidean space and all the fields are taken to be invariant by translations by the vectors of M, we get back to a situation where a tensor field is synonymous with a tensor 'sitting at the origin'. This does no great harm, and is often used in applications. As applied to tensor densities, it does make a difference. The bundle of densities cannot seriously be defined 'at a point'; and therefore a limitation of the contemporary mathematical treatment of tensors is that tensor densities are defined in a roundabout fashion. Cocycles and chain rules As an advanced explanation of the tensor concept, one can interpret the chain rule in the multivariable case, as applied to coordinate changes, also as the requirement for self-consistent concepts of tensor giving rise to tensor fields. Abstractly, we can identify the chain rule as a 1-cocycle. It gives the consistency required to define the tangent bundle in an intrinsic way. The other vector bundles of tensors have comparable cocycles, which come from applying functorial properties of tensor constructions to the chain rule itself; this is why they also are intrinsic (read, 'natural') concepts. What is usually spoken of as the 'classical' approach to tensors tries to read this backwards – and is therefore a heuristic, post hoc approach rather than truly a foundational one. Implicit in defining tensors by how they transform under a coordinate change is the kind of self-consistency the cocycle expresses. The construction of tensor densities is a 'twisting' at the cocycle level. Geometers have not been in any doubt about the geometric nature of tensor quantities; this kind of descent argument justifies abstractly the whole theory. Generalizations Tensor densities The concept of a tensor field can be generalized by considering objects that transform differently. An object that transforms as an ordinary tensor field under coordinate transformations, except that it is also multiplied by the determinant of the Jacobian of the inverse coordinate transformation to the wth power, is called a tensor density with weight w. Invariantly, in the language of multilinear algebra, one can think of tensor densities as multilinear maps taking their values in a density bundle such as the (1-dimensional) space of n-forms (where n is the dimension of the space), as opposed to taking their values in just R. 
Higher "weights" then just correspond to taking additional tensor products with this space in the range. A special case are the scalar densities. Scalar 1-densities are especially important because it makes sense to define their integral over a manifold. They appear, for instance, in the Einstein–Hilbert action in general relativity. The most common example of a scalar 1-density is the volume element, which in the presence of a metric tensor g is the square root of its determinant in coordinates, denoted . The metric tensor is a covariant tensor of order 2, and so its determinant scales by the square of the coordinate transition: which is the transformation law for a scalar density of weight +2. More generally, any tensor density is the product of an ordinary tensor with a scalar density of the appropriate weight. In the language of vector bundles, the determinant bundle of the tangent bundle is a line bundle that can be used to 'twist' other bundles w times. While locally the more general transformation law can indeed be used to recognise these tensors, there is a global question that arises, reflecting that in the transformation law one may write either the Jacobian determinant, or its absolute value. Non-integral powers of the (positive) transition functions of the bundle of densities make sense, so that the weight of a density, in that sense, is not restricted to integer values. Restricting to changes of coordinates with positive Jacobian determinant is possible on orientable manifolds, because there is a consistent global way to eliminate the minus signs; but otherwise the line bundle of densities and the line bundle of n-forms are distinct. For more on the intrinsic meaning, see density on a manifold. See also Notes References . . . . . . . . Multilinear algebra Differential geometry Differential topology Tensors Functions and mappings
Tensor field
Mathematics,Engineering
3,395
78,798,205
https://en.wikipedia.org/wiki/Contra-rotating%20marine%20propellers
Contra-rotating propellers have benefits when providing thrust in marine applications. Contra-rotating propellers are used on torpedoes due to the natural torque compensation and are also used in some motor boats. The cost of boring out the outer shafts and problems of mounting the inner shaft bearings are not worth pursuing in case of normal ships. Advantages and disadvantages Advantages The propeller-induced heeling moment is compensated (negligible for larger ships). More power can be transmitted for a given propeller radius. The propeller efficiency is increased by recovering energy from the tangential (rotational) flow from the leading propeller. Tangential flow does not contribute to thrust, conversion of tangential to axial flow increases both thrust and overall system efficiency. Disadvantages The mechanical installation of coaxial contra-rotating shafts is complicated and expensive and requires more maintenance. The hydrodynamic gains are partially reduced by mechanical losses in shafting. Applications Torpedoes such as the Bliss-Leavitt torpedo have commonly used contra-rotating propellers to give the maximum possible speed within a limited diameter as well as counteracting the torque that would otherwise tend to cause the torpedo to rotate around its own longitudinal axis. Recreational boating also found applications: in 1982 Volvo Penta introduced a contra-rotating boat propeller branded DuoProp. The patented device has been marketed since. After the Volvo Penta patents ran out, Mercury Marine has also produced a corresponding product, MerCruiser Bravo 3. However, in commercial ships and in traditional machinery arrangement, contra-rotating propellers are rare, due to cost and complexity. ABB provided an azimuth thruster for ShinNihonkai Ferries in form of the CRP Azipod, claiming efficiency gains from the propeller (about 10% increase) and a simpler hull design. Volvo Penta have launched the IPS (Inboard Performance System), an integrated diesel, transmission and pulling contra-rotating propellers for motor yachts. At lower power levels, contra-rotating mechanical azimuth thrusters are one possibility, convenient for CRP due to their inherent bevel gear construction. Rolls-Royce and Steerprop have offered CRP versions of their products. References Propellers Marine engineering
Contra-rotating marine propellers
Engineering
437
186,415
https://en.wikipedia.org/wiki/Mojibake
Mojibake (; , 'character transformation') is the garbled or gibberish text that is the result of text being decoded using an unintended character encoding. The result is a systematic replacement of symbols with completely unrelated ones, often from a different writing system. This display may include the generic replacement character in places where the binary representation is considered invalid. A replacement can also involve multiple consecutive symbols, as viewed in one encoding, when the same binary code constitutes one symbol in the other encoding. This is either because of differing constant length encoding (as in Asian 16-bit encodings vs European 8-bit encodings), or the use of variable length encodings (notably UTF-8 and UTF-16). Failed rendering of glyphs due to either missing fonts or missing glyphs in a font is a different issue that is not to be confused with mojibake. Symptoms of this failed rendering include blocks with the code point displayed in hexadecimal or using the generic replacement character. Importantly, these replacements are valid and are the result of correct error handling by the software. Causes To correctly reproduce the original text that was encoded, the correspondence between the encoded data and the notion of its encoding must be preserved (i.e. the source and target encoding standards must be the same). As mojibake is the instance of non-compliance between these, it can be achieved by manipulating the data itself, or just relabelling it. Mojibake is often seen with text data that have been tagged with a wrong encoding; it may not even be tagged at all, but moved between computers with different default encodings. A major source of trouble are communication protocols that rely on settings on each computer rather than sending or storing metadata together with the data. The differing default settings between computers are in part due to differing deployments of Unicode among operating system families, and partly the legacy encodings' specializations for different writing systems of human languages. Whereas Linux distributions mostly switched to UTF-8 in 2004, Microsoft Windows generally uses UTF-16, and sometimes uses 8-bit code pages for text files in different languages. For some writing systems, such as Japanese, several encodings have historically been employed, causing users to see mojibake relatively often. As an example, the word mojibake itself ("文字化け") stored as EUC-JP might be incorrectly displayed as , "ハクサ嵂ス、ア" (MS-932), or "ハクサ郾ス、ア" if interpreted as Shift-JIS, or as "ʸ»ú²½¤±" in software that assumes text to be in the Windows-1252 or ISO 8859-1 encodings, usually labelled Western or Western European. This is further exacerbated if other locales are involved: the same text stored as UTF-8 appears as if interpreted as Shift-JIS, as "文字化け" if interpreted as Western, or (for example) as "鏂囧瓧鍖栥亼" if interpreted as being in a GBK (Mainland China) locale. Underspecification If the encoding is not specified, it is up to the software to decide it by other means. Depending on the type of software, the typical solution is either configuration or charset detection heuristics, both of which are prone to mis-prediction. The encoding of text files is affected by locale setting, which depends on the user's language and brand of operating system, among other conditions. 
Therefore, the assumed encoding is systematically wrong for files that come from a computer with a different setting, or even from a differently localized piece of software within the same system. For Unicode, one solution is to use a byte order mark, but many parsers do not tolerate this for source code or other machine-readable text. Another solution is to store the encoding as metadata in the file system; file systems that support extended file attributes can store this as user.charset. This also requires support in software that wants to take advantage of it, but does not disturb other software. While some encodings are easy to detect, such as UTF-8, there are many that are hard to distinguish (see charset detection). For example, a web browser may not be able to distinguish between a page coded in EUC-JP and another in Shift-JIS if the encoding is not assigned explicitly using HTTP headers sent along with the documents, or using the document's meta tags that are used to substitute for missing HTTP headers if the server cannot be configured to send the proper HTTP headers; see character encodings in HTML. Mis-specification Mojibake also occurs when the encoding is incorrectly specified. This often happens between encodings that are similar. For example, the Eudora email client for Windows was known to send emails labelled as ISO 8859-1 that were in reality Windows-1252. Windows-1252 contains extra printable characters in the C1 range (the most frequently seen being curved quotation marks and extra dashes), that were not displayed properly in software complying with the ISO standard; this especially affected software running under other operating systems such as Unix. User oversight Of the encodings still in common use, many originated from taking ASCII and appending atop it; as a result, these encodings are partially compatible with each other. Examples of this include Windows-1252 and ISO 8859-1. People thus may mistake the expanded encoding set they are using with plain ASCII. Overspecification When there are layers of protocols, each trying to specify the encoding based on different information, the least certain information may be misleading to the recipient. For example, consider a web server serving a static HTML file over HTTP. The character set may be communicated to the client in any number of 3 ways: in the HTTP header. This information can be based on server configuration (for instance, when serving a file off disk) or controlled by the application running on the server (for dynamic websites). in the file, as an HTML meta tag (http-equiv or charset) or the encoding attribute of an XML declaration. This is the encoding that the author meant to save the particular file in. in the file, as a byte order mark. This is the encoding that the author's editor actually saved it in. Unless an accidental encoding conversion has happened (by opening it in one encoding and saving it in another), this will be correct. It is, however, only available in Unicode encodings such as UTF-8 or UTF-16. Lack of hardware or software support Much older hardware is typically designed to support only one character set and the character set typically cannot be altered. The character table contained within the display firmware will be localized to have characters for the country the device is to be sold in, and typically the table differs from country to country. As such, these systems will potentially display mojibake when loading text generated on a system from a different country. 
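The byte order mark approach can be illustrated with Python's "utf-8-sig" codec (a small sketch with an arbitrary example string): a BOM-aware decoder strips the mark, while a parser that treats the same bytes as plain UTF-8 sees a stray U+FEFF at the start of the data, which is exactly why some parsers reject it.

```python
data = "héllo".encode("utf-8-sig")      # UTF-8 with a leading byte order mark (EF BB BF)

print(repr(data.decode("utf-8-sig")))   # BOM-aware decoder strips it: 'héllo'
print(repr(data.decode("utf-8")))       # BOM-unaware decoder leaks it: '\ufeffhéllo'
```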
Likewise, many early operating systems do not support multiple encoding formats and thus will end up displaying mojibake if made to display non-standard text. Early versions of Microsoft Windows and Palm OS, for example, are localized on a per-country basis and will only support the encoding standards relevant to the country for which the localized version is sold; they will display mojibake if a file containing text in an encoding different from the one the OS is designed to support is opened. Resolutions Applications using UTF-8 as a default encoding may achieve a greater degree of interoperability because of its widespread use and backward compatibility with US-ASCII. UTF-8 also has the ability to be directly recognised by a simple algorithm, so that well written software should be able to avoid mixing UTF-8 up with other encodings. The difficulty of resolving an instance of mojibake varies depending on the application within which it occurs and the causes of it. Two of the most common applications in which mojibake may occur are web browsers and word processors. Modern browsers and word processors often support a wide array of character encodings. Browsers often allow a user to change their rendering engine's encoding setting on the fly, while word processors allow the user to select the appropriate encoding when opening a file. It may take some trial and error for users to find the correct encoding. The problem gets more complicated when it occurs in an application that normally does not support a wide range of character encodings, such as a non-Unicode computer game. In this case, the user must change the operating system's encoding settings to match that of the game. However, changing the system-wide encoding settings can also cause mojibake in pre-existing applications. In Windows XP or later, a user also has the option to use Microsoft AppLocale, an application that allows the changing of per-application locale settings. Even so, changing the operating system encoding settings is not possible on earlier operating systems such as Windows 98; to resolve this issue on earlier operating systems, a user would have to use third-party font rendering applications. Problems in different writing systems English Mojibake in English texts generally occurs in punctuation, such as em dashes (—), en dashes (–), and curly quotes (“, ”, ‘, ’), but rarely in character text, since most encodings agree with ASCII on the encoding of the English alphabet. For example, the pound sign £ will appear as "Â£" if it was encoded by the sender as UTF-8 but interpreted by the recipient as one of the Western European encodings (CP1252 or ISO 8859-1). If iterated using CP1252, this can lead to "Ã‚Â£", then "Ãƒâ€šÃ‚Â£", and so on. Similarly, the right single quotation mark (’), when encoded in UTF-8 and decoded using Windows-1252, becomes "â€™", then "Ã¢â‚¬â„¢", and so on. In older eras, some computers had vendor-specific encodings which caused mismatches even for English text. Commodore brand 8-bit computers used PETSCII encoding, particularly notable for inverting the upper and lower case compared to standard ASCII. PETSCII printers worked fine on other computers of the era, but inverted the case of all letters. IBM mainframes use the EBCDIC encoding, which does not match ASCII at all. Other Western European languages The alphabets of the North Germanic languages, Catalan, Romanian, Finnish, French, German, Italian, Portuguese and Spanish are all extensions of the Latin alphabet.
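The iterated corruption of the pound sign and the curly quote described above can be reproduced with a short loop (our construction, using Python's standard codecs); each pass re-encodes the already garbled text as UTF-8 and mis-decodes it as Windows-1252 again:

```python
for original in ("£", "’"):
    s = original
    for step in range(3):
        s = s.encode("utf_8").decode("cp1252")  # wrong decoder applied at every hop
        print(original, step + 1, repr(s))      # £ -> 'Â£' -> 'Ã‚Â£' -> ... ; ’ -> 'â€™' -> ...
```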
The additional characters are typically the ones that become corrupted, making texts only mildly unreadable with mojibake: å, ä, ö in Finnish and Swedish (š and ž are present in some Finnish loanwords, é marginally in Swedish, mainly in loanwords) à, ç, è, é, ï, í, ò, ó, ú, ü in Catalan æ, ø, å in Norwegian and Danish, as well as optional acute accents on é etc. for disambiguation á, é, ó, ij, è, ë, ï in Dutch ä, ö, ü, and ß in German á, ð, í, ó, ú, ý, æ, ø in Faroese á, ð, é, í, ó, ú, ý, þ, æ, ö in Icelandic à, â, ç, è, é, ë, ê, ï, î, ô, ù, û, ü, ÿ, æ, œ in French à, è, é, ì, ò, ù in Italian á, é, í, ñ, ó, ú, ü, ¡, ¿ in Spanish à, á, â, ã, ç, é, ê, í, ó, ô, õ, ú in Portuguese (ü no longer used) á, é, í, ó, ú in Irish à, è, ì, ò, ù in Scottish Gaelic ă, â, î, ș, ț in Romanian £ in British English (æ and œ are rarely used) ... and their uppercase counterparts, if applicable. These are languages for which the ISO 8859-1 character set (also known as Latin 1 or Western) has been in use. However, ISO 8859-1 has been obsoleted by two competing standards, the backward compatible Windows-1252 and the slightly altered ISO 8859-15. Both add the Euro sign € and the French œ, but otherwise any confusion of these three character sets does not create mojibake in these languages. Furthermore, it is always safe to interpret ISO 8859-1 as Windows-1252, and fairly safe to interpret it as ISO 8859-15, in particular with respect to the Euro sign, which replaces the rarely used currency sign (¤). However, with the advent of UTF-8, mojibake has become more common in certain scenarios, e.g. exchange of text files between UNIX and Windows computers, due to UTF-8's incompatibility with Latin-1 and Windows-1252. UTF-8 can be recognised directly by a simple algorithm, so well-written software should avoid mixing UTF-8 up with other encodings; such mojibake was therefore most common while software that did not support UTF-8 was still widespread. Most of these languages were supported by the MS-DOS default encoding CP437 and other machine-default encodings (though not by plain ASCII), so problems when buying a localized operating system version were less common; Windows and MS-DOS encodings are not compatible with each other, however. In Swedish, Norwegian, Danish and German, vowels are rarely repeated, and it is usually obvious when one character gets corrupted, e.g. the second letter in the Swedish word kärlek ("love") when it is encoded in UTF-8 but decoded in Western, producing "kÃ¤rlek", or für in German, which becomes "fÃ¼r". This way, even though the reader has to guess what the original letter was, almost all texts remain legible. Finnish, on the other hand, frequently uses repeating vowels in words like hääyö ("wedding night"), which can make corrupted text very hard to read (e.g. hääyö appears as "hÃ¤Ã¤yÃ¶"). Icelandic has ten possibly confounding characters, and Faroese has eight, making many words almost completely unintelligible when corrupted (e.g. Icelandic þjóðlöð, "outstanding hospitality", appears as "Ã¾jÃ³Ã°lÃ¶Ã°"). In German, Buchstabensalat ("letter salad") is a common term for this phenomenon, in Spanish, deformación (literally "deformation") is used, and in Portuguese, desformatação (literally "deformatting") is used. Some users transliterate their writing when using a computer, either by omitting the problematic diacritics, or by using digraph replacements (å → aa, ä/æ → ae, ö/ø → oe, ü → ue etc.). Thus, an author might write "ueber" instead of "über", which is standard practice in German when umlauts are not available.
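This digraph fallback is simple to automate; a minimal sketch (the mapping is the conventional one listed above, the function name is ours):

```python
DIGRAPHS = str.maketrans({
    "å": "aa", "ä": "ae", "æ": "ae", "ö": "oe", "ø": "oe", "ü": "ue",
    "Å": "Aa", "Ä": "Ae", "Æ": "Ae", "Ö": "Oe", "Ø": "Oe", "Ü": "Ue",
})

def asciify(text: str) -> str:
    """Fall back to the conventional digraphs for letters that tend to get mangled."""
    return text.translate(DIGRAPHS)

print(asciify("über"))  # "ueber"
```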
The latter practice seems to be better tolerated in the German language sphere than in the Nordic countries. For example, in Norwegian, digraphs are associated with archaic Danish, and may be used jokingly. However, digraphs are useful in communication with other parts of the world. As an example, the Norwegian football player Ole Gunnar Solskjær had his last name spelled "SOLSKJAER" on his uniform when he played for Manchester United. An artifact of UTF-8 misinterpreted as ISO 8859-1, "" being rendered as "Ring meg nÃ¥", was seen in 2014 in an SMS scam targeting Norway. The same problem occurs also in Romanian, see these examples: Central and Eastern European Users of Central and Eastern European languages can also be affected. Because most computers were not connected to any network during the mid- to late-1980s, there were different character encodings for every language with diacritical characters (see ISO/IEC 8859 and KOI-8), often also varying by operating system. Hungarian In Hungarian, the phenomenon is referred to as betűszemét, meaning "letter garbage". Hungarian has been particularly susceptible as it contains the accented letters á, é, í, ó, ú, ö, ü (all present in the Latin-1 character set), plus the two characters ő and ű which are not in Latin-1. These two characters can be correctly encoded in Latin-2, Windows-1250, and Unicode. However, before Unicode became common in e-mail clients, e-mails containing Hungarian text often had the letters ő and ű corrupted, sometimes to the point of unrecognizability. It is common to respond to a corrupted e-mail with the nonsense phrase "Árvíztűrő tükörfúrógép" (literally "Flood-resistant mirror-drilling machine") which contains all accented characters used in Hungarian. Examples Polish Prior to the creation of ISO 8859-2 in 1987, users of various computing platforms used their own character encodings such as AmigaPL on Amiga, Atari Club on Atari ST and Masovia, IBM CP852, Mazovia and Windows CP1250 on IBM PCs. Polish companies selling early DOS computers created their own mutually-incompatible ways to encode Polish characters and simply reprogrammed the EPROMs of the video cards (typically CGA, EGA, or Hercules) to provide hardware code pages with the needed glyphs for Polish—arbitrarily located without reference to where other computer sellers had placed them. The situation began to improve when, after pressure from academic and user groups, ISO 8859-2 succeeded as the "Internet standard" with limited support of the dominant vendors' software (today largely replaced by Unicode). With the numerous problems caused by the variety of encodings, even today some users tend to refer to Polish diacritical characters as (, lit. "little shrubs"). Russian and other Cyrillic-based alphabets Mojibake is colloquially called ( ) in Russian, which was and remains complicated by several systems for encoding Cyrillic. The Soviet Union and early Russian Federation developed KOI encodings (, , which translates to "Code for Information Exchange"). This began with Cyrillic-only 7-bit KOI7, based on ASCII but with Latin and some other characters replaced with Cyrillic letters. Then came 8-bit KOI8 encoding that is an ASCII extension which encodes Cyrillic letters only with high-bit set octets corresponding to 7-bit codes from KOI7. It is for this reason that KOI8 text, even Russian, remains partially readable after stripping the eighth bit, which was considered as a major advantage in the age of 8BITMIME-unaware email systems. 
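This bit-stripping property can be checked directly with Python's koi8_r codec; the snippet below (our construction) uses the Russian phrase quoted in the example that follows and clears the high bit of every byte, as an 8-bit-unclean mail relay effectively did:

```python
phrase = "Школа русского языка"
koi8 = phrase.encode("koi8_r")

stripped = bytes(b & 0x7F for b in koi8)  # what an 8BITMIME-unaware relay effectively does
print(stripped.decode("ascii"))           # a rough, case-swapped Latin transliteration
```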
For example, the words "" (), when encoded in KOI8 and passed through the high bit stripping process, end up being rendered as "[KOLA RUSSKOGO qZYKA". Eventually, KOI8 gained different flavors for Russian and Bulgarian (KOI8-R), Ukrainian (KOI8-U), Belarusian (KOI8-RU), and even Tajik (KOI8-T). Meanwhile, in the West, Code page 866 supported Ukrainian and Belarusian, as well as Russian and Bulgarian in MS-DOS. For Microsoft Windows, Code Page 1251 added support for Serbian and other Slavic variants of Cyrillic. Most recently, the Unicode encoding includes code points for virtually all characters in all languages, including all Cyrillic characters. Before Unicode, it was necessary to match text encoding with a font using the same encoding system; failure to do this produced unreadable gibberish whose specific appearance varied depending on the exact combination of text and font encoding. For example, attempting to view non-Unicode Cyrillic text using a font that is limited to the Latin alphabet, or using the default ("Western") encoding, typically results in text that consists almost entirely of capitalized vowels with diacritical marks (e.g. KOI8 "" (, library) becomes "âÉÂÌÉÏÔÅËÁ", while "Школа русского языка" (, Russian-language school) becomes "ûËÏÌÁ ÒÕÓÓËÏÇÏ ÑÚÙËÁ"). Using Code Page 1251 to view text in KOI8, or vice versa, results in garbled text that consists mostly of capital letters (KOI8 and Code Page 1251 share the same ASCII region, but KOI8 has uppercase letters in the region where Code Page 1251 has lowercase, and vice versa). During the early years of the Russian sector of the World Wide Web, both KOI8 and Code Page 1251 were common. Nearly all websites now use Unicode, but an estimated 0.35% of all web pages worldwide – all languages included – are still encoded in Code Page 1251, while less than 0.003% of sites are still encoded in KOI8-R. Though the HTML standard includes the ability to specify the encoding for any given web page in its source, this is sometimes neglected, forcing the user to switch encodings in the browser manually. In Bulgarian, mojibake is often called (), meaning "monkey's [alphabet]". In Serbian, it is called (), meaning "trash". Unlike the former USSR, South Slavs never used something like KOI8, and Code Page 1251 was the dominant Cyrillic encoding before Unicode; therefore, these languages experienced fewer encoding incompatibility troubles than Russian. In the 1980s, Bulgarian computers used their own MIK encoding, which is superficially similar to (although incompatible with) CP866. Yugoslav languages Croatian, Bosnian, Serbian (the seceding varieties of Serbo-Croatian language) and Slovenian add to the basic Latin alphabet the letters š, đ, č, ć, ž, and their capital counterparts Š, Đ, Č, Ć, Ž (only č/Č, š/Š and ž/Ž are officially used in Slovenian, although others are used when needed, mostly in foreign names). All of these letters are defined in Latin-2 and Windows-1250, while only some (š, Š, ž, Ž, Đ) exist in the usual OS-default Windows-1252, and are there because of some other languages. Although Mojibake can occur with any of these characters, the letters that are not included in Windows-1252 are much more prone to errors. Thus, even nowadays, "šđčćž ŠĐČĆŽ" is often displayed as "šðèæž ŠÐÈÆŽ", although ð, È, and Æ are never used in Slavic languages. When confined to basic ASCII (most user names, for example), common replacements are: š→s, đ→dj, č→c, ć→c, ž→z (capital forms analogously, with Đ→Dj or Đ→DJ depending on word case). 
All of these replacements introduce ambiguities, so reconstructing the original from such a form is usually done manually if required. The Windows-1252 encoding is important because the English versions of the Windows operating system are most widespread, not localized ones. The reasons for this include a relatively small and fragmented market, increasing the price of high quality localization, a high degree of software piracy (in turn caused by high price of software compared to income), which discourages localization efforts, and people preferring English versions of Windows and other software. The drive to differentiate Croatian from Serbian, Bosnian from Croatian and Serbian, and now even Montenegrin from the other three creates many problems. There are many different localizations, using different standards and of different quality. There are no common translations for the vast amount of computer terminology originating in English. In the end, people use English loanwords ("kompjuter" for "computer", "kompajlirati" for "compile," etc.), and if they are unaccustomed to the translated terms, they may not understand what some option in a menu is supposed to do based on the translated phrase. Therefore, people who understand English, as well as those who are accustomed to English terminology (who are most, because English terminology is also mostly taught in schools because of these problems) regularly choose the original English versions of non-specialist software. When Cyrillic script is used (for Macedonian and partially Serbian), the problem is similar to other Cyrillic-based scripts. Newer versions of English Windows allow the code page to be changed (older versions require special English versions with this support), but this setting can be and often was incorrectly set. For example, Windows 98 and Windows Me can be set to most non-right-to-left single-byte code pages including 1250, but only at install time. Caucasian languages The writing systems of certain languages of the Caucasus region, including the scripts of Georgian and Armenian, may produce mojibake. This problem is particularly acute in the case of ArmSCII or ARMSCII, a set of obsolete character encodings for the Armenian alphabet which have been superseded by Unicode standards. ArmSCII is not widely used because of a lack of support in the computer industry. For example, Microsoft Windows does not support it. Asian encodings Another type of mojibake occurs when text encoded in a single-byte encoding is erroneously parsed in a multi-byte encoding, such as one of the encodings for East Asian languages. With this kind of mojibake more than one (typically two) characters are corrupted at once. For example, if the Swedish word is encoded in Windows-1252 but decoded using GBK, it will appear as "k鋜lek", where "" is parsed as "鋜". Compared to the above mojibake, this is harder to read, since letters unrelated to the problematic å, ä or ö are missing, and is especially problematic for short words starting with å, ä or ö (e.g. "än" becomes "鋘"). Since two letters are combined, the mojibake also seems more random (over 50 variants compared to the normal three, not counting the rarer capitals). In some rare cases, an entire text string which happens to include a pattern of particular word lengths, such as the sentence "Bush hid the facts", may be misinterpreted. Vietnamese In Vietnamese, the phenomenon is called chữ ma (Hán–Nôm: 𡨸魔, "ghost characters") or loạn mã (from Chinese 乱码, luànmǎ). 
It can occur when a computer tries to decode text encoded in UTF-8 as Windows-1258, TCVN3 or VNI. In Vietnam, chữ ma was commonly seen on computers that ran pre-Vista versions of Windows or cheap mobile phones. Japanese In Japan, mojibake is especially problematic as there are many different Japanese text encodings. Alongside Unicode encodings (UTF-8 and UTF-16), there are other standard encodings, such as Shift-JIS (Windows machines) and EUC-JP (UNIX systems). Even to this day, mojibake is often encountered by both Japanese and non-Japanese people when attempting to run software written for the Japanese market. Chinese In Chinese, the same phenomenon is called Luàn mǎ (Pinyin, Simplified Chinese , Traditional Chinese , meaning 'chaotic code'), and can occur when computerised text is encoded in one Chinese character encoding but is displayed using the wrong encoding. When this occurs, it is often possible to fix the issue by switching the character encoding without loss of data. The situation is complicated because of the existence of several Chinese character encoding systems in use, the most common ones being: Unicode, Big5, and Guobiao (with several backward compatible versions), and the possibility of Chinese characters being encoded using Japanese encoding. It is relatively easy to identify the original encoding when luànmǎ occurs in Guobiao encodings: An additional problem in Chinese occurs when rare or antiquated characters, many of which are still used in personal or place names, do not exist in some encodings. Examples of this are: The Big5 encoding's lack of the "煊" (xuān) in the name of Taiwanese politician Wang Chien-shien (), the "堃" (kūn) in the name of Yu Shyi-kun (), and the "喆" (zhé) in the name of singer David Tao (), GB 2312's lack of the "镕" (róng) in ex-PRC Premier Zhu Rongji (), and GBK's lack of the copyright symbol "©". Newspapers have dealt with missing characters in various ways, including using image editing software to synthesize them by combining other radicals and characters; using a picture of the personalities (in the case of people's names), or simply substituting homophones in the hope that readers would be able to make the correct inference. Indic text A similar effect can occur in Brahmic or Indic scripts of South Asia, used in such Indo-Aryan or Indic languages as Hindustani (Hindi-Urdu), Bengali, Punjabi, Marathi, and others, even if the character set employed is properly recognized by the application. This is because, in many Indic scripts, the rules by which individual letter symbols combine to create symbols for syllables may not be properly understood by a computer missing the appropriate software, even if the glyphs for the individual letter forms are available. One example of this is the old Wikipedia logo, which attempts to show the character analogous to "wi" (the first syllable of "Wikipedia") on each of many puzzle pieces. The puzzle piece meant to bear the Devanagari character for "wi" instead used to display the "wa" character followed by an unpaired "i" modifier vowel, easily recognizable as mojibake generated by a computer not configured to display Indic text. The logo as redesigned has fixed these errors. The idea of Plain Text requires the operating system to provide a font to display Unicode codes. This font is different from OS to OS for Singhala and it makes orthographically incorrect glyphs for some letters (syllables) across all operating systems. 
For instance, the 'reph', the short form for 'r' is a diacritic that normally goes on top of a plain letter. However, it is wrong to go on top of some letters like 'ya' or 'la' in specific contexts. For Sanskritic words or names inherited by modern languages, such as कार्य, IAST: kārya, or आर्या, IAST: āryā, it is apt to put it on top of these letters. By contrast, for similar sounds in modern languages which result from their specific rules, it is not put on top, such as the word करणाऱ्या, IAST: karaṇāryā, a stem form of the common word करणारा/री, IAST: karaṇārā/rī, in the Marathi language. But it happens in most operating systems. This appears to be a fault of internal programming of the fonts. In Mac OS and iOS, the muurdhaja l (dark l) and 'u' combination and its long form both yield wrong shapes. Some Indic and Indic-derived scripts, most notably Lao, were not officially supported by Windows XP until the release of Vista. However, various sites have made free-to-download fonts. Burmese Due to Western sanctions and the late arrival of Burmese language support in computers, much of the early Burmese localization was homegrown without international cooperation. The prevailing means of Burmese support is via the Zawgyi font, a font that was created as a Unicode font but was in fact only partially Unicode compliant. In the Zawgyi font, some codepoints for Burmese script were implemented as specified in Unicode, but others were not. The Unicode Consortium refers to this as ad hoc font encodings. With the advent of mobile phones, mobile vendors such as Samsung and Huawei simply replaced the Unicode compliant system fonts with Zawgyi versions. Due to these ad hoc encodings, communications between users of Zawgyi and Unicode would render as garbled text. To get around this issue, content producers would make posts in both Zawgyi and Unicode. Myanmar government designated 1 October 2019 as "U-Day" to officially switch to Unicode. The full transition was estimated to take two years. African languages In certain writing systems of Africa, unencoded text is unreadable. Texts that may produce mojibake include those from the Horn of Africa such as the Ge'ez script in Ethiopia and Eritrea, used for Amharic, Tigre, and other languages, and the Somali language, which employs the Osmanya alphabet. In Southern Africa, the Mwangwego alphabet is used to write languages of Malawi and the Mandombe alphabet was created for the Democratic Republic of the Congo, but these are not generally supported. Various other writing systems native to West Africa present similar problems, such as the N'Ko alphabet, used for Manding languages in Guinea, and the Vai syllabary, used in Liberia. Arabic Another affected language is Arabic (see below), in which text becomes completely unreadable when the encodings do not match. Examples The examples in this article do not have UTF-8 as browser setting, because UTF-8 is easily recognisable, so if a browser supports UTF-8 it should recognise it automatically, and not try to interpret something else as UTF-8. See also Code point Replacement character Substitute character Newline – The conventions for representing the line break differ between Windows and Unix systems. Though most software supports both conventions (which is trivial), software that must preserve or display the difference (e.g. version control systems and data comparison tools) can get substantially more difficult to use if not adhering to one convention. 
Byte order mark – The most in-band way to store the encoding together with the data – prepend it. This is by intention invisible to humans using compliant software, but will by design be perceived as "garbage characters" to incompliant software (including many interpreters). HTML entities – An encoding of special characters in HTML, mostly optional, but required for certain characters to escape interpretation as markup. While failure to apply this transformation is a vulnerability (see cross-site scripting), applying it too many times results in garbling of these characters. For example, the quotation mark " becomes &quot;, &amp;quot;, &amp;amp;quot;, &amp;amp;amp;quot;, and so on. Bush hid the facts References External links Character encoding Computer errors Nonsense
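The escalating over-escaping of the quotation mark can be reproduced with the standard library's html.escape (a sketch; one application is correct, every further pass garbles the text a little more):

```python
from html import escape

s = '"'
for i in range(4):
    s = escape(s, quote=True)
    print(i + 1, s)  # &quot;  &amp;quot;  &amp;amp;quot;  &amp;amp;amp;quot;
```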
Mojibake
Technology
7,414
22,377,638
https://en.wikipedia.org/wiki/Cyber-Duck
Cyber-Duck is a digital transformation agency founded in 2005 and based in Elstree, United Kingdom. The company specialises in user experience (UX), software development and digital optimisation. The company employs over 90 staff in the UK and Europe. It works with clients from the financial, pharmaceutical, sport, motoring and security sectors, among others. These include the Bank of England, Cancer Research UK, GOV.UK Verify partner CitizenSafe, The Commonwealth of Nations and Sport England. History Cyber-Duck was founded in 2005 by Danny Bluestone in his flat in Mill Hill, United Kingdom. After a few months, the firm moved into its first office in Borehamwood. Projects with Ogilvy, London Creative and Wisteria followed before Cyber-Duck moved to offices in Devonshire House, Borehamwood. In 2010, the firm was commissioned to develop a website for the European Commission in the UK. In 2011, the company moved to a self-contained premises in Elstree, Hertfordshire. Shortly afterward, Cyber-Duck was listed on the Deloitte Technology Fast 500 EMEA in recognition of its substantial revenue growth over the previous five years. As the company grew, its expertise also broadened. This resulted in guest spots on several television shows. Cyber-Duck was featured in an episode of the Gadget Show in 2011, and Chief Production Officer Matt Gibson appeared on BBC Watchdog in 2013 to assist in researching websites and their checkout processes. The firm continued to attract business from companies in London, so the decision was made to open a new office in central London. The Farringdon office opened in 2015, and was followed by a rebrand. In 2016, Cyber-Duck went on to work with the Bank of England. Ahead of the launch of the new polymer £5 note, featuring Winston Churchill, the company was tasked with creating a user-friendly website to showcase the new banknote and promote public awareness. The success of the campaign led to further commissions, including 2017's website the New Ten and a redesign of the Bank of England's main website. The firm underwent significant growth in 2020, beginning working partnerships with Sport England and the College of Policing. During this time they also launched DevOps as a new service. In 2022, the Farringdon office closed and was relocated to a new office space in Holborn. The Laravel, Drupal and DevOps teams expanded, and Cyber-Duck became the lead Digital Agency for Worcester, Bosch Group. Several members of the team appeared on The Digital Society on Sky UK. Awards and accreditations Cyber-Duck is known for its focus on process accreditation as a driver of creativity. In 2011, the company obtained its first ISO 9241 accreditation in Human Centred Design for interactive systems. Two years later, Cyber-Duck obtained a further certification, the ISO 9001 for Quality Management Systems. It acquired another certification in 2016 with the ISO 27001 – the focus of this accreditation was Information Security Management. In 2022, Cyber-Duck gained the ISO 14001 certification in Environmental Management. Cyber-Duck's digital products have won numerous Wirehive 100, BIMA and Webby awards. Notably, the company's UX Companion, a free iOS and Android app that is a glossary of UX theories, featured in Usability Geek and Smashing Magazine. In 2021 they were awarded as one of the UK's 100 Best Small Companies to work for, and BIMA10 shortlisted for their work with Sport England and This Girl Can. 
References Companies based in Hertsmere Digital media Marketing companies established in 2005
Cyber-Duck
Technology
734
57,731,480
https://en.wikipedia.org/wiki/Ethinylestradiol%20benzoate
Ethinylestradiol benzoate, or 17α-ethynylestradiol 3-benzoate, is a synthetic estrogen and estrogen ester – specifically, the C3 benzoate ester of ethinylestradiol – which was first described in the late 1930s and was never marketed. See also List of estrogen esters § Esters of other steroidal estrogens References Abandoned drugs Ethynyl compounds Benzoate esters Tertiary alcohols Estranes Estrogen esters Sex hormone esters and conjugates Synthetic estrogens
Ethinylestradiol benzoate
Chemistry
126
996,933
https://en.wikipedia.org/wiki/Style%20sheet%20%28web%20development%29
A web style sheet is a form of separation of content and presentation for web design in which the markup (i.e., HTML or XHTML) of a webpage contains the page's semantic content and structure, but does not define its visual layout (style). Instead, the style is defined in an external style sheet file using a style sheet language such as CSS or XSLT. This design approach is identified as a "separation" because it largely supersedes the antecedent methodology in which a page's markup defined both style and structure. The philosophy underlying this methodology is a specific case of separation of concerns. Benefits Separation of style and content has advantages, but has only become practical after improvements in popular web browsers' CSS implementations. Speed Overall, users experience of a site utilising style sheets will generally be quicker than sites that don’t use the technology. ‘Overall’ as the first page will probably load more slowly – because the style sheet AND the content will need to be transferred. Subsequent pages will load faster because no style information will need to be downloaded – the CSS file will already be in the browser’s cache. Maintainability Holding all the presentation styles in one file can reduce the maintenance time and reduces the chance of error, thereby improving presentation consistency. For example, the font color associated with a type of text element may be specified — and therefore easily modified — throughout an entire website simply by changing one short string of characters in a single file. The alternative approach, using styles embedded in each individual page, would require a cumbersome, time consuming, and error-prone edit of every file. Accessibility Sites that use CSS with either XHTML or HTML are easier to tweak so that they appear similar in different browsers (Chrome, Internet Explorer, Mozilla Firefox, Opera, Safari, etc.). Sites using CSS "degrade gracefully" in browsers unable to display graphical content, such as Lynx, or those so very old that they cannot use CSS. Browsers ignore CSS that they do not understand, such as CSS 3 statements. This enables a wide variety of user agents to be able to access the content of a site even if they cannot render the style sheet or are not designed with graphical capability in mind. For example, a browser using a refreshable braille display for output could disregard layout information entirely, and the user would still have access to all page content. Customization If a page's layout information is stored externally, a user can decide to disable the layout information entirely, leaving the site's bare content still in a readable form. Site authors may also offer multiple style sheets, which can be used to completely change the appearance of the site without altering any of its content. Most modern web browsers also allow the user to define their own style sheet, which can include rules that override the author's layout rules. This allows users, for example, to bold every hyperlink on every page they visit. Browser extensions like Stylish and Stylus have been created to facilitate management of such user style sheets. Consistency Because the semantic file contains only the meanings an author intends to convey, the styling of the various elements of the document's content is very consistent. For example, headings, emphasized text, lists and mathematical expressions all receive consistently applied style properties from the external style sheet. 
Authors need not concern themselves with the style properties at the time of composition. These presentational details can be deferred until the moment of presentation. Portability The deferment of presentational details until the time of presentation means that a document can be easily re-purposed for an entirely different presentation medium with merely the application of a new style sheet already prepared for the new medium and consistent with elemental or structural vocabulary of the semantic document. A carefully authored document for a web page can easily be printed to a hard-bound volume complete with headers and footers, page numbers and a generated table of contents simply by applying a new style sheet. Practical disadvantages today As of 2006, specifications (for example, XHTML, XSL, CSS) and software tools implementing these specification are only reaching the early stages of maturity. So there are some practical issues facing authors who seek to embrace this method of separating content and style. Narrow adoption without the parsing and generation tools While the style specifications are quite mature and still maturing, the software tools have been slow to adapt. Most of the major web development tools still embrace a mixed presentation-content model. So authors and designers looking for GUI based tools for their work find it difficult to follow the semantic web method. In addition to GUI tools, shared repositories for generalized style sheets would probably aid adoption of these methods. See also Separation of concerns References External links CSS Zen Garden: A site which challenges designers to create new page layouts without touching the XHTML source. Includes dozens of layouts. CSS source can be viewed for every layout. Web development
Style sheet (web development)
Engineering
1,032
1,763,397
https://en.wikipedia.org/wiki/George%20Batchelor
George Keith Batchelor FRS (8 March 1920 – 30 March 2000) was an Australian applied mathematician and fluid dynamicist. He was for many years a Professor of Applied Mathematics in the University of Cambridge, and was founding head of the Department of Applied Mathematics and Theoretical Physics (DAMTP). In 1956 he founded the influential Journal of Fluid Mechanics which he edited for some forty years. Prior to Cambridge he studied at Melbourne High School and University of Melbourne. As an applied mathematician (and for some years at Cambridge a co-worker with Sir Geoffrey Taylor in the field of turbulent flow), he was a keen advocate of the need for physical understanding and sound experimental basis. His An Introduction to Fluid Dynamics (CUP, 1967) is still considered a classic of the subject, and has been re-issued in the Cambridge Mathematical Library series, following strong current demand. Unusual for an 'elementary' textbook of that era, it presented a treatment in which the properties of a real viscous fluid were fully emphasised. He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1959. The Batchelor Prize award, is named in his honour and is awarded every four years at the meeting of the International Congress on Theoretical and Applied Mechanics. References External links An Introduction to Fluid Dynamics by G. K. Batchelor at Cambridge Mathematical Library. Obituaries for George Batchelor (with portraits) at the Department of Applied Mathematics and Theoretical Physics (DAMTP) of the University of Cambridge website Obituary by Julian Hunt Video recording of the K. Moffatt's lecture on life and work of George Batchelor 1920 births 2000 deaths Academics of the University of Cambridge Alumni of the University of Cambridge 20th-century Australian mathematicians Fellows of the American Academy of Arts and Sciences Fellows of the Royal Society Foreign associates of the National Academy of Sciences Fluid dynamicists Cambridge mathematicians People educated at Melbourne High School Royal Medal winners Australian textbook writers Mathematicians from Melbourne Journal of Fluid Mechanics editors
George Batchelor
Chemistry
397
14,280,179
https://en.wikipedia.org/wiki/2%2C2%27-Bis%282-indenyl%29%20biphenyl
2,2′-Bis(2-indenyl) biphenyl is an organic compound with the formula [C6H4C9H7]2. The compound is the precursor, upon deprotonation, to ansa-metallocene complexes within the area of transition metal indenyl complexes. Metals studied with 2,2′-bis(2-indenyl) biphenyl include titanium, zirconium, and hafnium. The ligand and its complexes have been prepared by the research group of the late Brice Bosnich at The University of Chicago. Zirconium and hafnium complexes made from this ligand were found to be active catalysts for the polymerization of the smallest alkenes – compounds with carbon-carbon double bonds—namely, ethylene and propylene. The use of such complexes in the polymerization of alkenes has since been reported, and patented by DSM Research. References Ligands Catalysts Hydrocarbons
2,2'-Bis(2-indenyl) biphenyl
Chemistry
211
2,710,870
https://en.wikipedia.org/wiki/Chi%20Ursae%20Majoris
Chi Ursae Majoris or χ Ursae Majoris, formally named Taiyangshou , is a single star in the northern circumpolar constellation of Ursa Major. The star has an orange hue and is visible to the naked eye at night with an apparent visual magnitude of 3.72. It is located at a distance of approximately 198 light-years from the Sun based on parallax, but is drifting closer with a radial velocity of −9 km/s. Nomenclature χ Ursae Majoris (Latinised to Chi Ursae Majoris) is the star's Bayer designation. It bore the traditional name Tai Yang Show, "the Sun Governor", from Chinese astronomy. The name was possibly derived from the word 太陽守, Pinyin: Tàiyángshǒu, meaning Guard of the Sun, because this star is marking itself and standing alone in the Guard of the Sun asterism, Purple Forbidden enclosure (see : Chinese constellations). It also bore traditional names of Arabic origin: Alkafzah, Alkaphrah, and El Koprah. In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN approved the name Taiyangshou for this star on 30 June 2017 and it is now so included in the List of IAU-approved Star Names. Properties Chi Ursae Majoris is an evolved, orange hued K-type giant with a stellar classification of K0.5 IIIb. It is a red clump giant, which means it is on the horizontal branch and is generating energy through helium fusion at its core. This star has expanded to 23 times the radius of the Sun with 1.49 times the Sun's mass. It is radiating 170 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 4,331 K. The spiral galaxy in Ursa Major, NGC 3877 (= H I.201), type Sc, is best found from Chi Ursae Majoris, which is almost exactly 15 arcminutes north of the galaxy. References External links K-type giants Horizontal-branch stars Ursa Major Ursae Majoris, Chi BD+48 1966 Ursae Majoris, 63 102224 057399 4518 Taiyangshou
Chi Ursae Majoris
Astronomy
492
612,849
https://en.wikipedia.org/wiki/IBiquity
iBiquity Digital Corporation was a company formed by the merger of USA Digital Radio and Lucent Digital Radio. Based in Columbia, Maryland, with additional offices in Basking Ridge, New Jersey, Los Angeles, California, and Auburn Hills, Michigan, iBiquity was a privately held intellectual properties company with investors in the technology, broadcasting, manufacturing, media, and financial industries. About IBOC can operate on both AM band and FM band broadcasts either in a digital-only mode or in a "hybrid" digital+analog mode. The stations can split the digital bandwidth to carry multiple audio program streams (called HD2 or HD3 multicast channels) as well as show on-screen text data such as song title and artist, traffic, and weather information. Nearly 2,000 stations in the US broadcast with this system. The technology is marketed under the trademark HD Radio. It is the only technology approved by the Federal Communications Commission for digital AM and FM broadcasting in the United States. Due in large part to its ability to deliver digital audio services while leveraging the existing analog spectrum (by broadcasting digital information on the sidebands), commercial implementation of the technology is gaining momentum in various countries on one side of the world, including Canada, Mexico, and the Philippines. Testing and showing the system are underway in China, Colombia, Germany, Indonesia, Jamaica, New Zealand, Poland, Switzerland, Thailand, and Ukraine, among other countries. According to iBiquity Digital, holder of the HD Radio trademark the "HD" in "HD Radio" does not stand for "High Definition" or "Hybrid Digital". It is simply part of their trademark, and does not have any meaning on its own. On September 2, 2015, iBiquity announced that DTS was purchasing them for $172 million USD, bringing the HD Radio technology under the same banner as DTS' eponymous theater surround sound systems. References External links HD Radio Defunct electronics companies of the United States Digital radio Companies based in Columbia, Maryland Somerset County, New Jersey Oakland County, Michigan Broadcast engineering HD Radio
IBiquity
Engineering
420
68,082,871
https://en.wikipedia.org/wiki/HR%203831
HR 3831, also known as HD 83368, is a binary star system in the southern constellation of Vela at a distance of 233 light years. This object is barely visible to the naked eye as a dim, blue star with an apparent visual magnitude of 6.232. It is approaching the Earth with a heliocentric radial velocity of 4.0 km/s. The star system is a visual binary with a 3.29″ projected separation, identified as such in 2002. The larger star, HD 83368A, is a pulsating variable of a rapidly oscillating Ap type. It has a single yet strongly distorted dipole pulsation mode with a frequency of 1427 μHz. The primary star is chemically peculiar, exhibiting spots of enhanced concentrations of lithium, europium and oxygen. The star's variability was discovered by Pierre Renson, and announced in 1977. It was given its variable star designation, IM Velorum, in 1981. See also Vela (Chinese astronomy) References A-type main-sequence stars F-type main-sequence stars Binary stars Ap stars Rapidly oscillating Ap stars Vela (constellation) CD-48 4831 83368 Velorum, IM 047145 3831 J09362541-4845042
HR 3831
Astronomy
274
77,207,161
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Delange%20theorem
The Erdős–Delange theorem is a theorem in number theory concerning the distribution of prime numbers. It is named after Paul Erdős and Hubert Delange. Let Ω(n) denote the number of prime factors of an integer n, counted with multiplicity, and let α be any irrational number. The theorem states that the real numbers αΩ(n) are asymptotically uniformly distributed modulo 1. It implies the prime number theorem. The theorem was stated without proof in 1946 by Paul Erdős, with a remark that "the proof is not easy". Hubert Delange found a simpler proof and published it in 1958, together with two other ways of deducing it from results of Erdős and of Atle Selberg. References Theorems about prime numbers Paul Erdős
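A small numerical illustration of the statement (not a proof, and convergence is slow) can be sketched in Python; the trial-division factor counter, the choice α = √2 and the bin count are ours for the example:

```python
import math
from collections import Counter

def big_omega(n: int) -> int:
    """Number of prime factors of n, counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    return count + (1 if n > 1 else 0)

alpha = math.sqrt(2)   # any irrational number
N = 100_000
bins = Counter(int(10 * ((alpha * big_omega(n)) % 1.0)) for n in range(2, N))
for b in range(10):
    # Each tenth of [0, 1) should hold roughly 10% of the values as N grows.
    print(f"[{b / 10:.1f}, {(b + 1) / 10:.1f}): {bins[b] / (N - 2):.3f}")
```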
Erdős–Delange theorem
Mathematics
151
2,006,073
https://en.wikipedia.org/wiki/Outline%20of%20environmental%20studies
The following outline is provided as an overview of and topical guide to environmental studies: Environmental studies – multidisciplinary academic field which systematically studies human interaction with the environment. Environmental studies connects principles from the physical sciences, commerce/economics, the humanities, and social sciences to address complex contemporary environmental issues. It is a broad field of study that includes the natural environment, the built environment, and the relationship between them. Fields of study Aquatic and environmental engineering Biomimetics (Biomimicry) Climatology Conservation biology Conservation movement Ecocriticism Ecological economics Ecological engineering Ecological genetics Ecological humanities Ecological literacy Ecological psychology Energy and environment Energy conservation Environmental archaeology Environmental chemistry Environmental degradation Environmental design Environmental economics Environmental effects on physiology Environmental engineering Environmental ethics Environmental finance Environmental geography Environmental geology Environmental history Environmental impact assessment Environmentalism Environmental issues Environmental justice Environmental law Environmental management Environmental monitoring Environmental movement Environmental organization Environmental philosophy Environmental psychology Environmental policy Environmental protection Environmental racism Environmental remediation Environmental restoration Environmental science Environmental sociology Environmental soil science Environmental studies Environmental technology Environmental toxicology Geomatics Green chemistry Green computing Green economy Green engineering Landscape architecture Natural resource management Fishing Hunting Hydrology Oceanography Pollutant Pollution Renewable energy Renewable resource Restoration ecology Sustainability science Sustainability studies Toxicology Traditional environmental knowledge Waste management Waste minimisation Whaling Wildlife conservation Wildlife management Wildlife observation Degrees Primary undergraduate and graduate degrees in Environmental studies include the following. Bachelors Bachelor of Arts in Environmental Studies BA 4sem Bachelor of Environmental Studies (BES) Bachelor of Science in Environmental Studies (BS) Masters Master of Arts in Environmental Studies (MA) Master of Professional Studies in Environmental Studies (MPS) Master of Science in Environmental Studies (MS) Doctoral Doctor of Philosophy in Environmental Studies (PhD) Environmental education institutions and organizations Listed here are primary, secondary, and non-degree granting environmental education institutions and organizations. 
Arkansas Environmental Academy Canadian Centre for Environmental Education (CCEE) Dekalb Academy of Technology and Environment Earhart Environmental Magnet Elementary School Ecovillage Training Center (in Tennessee in the US) Environmental Campus Birkenfeld, Germany Environmental Charter High School Environmental Law Institute (ELI) European Academy of Environmental Affairs Fanling Environmental Resource Centre Jacobsburg Environmental Education Center Jane Goodall Center for Excellence in Environmental Studies Jane Goodall Environmental Middle School Jennings Environmental Education Center Jupiter Environmental Research and Field Studies Academy Kings Gap Environmental Education and Training Center Lancaster Environment Centre (LEC) London Environmental Education Forum (LEEF) Merry Lea Environmental Center NatureBridge (in California in the US) Nolde Forest Environmental Education Center Pine Jog Environmental Education Center Rachel Carson Center, Environmental Studies Certificate Programme, Munich Germany School of Environmental Studies, Minnesota School of Social Ecology Sigurd Olson Environmental Institute Southern Environmental Center Subject Centre for Geography, Earth and Environmental Sciences, UK Sunnyside Environmental School Sustainability Management School (SUMAS) (Switzerland) School of Planning and Architecture, New Delhi The School for Field Studies (SFS) Tokyo Global Engineering Corporation, Japan (global) See also Children's Environmental Exposure Research Study (CHEERS), USA Fourth International Conference on Environmental Education (2007), India Milthorpe Lecture, Macquarie University, Australia National Environmental Education Act (1990), USA North American collegiate sustainability programs Phase I environmental site assessment, USA Environmental groups and resources serving K–12 schools IB Group 4 subjects Index of environmental articles Outline of environmental journalism Lists List of environment research institutes List of environmental design degree–granting institutions List of institutions awarding Bachelor of Environmental Science degrees List of institutions awarding Bachelor of Environmental Studies degrees List of years in the environment in the environment and environmental sciences Organizations and networks Chartered Environmentalist Environmental Change Network (ECN) Natural Environment Research Council (NERC) Society for Environment and Education References Environmental studies Environmental studies Environmental studies
Outline of environmental studies
Environmental_science
741
49,539,806
https://en.wikipedia.org/wiki/Acorn%20TV
Acorn TV is a British-American over-the-top VOD streaming service offering television programming from Australia, Canada, other Commonwealth countries, Spain, New Zealand, and the United Kingdom. In other countries, it is available on a variety of devices including Amazon Fire TV, Apple TV, Android TV, Chromecast, and Roku. The service is owned by RLJ Entertainment, a joint venture between AMC Networks (who owns a controlling 83% stake) and the RLJ Companies (who owns the remaining 17%). History Acorn Media Group has distributed British television in the United States since 1994, originally selling VHS tapes before moving into DVD and Blu-ray media. Continuing the company's expansion into new formats, Acorn TV launched as a subsection of Acorn's direct-to-consumer e-commerce website in 2011. In 2013, Acorn TV was relaunched as a standalone service with expanded content offerings and monthly and annual subscription options. In 2013, the service began offering exclusive content, starting with the United States premiere of Doc Martin, Series 6. In 2015, Acorn TV was the only niche streaming service to have a program nominated for an Emmy when Curtain: Poirot's Last Case was nominated for Outstanding Television Movie. As of December 31, 2016, it had 430,000 paid subscribers. Acorn TV launched in the United Kingdom as a service in its own right on April 29, 2020. On 24 November 2022, Acorn TV announced without further elaboration that it would no longer be available in South Africa by the end of 2022, and requested that subscribers cancel their memberships. The service previously launched in South Africa from December 2018. On August 29, 2023, it was announced that Acorn TV would end its services in Portugal as of September 29, 2023. On December 20, 2023, Acorn TV announced that it would not be available in Latin America after February 6, 2024. Programming Acorn TV offers a combination of new and classic mysteries, dramas, comedies, and documentaries. The service licenses content from producers and distributors including ITV, Channel 4, BBC Studios, All3Media, DRG, ZDF, and Content Media. Original programming Because its parent company, RLJ Entertainment, has a 64% stake in Agatha Christie Limited, the licensing arm of the Christie estate, Acorn TV was able to offer the United States premieres of the final episodes of Agatha Christie's Poirot in summer 2014, BBC co-production Partners in Crime in September 2015, and The Witness for the Prosecution in 2017. RLJ Entertainment also owns all rights to Foyle's War, allowing Acorn TV to offer the United States premiere of the final season in 2015. Subsequent original series include Agatha Raisin, Close to the Enemy, The Level, Striking Out, Queens of Mystery, Dead Still, and Ms Fisher's Modern Murder Mysteries. On October 20, 2020, it was announced that the series Dalgliesh will premiere in 2021. United States premieres Since its launch, Acorn TV has offered the United States premieres of some or all seasons of British series including Detectorists, Vera, Inspector George Gently, and Midsomer Murders. After initially focusing exclusively on programming from the United Kingdom, Acorn TV expanded its content offering to include programs from other territories, including Australia's A Place to Call Home, Miss Fisher's Murder Mysteries, Jack Irish, and Janet King, New Zealand's The Brokenwood Mysteries, the Australian-New Zealand series 800 Words, Canada's Murdoch Mysteries and 19-2, and Ireland's Clean Break and Striking Out. 
In 2015, the service began offering foreign-language dramas. The service also offers a selection of documentaries, including historical, travel, arts, and science titles. On August 5, 2019, the Australian-New Zealand mystery series My Life Is Murder premiered on Acorn. On September 30, 2020, it was announced that the Irish drama series The South Westerlies premiered on November 9, 2020. On October 27, 2020, it was announced that the miniseries A Suitable Boy will premiere on December 7, 2020. See also Acorn DVD BritBox Philo AMC+ Shudder Allblk Amazon Prime Video List of streaming media services References External links Streaming media systems Subscription video on demand services
Acorn TV
Technology
885
73,755,426
https://en.wikipedia.org/wiki/Null%20infinity
In theoretical physics, null infinity is a region at the boundary of asymptotically flat spacetimes. In general relativity, straight paths in spacetime, called geodesics, may be space-like, time-like, or light-like (also called null). The distinction between these paths stems from whether the spacetime interval of the path is positive (corresponding to space-like), negative (corresponding to time-like), or zero (corresponding to null). Light-like paths correspond to physical phenomena which propagate through space at the speed of light, such as electromagnetic radiation and gravitational radiation. The boundary of a flat spacetime is known as conformal infinity, and can be thought of as the end points of all geodesics as they go off to infinity. The region of null infinity corresponds to the terminus of all null geodesics in a flat Minkowski space. The different regions of conformal infinity are most often visualized on a Penrose diagram, where they make up the boundary of the diagram. There are two distinct regions of null infinity, called past and future null infinity, which can be denoted using a script 'I' as ℐ+ and ℐ−. These two regions are often referred to as 'scri-plus' and 'scri-minus' respectively. Geometrically, each of these regions actually has the structure of a topologically cylindrical three dimensional region. The study of null infinity originated from the need to describe the global properties of spacetime. While early methods in general relativity focused on the local structure built around local frames of reference, work beginning in the 1960s began analyzing global descriptions of general relativity, analyzing the structure of spacetime as a whole. The original study of null infinity originated with Roger Penrose's work analyzing black hole spacetimes. Null infinity is a useful mathematical tool for analyzing behavior in asymptotically flat spaces when limits of null paths need to be taken. For instance, black hole spacetimes are asymptotically flat, and null infinity can be used to characterize radiation in the limit that it travels outward away from the black hole. Null infinity can also be considered in the context of spacetimes which are not necessarily asymptotically flat, such as in the FLRW cosmology. Conformal compactification in Minkowski spacetime The metric for a flat Minkowski spacetime in spherical coordinates is ds^2 = −dt^2 + dr^2 + r^2 dΩ^2. Conformal compactification induces a transformation which preserves angles, but changes the local structure of the metric and adds the boundary of the manifold, thus making it compact. For a given metric g, a conformal compactification scales the entire metric by some conformal factor Ω^2, giving a rescaled metric Ω^2 g, such that all of the points at infinity are scaled down to a finite value. Typically, the radial and time coordinates are transformed into null coordinates u = t − r and v = t + r. These are then transformed as p = tan^−1(u) and q = tan^−1(v) in order to use the properties of the inverse tangent function to map infinity to a finite value. The typical time and space coordinates may be introduced as T = p + q and R = q − p. After these coordinate transformations, a conformal factor is introduced, leading to a new unphysical metric for Minkowski space: −dT^2 + dR^2 + sin^2(R) dΩ^2. This is the metric represented on a Penrose diagram. Unlike the original metric, this metric describes a manifold with a boundary, given by the restrictions on T and R. There are two null surfaces on this boundary, corresponding to past and future null infinity. 
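The role of the inverse tangent in bringing infinity down to a finite coordinate value can be checked directly. The following short sympy sketch (an illustration added here, not part of the original text; the variable names are arbitrary) follows an outgoing null ray t = r + 1 and evaluates the limit of the compactified coordinates as r grows without bound:

# Minimal check that the compactified coordinates bring an outgoing
# null ray to a finite boundary value (illustrative sketch only).
import sympy as sp

t, r = sp.symbols('t r', real=True)
u, v = t - r, t + r                # null coordinates
p, q = sp.atan(u), sp.atan(v)      # inverse tangent maps infinity to +/- pi/2
T, R = p + q, q - p                # compactified time and radius

# Along the outgoing null ray t = r + 1, let r -> infinity:
print(sp.limit((T + R).subs(t, r + 1), r, sp.oo))   # prints pi

The limiting value T + R = pi is exactly the locus identified below as future null infinity.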
Specifically, future null infinity consists of all points where T = π − R and 0 < R < π, and past null infinity consists of all points where T = R − π and 0 < R < π. From the coordinate restrictions, null infinity is a three dimensional null surface, with a cylindrical topology R × S^2. The construction given here is specific to the flat metric of Minkowski space. However, such a construction generalizes to other asymptotically flat spaces as well. In such scenarios, null infinity still exists as a three dimensional null surface at the boundary of the spacetime manifold, but the manifold's overall structure might be different. For instance, in Minkowski space, all null geodesics begin at past null infinity and end at future null infinity. However, in the Schwarzschild black hole spacetime, the black hole event horizon leads to two possibilities: geodesics may end at null infinity, but may also end at the black hole's future singularity. The presence of null infinity (along with the other regions of conformal infinity) guarantees geodesic completion on the spacetime manifold, where all geodesics either terminate at a true singularity or intersect the boundary at infinity. Other physical applications The symmetries of null infinity are characteristically different from those of the typical regions of spacetime. While the symmetries of a flat Minkowski spacetime are given by the Poincaré group, the symmetries of null infinity are instead given by the Bondi–Metzner–Sachs (BMS) group. The work by Bondi, Metzner, and Sachs characterized gravitational radiation using analyses related to null infinity, whereas previous work such as the ADM framework dealt with characterizations of spacelike infinity. In recent years, interest has grown in studying gravitons on the boundary at null infinity. Using the BMS group, quanta on null infinity can be characterized as massless spin-2 particles, consistent with the quanta of general relativity being gravitons. References General relativity Lorentzian manifolds Theoretical physics Wikipedia Student Program
Null infinity
Physics
1,093
1,739,637
https://en.wikipedia.org/wiki/Ree%20group
In mathematics, a Ree group is a group of Lie type over a finite field constructed by Rimhak Ree from an exceptional automorphism of a Dynkin diagram that reverses the direction of the multiple bonds, generalizing the Suzuki groups found by Suzuki using a different method. They were the last of the infinite families of finite simple groups to be discovered. Unlike the Steinberg groups, the Ree groups are not given by the points of a connected reductive algebraic group defined over a finite field; in other words, there is no "Ree algebraic group" related to the Ree groups in the same way that (say) unitary groups are related to Steinberg groups. However, there are some exotic pseudo-reductive algebraic groups over non-perfect fields whose construction is related to the construction of Ree groups, as they use the same exotic automorphisms of Dynkin diagrams that change root lengths. Ree groups have also been defined over infinite fields of characteristics 2 and 3, and Ree groups of infinite-dimensional Kac–Moody algebras have been introduced. Construction If X is a Dynkin diagram, Chevalley constructed split algebraic groups corresponding to X, in particular giving groups X(F) with values in a field F. These groups have the following automorphisms: any endomorphism σ of the field F induces an endomorphism of the group X(F), and any automorphism of the Dynkin diagram induces an automorphism of the group X(F). The Steinberg and Chevalley groups can be constructed as fixed points of an endomorphism of X(F), where F is the algebraic closure of a field. For the Chevalley groups, the automorphism is the Frobenius endomorphism of F, while for the Steinberg groups the automorphism is the Frobenius endomorphism times an automorphism of the Dynkin diagram. Over fields of characteristic 2 the groups B2(F) and F4(F), and over fields of characteristic 3 the groups G2(F), have an exceptional endomorphism whose square is the endomorphism associated to the Frobenius endomorphism φ of the field F. Roughly speaking, this endomorphism comes from the order 2 automorphism of the Dynkin diagram where one ignores the lengths of the roots. Suppose that the field F has an endomorphism σ whose square is the Frobenius endomorphism: σ^2 = φ. Then the Ree group is defined to be the group of elements g of X(F) on which the exceptional endomorphism and the endomorphism induced by σ agree. If the field F is perfect then these endomorphisms are automorphisms, and the Ree group is the group of fixed points of the resulting involution of X(F). In the case when F is a finite field of order p^k (with p = 2 or 3) there is an endomorphism with square the Frobenius exactly when k = 2n + 1 is odd, in which case it is unique. So this gives the finite Ree groups as subgroups of B2(2^(2n+1)), F4(2^(2n+1)), and G2(3^(2n+1)) fixed by an involution. Chevalley groups, Steinberg groups, and Ree groups The relation between Chevalley groups, Steinberg groups, and Ree groups is roughly as follows. Given a Dynkin diagram X, Chevalley constructed a group scheme over the integers whose values over finite fields are the Chevalley groups. In general one can take the fixed points of an endomorphism α of X(F), where F is the algebraic closure of a finite field, such that some power of α is some power of the Frobenius endomorphism φ. The three cases are as follows: For Chevalley groups, α = φ^n for some positive integer n. In this case the group of fixed points is also the group of points of X defined over a finite field. For Steinberg groups, α^m = φ^n for some positive integers m, n with m dividing n and m > 1. In this case the group of fixed points is also the group of points of a twisted (quasisplit) form of X defined over a finite field. For Ree groups, α^m = φ^n for some positive integers m, n with m not dividing n. 
In practice m = 2 and n is odd. Ree groups are not given as the points of some connected algebraic group with values in a field. They are the fixed points of an order m = 2 automorphism of a group defined over a field of order p^n with n odd, and there is no corresponding field of order p^(n/2) (although some authors like to pretend there is in their notation for the groups). Ree groups of type 2B2 The Ree groups of type 2B2 were first found by Suzuki using a different method, and are usually called Suzuki groups. Ree noticed that they could be constructed from the groups of type B2 using a variation of Steinberg's construction. Ree realized that a similar construction could be applied to the Dynkin diagrams F4 and G2, leading to two new families of finite simple groups. Ree groups of type 2G2 The Ree groups of type 2G2(3^(2n+1)) were introduced by Ree, who showed that they are all simple except for the first one 2G2(3), which is isomorphic to the automorphism group of PSL2(8). A simplified construction of the Ree groups has been given, as the automorphisms of a 7-dimensional vector space over the field with 3^(2n+1) elements preserving a bilinear form, a trilinear form, and a product satisfying a twisted linearity law. The Ree group has order q^3 (q^3 + 1) (q − 1) where q = 3^(2n+1). The Schur multiplier is trivial for n ≥ 1 and for 2G2(3)′. The outer automorphism group is cyclic of order 2n + 1. The Ree group is also occasionally denoted by Ree(q), R(q), or E2*(q). The Ree group 2G2(q) has a doubly transitive permutation representation on q^3 + 1 points, and more precisely acts as automorphisms of an S(2, q+1, q^3+1) Steiner system. It also acts on a 7-dimensional vector space over the field with q elements as it is a subgroup of G2(q). The Sylow 2-subgroups of the Ree groups are elementary abelian of order 8. Walter's theorem shows that the only other non-abelian finite simple groups with abelian Sylow 2-subgroups are the projective special linear groups in dimension 2 and the Janko group J1. These groups also played a role in the discovery of the first modern sporadic group. They have involution centralizers of the form Z/2Z × PSL2(q), and by investigating groups with an involution centralizer of similar form Janko found the sporadic group J1. Their maximal subgroups have since been determined. The Ree groups of type 2G2 are exceptionally hard to characterize. Thompson studied this problem, and was able to show that the structure of such a group is determined by a certain automorphism of a finite field of characteristic 3, and that if the square of this automorphism is the Frobenius automorphism then the group is the Ree group. He also gave some complicated conditions satisfied by this automorphism. Finally Bombieri used elimination theory to show that Thompson's conditions force the square of the automorphism to be the Frobenius automorphism in all but 178 small cases, which were eliminated using a computer by Odlyzko and Hunt. Bombieri found out about this problem after reading an article about the classification whose author suggested that someone from outside group theory might be able to help solve it. A unified account of the solution of this problem by Thompson and Bombieri has since been given. Ree groups of type 2F4 The Ree groups of type 2F4(2^(2n+1)) were introduced by Ree. They are simple except for the first one, 2F4(2), which Tits showed has a simple subgroup of index 2, now known as the Tits group. A simplified construction of these Ree groups has been given, as the symmetries of a 26-dimensional space over the field of order 2^(2n+1) preserving a quadratic form, a cubic form, and a partial multiplication. The Ree group has order q^12 (q^6 + 1) (q^4 − 1) (q^3 + 1) (q − 1) where q = 2^(2n+1). 
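The order formulas quoted above are easy to evaluate for the smallest members of each family. The short Python sketch below (added here for illustration, not part of the original text) reproduces the order 1512 of 2G2(3), matching the order of the automorphism group of PSL2(8), and the order of 2F4(2), which is twice the order of the Tits group:

# Evaluating the quoted order formulas for the smallest Ree groups.
def order_2G2(q):   # q = 3^(2n+1)
    return q**3 * (q**3 + 1) * (q - 1)

def order_2F4(q):   # q = 2^(2n+1)
    return q**12 * (q**6 + 1) * (q**4 - 1) * (q**3 + 1) * (q - 1)

print(order_2G2(3))    # 1512, the non-simple first case 2G2(3)
print(order_2G2(27))   # 10073444472, the first simple case 2G2(27)
print(order_2F4(2))    # 35942400, twice the order of the Tits group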
The Schur multiplier is trivial. The outer automorphism group is cyclic of order 2n + 1. These Ree groups have the unusual property that the Coxeter group of their BN pair is not crystallographic: it is the dihedral group of order 16. Tits showed that all Moufang octagons come from Ree groups of type 2F4. See also List of finite simple groups References External links ATLAS: Ree group R(27) Finite groups
Ree group
Mathematics
1,778
11,306,806
https://en.wikipedia.org/wiki/Phyllosticta%20palmetto
Phyllosticta palmetto is a fungal plant pathogen infecting coconuts. References External links Index Fungorum USDA ARS Fungal Database Fungal plant pathogens and diseases Coconut palm diseases palmetto Taxa named by Benjamin Matlack Everhart Fungus species
Phyllosticta palmetto
Biology
54
31,037,037
https://en.wikipedia.org/wiki/Institute%20of%20Cosmology%20and%20Gravitation%2C%20University%20of%20Portsmouth
The ICG is a research institute at the University of Portsmouth devoted to topics in cosmology, galaxy evolution and gravitation. It has nearly 50 staff, post-docs and students working on subjects from inflation in the early Universe to understanding the stellar populations in galaxies. Research at the Institute is supported by grants from STFC (the UK Science and Technology Facilities Council), the Royal Society and the European Union. History The Institute of Cosmology and Gravitation, or ICG, was established as an independent research department by the University of Portsmouth in January 2002. It was formed from members of the Relativity and Cosmology Group that had been set up by Prof David Matravers, head of the School of Mathematical Studies at the University of Portsmouth, following the arrival of Roy Maartens as a lecturer in 1994. David Wands joined the group as a research fellow in 1996. The group won their first major research grant from PPARC (the UK Particle Physics and Astronomy Research Council) in 1998 to study the evolution of cosmological structure. In the UK Research Assessment Exercise in 2001 (RAE2001), research submitted by the group was awarded a grade 5, recognising the international excellence of their work in applied mathematics, and leading to the establishment of the ICG the next year. Bob Nichol joined the ICG from Carnegie Mellon University (Pittsburgh) in 2004, initiating a research programme in observational cosmology. In the 2008 UK government Research Assessment Exercise (RAE2008), 75% of the ICG research was judged to be "Internationally Excellent" or better (3* or 4* status, with 4* being the highest). This ranking placed the ICG in the top six Applied Maths groups in the UK. In 2009 the Institute moved from offices in Mercantile House to purpose-built rooms for nearly 50 researchers on the top floor of the Dennis Sciama Building. The building was officially opened by the Astronomer Royal and former student of Dennis Sciama, Prof Martin Rees. Roy Maartens was the director of the ICG from January 2002 until October 2010, when he took up a Square Kilometre Array research chair at the University of the Western Cape, South Africa, dividing his time between Portsmouth (30%) and Cape Town (70%). Bob Nichol and David Wands took over as directors, followed by Adam Amara. Since October 2023 the director of the ICG has been David Bacon. Resources The ICG is a member of the following projects: The South East Physics Network Astrophysics collaboration (SEPNet ASTRO) The Sloan Digital Sky Survey (SDSS) The UK National Cosmology Supercomputer Consortium (COSMOS) The Dark Energy Survey (DES) The Low Frequency Array (LOFAR) UK consortium The European Network for Theoretical Astroparticle Physics (ENTApP) which is one of the Networking Activities of the ILIAS Integrated Infrastructures Initiative Galaxy Zoo The Spitzer Extragalactic Representative Volume Survey (SERVS) The UniMass project In addition, the University of Portsmouth is home to the SCIAMA supercomputer SEPnet Computing Infrastructure for Astrophysical Modelling and Analysis External links Home page of the Institute of Cosmology and Gravitation Outreach website of the Institute of Cosmology and Gravitation Home page of the University of Portsmouth References Astronomy institutes and departments Research institutes in the United Kingdom Astronomy in the United Kingdom Educational institutions established in 2002 2002 establishments in England
Institute of Cosmology and Gravitation, University of Portsmouth
Astronomy
701
3,554,435
https://en.wikipedia.org/wiki/National%20Synchrotron%20Light%20Source
The National Synchrotron Light Source (NSLS) at Brookhaven National Laboratory (BNL) in Upton, New York was a national user research facility funded by the U.S. Department of Energy (DOE). Built from 1978 through 1984, and officially shut down on September 30, 2014, the NSLS was considered a second-generation synchrotron. The NSLS experimental floor consisted of two electron storage rings: an X-ray ring and a VUV (vacuum ultraviolet) ring which provided intense, focused light spanning the electromagnetic spectrum from the infrared through X-rays. The properties of this light and the specially designed experimental stations, called beamlines, allowed scientists in many fields of research to perform experiments not otherwise possible at their own laboratories. History Ground was broken for the NSLS on September 28, 1978. The VUV ring began operations in late 1982 and the X-ray ring was commissioned in 1984. In 1986, a second phase of construction expanded the NSLS, adding offices, laboratories and room for new experimental equipment. After 32 years of producing synchrotron light, the final stored beam was dumped at 16.00 EDT on 30 September 2014, and NSLS was officially shut down. During the construction of the NSLS, two scientists, Renate Chasman and George Kenneth Green, invented a special periodic arrangement of magnetic elements (a magnetic lattice) to provide optimized bending and focusing of electrons. The design was called the Chasman–Green lattice, and it became the basis of design for every synchrotron storage ring. Storage rings are characterized by the number of straight sections and bend sections in their design. The bend sections produce more light than the straight sections due to the change in angular momentum of the electrons. Chasman and Green accounted for this in their design by adding insertion devices, known as wigglers and undulators, in the straight sections of the storage ring. These insertion devices produce the brightest light among the sections of the ring and thus beamlines are typically built downstream from them. VUV ring The VUV ring at the National Synchrotron Light Source was one of the first of the 2nd generation light sources to operate in the world. It was initially designed in 1976 and commissioned in 1983. During the Phase II upgrade in 1986, two insertion wigglers/undulators were added to the VUV ring, providing the highest brightness source in the vacuum ultraviolet region until the advent of 3rd generation light sources. X-ray ring The X-ray ring at the National Synchrotron Light Source was one of the first storage rings designed as a dedicated source of synchrotron radiation. The final lattice design was completed in 1978 and the first stored beam was obtained in September 1982. By 1985, the experimental program was in a rapid state of development, and by the end of 1990, the Phase II beamlines and insertion devices were brought into operation. Design Electrons generated the synchrotron radiation that was used at the end stations of beamlines. The electrons were first produced by a 100 keV triode electron gun. These electrons then proceeded through a linear accelerator (linac), which accelerated them to 120 MeV. Next, the electrons entered a booster ring, where their energy was increased to 750 MeV, and were then injected into either the VUV ring or the X-ray ring. In the VUV ring, the electrons were further ramped up to 825 MeV and electrons in the X-ray ring were ramped to 2.8 GeV. 
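For a sense of scale, the beam energies quoted above correspond to highly relativistic electrons. The following rough sketch (an illustration added here, not from the original article) converts them to approximate Lorentz factors using the 0.511 MeV electron rest energy; the quoted figures are treated as kinetic energies, which at these magnitudes is an immaterial distinction:

# Approximate Lorentz factors for the quoted NSLS electron energies.
E_REST_MEV = 0.511   # electron rest energy in MeV

stages = [("electron gun", 0.1), ("linac", 120.0), ("booster", 750.0),
          ("VUV ring", 825.0), ("X-ray ring", 2800.0)]
for name, energy_mev in stages:
    gamma = energy_mev / E_REST_MEV + 1
    print(f"{name:>12}: gamma ~ {gamma:,.0f}")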
Once in a ring, whether VUV or X-ray, the electrons orbited and lost energy as a result of changes in their angular momentum, which caused the emission of photons. These photons were polychromatic ("white") light and were the source of the synchrotron radiation. Before being used at a beamline endstation, the light was collimated and then passed through a monochromator or series of monochromators to select a single, fixed wavelength. During normal operations, the electrons in the storage rings lost energy, so the rings were re-injected every 12 (X-ray ring) and 4 (VUV ring) hours. The difference in time arose from the fact that VUV light has a longer wavelength and thus lower energy, which leads to faster decay, while X-rays have a very short wavelength and high energy. This was the first synchrotron to be controlled using microprocessors. Facilities The VUV ring had 19 beamlines, while the X-ray ring had 58 beamlines. The beamlines were operated and funded in numerous ways. However, since the NSLS was a user facility, any scientist that submitted a proposal could be granted beamtime after peer review. There were two types of beamlines at the NSLS: Facility Beamlines (FBs), which were operated by the NSLS staff and reserved a minimum of 50 percent of their beamtime for users, and Participating Research Team (PRT) beamlines, which were operated and staffed by external groups and reserved at least 25 percent of their beamtime for users. Each X-ray beamline had an endstation called a hutch. These were large enclosures made of radiation-shielding materials, such as steel and leaded glass, to protect the users from the ionizing radiation of the beam. On the X-ray floor, many of the experiments conducted used techniques such as X-ray diffraction, high-resolution powder diffraction (PXRD), XAFS, DAFS (X-ray diffraction anomalous fine structure), WAXS, and SAXS. On the VUV ring, the endstations were usually UHV (ultra-high vacuum) chambers that were used to conduct experiments using methods such as XPS, UPS, LEEM, and NEXAFS. In some beamlines, there were other analytical tools used in conjunction with synchrotron radiation, such as a mass spectrometer, a high-power laser, or a gas chromatography mass spectrometer. These techniques helped supplement and better quantify the experiments carried out at the endstation. Achievements and statistics Nobel prizes In 2003, Roderick MacKinnon won the Nobel Prize in Chemistry for deciphering the structure of the neuronal ion channel. His work was in part conducted at the NSLS. In 2009, Venkatraman Ramakrishnan, Thomas A. Steitz, and Ada E. Yonath won the Nobel Prize in Chemistry for imaging the ribosome with atomic resolution through their use of X-ray crystallography at the NSLS and other synchrotron light sources. User statistics The National Synchrotron Light Source hosted more than 2,200 users from 41 U.S. states and 30 other countries in 2009. In 2009, there were 658 journal publications and 764 total publications including journal publications, books, patents, theses, and reports. NSLS-II The NSLS was permanently shut down on September 30, 2014, after more than 30 years of service. It was replaced by the NSLS-II, which was designed to be 10,000 times brighter. 
See also Center for Functional Nanomaterials List of synchrotron radiation facilities Synchrotron radiation Synchrotron United States Department of Energy national laboratories References External links Original NSLS web page BNL: National Synchrotron Light Source II (NSLS-II) BNL Photon Sciences: About NSLS-II Brookhaven National Laboratory – a passion for discovery Lightsources.org Brookhaven National Laboratory Particle physics facilities Synchrotron radiation facilities
National Synchrotron Light Source
Materials_science
1,577
2,420,418
https://en.wikipedia.org/wiki/Color%20of%20water
The color of water varies with the ambient conditions in which that water is present. While relatively small quantities of water appear to be colorless, pure water has a slight blue color that becomes deeper as the thickness of the observed sample increases. The hue of water is an intrinsic property and is caused by selective absorption and scattering of blue light. Dissolved elements or suspended impurities may give water a different color. Intrinsic color The intrinsic color of liquid water may be demonstrated by looking at a white light source through a long pipe that is filled with purified water and closed at both ends with a transparent window. The light cyan color is caused by weak absorption in the red part of the visible spectrum. Absorptions in the visible spectrum are usually attributed to excitations of electronic energy states in matter. Water is a simple three-atom molecule, H2O, and all its electronic absorptions occur in the ultraviolet region of the electromagnetic spectrum and are therefore not responsible for the color of water in the visible region of the spectrum. The water molecule has three fundamental modes of vibration. Two stretching vibrations of the O–H bonds in the gaseous state of water occur at ν1 = 3650 cm−1 and ν3 = 3755 cm−1. Absorption due to these vibrations occurs in the infrared region of the spectrum. The absorption in the visible spectrum is due mainly to an overtone combination of these stretching vibrations at 14,318 cm−1, which is equivalent to a wavelength of 698 nm. In the liquid state these vibrations are red-shifted by hydrogen bonding, resulting in red absorption at 740 nm, with other overtone combinations giving red absorption at 660 nm. The absorption curve for heavy water (D2O) is of a similar shape, but is shifted further towards the infrared end of the spectrum, because the vibrational transitions have a lower energy. For this reason, heavy water does not absorb red light and thus large bodies of D2O would lack the characteristic cyan color of the more commonly found light water (H2O). Absorption intensity decreases markedly with each successive overtone, resulting in very weak absorption for the third overtone. For this reason, the pipe needs to have a length of a meter or more and the water must be purified by microfiltration to remove any particles that could produce Mie scattering. Color of lakes and oceans Lakes and oceans appear cyan for several reasons. One is that the surface of the water reflects the color of the sky, which ranges from cyan to light azure. It is a common misconception that this reflection is the sole reason bodies of water appear cyan, though it can contribute. This contribution usually makes the body of water appear a shade of azure rather than cyan, depending on how bright the sky is. Water in swimming pools with white-painted sides and bottom will appear cyan, even in indoor pools where there is no sky to be reflected. The deeper the pool, the more intense the cyan color becomes. Some of the light hitting the surface of the ocean is reflected but most of it penetrates the water surface, interacting with water molecules and other substances in the water. Water molecules can vibrate in three different modes when they interact with light. The red, orange, and yellow wavelengths of light are absorbed so the remaining light seen is composed of green, cyan, and blue wavelengths. This is the main reason the ocean's color is cyan. The relative contribution of reflected skylight and the light scattered back from the depths is strongly dependent on observation angle. 
Scattering from suspended particles also plays an important role in the color of lakes and oceans, causing the water to look greener or bluer in different areas. A few tens of meters of water will absorb all light, so without scattering, all bodies of water would appear black. Because most lakes and oceans contain suspended living matter and mineral particles, light from above is scattered and some of it is reflected upwards. Scattering from suspended particles would normally give a white color, as with snow, but because the light first passes through many meters of cyan-colored liquid, the scattered light appears cyan. In extremely pure water—as is found in mountain lakes, where scattering from particles is very low—the scattering from water molecules themselves also contributes a cyan color. Diffuse sky radiation due to Rayleigh scattering in the atmosphere along one's line of sight gives distant objects a cyan or light azure tint. This is most commonly noticed with distant mountains, but also contributes to the cyanness of the ocean in the distance. Color of glaciers Glaciers are large bodies of ice and snow formed in cold climates by processes involving the compaction of fallen snow. While snowy glaciers appear white from a distance, the long path lengths of internal reflected light cause glaciers to appear a deep blue when viewed up close and when shielded from direct ambient light. Relatively small amounts of regular ice appear white because plenty of air bubbles are present, and also because small quantities of water appear to be colorless. In glaciers, on the other hand, the pressure causes the air bubbles, trapped in the accumulated snow, to be squeezed out, increasing the density of the created ice. Large quantities of water appear cyan; therefore a large piece of compressed ice, or a glacier, would also appear cyan. Color of water samples Dissolved and particulate material in water can cause it to appear more green, tan, brown, or red. For instance, dissolved organic compounds called tannins can result in dark brown colors, or algae floating in the water (particles) can impart a green color. Color variations can be measured with reference to a standard color scale. Two examples of standard color scales for natural water bodies are the Forel-Ule scale and the Platinum-Cobalt scale. For example, slight discoloration is measured against the Platinum-Cobalt scale in Hazen units (HU). The color of a water sample can be reported as apparent color or as true color. Apparent color is the color of a body of water being reflected from the surface of the water, and consists of color from both dissolved and suspended components. Apparent color may also be changed by variations in sky color or the reflection of nearby vegetation. True color is measured after a sample of water has been collected and purified (either by centrifuging or filtration). Pure water tends to look cyan in color, and a sample can be compared to pure water using a predetermined color standard or by comparing readings from a spectrophotometer. Testing for color is quick and easy and often reflects the amount of organic material in the water, although certain inorganic components like iron or manganese can also impart color. Water color can reveal physical, chemical and bacteriological conditions. In drinking water, green can indicate copper leaching from copper plumbing and can also represent algae growth. 
Blue can also indicate copper, or might be caused by syphoning of industrial cleaners in the tank of commodes, commonly known as backflowing. Reds can be signs of rust from iron pipes or airborne bacteria from lakes, etc. Black water can indicate growth of sulfur-reducing bacteria inside a hot water tank set to too low a temperature. This usually has a strong sulfur or rotten egg (H2S) odor and is easily corrected by draining the water heater and raising the temperature setting. Sulfate-reducing bacteria are not known to cause issues in cold water plumbing. Learning the water impurity indication color spectrum can make identifying and solving cosmetic, bacteriological and chemical problems easier. Water quality and color The presence of color in water does not necessarily indicate that the water is not drinkable. Water with high water clarity is generally more cyan in color due to low concentrations of particles and/or dissolved substances. Color-causing particulate substances can be easily removed by filtration. Color-causing dissolved substances such as tannins are only toxic to animals in large concentrations. Color from dissolved substances is not removed by typical water filters; however, the use of coagulants may succeed in trapping the color-causing compounds within the resulting precipitate. Other factors can affect the color seen: Particles and solutes can absorb light, as in tea or coffee. Green algae in rivers and streams often lend a blue-green color. The Red Sea has occasional blooms of red Trichodesmium erythraeum algae. Particles in water can scatter light. The Colorado River is often muddy red because of suspended reddish silt in the water—this gives the river its name, from the Spanish for "colored red". Some mountain lakes and streams with finely ground rock, such as glacial flour, are turquoise. Light scattering by suspended matter is required in order that the blue light produced by water's absorption can return to the surface and be observed. Such scattering can also shift the spectrum of the emerging photons toward the green, a color often seen when water laden with suspended particles is observed. Color names Various cultures divide the semantic field of colors differently from the English language usage, and some do not distinguish between blue and green in the same way. An example is Welsh, in which glas can mean blue or green, or Vietnamese, in which xanh likewise can mean either. Conversely, in Russian and some other languages, there is no single word for blue, but somewhat different words for light blue (голубой, goluboy) and dark blue (синий, siniy). Other color names assigned to bodies of water are sea green and ultramarine blue. Unusual oceanic colorings have given rise to the terms red tide and black tide. The Ancient Greek poet Homer uses the epithet "wine-dark sea"; in addition, he also describes the sea as "grey". William Ewart Gladstone suggested that this is due to the Ancient Greeks classifying colors primarily by luminosity rather than hue, while others believe Homer was color blind. The Ancient Indian wisdom of Veda considers water's life-giving contributions a part of the divine. It recognizes water as an ancient god, Varuna, and the color of Varuna is described as blue. In the Gayatri associated with Varuna, the phrase "Neela purusha" comes in the second line, which calls the water deity the blue one. 
References Further reading External links Water Color, USGS Water Science School What color is water?—WebExhibits' Causes of Colour Color Shades of blue Water chemistry Water pollution Water physics Water quality indicators
Color of water
Physics,Chemistry,Materials_science,Environmental_science
2,092
11,421,679
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20R79
In molecular biology, Small nucleolar RNA R79 is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell, which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and is also often referred to as a guide RNA. snoRNA R79 belongs to the C/D box class of snoRNAs, which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs. Plant snoRNA R79 was identified in a screen of Arabidopsis thaliana. References External links Small nuclear RNA
Small nucleolar RNA R79
Chemistry
198
37,977,795
https://en.wikipedia.org/wiki/Lorden%27s%20inequality
In probability theory, Lorden's inequality is a bound for the moments of overshoot for a stopped sum of random variables, first published by Gary Lorden in 1970. Overshoots play a central role in renewal theory. Statement of inequality Let X1, X2, ... be independent and identically distributed positive random variables and define the sum Sn = X1 + X2 + ... + Xn. Consider the first time Sn exceeds a given value b and at that time compute Rb = Sn − b. Rb is called the overshoot or excess at b. Lorden's inequality states that the expectation of this overshoot is bounded as E[Rb] ≤ E[X1^2] / E[X1] for all b ≥ 0. Proof Three proofs are known, due to Lorden, Carlsson and Nerman, and Chang. See also Wald's equation References Stochastic processes Probabilistic inequalities
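The bound is easy to probe numerically. The sketch below (an illustration added here, not part of the original text) uses exponentially distributed summands with rate 1, for which E[X] = 1 and E[X^2] = 2, so Lorden's bound on the expected overshoot is 2; by memorylessness the true expected overshoot is 1, comfortably below the bound:

# Monte Carlo illustration of Lorden's inequality with Exp(1) summands.
import random

def overshoot(b):
    s = 0.0
    while s <= b:                      # stop the first time the sum exceeds b
        s += random.expovariate(1.0)   # X_i ~ Exp(1)
    return s - b

b, trials = 10.0, 100_000
estimate = sum(overshoot(b) for _ in range(trials)) / trials
print(estimate)   # about 1.0, below the Lorden bound E[X^2]/E[X] = 2.0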
Lorden's inequality
Mathematics
179
51,678,457
https://en.wikipedia.org/wiki/O%27Connell%20effect
The O'Connell effect is an asymmetry in the photometric light curve of certain close eclipsing binary stars. It was named after the astronomer Daniel Joseph Kelly O'Connell, SJ, of Riverview College in New South Wales, who in 1951 studied this phenomenon and distinguished it from the so-called periastron effect described by earlier authors, as it does not necessarily appear near the periastron, when tidal effects and an increase in mutual radiation may cause an increase in luminosity. The effect The out-of-eclipse brightness maxima of some binary stars are unequally high. This is contrary to expectations that the observed luminosity of an eclipsing binary should be the same when its components switch positions every half period. The maximum following the primary minimum is nearly always brighter than the preceding one. This is called the positive O'Connell effect; the reverse case is referred to as the negative O'Connell effect. The difference increases with the ellipticity of the stars, and the differences in their sizes and densities. Also, spectral differences have been observed between subsequent maxima. Attempts at explanation In some systems where the phenomenon has been observed, such as in CG Cygni, RT Lacertae, XY Ursae Majoris, or YY Eridani, the luminosity difference between subsequent maxima has been found to be variable, in others relatively stable. Furthermore, it has been observed in a variety of configurations, such as over-contact, semi-detached, and near-contact systems alike. These factors make an explanation difficult and suggest that various mechanisms may be responsible for the effect. Several reasons have thus been proposed: an asymmetric distribution of starspots, impacts of one-way gas streams between the components of the binary system, or the flow of circumstellar matter, asymmetrically deflected due to Coriolis forces. References External links A Possible Explanation of the O'Connell Effect in Close Binary Stars - Unsolved problems in astronomy Periodic phenomena Optical phenomena
O'Connell effect
Physics,Astronomy
477
57,122,799
https://en.wikipedia.org/wiki/Nikos%20Kyrpides
Nikos Kyrpides (Greek: Νίκος Κυρπίδης) is a Greek-American bioscientist who has worked on the origins of life, information processing, bioinformatics, microbiology, metagenomics and microbiome data science. He is a senior staff scientist at the Berkeley National Laboratory, head of the Prokaryote Super Program and leads the Microbiome Data Science program at the US Department of Energy Joint Genome Institute. Education Kyrpides was born in Serres, Greece, where he studied biology at the Aristotle University of Thessaloniki and received his PhD in molecular biology and biotechnology from the University of Crete. He pursued postdoctoral studies in microbiology with Carl Woese at the University of Illinois at Urbana-Champaign and in bioinformatics with Ross Overbeek at the Argonne National Laboratory. From 1999 to 2004 Kyrpides worked in the biotech industry in Chicago, where he led the development of genome analysis and bioinformatics. He joined the United States Department of Energy Joint Genome Institute (JGI) in 2004 to lead the Genome Biology Program and develop the data management and comparative analysis platforms for microbial genomes and metagenomes. Kyrpides became the Metagenomics Program head in 2010 and founded the Prokaryotic Super Program in 2011, which he still leads with the Microbiome Data Science Group. Research Kyrpides's early work focused on the origins and evolution of the genetic code. In collaboration with Christos Ouzounis, he developed a series of hypotheses for the transfer of information from proteins to nucleic acids known as reverse interpretation. With the advent of genomics, Kyrpides turned his interest to the study and understanding of the last universal common ancestor. With Ouzounis he coined the acronym "LUCA" at a conference organized by Patrick Forterre at Les Treilles, France, and performed some of the first comparative genome analysis to predict the gene content of the LUCA. Kyrpides's work on the information processing systems revealed several previously-unsuspected relationships, suggesting new models for the evolution of those processes. He identified previously-undetected relationships between the eukaryotic and bacterial translation machinery, suggesting that the rudiments of translation initiation would have been present at the universal-ancestor stage. Kyrpides's work on the evolution of transcription helped change the understanding of the nature and organization of archaeal transcription machinery, which (at the time) was that transcription in Archaea was strictly similar to that in eukaryotes. Kyrpides and Ouzounis demonstrated the parallel existence of a large number of bacterial-type transcription factors in archaeal genomes. He led the development of several pioneering data-management systems in microbial genomics and metagenomics, which are widely used in the scientific community (with several thousand users worldwide). These include systems for data management and curation of genome projects and their associated metadata, such as the Genomes OnLine Database (GOLD), and comparative-genomics systems such as ERGO and the Integrated Microbial Genomes (IMG). Kyrpides's current research focuses on microbiome research, with an emphasis on microbiome data science. This includes the understanding of structure and function of various microorganisms and microbial communities and the elucidation of the evolutionary dynamics shaping the microbial genomes. 
To accomplish that, his group is developing novel computational methods for enabling large-scale comparative analysis and mining and visualizing big data. He proposed and published the first study on the use of standard benchmarking data for the evaluation of method accuracy in metagenomics. This approach has become the standard in the field. Some of Kyrpides's recent research in microbiome data science includes the exploration of Earth’s virome, the identification of new bacterial phyla, the prediction of novel folds using metagenomic sequences, and the discovery and characterization of new protein families from microbiome data. International initiatives Kyrpides began the MikroBioKosmos (MBK) initiative in Greece in 2007, with the goal of exploring and commercially using microbial national resources. MikroBioKosmos became a scientific society, with Kyrpides its first president. He is a founding member of two bioinformatics societies in Greece: the Hellenic Society of Computational Biology and Bioinformatics (HSCBB) in 2010 and Hellenic Bioinformatics in 2016. Kyrpides is also a board member of the international Genomic Standards Consortium (GSC), which aims to enable genomic data integration, discovery and comparison with international, community-driven standards. He began the Genomic Encyclopedia of Bacteria and Archaea (GEBA) project at the JGI and the Microbial Earth Project with Hans-Peter Klenk, Philip Hugenholtz and Jonathan Eisen in 2007, with the goal of improving the genome characterization of phylogenetically-diverse cultured microbes. The latter project evolved into an international effort to sequence all the type strains of bacteria and archaea, through a series of GEBA 1,000-genome projects. The rapid growth of microbial genome sequences at the end of 2010, without a parallel venue for describing those projects in a standardized manner, led to the need for a new scientific forum which would be a clearinghouse for capturing and presenting this information to the community. This idea led Kyrpides, George Garrity and Dawn Field to launch a new scientific journal: Standards in Genomic Sciences (SIGS), which became part of BioMed Central. Kyrpides proposed the development of a Microbial Environmental Genomics Administration in 2009, analogous to NASA, for the study and exploration of the most abundant life on the planet. In 2016, following the enormous growth of microbiome data, he outlined the need for a common infrastructure for microbiome data analysis and proposed the development of a National Microbiome Data Center (NMDC), later renamed the National Microbiome Data Collaborative. With Emiley Eloe-Fadrosh, Kyrpides organized the first NMDC workshop to launch this initiative at the Joint Genome Institute. This was followed by additional workshops in 2017 hosted by the American Society for Microbiology to promote the initiative. 
He is an elected fellow of the American Academy of Microbiology (AAM) (2014), and has been on the Thomson Reuters list of the world’s most frequently-cited scientists since 2014. A bacterial genus (Kyrpidia) was named after Kyrpides in 2011. In 2017, he received an honorary doctorate from the Aristotle University of Thessaloniki. References American microbiologists Greek biologists University of Crete alumni Aristotle University of Thessaloniki alumni Living people Bioinformatics 1963 births People from Serres
Nikos Kyrpides
Engineering,Biology
1,496
1,788,467
https://en.wikipedia.org/wiki/Falcarinol
Falcarinol (also known as carotatoxin or panaxynol) is a natural pesticide and fatty alcohol found in carrots (Daucus carota), Panax ginseng and ivy. In carrots, it occurs in a concentration of approximately 2 mg/kg. As a toxin, it protects roots from fungal diseases, such as liquorice rot that causes black spots on the roots during storage. The compound must be kept frozen during storage because it is sensitive to light and heat. Chemistry Falcarinol is a polyyne with two carbon-carbon triple bonds and two double bonds. The double bond at the carbon 9 position has cis stereochemistry; it was introduced by desaturation, which requires oxygen and NADPH (or NADH) cofactors, and it creates a bend in the molecule. It is structurally related to oenanthotoxin and cicutoxin. Biological effects Falcarinol is an intense irritant that can cause allergic reactions and contact dermatitis. It was shown that falcarinol acts as a covalent cannabinoid receptor type 1 inverse agonist and blocks the effect of anandamide in keratinocytes, leading to pro-allergic effects in human skin. Normal consumption of carrots has no toxic effect in humans. Biosynthesis Starting with oleic acid (1), which possesses a cis double bond at the carbon 9 position (introduced by desaturation) and is bound to a phospholipid (-PL), a bifunctional desaturase/acetylenase system acted with oxygen (a) to introduce a second cis double bond at the carbon 12 position, forming linoleic acid (2). This step was then repeated to turn the cis double bond at the carbon 12 position into a triple bond (also called an acetylenic bond), forming crepenynic acid (3). Crepenynic acid was reacted with oxygen (b) to form a further cis double bond at the carbon 14 position (conjugated position), leading to the formation of dehydrocrepenynic acid (4). Allylic isomerization (c) converted the cis double bond at the carbon 14 position into a triple bond (5) and formed the more favored trans (E) double bond at the carbon 17 position (6). Finally, after forming the intermediate (7) by decarboxylation (d), falcarinol (8) was produced by hydroxylation (e) at the carbon 16 position, which introduced the (R)-configuration to the system. See also Falcarindiol References Plant toxins Fungicides Neurotoxins Alkene derivatives Secondary alcohols Conjugated diynes
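As a rough illustration of the constitution described above (a C17 chain carrying two triple bonds, two double bonds and a secondary alcohol), the following RDKit sketch uses a SMILES string constructed here from that description; it is an assumption of this example rather than a value taken from the article, and should be checked against a structure database before being relied on:

# Hedged sketch: falcarinol's constitution as described in the text,
# rendered as a hand-written SMILES (stereocentre at C3 omitted).
from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

smiles = "C=CC(O)C#CC#CC/C=C\\CCCCCCC"   # assumed structure, verify independently
mol = Chem.MolFromSmiles(smiles)
print(rdMolDescriptors.CalcMolFormula(mol))   # C17H24O
print(round(Descriptors.MolWt(mol), 1))       # about 244.4 g/mol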
Falcarinol
Chemistry,Biology
585
1,118,130
https://en.wikipedia.org/wiki/Eye%20relief
The eye relief of an optical instrument (such as a telescope, a microscope, or binoculars) is the distance from the last surface of an eyepiece within which the user's eye can obtain the full viewing angle. If a viewer's eye is outside this distance, a reduced field of view will be obtained. The calculation of eye relief is complex, though generally, the higher the magnification and the larger the intended field of view, the shorter the eye relief. Eye relief and exit pupil The eye relief property should not be confused with the exit pupil width of an instrument: that is best described as the width of the cone of light that is available to the viewer at the exact eye relief distance. An exit pupil larger than the observer's pupil wastes some light, but allows for some fumbling in side-to-side movement without vignetting or clipping. Conversely, an exit pupil smaller than the eye's pupil will have all of its available light used, but since it cannot tolerate much side-to-side error in eye alignment, will often result in a vignetted or clipped image. The exit pupil width of, say, a binocular can be calculated as the objective diameter divided by the magnification, and gives the width of the exit cone of light in the same dimensions as the objective. For example, a 10 × 42 binocular has a 4.2 mm wide exit cone, which is fairly comfortable for general use, whereas doubling the magnification with a zoom feature to 20 × results in a much more critical 2.1 mm exit cone. Eye relief distance can be particularly important for eyeglass wearers and shooters. The eye of an eyeglass wearer is typically further from the eyepiece, so that user needs a longer eye relief in order to still see the entire field of view. A simple practical test as to whether or not spectacles limit the field of view can be conducted by viewing first without spectacles and then again with them. Ideally there should be no difference in the field. For a shooter, eye relief is also a safety consideration. If the eye relief of a telescopic sight is too short, leaving the eye close to the sight, the firearm's recoil can force the optic's eyepiece to hit and cut into the skin around the shooter's eye, leaving a curved scarring laceration on the medial end of the supraorbital ridge and the eyebrow. This is frequently called a "scope bite", or the "idiot cut", due to the obvious and long-lasting nature of such a mistake. Typical eye relief distances for telescopic sights are often between one and four inches (25 to 100 mm), as opposed to the much shorter 15 to 17 mm for typical binoculars. The exit pupil widths in rifle sights are designed to be larger than the eye's pupil, to allow for a range of motion without vignetting. 
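The exit-pupil rule of thumb quoted above (objective diameter divided by magnification) is simple enough to state as a one-line function; the figures below reproduce the 10 × 42 and 20 × 42 examples from the text (the function name is illustrative):

# Exit pupil width in millimetres: objective diameter / magnification.
def exit_pupil_mm(objective_mm, magnification):
    return objective_mm / magnification

print(exit_pupil_mm(42, 10))   # 4.2 mm for a 10 x 42 binocular
print(exit_pupil_mm(42, 20))   # 2.1 mm when zoomed to 20 x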
This distance is referred to as the back vertex distance, or BVD on a prescription. Since this property affects the available eye relief of any binocular or other optics used, (telescopes, microscopes, etc.) it should be borne in mind at the eye testing stage. The matter should be discussed with the optician, though the only realistic way of testing the comfort is to try the optical device while wearing the usual spectacles. The optician can however make sure that the BVD is no worse in the new glasses than in the old ones that were used during evaluation. Adding prescription lenses In the event that a spectacle wearer cannot obtain the eye relief that they require, some cameras and microscopes allow prescription lenses to be fitted onto their eyepieces. In this way, the user can temporarily dispense with glasses in favor of the lens mounted on the optics. Although this method does not afford good incidental vision for the field around them, it might still be of use to some. Optics
Eye relief
Physics,Chemistry
953
6,654,559
https://en.wikipedia.org/wiki/Data%20Interchange%20Standards%20Association
The Data Interchange Standards Association (DISA) was an organization that supported various other organizations that were, for the most part, responsible for the development of cross-industry electronic business interchange standards. DISA served as the Secretariat for ASC X12 and its X12 EDI and XML standards development process. As of January 2016, DISA no longer exists. The Accredited Standards Committee (ASC) X12 develops and maintains the most widely implemented EDI standards. These standards interface with a multitude of e-commerce technologies and serve as the premier tool for integrating e-commerce applications. Through its standards and its active participation in emerging and relevant technical initiatives (XML, ebXML), the X12 Committee fosters cross-industry consensus and sets the norm for more effective data exchange. See also American National Standards Institute Electronic Data Interchange External links Accredited Standards Committee X12 Information technology organizations based in North America Standards organizations in the United States Data interchange standards
Data Interchange Standards Association
Technology
199