Dataset columns (with observed value or length ranges):
id: int64 (39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (lengths 0 to 30)
39,941,590
https://en.wikipedia.org/wiki/NGC%205668
NGC 5668 is a nearly face-on spiral galaxy, visual magnitude about 11.5, located about 81 million light years away in the constellation Virgo. It was discovered on 29 April 1786 by William Herschel. NGC 5668 is a member of the NGC 5638 Group of galaxies, itself one of the Virgo III Groups strung out to the east of the Virgo Supercluster of galaxies. In addition, A. M. Garcia listed NGC 5668 in the 31-member NGC 5746 galaxy group (also known as LGG 386). As seen from the Earth, it is inclined by an angle of 18° to the line of sight along a position angle of 145°. The morphological classification in the De Vaucouleurs system is SA(s)d, indicating a pure spiral structure with loosely wound arms. However, optical images of the galaxy indicate the presence of a weak bar structure spanning an angle of 12″ across the nucleus. There is a dwarf galaxy located to the southeast of NGC 5668, and the two may be gravitationally interacting. Supernovae Three supernovae have been observed in this galaxy: SN 1952G (type unknown, mag. 17.9) was discovered by Fritz Zwicky on 18 April 1952. SN 1954B (type Ia, mag. 12.3) was discovered by Paul Wild on 4 May 1954. SN 2004G (type II, mag. 17.2) was discovered by Reiki Kushida on 19 January 2004. It was initially imaged at 43″ to the west and 12.5″ to the south of the galaxy core. High-velocity clouds of neutral hydrogen have been observed in NGC 5668, which may have their origin in supernova explosions and strong stellar winds. Gallery See also List of NGC objects (5001–6000) References External links Virgo (constellation) Unbarred spiral galaxies Discoveries by William Herschel
NGC 5668
[ "Astronomy" ]
443
[ "Virgo (constellation)", "Constellations" ]
39,943,982
https://en.wikipedia.org/wiki/Ethernet%20train%20backbone
An Ethernet train backbone (ETB) is a train communication network based on Ethernet technology, standardised in IEC 61375-2-5. Like the Wire Train Bus (WTB), it is a train-wide communication backbone. Notes and references See also Ethernet consist network (ECN) External links Industrial Ethernet Network topology Networking standards Ethernet standards
Ethernet train backbone
[ "Mathematics", "Technology", "Engineering" ]
73
[ "Networking standards", "Network topology", "Computer standards", "Computer networks engineering", "Topology", "Industrial Ethernet" ]
39,944,840
https://en.wikipedia.org/wiki/LG%20G2
The LG G2 is an Android smartphone developed by LG Electronics. Serving as a successor to 2012's Optimus G and the 2013 Optimus G Pro phablet, the G2 was unveiled at a press event in New York City on 7 August 2013, and first released in September 2013. The G2 is primarily distinguished by software features that LG billed would "learn" from users, a high-fidelity sound system designed to produce higher quality audio, a 1080p IPS LCD screen with technology that the company claimed would improve energy efficiency and reduce the size of the bezel around it, along with the unique placement of its power and volume keys—eschewing their typical location on the edge of a smartphone by placing them on the rear below the camera lens. The device was released to mostly positive reception; the G2 was universally praised for LG's efforts to produce a more seamless and compact design that nonetheless maximized screen size, its high performance, the quality of its display and primary camera, along with its long-lasting battery. Critics were divided on certain aspects of its design, such as its rear button layout, and its plastic chassis—which was panned for closely resembling recent Samsung Galaxy products and being a regression from the glass-based chassis of the Optimus G. Similarly, while its software and user interface were praised for their usability and large number of customization options, some reviewers felt that the software suffered from feature creep and contained notable usability regressions in comparison to "stock" Android. Sales of the G2 met LG's estimates; in late December 2013, a Korean news agency reported that at least 3 million units of the G2 had been sold worldwide. Release The G2 was first unveiled during a press event at New York City's Jazz at Lincoln Center on 7 August 2013. LG announced that it would begin to release the G2 globally on over 130 carriers within the next two months, in markets such as South Korea and the United States. 
To promote the G2, LG attempted to hold a city-wide scavenger hunt in Seoul, South Korea; during a press event at a local park on 9 August 2013, helium balloons (tying in with its "G in the Cloud" advertising campaign) were released that contained 100 vouchers. After the vouchers were scattered through the city by the deflating balloons, LG planned to give away G2s to those who found the vouchers. While only members of the media were formally invited, the event was disrupted by members of the public who learned about the promotion on the internet. As the balloons were released, attendees attempted to use BB guns and other makeshift tools to retrieve them. The scuffle that broke out over the balloons left 20 people injured; following the incident, LG apologized and stated that it would pay for the medical treatment of those injured in the event. LG also called off plans to hold similar events in other South Korean cities. The G2 was first released in the United States by Verizon on 12 September 2013, and released by AT&T the following day. T-Mobile released the G2 on 25 September, while Sprint released it on 8 November 2013. The G2 was released in Canada on 27 September 2013, across six national and regional carriers, including Bell, Rogers, SaskTel, Telus, Vidéotron, and Wind Mobile. Specifications Hardware The G2's exterior consists of a polycarbonate shell—unlike its predecessor, which used a glass-based construction. The rear cover is adorned with a subtle pattern resembling carbon fiber. The G2's volume and power keys are located directly below the camera on the rear of the device. The power button contains an LED lamp, which can be used as a notification light. The positioning of the buttons on the rear deviates from the majority of smartphones, where they are located on the bezel (side edge) of the phone. 
LG argued that buttons located on the bezel were harder to reach on larger smartphones, and made it more likely for users to drop their phone when adjusting the volume during a call. As such, the G2's buttons are instead located where the index finger would normally lie when the phone is held. Alongside the power button, the G2 can also be powered on by double-tapping on the screen, and turned off by double-tapping on the status bar or a blank area on the home screen, a feature branded as "KnockOn". When the phone is off, the volume keys can also be used to launch directly into the camera or QuickMemo applications by holding them down. The G2 is powered by a 2.26 GHz quad-core Snapdragon 800 processor with 2 GB of RAM and support for LTE or LTE Advanced networks where available. The G2 is equipped with a 5.2-inch 1080p IPS display; to reduce the size of its screen bezel, wiring for touchscreen components is routed both above and below the screen itself. To help conserve battery life, the G2 also implements a panel self-refresh system; if the display is showing static content, it is refreshed solely from framebuffer memory (referred to as "graphics RAM"), allowing other display components (such as the GPU) to become idle. LG claimed that this system would allow the screen to use 26% less power than comparable displays on other smartphones. The G2's audio hardware and software are optimized to support 24-bit/192 kHz audio; during LG's press event, ringtones recorded by the Vienna Boys' Choir (which are also bundled with the device) were used to demonstrate the high quality audio from its internal speaker. The G2 also includes a 13-megapixel primary rear-facing camera with optical image stabilization, and an infrared emitter which allows it to serve as a universal remote with the accompanying QuickRemote app. The G2 comes with either 16 or 32 GB of non-expandable storage, and includes a non-removable 3,000 mAh battery. 
The Verizon Wireless model of the G2 offers support for Qi wireless charging. Unlike the models released in other countries, South Korean models of the G2 have a removable back cover, a MicroSD card slot for expanded storage, and a removable 2,610 mAh battery. Software The G2 ships with Android 4.2.2 "Jelly Bean" with a custom interface and software. It contains a number of features that are designed to "learn" from users by predicting future actions, and allow for flexibility and customization. The G2 retains features from previous LG models such as the Optimus G and G Pro, including QuickMemo (which allows users to write notes on top of a screenshot), QuickRemote (a feature which allows the device to serve as a universal remote), QSlide pop-up apps, and Voice Mate. New features introduced by the G2 include TextLink, which analyzes text messages to detect content such as addresses and times that can be passed to other apps (such as the calendar, a note, or Google Maps); a pop-up menu of relevant apps triggered when plugging in headphones or a USB cable (Plug & Pop); the ability to answer a phone call by holding the phone to the user's head (AnswerMe); Slide Aside, a multitasking feature which allows users to "slide" away apps onto cards with a three-finger gesture; the Clip Tray (which collects content that had been copied to the clipboard); and Guest Mode. The G2 uses on-screen buttons; users can change their background color (which includes black and white options, either solid colored or with a gradient), customize the order of the buttons, or add additional buttons for opening QuickMemo or the notification shade. The G2's default music player supports the playback of WAV and FLAC files with 24-bit/192 kHz audio. The Time Catch Shot camera feature helps users avoid missing a moment by capturing photos while the camera is idle and keeping up to five in memory. 
An update to Android 4.4.2 "KitKat" was released in South Korea in November 2013, and for international models in March 2014. LG touted a "noticeable speed boost" over Jelly Bean, along with battery life improvements, user interface tweaks, and other improvements brought by KitKat. A further update added a new security feature known as "Knock Code" (as introduced by the LG G Pro 2), which allows users to unlock their device by tapping quadrants of the screen in a sequence. An update to Android 5.0.1 "Lollipop" was first released in South Korea in January 2015. Alongside other internal improvements, it introduces the refreshed "G UI" first introduced by the LG G3, which itself received improvements to match the new visual style and features of Lollipop. The update was also released for the international model and the U.S. carrier versions, but was not released in Canada. Model variants Several different model variants of the G2 are sold, with most variants differing only in support for regional network types and bands. However, the South Korean version features a removable (but smaller) battery and a MicroSD slot, while the U.S. Verizon Wireless version includes Qi wireless charging, but has a noticeably different rear cover design with different designs for the buttons and camera, and replaces the solid black option for the button background with a pink pattern option. In January 2014, in honor of the Chinese New Year, LG released two "limited edition" models of the G2 in selected Asian markets; available in red or gold colors, the limited edition models featured textured casings instead of the glossier plastic used normally by the G2. Accessories The QuickWindow case accessory for the G2 was unveiled on 30 July 2013—prior to the unveiling of the phone itself. The QuickWindow case consists of a plastic shell with a polyurethane flip cover. 
The cover contains a rounded rectangular window that exposes a portion of the display, allowing a number of functions to be accessed without opening the cover, including notifications, a customizable clock, and a music player. Reception Pre-release While complimenting its performance and other unique features, The Verge believed that LG was trying too hard to compete with the Samsung Galaxy S4 by closely imitating its design, specifications, and emphasis on features instead of differentiating itself through further innovations. TechRadar also praised its performance and display quality, but considered the design of the G2 itself to be "dull", and believed that while offering many options for advanced users, LG's skinned version of Android 4.2 was too complex for "casual" users (especially noting its notification pull-down, where roughly half the screen is taken up by options). Critical reception The LG G2 was released to mostly positive reception. In December 2013, the British magazine Stuff named the G2 its 2013 Phone of the Year and Gadget of the Year, reporting that "LG has previously struggled to make an impact on the smartphone market, but the LG G2 is as good as smartphones get in 2013, and shows the established names how it should be done." The G2 was considered by critics to be well-built, but was criticized for replacing the glass-based construction of the Optimus G with a plainer, plastic-based design, drawing comparisons to recent Samsung products. Ars Technica further criticized the Verizon Wireless version for having a cheaper appearance than the international versions, with a plainer rear cover, modified buttons, and a different speaker layout. The G2's rear buttons were met with equally mixed reception, though most reviewers believed that users would be able to adjust to operating them. Even so, the ability to wake the phone by tapping on the screen was considered a more convenient method. 
The G2 was praised for its high-end hardware, with Engadget describing the device as a "beast" with specifications that "seem familiar to anyone who's read a flagship Android phone review in the last 12 months", recognizing that it had become harder for manufacturers to differentiate their flagship products beyond displays and processors. The G2's display was praised for its high resolution and color accuracy, along with LG's efforts to reduce the screen bezel size. The G2 was also praised for having unexpectedly long battery life, outlasting nearly all of its competitors alongside Motorola's Droid Razr Maxx. After lasting about 20 hours of "standard" use in its testing, the G2's battery was considered by Engadget to be "a sign that we're finally crossing into a world of sensible smartphone batteries." LG's Android interface design received mixed reviews; TechRadar gave it a positive review, describing it as being "easy enough for novice and expert smartphone users alike", and noting its dynamic elements and customization features. Its increased customization abilities (including different lock screen and home screen animations, and the ability to change the background and layout of the on-screen navigation buttons) were noted by reviewers. The usefulness of the "Slide Aside" feature was questioned due to the availability of other, more efficient means to switch apps. LG's software was generally panned for being unpolished in places, suffering from feature creep, and containing too many unneeded visual effects and skeuomorphic elements (the latter having generally fallen out of favor). 
The G2's software was also panned for containing usability regressions in comparison to stock Android, such as the notification tray being taken up by options, not using Android 4.2's updated "Complete action using" menu and behavior, and, despite using on-screen buttons, continued use of the "Menu" key, which was officially deprecated by Google in its Android human interface guidelines for Android 4.x (on apps which comply with the HIG, overflow menus are intended to be displayed within the apps themselves; the device's Menu key is replaced by a "Recent apps" key, and a small "Menu" key appears to the side when needed). The Nexus 5, released by LG shortly afterwards, shares much of the G2's hardware, albeit with a lower-quality rear camera and a smaller battery to hit a lower price point; the Nexus 5 was touted as a clean Android software alternative with the added advantage of running the latest Android 4.4 "KitKat", while the G2 had to make do with a bloatware-filled Android 4.2.2 "Jelly Bean" for a time. The G2's rear-facing camera was considered good for its class, with its processor contributing to quicker HDR photo processing than its competitors. The Verge remarked that despite LG having "practically stole[n]" Samsung's camera design and modes, the G2's camera interface was among the better implementations of Android camera software due to its available options. However, its low-light photos and some of its other modes were panned for not being as good as those of other devices such as the Nokia Lumia 920 and HTC One. In a photography-focused review by Digital Photography Review, the optical image stabilization system was praised for helping maintain good levels of exposure, and well-lit photos were found to have a decent level of detail, noting that its lens was "sharp pretty much all across the frame and free of chromatic aberrations." 
However, it was noted that "as the light gets dimmer and the ISO starts to increase", the device began to suffer from "very heavy-handed noise reduction which results in visible softness", and further noted that "[its] detail starts to suffer as soon as you go higher than base ISO and by ISO 400 most low-contrast detail is gone." Nonetheless, in a December 2013 comparison against other recent phones such as the One, Galaxy S4 Zoom, Xperia Z1, iPhone 5S, and Lumia 1020 by TechRadar, the G2 was named the best cameraphone of the six for "[performing] very well in terms of picture quality, ease of use and functionality, as well as post processing", although it was panned for not having as many options as its competitors, and for the probability of fingers accidentally getting into landscape shots due to the positioning of the lens. Sales In December 2013, Asia Today reported that 2.3 million units of the G2 had been sold since its release in September 2013, with at least 600,000 sold in South Korea alone. These numbers were below LG's original estimates of 3 million units. However, later in the month, news agency Yonhap reported more positive numbers from analysts, with at least 3 million units sold and 900,000 sold in South Korea. See also References External links Android (operating system) devices LG Electronics smartphones Mobile phones introduced in 2013 Discontinued flagship smartphones Mobile phones with infrared transmitter
LG G2
[ "Technology" ]
3,530
[ "Discontinued flagship smartphones", "Flagship smartphones" ]
39,944,913
https://en.wikipedia.org/wiki/Integral%20closure%20of%20an%20ideal
In algebra, the integral closure of an ideal I of a commutative ring R, denoted by $\overline{I}$, is the set of all elements r in R that are integral over I: there exist $a_i \in I^i$ such that $r^n + a_1 r^{n-1} + \cdots + a_{n-1} r + a_n = 0$. It is similar to the integral closure of a subring. For example, if R is a domain, an element r in R belongs to $\overline{I}$ if and only if there is a finitely generated R-module M, annihilated only by zero, such that $rM \subseteq IM$. It follows that $\overline{I}$ is an ideal of R (in fact, the integral closure of an ideal is always an ideal; see below). I is said to be integrally closed if $I = \overline{I}$. The integral closure of an ideal appears in a theorem of Rees that characterizes an analytically unramified ring. Examples In $\mathbb{C}[x, y]$, $xy$ is integral over $(x^2, y^2)$. It satisfies the equation $r^2 - x^2 y^2 = 0$, where $x^2 y^2$ is in the ideal $(x^2, y^2)^2$. Radical ideals (e.g., prime ideals) are integrally closed. The intersection of integrally closed ideals is integrally closed. In a normal ring, $\overline{xI} = x\overline{I}$ for any non-zerodivisor x and any ideal I. In particular, in a normal ring, a principal ideal generated by a non-zerodivisor is integrally closed. Let $R = k[x_1, \dots, x_n]$ be a polynomial ring over a field k. An ideal I in R is called monomial if it is generated by monomials. The integral closure of a monomial ideal is monomial. Structure results Let R be a ring. The Rees algebra $R[It] = \oplus_{n \ge 0} I^n t^n$ can be used to compute the integral closure of an ideal. The structure result is the following: the integral closure of $R[It]$ in $R[t]$, which is graded, is $\oplus_{n \ge 0} \overline{I^n} t^n$. In particular, $\overline{I}$ is an ideal and $\overline{\overline{I}} = \overline{I}$; i.e., the integral closure of an ideal is integrally closed. It also follows that the integral closure of a homogeneous ideal is homogeneous. The following type of results is called the Briançon–Skoda theorem: let R be a regular ring and I an ideal generated by l elements. Then $\overline{I^{n+l}} \subseteq I^{n+1}$ for any $n \ge 0$. A theorem of Rees states: let (R, m) be a noetherian local ring. Assume it is formally equidimensional (i.e., the completion is equidimensional). Then two m-primary ideals have the same integral closure if and only if they have the same multiplicity. 
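Written out, the integral dependence relation underlying the definition takes the following form (a standard formulation; the worked example below uses the same monomial ideal discussed in the Examples section):

```latex
% r \in R is integral over the ideal I when it satisfies a monic equation
% whose i-th coefficient lies in the i-th power of I:
\[
  r^{n} + a_{1} r^{n-1} + a_{2} r^{n-2} + \cdots + a_{n} = 0,
  \qquad a_{i} \in I^{i}.
\]
% Worked example: in R = \mathbb{C}[x, y] with I = (x^2, y^2),
% the element r = xy satisfies
\[
  r^{2} - x^{2} y^{2} = 0, \qquad x^{2} y^{2} \in I^{2},
\]
% so xy \in \overline{I}, even though xy \notin I.
```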
See also Dedekind–Kummer theorem Notes References Eisenbud, David, Commutative Algebra with a View Toward Algebraic Geometry, Graduate Texts in Mathematics, 150, Springer-Verlag, 1995. Further reading Irena Swanson, Rees valuations. Commutative algebra Ring theory Algebraic structures
Integral closure of an ideal
[ "Mathematics" ]
540
[ "Mathematical structures", "Mathematical objects", "Ring theory", "Fields of abstract algebra", "Algebraic structures", "Commutative algebra" ]
39,945,265
https://en.wikipedia.org/wiki/Production%20flow%20analysis
In operations management and industrial engineering, production flow analysis refers to methods which share the following characteristics: classification of machines; collection of information on technological cycles; generation of a binary product-machines matrix (1 if a given product requires processing on a given machine, 0 otherwise). Methods differ in how they group machines together with products. These groupings play an important role in designing manufacturing cells. Rank order clustering Given a binary product-machines n-by-m matrix $b$, rank order clustering is an algorithm characterized by the following steps: For each row i compute the number $\sum_{p=1}^{m} b_{ip} 2^{m-p}$ (i.e., read the row as a binary number) Order rows according to descending numbers previously computed For each column p compute the number $\sum_{i=1}^{n} b_{ip} 2^{n-i}$ Order columns according to descending numbers previously computed If on steps 2 and 4 no reordering happened go to step 6, otherwise go to step 1 Stop Similarity coefficients Given a binary product-machines n-by-m matrix, the algorithm proceeds by the following steps: Compute the similarity coefficient $s_{ij} = a_{ij} / (a_{ij} + u_{ij})$ for all pairs of machines i and j, with $a_{ij}$ being the number of products that need to be processed on both machine i and machine j, and $u_{ij}$ the number of products which visit machine i but not machine j and vice versa Group together in cell k the pair (i*, j*) with the highest similarity coefficient, with k being the algorithm iteration index Remove row i* and column j* from the original binary matrix and substitute for them the row and column of the cell k Go to step 2, raising the iteration index k by one Unless this procedure is stopped, the algorithm will eventually put all machines in one single group. References Industrial engineering
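As a concrete illustration, the rank order clustering steps above can be sketched in Python with NumPy. This is a minimal sketch, not from the source: the function name and the example matrix are illustrative.

```python
import numpy as np

def rank_order_clustering(matrix):
    """Rank order clustering on a binary product-machines matrix.

    Rows are products, columns are machines; entry 1 means the product
    requires processing on that machine. Each row (column) is read as a
    binary number with the leftmost column (topmost row) as the most
    significant bit; rows and columns are repeatedly sorted in descending
    order of these numbers until no reordering occurs.
    """
    m = np.asarray(matrix)
    n_rows, n_cols = m.shape
    row_order = np.arange(n_rows)  # tracks the original row indices
    col_order = np.arange(n_cols)  # tracks the original column indices
    while True:
        # Step 1-2: weight and sort rows (descending, stable to keep ties fixed).
        row_weights = m @ (2 ** np.arange(n_cols - 1, -1, -1))
        new_rows = np.argsort(-row_weights, kind="stable")
        m = m[new_rows]
        row_order = row_order[new_rows]
        # Step 3-4: weight and sort columns the same way.
        col_weights = (2 ** np.arange(n_rows - 1, -1, -1)) @ m
        new_cols = np.argsort(-col_weights, kind="stable")
        m = m[:, new_cols]
        col_order = col_order[new_cols]
        # Step 5-6: stop once neither sort changed anything.
        if (new_rows == np.arange(n_rows)).all() and (new_cols == np.arange(n_cols)).all():
            return m, row_order, col_order
```

On a matrix with two interleaved product families, the sorting converges to a block-diagonal form, from which the machine cells can be read off directly.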
Production flow analysis
[ "Engineering" ]
302
[ "Industrial engineering" ]
39,945,942
https://en.wikipedia.org/wiki/FLACS
FLACS (FLame ACceleration Simulator) is a commercial computational fluid dynamics (CFD) software package used extensively for explosion modeling and atmospheric dispersion modeling within the field of industrial safety and risk assessment. Main application areas of FLACS are in the petrochemical, process manufacturing, food processing, wood processing, metallurgical, and nuclear safety industries. FLACS has dedicated modules to simulate gas explosions, dust explosions and explosions involving chemical explosives like TNT. FLACS is also extensively used to simulate flammable and toxic gas dispersion. It was applied in the investigation of many high-profile accidents such as the Buncefield fire, Piper Alpha, TWA Flight 800, and the Petrobras 36 platform. History FLACS software development started in-house in the early 1980s under the sponsorship program Gas Explosion Safety (GSP), funded by the oil companies BP, Elf Aquitaine, Esso, Mobil, Norsk Hydro and Statoil. FLACS-86 was released to GSP sponsors in 1986. Continuous research and development from then onwards resulted in many commercial releases. In 2006, FLACS v8.1 was released to customers. Until then, FLACS had been developed for Unix and Linux platforms; in 2008, however, FLACS v9.0 was released for the Microsoft Windows platform. FLACS v9.1 and FLACS-Wind were developed in 2010. A fully parallelized FLACS v10.0 (using OpenMP) with a new solver for incompressible flows was released in 2012. FLACS v10.0 also includes a Homogeneous Equilibrium Model (HEM) for two-phase flow calculations. Related software CFX (proprietary software) Fire Dynamics Simulator (GPL) OpenFOAM (GPL) KFX DNV GL See also Computational fluid dynamics Computer simulation Gas explosion Dust explosion Atmospheric dispersion modeling References External links FLACS official website GexCon AS (FLACS developers) Computational fluid dynamics
FLACS
[ "Physics", "Chemistry" ]
413
[ "Computational fluid dynamics", "Fluid dynamics", "Computational physics" ]
39,945,948
https://en.wikipedia.org/wiki/Serval%20Project
The Serval Project (often referred to as Serval) is a project financed by the Shuttleworth Foundation and various other organisations, and it also accepts individual donations. It is headquartered at Flinders University in Adelaide, Australia. The project aims to develop technology that can be used to create direct connections between cellular phones through their Wi-Fi interfaces, without the need of a mobile phone operator. The technology allows for live voice calls whenever the mesh is able to find a route between the participants. Text messages and other data can be communicated using a store and forward system called Rhizome, allowing communication over unlimited distances and without a stable live mesh connection between all participants. The Serval Project includes a collaborative mapping application intended to support disaster relief and recovery efforts. A "mesh extender" is being developed, which establishes a short range Serval mesh over Wi-Fi and joins it with other more distant meshes by linking to other mesh extenders over packet radio operating in the ISM 915 MHz band. Serval Mesh Serval Mesh is an Android application and the Serval Project's flagship product. It is currently distributed through various application distribution platforms and repositories and can also be downloaded directly from the project's website. The application may be shared directly from one device to others nearby over WiFi or Bluetooth. The Serval Mesh application is built out of two components: a user interface called Batphone, and a core networking, encryption, and file sharing component called Serval DNA. The Batphone source code is licensed to the public under the terms of the GPLv3 license, whereas the Serval DNA source code is licensed under the terms of the GPLv2 license. See also Smartphone ad hoc networks Similar projects Briar (software) B.A.T.M.A.N. 
FireChat References External links Politics and technology Internet-related activism Proposed telecommunications infrastructure Mesh networking
Serval Project
[ "Technology" ]
390
[ "Mobile computer stubs", "Wireless networking", "Mobile technology stubs", "Mesh networking" ]
39,948,617
https://en.wikipedia.org/wiki/Kepler-67
Kepler-67 is a star in the open cluster NGC 6811 in the constellation Cygnus. It has slightly less mass than the Sun and has one confirmed planet, slightly smaller than Neptune, announced in 2013. Planetary system References External links Kepler-67, The Open Exoplanet Catalogue Kepler 67, Exoplanet.eu G-type main-sequence stars Cygnus (constellation) 2115 Planetary transit variables Planetary systems with one confirmed planet
Kepler-67
[ "Astronomy" ]
94
[ "Cygnus (constellation)", "Constellations" ]
39,948,694
https://en.wikipedia.org/wiki/Murders%20of%20Bernice%20and%20Ben%20Novack%20Jr.
In 2009, Bernice Novack and her son, Fontainebleau Miami Beach hotel heir Ben Novack Jr., were murdered three months apart. Narcy Novack (née Narcisa Véliz Pacheco; born 1956), Ben's estranged wife, was convicted of orchestrating the murders, and after a highly publicized trial was sentenced to life in prison without the possibility of parole. Crimes On April 5, 2009, Ben's 87-year-old mother, Bernice (December 2, 1921 – April 4, 2009), was found dead in her Fort Lauderdale, Florida, garage. Her husband Ben Novack Sr., who built the hotel and owned it until 1977, had died in 1985. Her death was initially ruled to be the result of an accidental fall while trying to get out of her car in her garage, but after her son's murder three months later, a subsequent police investigation revealed that her death was a homicide. On the morning of July 12, 2009, her son, who was 53, was found bludgeoned and suffocated to death in the penthouse suite at the Hilton Hotel in Rye Brook, New York. He was bound with duct tape and his eyes were gouged out. At the time of his death, he was having an affair with porn actress Rebecca E. Bliss (1976–2023). He was also the heir to a multimillion-dollar estate. Trial Narcy Novack, from Fort Lauderdale, was arrested for the murders of her husband and mother-in-law in July 2010, three days shy of a year after her husband's death. Her brother, Cristóbal Véliz, was also accused of enlisting Alejandro Gutiérrez-García, Joel González, and Denis Ramírez to participate in both murders. Narcy Novack and Cristóbal Véliz were tried together in a federal courtroom in White Plains, New York in 2012. The duo's defense was to blame Narcy's only daughter from a previous marriage, May Abad, for having orchestrated the killings, stating that she was motivated to collect on Ben Novack Jr.'s estate, including a large collection of Batman memorabilia. 
Prosecutors alleged that Narcy was afraid that her husband would leave her for his mistress, and that a prenuptial agreement would only leave her $65,000 instead of the bulk of her late husband's estate. They claimed she was motivated by "hatred, greed, and vengeance." Verdict At the conclusion of the trial, Narcy and Veliz were each convicted of murder, conspiracy to commit murder, domestic violence, stalking, money laundering, and witness tampering. Narcy waived her right to appear in court when the guilty verdict was read. She also did not appear in court when she was sentenced to life in prison without parole. Novack is currently incarcerated at the Federal Correctional Institution Tallahassee in Tallahassee, Florida. Véliz was also sentenced to life in prison without parole and he is currently incarcerated at the United States Penitentiary, Big Sandy in Inez, Kentucky. Gutiérrez-García, González, and Ramírez all pleaded guilty to lesser charges. In accordance with the slayer rule, Narcy Novack is ineligible to inherit her husband's estate. Ben Novack Jr.'s estate, valued at $4.2 million, is expected to go to Novack's daughter, May Abad, and Abad's two sons. In the media The Novack murders have been televised on several programs including Deadly Rich, My Dirty Little Secret (ID), 48 Hours, Dateline NBC, Snapped, True Crime with Aphrodite Jones and Dying to Belong. The story was also the basis for the 2015 made-for-television Lifetime movie Beautiful & Twisted, directed by Christopher Zalla and starring Rob Lowe as Ben Novack Jr., Paz Vega as Narcy, and Candice Bergen as Bernice Novack. References External links FBI Press Release: Sentencing of Narcisa Veliz Novack and Cristobal Veliz (December 17, 2012) U.S. 
Department of Justice Press Release: Alejandro Garcia Sentenced In White Plains Federal Court To 17 Years And Six Months In Prison For The Beating Deaths Of Ben And Bernice Novack (September 12, 2013) 2009 in Florida 2009 in New York (state) 2009 murders in the United States 2012 in New York (state) April 2009 crimes in the United States Deaths by beating in the United States Deaths from asphyxiation Domestic violence in the United States History of Fort Lauderdale, Florida July 2009 crimes in the United States Mariticides Murder-for-hire cases Murdered American Jews People murdered in Florida People murdered in New York (state) Rye, New York Stalking
Murders of Bernice and Ben Novack Jr.
[ "Biology" ]
981
[ "Behavior", "Aggression", "Stalking" ]
39,949,762
https://en.wikipedia.org/wiki/Tipped-in%20page
In the book trade, a tipped-in page or tipped-in plate is a page that is printed separately from the main text of the book, but attached to it. The page may be glued onto a regular page or even bound along with the other pages. There are various reasons for tipped-in pages, including photographic prints and reviews. Description A tipped-in page or, if it is an illustration, tipped-in plate, is a page that is printed separately from the main text of the book, but attached to the book. A tipped-in page may be glued onto a regular page, or even bound along with the other pages. It is often printed on a different kind of paper, using a different printing process, and in a different format than a regular page. Tipped-in pages that are glued to a bound page on its inner side may be called paste-ins. Some authors also count loose pages inserted into a book as tipped-in, but such a page is usually called an insert instead. Tissue guard A tissue guard is a tipped-in page consisting of a sheet of thin, often semi-transparent paper that is inserted facing an illustration or plate image, primarily to prevent its ink from transferring onto the opposite page. It is usually added after the book is bound. Tissue guards were once important because early book illustrations were commonly printed separately from the text, often by a different process such as lithography that employed a greasy ink that could transfer onto a facing page over time. Illustrations made with modern inks seldom require tissue guards, so they are not commonly found in modern books. Tissue guards were commonly used in conjunction with a book frontispiece, but were also sometimes used with illustrations elsewhere within the book if the bookbinder felt they were needed. Most were made of a semi-transparent tissue paper similar to glassine or onionskin, although some were merely made of a thinner paper that achieved a similar effect. 
Use
Typical uses of tipped-in pages added by the publisher include:
- color illustrations, generally printed using a different process (e.g. intaglio or lithography) and on different paper
- an author's signature, signed on a blank or preprinted page before the book is bound
- original photographic prints
- maps, often larger than the book format and folded to fit
- coupons, advertisements, or reply cards
- errata sheets, only produced after the printing run
- a short addendum
- a replacement for a missing, damaged, or incorrectly printed page
Owners of books may also tip in such items as:
- a letter from the author
- a review
Examples Coffee table art books featuring high-quality tipped-in color plates were popular from the late 1940s into the 1980s. Examples include several large series of books on painting published by Editions d'Art Albert Skira, Geneva: e.g. Painting, Color, History (23 volumes, 1949–1972); The Great Centuries of Painting (14 volumes, 1950–1959); The Taste of Our Time (57 volumes, 1953–1972) with "hand-tipped colorplates". Harry N. Abrams, Inc., New York also published many fine art books during this period with tipped-in plates; examples include the 56-volume series The Library of Great Painters, published 1959–1985, with each book having ca. 48 "tipped-on colorplates" or "hand-tipped plates in full color". References Glossary of the International League of Antiquarian Booksellers, s.v. tipped-in Book design
Tipped-in page
[ "Engineering" ]
713
[ "Book design", "Design" ]
39,949,779
https://en.wikipedia.org/wiki/1561%20celestial%20phenomenon%20over%20Nuremberg
An April 1561 broadsheet by Hans Glaser described a mass sighting of celestial phenomena or unidentified flying objects (UFOs) above Nuremberg (then a Free Imperial City of the Holy Roman Empire). Ufologists have speculated that these phenomena may have been extraterrestrial spacecraft. Skeptics assert that the sighting was more likely an atmospheric phenomenon, such as a sun dog, although the print does not fit the classic description of that phenomenon. History A broadsheet news article printed in April 1561 describes a mass sighting of celestial phenomena. The broadsheet, illustrated with a woodcut and text by Hans Glaser, measures by . The document is archived in the prints and drawings collection at the Zentralbibliothek Zürich in Zürich, Switzerland. According to the broadsheet, around dawn on 14 April 1561, "many men and women" of Nuremberg saw what the broadsheet describes as an aerial battle "out of the sun", followed by the appearance of a large black triangular object and exhausted combatant spheres falling to earth in clouds of smoke. The broadsheet claims that witnesses observed hundreds of spheres, cylinders, and other odd-shaped objects that moved erratically overhead. The woodcut illustration depicts objects of various shapes, including crosses (with or without spheres on the arms), small spheres, two large crescents, a black spear, and cylindrical objects from which several small spheres emerged and darted around the sky at dawn. The broadsheet text The text of the broadsheet was translated into English by Ilse Von Jacobi. Modern interpretations According to author Jason Colavito, the woodcut broadsheet became known in modern culture after being published in Carl Jung's 1958 book Flying Saucers: A Modern Myth of Things Seen in the Skies, a book which analyzed the archetypal meaning of UFOs. Jung expressed the view that the spectacle was most likely a natural phenomenon onto which religious and military interpretations were overlaid. 
"If the UFOs were living organisms, one would think of a swarm of insects rising with the sun, not to fight one another but to mate and celebrate the marriage flight." A military interpretation would view the tubes as cannons and the spheres as cannonballs, emphasizing the black spearhead at the bottom of the scene and Glaser's own testimony that the globes fought vehemently until exhausted. A religious view would emphasize the crosses. Jung thought the images of four globes coupled by lines suggested crossed marriage quaternities and formed the model for "the primitive cross cousin marriage". He also posited that it could be an individuation symbol, and that the association with sunrise suggests "the revelation of the light". Other events 1566 celestial phenomenon over Basel: A series of events on 27–28 July and 7 August 1566, reported in a Flugblatt (an early form of newspaper) as having occurred in Basel, in which red and black spheres were said to have engaged in an apparent battle in the sky. References External links Selected works of Hans Glaser 1561 celestial phenomenon 1561 in the Holy Roman Empire 16th century in Bavaria 16th-century engravings Atmospheric optical phenomena Meteorological phenomena UFO sightings in Germany
1561 celestial phenomenon over Nuremberg
[ "Physics" ]
657
[ "Physical phenomena", "Earth phenomena", "Optical phenomena", "Meteorological phenomena", "Atmospheric optical phenomena" ]
42,735,438
https://en.wikipedia.org/wiki/The%20Astounding%2C%20the%20Amazing%2C%20and%20the%20Unknown
The Astounding, the Amazing, and the Unknown is an alternate historical adventure novel written by Paul Malmont, the sequel to The Chinatown Death Cloud Peril (2007). It features real-life pulp magazine authors of the past as the heroes of adventures reminiscent of their favored genres. The book was first published in hardcover by Simon & Schuster and as an audiobook by Brilliance Audio in July 2011. The title is drawn from those of the magazines Astounding Science-Fiction, Amazing Stories, and Unknown, for which his main protagonists wrote. Plot The story, divided into short, numbered "episodes" rather than chapters, is presented as a "story about Nikola Tesla" recounted by Richard Feynman to a group of other Manhattan Project scientists in the wake of World War II. It involves the efforts of a similar think tank, the Kamikaze Group, to uncover the secret of a rumored "super-weapon" Tesla had developed before his death, one supposedly responsible for the mysterious Tunguska explosion of 1908. Feynman makes no claims for the tale's veracity, a caution warranted at the end of the book when his informant is revealed to have been pulp writer L. Ron Hubbard, a participant in the novel's events portrayed as a self-promoting, delusional narcissist. Malmont bases the Kamikaze Group on the trio of science fiction writers Robert A. Heinlein, L. Sprague de Camp, and Isaac Asimov, who in actual history spent most of the war doing aeronautical engineering research for the U.S. Navy at the Philadelphia Naval Shipyard's Naval Air Experimental Station. He portrays them as engaged in a joint project there to develop super-scientific weapons to help the U.S. win the war, though in reality they worked separately on technical improvements to airplanes and weapons systems. Heinlein leads the fictional project, which also draws on the assistance of other pulp authors of his acquaintance, most notably Hubbard, Walter B. Gibson, and Lester Dent, with cameo roles by John W. 
Campbell, Norvell Page, Hugh B. Cave, Frederik Pohl, Cleve Cartmill, Kurt Vonnegut, Judith Merril, and Ray Bradbury. Additional historical luminaries such as Jack Parsons, Albert Einstein and Jimmy Stewart also put in appearances. After Tesla's mysterious death the military raids his apartment and confiscates his papers, spurred by apparent German interest in his discoveries. Heinlein is charged with investigating the supposed "wonder weapon" referred to in the papers. Together with de Camp, Asimov, Hubbard and Gibson he explores a sub-basement of the Empire State Building where Tesla had left some of his equipment, and is almost trapped there by an unknown adversary. Further inquiries take the core group to the former site of Wardenclyffe Tower, the inventor's wireless transmission facility in Shoreham, New York, which apparently doubled as the "sending" component of the weapon. Afterwards Hubbard is sent first to the Aleutian Islands and then the South Pacific to retrieve the receiver, while Asimov is put in charge of securing the capacitors needed to make the transmitter work. Meanwhile, government goons are investigating the group itself, alarmed by the publication of Cartmill's story "Deadline," with its all-too-accurate description of a nuclear weapon similar to that being developed by the Manhattan Project. Their brutal interrogation of Heinlein is interrupted by a deadly phone call via which they are somehow electrocuted in the same fashion as Tesla. Matters come to a head as the Kamikaze Group seemingly makes good a promise to "make a ship disappear" (a nod to the "Philadelphia Experiment" urban legend), while Heinlein, convinced that time is running out, works with Tesla's ex-assistant at a duplicate of Wardenclyffe Tower to make the wonder weapon functional. Once it is activated the assistant reveals himself as the group's prime adversary, and nearly succeeds in killing the group members before falling to his death in the pits beneath the tower. 
After the villain's defeat the military shuts down the Kamikaze Group. Heinlein is disheartened to discover that his group was conceived and regarded as a mere blind to distract the Axis powers from the U.S.'s real super weapon effort—the Manhattan Project. His assertions of the reality of Tesla's weapon fall on deaf ears. Much later, Hubbard, returning to the U.S. after the war, learns what has become of his former compatriots, encounters Feynman, and tells him the story. Reception Michael Dirda, writing for The Washington Post, feels the novel "never quite lives up to its title. It opens slowly, breaks up its narrative among too many different characters and plot lines, and is unpersuasively framed as a story related by physicist Richard Feynman. Frequent comic episodes, some verging on slapstick, don't wholly come off. Nor are the big scenes - involving secret tunnels underneath the Empire State Building or the final showdown at Tesla's Tower - altogether fresh. I couldn't follow much of the science, and Hubbard's feverish dreams reminded me of accounts of bad LSD trips." He concludes, however, that "if you're already a fan of any of the writer-heroes of this novel, you'll probably be irresistibly drawn to The Astounding, the Amazing, and the Unknown. And the book does have some good moments. It's almost worth reading just to arrive at the pronouncement: 'Oh my God! . . . You've vaporized Isaac Asimov!'" Rege Behe in the Pittsburgh Tribune calls the book "humorous and clever, . . . an opus waiting for Hollywood to call." In reference to the book's title he notes that it "is all of those things, a joyous romp grounded in the golden age of science and pulp fiction in the 1940s. The author's cast . . . are wonderfully realized." Paul Di Filippo in The Speculator "applaud[s] its vigorous storytelling and historical acumen," calling the book's style "by turns analytical, journalistic, affectionate, elegiac, philosophical, and, well, pulpish. 
[Malmont] reconstructs his characters and their era with historical fidelity and empathy without feeling chained to total textbook accuracy . . . [blending] verisimilitude with outlandish blood-and-thunder action" in a "seething scrum of high-minded conversation and low-down deeds . . . Malmont plainly has a blast recreating this dangerous, promising era and putting its inhabitants through their larger-than-life paces." Di Filippo does twit the author for his anachronistic use of the term "sci-fi," not coined until the late 1950s, but feels that "[t]aken all in all, the book delivers both thrills and meditative reflections on the writerly condition." The book was also reviewed by Amy Goldschlager in Locus, no. 612, January 2012. Notes 2011 American novels American alternate history novels Simon & Schuster books Cultural depictions of Nikola Tesla Tunguska event Novels about writers Cultural depictions of Albert Einstein
The Astounding, the Amazing, and the Unknown
[ "Physics" ]
1,512
[ "Unsolved problems in physics", "Tunguska event" ]
42,735,534
https://en.wikipedia.org/wiki/Nonribosomal%20code
The nonribosomal code refers to key amino acid residues and their positions within the primary sequence of an adenylation domain of a nonribosomal peptide synthetase, used to predict substrate specificity and thus (partially) the final product. Analogous to the nonribosomal code is the prediction of peptide composition by DNA/RNA codon reading, which is well supported by the central dogma of molecular biology and accomplished using the genetic code simply by following the DNA or RNA codon table. However, prediction of natural products/secondary metabolites by the nonribosomal code is not as concrete as DNA/RNA codon-to-amino-acid translation, and much research is still needed to arrive at a broadly applicable code. The increasing number of sequenced genomes and high-throughput prediction software have allowed for better elucidation of predicted substrate specificity and thus of natural products/secondary metabolites. Enzyme characterization by, for example, ATP-pyrophosphate exchange assays for substrate specificity, in silico substrate-binding pocket modelling, and structure-function mutagenesis (in vitro tests or in silico modelling) helps support predictive algorithms. Much research has been done on bacteria and fungi, with prokaryotic bacteria having easier-to-predict products. The nonribosomal peptide synthetase (NRPS), a multi-modular enzyme complex, minimally contains repeating tri-domains (adenylation (A), peptidyl carrier protein (PCP) and lastly condensation (C)). The adenylation (A) domain is the focus for substrate specificity since it is the initiating and substrate-recognition domain. In one example, alignments of the adenylation substrate-binding pocket (defined by 10 residues within it) led to clusters giving rise to defined specificity (i.e. the residues of the enzyme pocket can predict the nonribosomal peptide sequence). In silico mutations of substrate-determining residues also led to varying or relaxed specificity. 
Additionally, the NRPS collinearity principle/rule dictates that, given the order of adenylation domains (and their substrate-specificity codes) throughout the NRPS, one can predict the amino acid sequence of the produced small peptide. NRPS, NRPS-like or NRPS-PKS complexes also exist and have domain variations, additions and/or exclusions. Supporting examples The A-domains have 8-amino-acid-long nonribosomal signatures:
LTKVGHIG → Asp (aspartic acid)
VGEIGSID → Orn (ornithine)
AWMFAAVL → Val (valine)
See also Nonribosomal peptide Natural product Secondary metabolite References Molecular biology
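The signature-to-substrate mapping above amounts to a lookup table, and the collinearity rule then turns an ordered list of A-domain signatures into a predicted peptide. A minimal sketch in Python, using only the three example signatures given in the text; the "unknown" fallback is an illustrative assumption (real predictors use much larger alignments and fuzzy matching of binding-pocket residues):

```python
# Minimal sketch of nonribosomal-code prediction: map an adenylation-domain
# (A-domain) signature to its predicted substrate, then apply the NRPS
# collinearity rule: domain order along the synthetase gives peptide order.
# The three signatures are the article's supporting examples; the
# "unknown" fallback is an assumption for illustration.

SIGNATURE_TO_SUBSTRATE = {
    "LTKVGHIG": "Asp",  # aspartic acid
    "VGEIGSID": "Orn",  # ornithine
    "AWMFAAVL": "Val",  # valine
}

def predict_substrate(signature: str) -> str:
    """Predict one A-domain's substrate from its 8-residue signature."""
    return SIGNATURE_TO_SUBSTRATE.get(signature.upper(), "unknown")

def predict_peptide(signatures: list[str]) -> list[str]:
    """Collinearity rule: read A-domain signatures in NRPS order."""
    return [predict_substrate(s) for s in signatures]

if __name__ == "__main__":
    print(predict_peptide(["LTKVGHIG", "AWMFAAVL", "VGEIGSID"]))
    # → ['Asp', 'Val', 'Orn']
```

A real pipeline would extract these signatures from genomic sequence and tolerate near-matches, but the codon-table-like character of the prediction is the same.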
Nonribosomal code
[ "Chemistry", "Biology" ]
577
[ "Biochemistry", "Molecular biology" ]
42,736,145
https://en.wikipedia.org/wiki/Analytical%20light%20scattering
Analytical light scattering (ALS), also loosely referred to as SEC-MALS, is the implementation of static light scattering (SLS) and dynamic light scattering (DLS) techniques in an online or flow mode. A typical ALS instrument consists of an HPLC/FPLC chromatography system coupled in-line with appropriate light scattering and refractive index detectors. The advantage of ALS over conventional steady-state light scattering methods is that it allows separation of molecules/macromolecules on a chromatography column prior to analysis with light scattering detectors. Accordingly, ALS enables one to determine hydrodynamic properties of a single monodisperse species as opposed to bulk or average measurements on a sample afforded by conventional light scattering. References Scattering, absorption and radiative transfer (optics) Physical chemistry Scientific techniques
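The static light scattering side of an ALS measurement can be illustrated with the low-angle, dilute limit of the Zimm equation, Kc/R(θ) ≈ 1/Mw, where the optical constant K is built from the solvent refractive index and the specific refractive increment dn/dc (which is why a refractive index detector is coupled in-line to supply concentration). A sketch, with all numeric inputs being illustrative assumptions (roughly a protein in aqueous buffer), not values from the text:

```python
import math

# Sketch: weight-average molar mass from one static light scattering point,
# using the low-angle, dilute limit of the Zimm equation: K*c/R ≈ 1/Mw.
# All numbers below are illustrative assumptions, not values from the text.

N_A = 6.02214076e23  # Avogadro constant, 1/mol

def optical_constant(n0: float, dn_dc: float, wavelength_cm: float) -> float:
    """K = 4 pi^2 n0^2 (dn/dc)^2 / (N_A lambda^4), in mol cm^2 g^-2."""
    return (4 * math.pi**2 * n0**2 * dn_dc**2) / (N_A * wavelength_cm**4)

def molar_mass(rayleigh_ratio: float, concentration: float, K: float) -> float:
    """Mw ≈ R / (K c); R in 1/cm, c in g/mL."""
    return rayleigh_ratio / (K * concentration)

K = optical_constant(n0=1.331, dn_dc=0.185, wavelength_cm=658e-7)
mw = molar_mass(rayleigh_ratio=1.4e-5, concentration=1.0e-3, K=K)
print(f"Mw ≈ {mw:.3g} g/mol")  # on the order of 6.6e4 for these inputs
```

An online instrument applies this slice by slice across the chromatogram, taking c for each eluting fraction from the refractive index detector and R from the light scattering detector, which is what lets it report a molar mass for each separated monodisperse species rather than a bulk average.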
Analytical light scattering
[ "Physics", "Chemistry" ]
170
[ "Scattering, absorption and radiative transfer (optics)", "Applied and interdisciplinary physics", "Physical chemistry" ]
42,736,510
https://en.wikipedia.org/wiki/Retro%20screening
Retro (or reverse) screening (RS) is a relatively new approach to determining the specificity and selectivity of a therapeutic drug molecule against a target protein or another macromolecule. It proceeds in the opposite direction to so-called virtual screening (VS). In VS, the goal is to use a protein target to identify a high-affinity ligand from a search library typically containing hundreds of thousands of small molecules. In contrast, RS employs a known drug molecule to screen a protein library containing hundreds of thousands of individual structures (obtained from both experimental and modeling techniques). Accordingly, the extent to which the drug cross-reacts with the human proteome provides a measure of its efficacy and its potential long-term side effects. RS is expected to play a key role in providing an additional layer of quality control in drug discovery. Bioinformatics Drug discovery Cheminformatics Alternatives to animal testing
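The workflow described above, one known ligand scored against many protein structures, reduces to a scoring loop over a protein library followed by ranking. A schematic sketch in Python; `affinity_score` is a hypothetical stand-in for a real docking or affinity-prediction tool, and all names and the toy scoring scheme are assumptions for illustration only:

```python
# Schematic retro screening: rank a protein library by a (placeholder)
# predicted affinity for one known drug. In a real pipeline,
# affinity_score would call a docking engine or an ML affinity model.

def affinity_score(drug: str, protein_id: str) -> float:
    # Deterministic toy score in [0, 1); purely illustrative.
    return (sum(map(ord, drug + protein_id)) % 97) / 97.0

def retro_screen(drug: str, protein_library: list[str], top_n: int = 5):
    """Return the top_n (protein, score) pairs, best first."""
    scored = [(p, affinity_score(drug, p)) for p in protein_library]
    return sorted(scored, key=lambda ps: ps[1], reverse=True)[:top_n]

library = [f"PDB{i:04d}" for i in range(1000)]  # stand-in identifiers
hits = retro_screen("acetylsalicylic_acid", library, top_n=3)
print(hits)
```

The ranked hit list is what distinguishes RS from VS in practice: rather than one best ligand for one target, it yields the set of human proteins most likely to cross-react with the drug.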
Retro screening
[ "Chemistry", "Engineering", "Biology" ]
186
[ "Animal testing", "Biological engineering", "Life sciences industry", "Drug discovery", "Bioinformatics", "Alternatives to animal testing", "Computational chemistry", "Medicinal chemistry", "Cheminformatics" ]
42,736,966
https://en.wikipedia.org/wiki/Critical%20theory
Critical theory is a social, historical, and political school of thought and philosophical perspective which centers on analyzing and challenging systemic power relations in society, arguing that knowledge, truth, and social structures are fundamentally shaped by power dynamics between dominant and oppressed groups. Beyond just understanding and critiquing these dynamics, it explicitly aims to transform society through praxis and collective action with an explicit sociopolitical purpose. Critical theory's main tenets center on analyzing systemic power relations in society, focusing on the dynamics between groups with different levels of social, economic, and institutional power. Unlike traditional social theories that aim primarily to describe and understand society, critical theory explicitly seeks to critique and transform it. Thus, it positions itself as both an analytical framework and a movement for social change. Critical theory examines how dominant groups and structures influence what society considers objective truth, challenging the very notion of pure objectivity and rationality by arguing that knowledge is shaped by power relations and social context. Key principles of critical theory include examining intersecting forms of oppression, emphasizing historical contexts in social analysis, and critiquing capitalist structures. The framework emphasizes praxis (combining theory with action) and highlights how lived experience, collective action, ideology, and educational systems play crucial roles in maintaining or challenging existing power structures. The historical evolution of critical theory traces back to the first generation of the Frankfurt School in the 1920s. Figures like Max Horkheimer, Theodor Adorno, Herbert Marcuse, and others sought to expand traditional Marxist analysis by incorporating insights from psychology, culture, and philosophy, moving beyond pure economic determinism. 
Their work was significantly influenced by Freud’s psychoanalytic theories, particularly how subjective experience shaped human consciousness, behavior, and social reality. Freud's concept that an individual's lived experience could differ dramatically from objective reality aligned with critical theory's critique of positivism, science, and pure rationality. Critical theory continued to evolve beyond the first generation of the Frankfurt School. Jürgen Habermas, often identified with the second generation, shifted the focus toward communication and the role of language in social emancipation. Around the same time, post-structuralist and postmodern thinkers, including Michel Foucault and Jacques Derrida, were reshaping academic discourse with critiques of knowledge, meaning, power, institutions, and social control with deconstructive approaches that further challenged assumptions about objectivity and truth. Though neither Foucault nor Derrida belonged formally to the Frankfurt School tradition, their works profoundly influenced later formulations of critical theory. Collectively, the post-structuralist and postmodern insights expanded the scope of critical theory, weaving cultural and linguistic critiques into its Marxian roots. With the emigration of Herbert Marcuse, contemporary critical theory has expanded to the United States and today it covers a wide range of social critique within economics, ethics, history, law, politics, psychology, and sociology, with a diverse list of subjects including critical animal studies, critical criminology, dependency theory and imperialism studies, critical environmental justice, feminist theory and gender studies, critical historiography, intersectionality, critical legal studies, critical pedagogy, postcolonialism, critical race theory, queer theory, and critical terrorism studies. 
Modern critical theory represents a movement away from Marxism's purely economic analysis toward a broader examination of social and cultural power structures, incorporating and transforming Freudian concepts and postmodernism. It retains Marxism's emphasis on analyzing how dominant groups and systems shape and control society through exploitation and oppression, its commitment to social and political praxis, its adaptation and reformulation of multiple Marxian conceptual frameworks (including alienation, reification, ideology, emancipation, and base and superstructure), and a general skepticism towards and critique of capitalism. Criticisms of critical theory have come from various intellectual perspectives. Critics have raised concerns about critical theory's reliance on Marxist revisionism and its frequent emphasis on subjective narratives, which can sometimes be at odds with empirical methodologies. They also point to issues of circular reasoning and a lack of falsifiability in some critical theory arguments, as well as an epistemological and methodological stance that challenges or conflicts with traditional scientific methods and ideals of rationality and objectivity. History Max Horkheimer first defined critical theory () in his 1937 essay "Traditional and Critical Theory", as a social theory oriented toward critiquing and changing society as a whole, in contrast to traditional theory oriented only toward understanding or explaining it. Wanting to distinguish critical theory as a radical, emancipatory form of Marxist philosophy, Horkheimer critiqued both the model of science put forward by logical positivism and what he and his colleagues saw as the covert positivism and authoritarianism of orthodox Marxism and Communism. He described a theory as critical insofar as it seeks "to liberate human beings from the circumstances that enslave them". 
Critical theory involves a normative dimension, either by criticizing society in terms of some general theory of values or norms (oughts), or by criticizing society in terms of its own espoused values (i.e. immanent critique). Significantly, critical theory not only conceptualizes and critiques societal power structures, but also establishes an empirically grounded model to link society to the human subject. It defends the universalist ambitions of the tradition, but does so within a specific context of social-scientific and historical research. The core concepts of critical theory are that it should: be directed at the totality of society in its historical specificity (i.e., how it came to be configured at a specific point in time) improve understanding of society by integrating all the major social sciences, including geography, economics, sociology, history, political science, anthropology, and psychology Postmodern critical theory is another major product of critical theory. It analyzes the fragmentation of cultural identities in order to challenge modernist-era constructs such as metanarratives, rationality, and universal truths, while politicizing social problems "by situating them in historical and cultural contexts, to implicate themselves in the process of collecting and analyzing data, and to relativize their findings". Marx Marx explicitly developed the notion of critique into the critique of ideology, linking it with the practice of social revolution, as stated in the 11th section of his Theses on Feuerbach: "The philosophers have only interpreted the world, in various ways; the point is to change it." In early works, including The German Ideology, Marx developed his concepts of false consciousness and of ideology as the interests of one section of society masquerading as the interests of society as a whole. Adorno and Horkheimer One of the distinguishing characteristics of critical theory, as Theodor W. 
Adorno and Max Horkheimer elaborated in their Dialectic of Enlightenment (1947), is an ambivalence about the ultimate source or foundation of social domination, an ambivalence that gave rise to the "pessimism" of the new critical theory about the possibility of human emancipation and freedom. This ambivalence was rooted in the historical circumstances in which the work was originally produced, particularly the rise of Nazism, state capitalism, and culture industry as entirely new forms of social domination that could not be adequately explained in the terms of traditional Marxist sociology. For Adorno and Horkheimer, state intervention in the economy had effectively abolished the traditional tension between Marxism's "relations of production" and "material productive forces" of society. The market (as an "unconscious" mechanism for the distribution of goods) had been replaced by centralized planning. Contrary to Marx's prediction in the Preface to a Contribution to the Critique of Political Economy, this shift did not lead to "an era of social revolution" but to fascism and totalitarianism. As a result, critical theory was left, in Habermas's words, without "anything in reserve to which it might appeal, and when the forces of production enter into a baneful symbiosis with the relations of production that they were supposed to blow wide open, there is no longer any dynamism upon which critique could base its hope". For Adorno and Horkheimer, this posed the problem of how to account for the apparent persistence of domination in the absence of the very contradiction that, according to traditional critical theory, was the source of domination itself. 
Habermas In the 1960s, Habermas, a proponent of critical social theory, raised the epistemological discussion to a new level in his Knowledge and Human Interests (1968), by identifying critical knowledge as based on principles that differentiated it either from the natural sciences or the humanities, through its orientation to self-reflection and emancipation. Although unsatisfied with Adorno and Horkheimer's thought in Dialectic of Enlightenment, Habermas shares the view that, in the form of instrumental rationality, the era of modernity marks a move away from the liberation of enlightenment and toward a new form of enslavement. In Habermas's work, critical theory transcended its theoretical roots in German idealism, and progressed closer to American pragmatism. Habermas's ideas about the relationship between modernity and rationalization are in this sense strongly influenced by Max Weber. He further dissolved the elements of critical theory derived from Hegelian German idealism, though his epistemology remains broadly Marxist. Perhaps his two most influential ideas are the concepts of the public sphere and communicative action, the latter arriving partly as a reaction to new post-structural or so-called "postmodern" challenges to the discourse of modernity. Habermas engaged in regular correspondence with Richard Rorty, and a strong sense of philosophical pragmatism may be felt in his thought, which frequently traverses the boundaries between sociology and philosophy. Modern critical theorists Contemporary philosophers and researchers who have focused on understanding and critiquing critical theory include Nancy Fraser, Axel Honneth, Judith Butler, and Rahel Jaeggi. Honneth is known for his works Pathology of Reason and The Legacy of Critical Theory, in which he attempts to explain critical theory's purpose in a modern context. 
Jaeggi focuses on both critical theory's original intent and a more modern understanding that some argue has created a new foundation for modern usage of critical theory. Butler contextualizes critical theory as a way to rhetorically challenge oppression and inequality, specifically concepts of gender. Honneth established a theory that many use to understand critical theory, the theory of recognition. In this theory, he asserts that in order for someone to be responsible for themselves and their own identity they must be also recognized by those around them: without recognition in this sense from peers and society, individuals can never become wholly responsible for themselves and others, nor experience true freedom and emancipation—i.e., without recognition, the individual cannot achieve self-actualization. Like many others who put stock in critical theory, Jaeggi is vocal about capitalism's cost to society. Throughout her writings, she has remained doubtful about the necessity and use of capitalism in regard to critical theory. Most of Jaeggi's interpretations of critical theory seem to work against the foundations of Habermas and follow more along the lines of Honneth in terms of how to look at the economy through the theory's lens. She shares many of Honneth's beliefs, and many of her works try to defend them against criticism Honneth has received. To provide a dialectical opposite to Jaeggi's conception of alienation as 'a relation of relationlessness', Hartmut Rosa has proposed the concept of resonance. Rosa uses this term to refer to moments when late modern subjects experience momentary feelings of self-efficacy in society, bringing them into a temporary moment of relatedness with some aspect of the world. Rosa describes himself as working within the critical theory tradition of the Frankfurt School, providing an extensive critique of late modernity through his concept of social acceleration. 
However, his resonance theory has been questioned for moving too far beyond the Adornoian tradition of "looking coldly at society". Schools and Derivatives Postmodern critical social theory Focusing on language, symbolism, communication, and social construction, critical theory has been applied in the social sciences as a critique of social construction and postmodern society. While modernist critical theory (as described above) concerns itself with "forms of authority and injustice that accompanied the evolution of industrial and corporate capitalism as a political-economic system", postmodern critical theory politicizes social problems "by situating them in historical and cultural contexts, to implicate themselves in the process of collecting and analyzing data, and to relativize their findings". Meaning itself is seen as unstable due to social structures' rapid transformation. As a result, research focuses on local manifestations rather than broad generalizations. Postmodern critical research is also characterized by the crisis of representation, which rejects the idea that a researcher's work is an "objective depiction of a stable other". Instead, many postmodern scholars have adopted "alternatives that encourage reflection about the 'politics and poetics' of their work. In these accounts, the embodied, collaborative, dialogic, and improvisational aspects of qualitative research are clarified." The term critical theory is often appropriated when an author works in sociological terms, yet attacks the social or human sciences, thus attempting to remain "outside" those frames of inquiry. Michel Foucault has been described as one such author. Jean Baudrillard has also been described as a critical theorist to the extent that he was an unconventional and critical sociologist; this appropriation is similarly casual, holding little or no relation to the Frankfurt School. In contrast, Habermas is one of the key critics of postmodernism. 
Communication studies When, in the 1970s and 1980s, Habermas redefined critical social theory as a study of communication, with communicative competence and communicative rationality on the one hand, and distorted communication on the other, the two versions of critical theory began to overlap to a much greater degree than before. Critical disability theory Critical legal studies Immigration studies Critical theory can be used to interpret the right of asylum and immigration law. Critical finance studies Critical finance studies apply critical theory to financial markets and central banks. Critical management studies Critical international relations theory Critical race theory Critical pedagogy Critical theorists have widely credited Paulo Freire for the first applications of critical theory to education/pedagogy, considering his best-known work to be Pedagogy of the Oppressed, a seminal text in what is now known as the philosophy and social movement of critical pedagogy. Dedicated to the oppressed and based on his own experience helping Brazilian adults learn to read and write, Freire includes a detailed class analysis in his exploration of the relationship between the colonizer and the colonized. In the book, he calls traditional pedagogy the "banking model of education", because it treats the student as an empty vessel to be filled with knowledge. He argues that pedagogy should instead treat the learner as a co-creator of knowledge. In contrast to the banking model, the teacher in the critical-theory model is not the dispenser of all knowledge, but a participant who learns with and from the students—in conversation with them, even as they learn from the teacher. The goal is to liberate the learner from an oppressive construct of teacher versus student, a dichotomy analogous to colonizer and colonized. 
It is not enough for the student to analyze societal power structures and hierarchies, to merely recognize imbalance and inequity; critical theory pedagogy must also empower the learner to reflect and act on that reflection to challenge an oppressive status quo. Critical consciousness Critical university studies Critical psychology Critical criminology Critical animal studies Critical social work Critical ethnography Critical data studies Critical environmental justice Critical environmental justice applies critical theory to environmental justice. Criticism While critical theorists have often been called Marxist intellectuals, their tendency to denounce some Marxist concepts and to combine Marxian analysis with other sociological and philosophical traditions has resulted in accusations of revisionism by orthodox Marxist and Marxist–Leninist philosophers. Martin Jay has said that the first generation of critical theory is best understood not as promoting a specific philosophical agenda or ideology, but as "a gadfly of other systems". Critical theory has been criticized for not offering any clear road map to political action (praxis), often explicitly repudiating any solutions. Those objections mostly apply to the first-generation Frankfurt School, while the issue of politics is addressed in a much more assertive way in contemporary theory. Another criticism of critical theory "is that it fails to provide rational standards by which it can show that it is superior to other theories of knowledge, science, or practice." Rex Gibson argues that critical theory suffers from being cliquish, conformist, elitist, immodest, anti-individualist, naive, too critical, and contradictory. Hughes and Hughes argue that Habermas' theory of ideal public discourse "says much about rational talkers talking, but very little about actors acting: Felt, perceptive, imaginative, bodily experience does not fit these theories". 
Some feminists argue that critical theory "can be as narrow and oppressive as the rationalization, bureaucratization, and cultures they seek to unmask and change." Critical theory's language has been criticized as being too dense to understand, although "Counter arguments to these issues of language include claims that a call for clearer and more accessible language is anti-intellectual, a new 'language of possibility' is needed, and oppressed peoples can understand and contribute to new languages." Bruce Pardy, writing for the National Post, argued that any challenges to the "legitimacy [of critical theory] can be interpreted as a demonstration of their [critical theory's proponents'] thesis: the assertion of reason, logic and evidence is a manifestation of privilege and power. Thus, any challenger risks the stigma of a bigoted oppressor." Robert Danisch, writing for The Conversation, argued that critical theory, and the modern humanities more broadly, focus too much on criticizing the current world rather than trying to make a better world. See also Modernism Antipositivism Critical military studies Cultural studies Information criticism Marxist cultural analysis Outline of critical theory Popular culture studies Outline of organizational theory Postcritique Quare theory Lists List of critical theorists List of works in critical theory Journals Constellations Representations Critical Inquiry Telos Law and Critique References Footnotes Works cited Bibliography "Problematizing Global Knowledge." Theory, Culture & Society 23(2–3). 2006. . Calhoun, Craig. 1995. Critical Social Theory: Culture, History, and the Challenge of Difference. Blackwell. – A survey of and introduction to the current state of critical social theory. Charmaz, K. 1995. "Between positivism and postmodernism: Implications for methods." Studies in Symbolic Interaction 17:43–72. Conquergood, D. 1991. "Rethinking ethnography: Towards a critical cultural politics." 
Communication Monographs 58(2):179–94. . Corchia, Luca. 2010. La logica dei processi culturali. Jürgen Habermas tra filosofia e sociologia. Genova: Edizioni ECIG. . Dahms, Harry, ed. 2008. No Social Science Without Critical Theory, (Current Perspectives in Social Theory 25). Emerald/JAI. Gandler, Stefan. 2009. Fragmentos de Frankfurt. Ensayos sobre la Teoría crítica. México: 21st Century Publishers/Universidad Autónoma de Querétaro. . Geuss, Raymond. 1981. The Idea of a Critical Theory. Habermas and the Frankfurt School. Cambridge University Press. . Honneth, Axel. 2006. La société du mépris. Vers une nouvelle Théorie critique, La Découverte. . Horkheimer, Max. 1982. Critical Theory Selected Essays. New York: Continuum Publishing. Morgan, Marcia. 2012. Kierkegaard and Critical Theory. New York: Lexington Books. Rolling, James H. 2008. "Secular blasphemy: Utter(ed) transgressions against names and fathers in the postmodern era." Qualitative Inquiry 14(6):926–48. – An example of critical postmodern work. Sim, Stuart, and Borin Van Loon. 2001. Introducing Critical Theory. . – A short introductory volume with illustrations. Thomas, Jim. 1993. Doing Critical Ethnography. London: Sage. pp. 1–5 & 17–25. Tracy, S. J. 2000. "Becoming a character for commerce: Emotion labor, self subordination and discursive construction of identity in a total institution." Management Communication Quarterly 14(1):90–128. – An example of critical qualitative research. Willard, Charles Arthur. 1982. Argumentation and the Social Grounds of Knowledge. University of Alabama Press. — 1989. A Theory of Argumentation. University of Alabama Press. — 1996. Liberalism and the Problem of Knowledge: A New Rhetoric for Modern Democracy. Chicago: University of Chicago Press. Chapter 9. Critical Theory External links Gerhardt, Christina. "Frankfurt School". The International Encyclopedia of Revolution and Protest. Ness, Immanuel (ed). Blackwell Publishing, 2009. Blackwell Reference Online. 
"Theory: Death Is Not the End" N+1 magazine's short history of academic Critical Theory. Critical Legal Thinking A Critical Legal Studies website which uses Critical Theory in an analysis of law and politics. L. Corchia, Jürgen Habermas. A Bibliography: works and studies (1952–2013), Pisa, Edizioni Il Campano – Arnus University Books, 2013, 606 pages. Sim, S.; Van Loon, B. (2009). Introducing Critical Theory: A Graphic Guide. Icon Books Ltd. Archival collections Guide to the Critical Theory Offprint Collection. Special Collections and Archives, The UC Irvine Libraries, Irvine, Cali Guide to the Critical Theory Institute Audio and Video Recordings, University of California, Irvine. Special Collections and Archives, The UC Irvine Libraries, Irvine, California. University of California, Irvine, Critical Theory Institute Manuscript Materials. Special Collections and Archives, The UC Irvine Libraries, Irvine, California. Conflict theory Humanities Philosophy of technology Philosophical schools and traditions Social philosophy Political ideologies
Critical theory
[ "Technology" ]
4,641
[ "Philosophy of technology", "Science and technology studies" ]
42,737,919
https://en.wikipedia.org/wiki/Vitali%E2%80%93Carath%C3%A9odory%20theorem
In mathematics, the Vitali–Carathéodory theorem is a result in real analysis that shows that, under the conditions stated below, integrable functions can be approximated in L1 from above and below by lower- and upper-semicontinuous functions, respectively. It is named after Giuseppe Vitali and Constantin Carathéodory. Statement of the theorem Let X be a locally compact Hausdorff space equipped with a Borel measure, μ, that is finite on every compact set, outer regular, and tight when restricted to any Borel set that is open or of finite mass. If f is an element of L1(μ) then, for every ε > 0, there are functions u and v on X such that u ≤ f ≤ v, u is upper-semicontinuous and bounded above, v is lower-semicontinuous and bounded below, and ∫X (v − u) dμ < ε. References Theorems in real analysis
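For reference, the conclusion of the theorem can be displayed compactly. This is the standard formulation found in real-analysis texts (e.g. Rudin's Real and Complex Analysis); the notation below restates the conditions from the article rather than adding anything new:

```latex
% Vitali–Carathéodory: for f \in L^1(\mu) and \varepsilon > 0 there exist
% u upper-semicontinuous and bounded above, and
% v lower-semicontinuous and bounded below, such that
u \le f \le v
\qquad \text{and} \qquad
\int_X (v - u)\,\mathrm{d}\mu < \varepsilon .
```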
Vitali–Carathéodory theorem
[ "Mathematics" ]
190
[ "Theorems in mathematical analysis", "Theorems in real analysis" ]
42,739,040
https://en.wikipedia.org/wiki/NGC%204485
NGC 4485 is an irregular galaxy located in the northern constellation of Canes Venatici. It was discovered January 14, 1788 by William Herschel. This galaxy is located at a distance of 29 million light years and is receding with a heliocentric radial velocity of 483 km/s. NGC 4485 is interacting with the spiral galaxy NGC 4490 and as a result both galaxies are distorted and are undergoing intense star formation. They have a projected separation of and are surrounded by an extended hydrogen envelope with a dense bridge of gas joining the two. Both galaxies are otherwise isolated and of low mass. The star formation rate in NGC 4485 is ·yr−1. Gallery References External links Irregular galaxies Canes II Group Canes Venatici 4485 07648 41326
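As a rough sanity check on the figures quoted above (not part of the article), one can compare the quoted distance with a naive Hubble-law estimate from the 483 km/s heliocentric radial velocity. The Hubble-constant value and the simple relation d = v/H0 are assumptions here; peculiar velocities and redshift-independent distance indicators explain why the two numbers need not agree:

```python
# Naive Hubble-law distance estimate for NGC 4485 (illustrative only).
# Assumptions: H0 = 70 km/s/Mpc (a commonly used round value); the quoted
# heliocentric radial velocity is taken directly as the recession velocity.
H0 = 70.0            # Hubble constant, km/s per Mpc (assumed)
v = 483.0            # heliocentric radial velocity from the article, km/s
MPC_TO_MLY = 3.2616  # one megaparsec in millions of light years

d_mpc = v / H0               # distance in megaparsecs
d_mly = d_mpc * MPC_TO_MLY   # distance in millions of light years
print(f"{d_mly:.1f} Mly")    # ~22.5 Mly
```

The estimate falls short of the quoted 29 million light years, which is unsurprising for a nearby interacting galaxy whose peculiar velocity is not negligible compared with its Hubble flow.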
NGC 4485
[ "Astronomy" ]
161
[ "Canes Venatici", "Galaxy stubs", "Astronomy stubs", "Constellations" ]
42,739,142
https://en.wikipedia.org/wiki/Vijay%20P.%20Singh
Vijay P. Singh (born July 15, 1946) is a Distinguished Professor and a Regents Professor, and holds the Caroline and William N. Lehrer Distinguished Chair in Water Engineering at Texas A&M University. His research interests include surface-water hydrology, groundwater hydrology, hydraulics, irrigation engineering, environmental quality, and water resources. Birth and education Vijay P. Singh was born in Agra, India, in 1946. He graduated with a BS in Engineering and Technology with emphasis on Soil and Water Conservation Engineering from G.B. Pant University of Agriculture and Technology, India, in 1967. He earned an MS in Engineering with specialization in Hydrology from the University of Guelph, Canada, in 1970 and a Ph.D. in Civil Engineering with specialization in Hydrology and Water Resources from Colorado State University, Fort Collins, USA, in 1974. He also earned a D.Sc. from the University of the Witwatersrand, Johannesburg, South Africa, in 1998. He was elected a member of the National Academy of Engineering (NAE) in 2022 for his contributions to wave modeling and development of entropy-based theories of hydrologic processes and hydroclimatic extremes. Vijay P. Singh is also a two-time winner of the Norman Medal, the highest honor awarded by the American Society of Civil Engineers, established in 1872 to recognize engineering papers distinguished by their "practical value" and "impact on engineering practice". He is one of the few scientists to have received this award twice: he won a Norman Medal in 2010 for a paper with Tommaso Moramarco and Claudia Pandolfo, and in 2023 for a paper with Solomon Vimal. References External links Department of Biological & Agricultural Engineering, TAMU Texas A&M University faculty American hydrologists Indian hydrologists Hydrologists Indian scientists American scientists 1946 births American people of Indian descent Living people
Vijay P. Singh
[ "Environmental_science" ]
376
[ "Hydrology", "Hydrologists" ]
42,742,118
https://en.wikipedia.org/wiki/LG%20G%20Pad%208.0
The LG G Pad 8.0 (also known as LG G Tab 8.0) is an 8.0-inch Android-based tablet computer produced and marketed by LG Electronics. It belongs to the LG G series, and was announced on 13 May 2014 along with the G Pad 7.0 and G Pad 10.1. This is one of LG's new tablet size variants, aimed at competing directly with the Samsung Galaxy Tab 4 series. History The G Pad 8.0 was first announced on 13 May 2014. It was officially unveiled at the MedPI tradeshow in Monaco. It was released in July 2014. Features The G Pad 8.0 was released with Android 4.4.2 KitKat. LG has customized the interface with its Optimus UI software. As well as apps from Google, including Google Play, Gmail and YouTube, it has access to LG apps such as QPair, QSlide, KnockOn, and Slide Aside. The G Pad 8.0 is available in Wi-Fi-only, 3G & Wi-Fi, and 4G/LTE & Wi-Fi variants. Internal storage is 16 GB, with a microSDXC card slot for expansion. It has an 8.0-inch IPS LCD screen with a resolution of 1280×800 pixels. It also features a front camera without flash and a rear-facing camera, and can record HD video. References G Pad 8.0 Android (operating system) devices Tablet computers Tablet computers introduced in 2014
LG G Pad 8.0
[ "Technology" ]
323
[ "Mobile computer stubs", "Mobile technology stubs" ]
42,742,695
https://en.wikipedia.org/wiki/National%20Smelting%20Company
The National Smelting Company was a nationalised zinc smelting company in Avonmouth, England. It was established by Minister of Munitions Winston Churchill to produce mustard gas during World War I. After World War I, it was bought by private business interests. From 1929 it became part of Australia's Imperial Smelting Corporation. The site – also known as the Britannia smelting works – was where the Imperial Smelting Process was developed. From 1967, the Avonmouth Works was home to the largest and most efficient zinc blast furnace in the world. The site remained operational until 2003, when the production of zinc, cadmium, lead and sulphuric acid ceased. The site is being redeveloped as a supermarket distribution centre for Asda and a recycling plant for SITA UK. Background During the later part of World War I, it was proposed to make Avonmouth Docks the UK centre of production of dichloroethyl sulphide, also known as mustard gas. However, its production was against the Hague Conventions of 1899 and 1907, which explicitly forbade the use of "poison or poisoned weapons" in warfare. The project was therefore covered by the Official Secrets Act; as a cover, the Ministry of Munitions under its minister Winston Churchill nationalised many small smelting works under the new National Smelting Company (NSC). Before the outbreak of World War I, much of Britain's zinc had originated in Australia, but had been smelted in Germany. The NSC was hence publicly commissioned to build a new zinc smelting works and sulphuric acid plant at Merebank, Avonmouth Docks. Mustard gas The Ministry had already built the nearby No. 23 filling factory at Chittening, operated by Nobel Explosives, where shells were already being filled with chloropicrin. Construction of the chemical plant began in 1917, but did not finish until 1923, costing £800,000. The plant came into operation from Spring 1918, producing dichloroethyl sulphide daily using the Despretz–Niemann–Guthrie process. 
The chemical product was then shipped to the main filling factory production site at Banbury, plus secondary sites at Chittening and Hereford. Although the first shells did not arrive in France until September 1918, two months before the Armistice, it was used that same month during the breaking of the Hindenburg Line within the Hundred Days' Offensive. By November 1918, Chittening had produced 85,424 mustard gas shells. The human cost of producing mustard gas was high. In December 1918 the chemical plant's medical officer reported that in the six months it was operational, there were 1,400 illnesses reported by its 1,100 mostly female workers – all medically attributable to their work. Three people died because of accidents, four died from associated illnesses, and there were 160 accidents resulting in over 1,000 burns. At Chittening there were 1,213 reported cases of associated illness, including two deaths which were later attributed to influenza. Operational history After World War I, demand for zinc and sulphuric acid greatly fell, and after running into commercial difficulties it was taken over by a group of British industrialists with interests in metals and chemicals, who succeeded in reviving its business under the name Commonwealth Smelting Company. In 1929 the NSC was bought by Australia's Imperial Smelting Corporation, which in 1949 merged with Zinc Corporation to become Consolidated Zinc. Throughout the consolidation, the smaller NSC plants were closed down to concentrate production on Avonmouth – now known as the Britannia smelting works – where the Imperial Smelting Process was developed. From 1967, the Avonmouth Works was home to the largest and most efficient zinc blast furnace in the world. Consolidated Zinc, having failed to develop suitable new mining projects, merged from 1962 with the Rio Tinto Company, a mining company. 
The resulting company, known as The Rio Tinto – Zinc Corporation (RTZ), and its main subsidiary, Conzinc Riotinto of Australia (CRA), would eventually become today's Rio Tinto. With smelting cheaper elsewhere in the world, the site ceased production in the 1990s, but remained open as a stock-holding and distribution centre until 2003. Plants and support services in operation during the late 1960s include: 12. The Sulfuric Acid Plant 3. The Vertical Retort Plant – a zinc plant 4. The Sinter Plant 5. The Cadmium Plant 6. The Beryllium Plant 7. The Works Laboratory 8. The General Stores 9. The Changing Rooms 10. The Hydrofluoric Acid Plant 11. The Isceon Plant – a hydrocarbon refrigerant plant 12. The Aluminum Sulfate Plant 13. The Plant Investigation Department 14. The Sample House 15. Yard and Traffic 16. Vehicle Shop 17. Main workshop 18. Water Fitters Shop 19. Ammonium Sulfate Plant 20. The Works Study Department 21. The Model Shop 22. The Works Estimators Department 23. The Medical Department 24. The Fire Department 25. Security 26. The Instrument Shop 27. The Instrument Development Shop 28. Battery Acid plant 29. The Zinc Stores 30. Personnel Office 31. Main office block 32. Works Pay Stations 33. The Research Pilot Plant. 34. The Green Ore Store. 35. Works Labs 36. Zinc road canteen 37. Works canteen 38. Training Centre 39. Phosphate Plant 40. Staff canteen 41. Main gate entrance. 42. Zinc Ore bucket overhead delivery line - from ships at the docks. 43. Main employee car park. Redevelopment In 2012 SITA UK started redevelopment of the site, but after construction workers were affected by mustard-gas type symptoms, the Ministry of Defence were called in to test and approve the site. However, after MoD approval, a few months later construction workers found a mustard gas shell, which was disposed of by the 11 Explosive Ordnance Disposal Regiment RLC at Porton Down. 
The site was closed off for a year while experts from the Defence Science and Technology Laboratory conducted a series of tests. In late 2013 MoD clearance was given, allowing the site to be redeveloped as a supermarket distribution centre for Asda, and a recycling plant for SITA UK. See also William Champion (metallurgist) References Manufacturing companies established in 1917 Military history of the United Kingdom during World War I Former nationalised industries of the United Kingdom Government munitions production in the United Kingdom Defunct companies based in Bristol Zinc smelters Non-ferrous metallurgical works in the United Kingdom Former Rio Tinto (corporation) subsidiaries Port of Bristol Avonmouth 1917 establishments in England 2003 disestablishments in England
National Smelting Company
[ "Chemistry" ]
1,352
[ "Non-ferrous metallurgical works in the United Kingdom", "Metallurgical facilities" ]
42,742,946
https://en.wikipedia.org/wiki/National%20data%20protection%20authority
There are several national data protection authorities across the world, tasked with protecting information privacy. In the European Union and the EFTA member countries, their status was formalized by the Data Protection Directive and they were involved in the Madrid Resolution. This project is a part of the work of the International Law Commission of the United Nations. Authorities by group of states At the European level, these are the Article 29 Working Party (G29) and the European Data Protection Supervisor (EDPS). The process was backed in 2005 by the Council of Europe, during the World Summit on the Information Society (Tunis, November 2005), and in 2006/2007 within forums on Internet governance (Athens 2006, Rio 2007). On 12 June 2007, the OECD issued a recommendation entitled "OECD Recommendation on Cross-border Co-operation in the Enforcement of Laws Protecting Privacy". It aimed to improve national privacy law enforcement so that national authorities can better cooperate with foreign authorities and put in place efficient international mechanisms to ease trans-frontier cooperation for legislation protecting privacy. This recommendation was implemented with the 2010 founding of the Global Privacy Enforcement Network. An Ibero-American network of data protection authorities also exists; in May 2008, during its 6th meeting in Colombia, it issued a declaration asking international conferences on data protection and privacy to "pursue their efforts, regardless of their geographical location, in order to adopt common legal instruments". Another network is that of the Central and Eastern European data protection authorities (CEDPA). At its June 2008 meeting in Poland, this network expressed its will to pursue and strengthen its activities within the CEDPA, notably to elaborate common solutions and assist new members with the establishment of data protection legislation. 
List of national data protection authorities European Economic Area : Austrian Data Protection Authority () : Belgian Data Protection Authority (, , ), also known as APD-GBA : Bulgarian Data Protection Authority () : Office of the Commissioner for Personal Data Protection () : Office for Personal Data Protection () : Danish Data Protection Agency () : Estonian Data Protection Inspectorate () : Office of the Data Protection Ombudsman () : (lit. 'National Commission on Informatics and Liberty'), also known as CNIL : Federal Commissioner for Data Protection and Freedom of Information () Note: Competent supervisory authorities for the enforcement of data protection in the private sector are the respective state authorities. : Hellenic Data Protection Authority (), also known as HDPA : Hungarian National Authority for Data Protection and Freedom of Information () : Data Protection Authority () : Data Protection Commissioner (), also known as DPC : Italian Data Protection Authority (), also known as Italian DPA : Data State Inspectorate (, ) : Datenschutzstelle : State Data Protection Inspectorate () : National Commission for Data Protection (, ), also known as CNPD : Office of the Information and Data Protection Commissioner, also known as IDPC : Dutch Data Protection Authority () : Norwegian Data Protection Authority () : Polish Data Protection Commissioner () : National Commission Data Protection (), also known as NCDP : National Authority for the Supervision of Personal Data Processing (), also known as ANSPDCP : Office for Personal Data Protection of the Slovak Republic () : Information Commissioner of the Republic of Slovenia () : Spanish Data Protection Agency () : Transparency and Data Protection Council of Andalusia () : Basque Data Protection Authority (, ) : Catalan Data Protection Authority () : Swedish Data Protection Authority (), also known as Swedish DPA : Information Commissioner's Office, also known as ICO Europe : Information and Data Protection 
Commissioner (IDP) (Komisionerit për të Drejtën e Informimit dhe Mbrojtjen e të Dhënave Personale (KDIMDP)) : Data Protection Agency of Andorra () : Croatian Personal Data Protection Agency () : Personal Data Protection Service () : Data Protection Office : Directorate for Personal Data Protection () : Office of the Data Protection Supervisor : Commission de contrôle des informations nominatives (lit. 'Personal Data Control Board'), also known as CCIN : Federal Service for Supervision in the Sphere of Telecom, Information Technologies and Mass Communications (Roskomnadzor) : Commissioner for Information of Public Importance and Personal Data Protection () : Federal Data Protection and Information Commissioner (, , ), also known as FDPIC : Turkish Data Protection Authority () : Ukrainian Parliament Commissioner for Human Rights () Africa : Data Protection Agency (), known as APD : No national authority is responsible for data protection. : Data Protection Commission : Commission nationale de contrôle de la protection des données à caractère personnel (lit. 'National Commission for the Control of the Protection of Personal Data'), also known as CNDP : No national authority is responsible for data protection. : National Information Technology Development Agency (NITDA) and Nigerian Communications Commission (NCC) provide services regarding data protection. : Commission de protection des Données Personnelles (lit. 'Commission for the protection of Personal Data'), also known as CDP : Information Regulator : National Authority for Protection of Personal Data (), known as INPDP : There is currently no data protection authority but the Zimbabwe Media Commission comments on the degree of protection of privacy from public bodies programs. 
Asia : Cyberspace Administration of China (CAC) : Office of the Privacy Commissioner for Personal Data (PCPD) : Data Protection Board of India : Personal Data Protection Authority Institute : The Privacy Protection Authority () : Personal Information Protection Commission (Japan) (PPC) : Data protection is regulated by the state. : Office for Personal Data Protection, known as OPDP : There is a Personal Data Protection Commissioner : National Commission for Personal Data Protection : National Privacy Commission : Qatar Ministry of Transport and Communications : No national authority is responsible for data protection. : A Personal Data Protection Commission is created following the Personal Data Protection Act 2012 (Singapore) : Personal Information Protection Commission (South Korea) (PIPC) : No national authority is responsible for data protection. : Office of the Personal Data Protection Committee : No national authority is responsible for data protection. : Regulators for data protection are sector-specific. Oceania : Office of the Australian Information Commissioner : Privacy Commissioner (New Zealand) North America : Office of the Privacy Commissioner of Canada () : National Institute of Transparency for Access to Information and Personal Data Protection () : There is no single national authority. South America : Dirección Nacional de Protección de Datos Personales (lit. 'National Directorate for Personal Data Protection'), known as PDP : No national authority is responsible for data protection. : National Data Protection Authority (ANPD) : There is no dedicated authority. : Superintendency of Industry and Commerce (SIC) : Ministerio de Justicia y Derechos Humanos (Perú) (lit. 'Ministry of Justice and Human Rights') : Personal Data Control and Regulatory Unit. : No national authority is responsible for data protection. 
Central America : Agency for the Protection of Individual's Data (), known as PRODHAB : No national authority is responsible for data protection. : National Civil Registry () and Institute for the Access to Public Information () : No national authority is responsible for data protection. See also General aspects Behavioural targeting Biometric Information Privacy Act CNIL Cookies (Internet) Data security Database Digital identity Geolocation Privacy and Surveillance Act Health data Identity (psychology) Identity (social science) Information leakage Information security Obfuscation On the Internet, nobody knows you're a dog Passenger name record Social web User profile Violation of privacy Technical aspects Digital certificate OpenID Strong authentication :Category:Identity management Legal aspects Escrow Identity document Identity theft Personal identity verification Protection Profile References External links List of national data protection authorities in Europe International Conference of Data Protection and Privacy Commissioners Handbook on European data protection law Comparison of data protection laws around the world Information privacy
National data protection authority
[ "Engineering" ]
1,623
[ "Cybersecurity engineering", "Information privacy" ]
42,744,533
https://en.wikipedia.org/wiki/Lauri%20Love
Lauri Love (born 14 December 1984, United Kingdom) is a British activist who was previously charged by the United States for his alleged activities with the hacker collective Anonymous. Love's case has been cited as precedent in the Julian Assange extradition proceedings. Early life and education Love is from Stradishall, Suffolk. His parents, Alexander Love, a prison chaplain at HM Prison Highpoint North, and Sirkka-Liisa Love (a Finnish citizen), who also works at the prison, live in Stradishall. He has dual citizenship of the United Kingdom and Finland. After dropping out of sixth form college and working in a turkey plant, Love applied for a Finnish passport and then served in the Finnish Army for six months, became a conscientious objector, and finished another six months of his obligation in alternative civilian service. After that, he enrolled at the University of Nottingham in England, dropping out in his second term after a physical and mental collapse, and then at the University of Glasgow in Scotland, dropping out in his second year, again for health reasons. He was part of the 2011 Hetherington House Occupation, a student protest at Glasgow University. United States indictment In January 2013, the website of the United States Sentencing Commission was replaced with a video protesting the treatment of activist Aaron Swartz, who had committed suicide days earlier. The video claimed that those responsible had obtained secrets from the United States Army, Missile Defense Agency, and NASA, but they were only ever released in encrypted form. The subsequent investigation named Lauri Love in two indictments (2013 in the District of New Jersey, 2014 in the Southern District of New York and the Eastern District of Virginia) for allegedly "breaching thousands of computer systems in the United States and elsewhere – including the computer networks of federal agencies – to steal massive quantities of confidential data". The United States made an extradition request. 
Love's attorney in America was Tor Ekeland. National Crime Agency arrest The National Crime Agency (NCA) arrested Love in October 2013. In February 2015, BBC News revealed that Love was taking legal action for the return of computers seized by the NCA when he was arrested. In May 2016, Judge Nina Tempia of the Westminster Magistrates' Court ruled that Love did not have to disclose his passwords or encryption keys to the NCA. Extradition hearing During Love's two-day extradition hearing on 28 and 29 June 2016 at the Westminster Magistrates' Court in London, his father testified that Lauri Love is autistic and so should not be extradited. Specifically, he testified that his son was not diagnosed as autistic until he was an adult serving in the Finnish Army. Psychologist Simon Baron-Cohen, who diagnosed Love as autistic in 2012, testified that Love should not be extradited because of his diagnosed disorders, which also include eczema, psychosis, and depression. Baron-Cohen stated that Love told him that he would commit suicide if extradited. Love, who lived at home with his parents, testified at his extradition hearing on 29 June 2016. He was supported by the Courage Foundation. Love's barrister for this extradition hearing was Ben Cooper of Doughty Street Chambers. The case was adjourned. On 16 September 2016, at Westminster Magistrates' Court, a judge ruled that Love could be extradited to the United States. Love's solicitor Karen Todner said that they would appeal, and on 5 February 2018, Lord Chief Justice Lord Burnett and Mr Justice Ouseley, at the High Court, upheld his appeal against extradition because his extradition would be "oppressive by reason of his physical and mental condition". Popular culture In January 2018, it was announced that novelist Frederick Forsyth would publish a novel inspired by the Lauri Love and Gary McKinnon stories. The novel, The Fox, was released in Autumn 2018.
Personal life As of the late 2010s, Love was in a long-term relationship with fashion model Sylvia Mann. References External links 1984 births Alumni of the University of Suffolk Anonymous (hacker group) Computer law English people of Finnish descent English people of Scottish descent Finnish people of British descent Finnish people of Scottish descent Living people Hackers People from the Borough of St Edmundsbury People with Asperger syndrome Finnish people with disabilities English people with disabilities Alumni of the University of Nottingham
Lauri Love
[ "Technology" ]
888
[ "Computer law", "Lists of people in STEM fields", "Computing and society", "Hackers" ]
42,744,699
https://en.wikipedia.org/wiki/Salvinia%20effect
The Salvinia effect describes the permanent stabilization of an air layer upon a hierarchically structured surface submerged in water. Based on biological models (e.g. the floating ferns Salvinia, the backswimmer Notonecta), biomimetic Salvinia surfaces are used as drag-reducing coatings (reductions of up to 30% were measured on the first prototypes). When applied to a ship hull, the coating would allow the boat to float on an air layer, reducing energy consumption and emissions. Such surfaces require an extremely water repellent super-hydrophobic surface and an elastic hairy structure in the millimeter range to entrap air while submerged. The Salvinia effect was discovered by the biologist and botanist Wilhelm Barthlott (University of Bonn) and his colleagues and has been investigated on several plants and animals since 2002. Publications and patents followed between 2006 and 2016. The best biological models are the floating ferns (Salvinia), with highly sophisticated hierarchically structured hairy surfaces, and the backswimmers (e.g. Notonecta), with a complex double structure of hairs (setae) and microvilli (microtrichia). Three of the ten known Salvinia species show a paradoxical chemical heterogeneity: hydrophilic hair tips which, in addition to the super-hydrophobic plant surface, further stabilize the air layer. Salvinia, Notonecta and other organisms with air retaining surfaces Immersed in water, extremely water repellent (super-hydrophobic) structured surfaces trap air between the structures, and this air layer is maintained for a period of time. A silvery shine, due to the reflection of light at the interface of air and water, is visible on the submerged surfaces. Long-lasting air layers also occur in aquatic arthropods which breathe via a physical gill (plastron), e.g.
the water spider (Argyroneta) and the saucer bug (Aphelocheirus). Air layers are presumably also conducive to the reduction of friction in fast-moving animals under water, as is the case for the backswimmer Notonecta. The best-known examples of long-term air retention under water are the floating ferns of the genus Salvinia. About ten species of very diverse sizes are found in lentic water in all warmer regions of the earth; one widespread species (S. natans) can even be found in the temperate climate of Central Europe. The ability to retain air is presumably a survival technique for these plants. The upper side of the floating leaves is highly water repellent and possesses highly complex, very distinctive, species-specific hairs. Some species present multicellular free-standing hairs of 0.3–3 mm length (e.g. S. cucullata), while on others two hairs are connected at the tips (e.g. S. oblongifolia). S. minima and S. natans have four free-standing hairs connected at a single base. The Giant Salvinia (S. molesta), as well as S. auriculata and other closely related species, displays the most complex hairs: four hairs grow on a shared shaft and are connected at their tips. These structures resemble microscopic eggbeaters and are therefore referred to as “eggbeater trichomes”. The entire leaf surface, including the hairs, is covered with nanoscale wax crystals, which are the reason for the water repellent properties of the surface. These leaf surfaces are therefore a classical example of “hierarchical structuring”. The eggbeater hairs of Salvinia molesta and closely related species (e.g. S. auriculata) show an additional remarkable property. The four cells at the tip of each hair (the anchor cells), as opposed to the rest of the hair, are free of wax and therefore hydrophilic; in effect, wettable islands surrounded by a super-hydrophobic surface.
This chemical heterogeneity, the Salvinia paradox, enables a pinning of the air-water interface to the plant and increases the pressure resistance and long-term stability of the air layer. The air-retaining surface of the floating fern does not, however, serve to reduce friction. The ecologically extremely adaptable Giant Salvinia (S. molesta) is one of the most important invasive plants in all tropical and subtropical regions of the earth and is the cause of economic as well as ecological problems. Its growth rate might be the highest of all vascular plants. In the tropics and under optimal conditions, S. molesta can double its biomass within four days. The Salvinia effect described here most likely plays an essential role in its ecological success; the multilayered floating plant mats presumably maintain their function of gas exchange within the air layer. The working principle The Salvinia effect defines surfaces which are able to permanently keep relatively thick air layers as a result of their hydrophobic chemistry in combination with a complex architecture in nano- and microscopic dimensions. This phenomenon was discovered during systematic research on aquatic plants and animals by Wilhelm Barthlott and his colleagues at the University of Bonn between 2002 and 2007. Five criteria have been defined that enable the existence of stable air layers under water and, as of 2009, define the Salvinia effect: (1) hydrophobic surface chemistry in combination with (2) nanoscale structures to generate superhydrophobicity, (3) microscopic hierarchical structures ranging from a few micrometers to several millimeters, with (4) undercuts and (5) elastic properties. Elasticity appears to be important for the compression of the air layer under dynamic hydrostatic conditions. An additional optimizing criterion is the chemical heterogeneity of the hydrophilic tips (the Salvinia paradox). This is a prime example of hierarchical structuring on several levels.
In plants and animals, air-retaining Salvinia-effect surfaces are always fragmented into small compartments with a length of 0.5 to 8 cm, and the borders are sealed against loss of air by particular microstructures. Compartments with sealed edges are also important for technical applications. The working principle is illustrated by the Giant Salvinia. The leaves of S. molesta are capable of keeping an air layer on their surfaces for a long time when submerged in water. If a leaf is pulled under water, the leaf surface shows a silvery shine. The distinctive feature of S. molesta lies in its long-term stability. While the air layer on most hydrophobic surfaces vanishes shortly after submerging, S. molesta is able to stabilize the air for several days to several weeks. The time span is limited only by the lifetime of the leaf. The high stability is a consequence of a seemingly paradoxical combination of a superhydrophobic (extremely water repellent) surface with hydrophilic (water attracting) patches on the tips of the structures. When submerged under water, no water can penetrate the space between the hairs due to the hydrophobic character of the surfaces. However, the water is pinned to the tip of each hair by the four wax-free (hydrophilic) end cells. This fixation results in a stabilization of the air layer under water. The principle is shown in the figure. Two submerged, air-retaining surfaces are shown schematically: on the left-hand side, a hydrophobic surface; on the right-hand side, a hydrophobic surface with hydrophilic tips. If negative pressure is applied, a bubble is quickly formed on the purely hydrophobic surface (left), stretching over several structures. With increasing negative pressure the bubble grows and can detach from the surface. The air bubble rises to the surface and the air layer decreases until it vanishes completely.
In the case of the surface with hydrophilic anchor cells (right), the water is pinned to the tip of every structure by the hydrophilic patch on top. These linkages still allow the formation of a bubble stretching over several structures, but bubble release is suppressed because several links have to be broken first. Forming a detachable bubble therefore requires a higher energy input: an increased negative pressure is needed to form a bubble able to detach from the surface and rise upwards. Biomimetic technical application Underwater air-retaining surfaces are of great interest for technical applications. If a transfer of the effect to a technical surface is successful, ship hulls could be coated with this surface to reduce friction between ship and water, resulting in lower fuel consumption and fuel costs and a reduced environmental impact (the air layer also has an antifouling effect). In 2007, the first test boats achieved a ten percent friction reduction, and the principle was subsequently patented. Scientists now expect a friction reduction of over 30%. The underlying principle is shown schematically in a figure comparing two laminar flow profiles: water flowing over a solid surface and water flowing over an air-retaining surface. If water flows over a smooth solid surface, the velocity at the surface is zero due to the friction between water and surface molecules. If an air layer is situated between the solid surface and the water, the velocity is higher than zero. The lower viscosity of air (55 times lower than the viscosity of water) reduces the transmission of friction forces by the same factor. Researchers are currently working on the development of a biomimetic, permanently air-retaining surface modeled on S. molesta to reduce friction on ships.
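The factor-of-55 viscosity argument above can be checked with a short calculation: for simple laminar Couette shear, the wall stress is tau = mu * U / h, so replacing a sheared water layer with an air layer of the same thickness lowers the transmitted stress by roughly the viscosity ratio. A minimal sketch; the flow speed and layer thickness are illustrative assumptions, not values from the article:

```python
# Compare laminar (Couette) shear stress over a solid wall versus over a
# thin trapped air layer. Viscosity values are standard approximate
# figures for ~20 C; speed and gap are assumed for illustration.

MU_WATER = 1.0e-3   # dynamic viscosity of water, Pa*s (approx.)
MU_AIR   = 1.8e-5   # dynamic viscosity of air, Pa*s (approx.)

def couette_shear_stress(mu, speed, gap):
    """Shear stress tau = mu * U / h for simple laminar Couette flow."""
    return mu * speed / gap

speed = 2.0    # relative flow speed, m/s (assumed)
gap = 1.0e-3   # thickness of the sheared layer, m (assumed)

tau_water = couette_shear_stress(MU_WATER, speed, gap)
tau_air = couette_shear_stress(MU_AIR, speed, gap)

print(f"viscosity ratio water/air: {MU_WATER / MU_AIR:.0f}")
print(f"shear stress over solid wall: {tau_water:.3f} Pa")
print(f"shear stress over air layer:  {tau_air:.3f} Pa")
```

With these numbers the viscosity ratio comes out near 56, in line with the "55 times lower" figure quoted in the text; the transmitted shear stress drops by the same factor.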
Salvinia-effect surfaces have been shown to quickly and efficiently adsorb oil and can be used for oil-water separation applications. Animations See also Lotus effect Petal effect References Further reading P. Ditsche-Kuru, M. J. Mayser, E. S. Schneider, H. F. Bohn, K. Koch, J.-E. Melskotte, M. Brede, A. Leder, M. Barczewski, A. Weis, A. Kaltenmaier, S. Walheim, Th. Schimmel, W. Barthlott: Eine Lufthülle für Schiffe – Können Schwimmfarn und Rückenschwimmer helfen Sprit zu sparen? In: A. B. Kesel, D. Zehren (eds.): Bionik: Patente aus der Natur – 5. Bremer Bionik Kongress. Bremen 2011, pp. 159–165. S. Klein: Effizienzsteigerung in der Frachtschifffahrt unter ökonomischen und ökologischen Aspekten am Beispiel der Reederei Hapag Lloyd, Projektarbeit Gepr. Betriebswirt (IHK), Akademie für Welthandel, 2012. W. Baumgarten, B. Böhnlein, A. Wolter, M. Brede, W. Barthlott, A. Leder: Einfluss der Strömungsgeschwindigkeit auf die Stabilität von Luft-Wasser Grenzflächen an biomimetischen, Luft haltenden Beschichtungen. In: B. Ruck, C. Gromke, K. Klausmann, A. Leder, D. Dopheide (eds.): Lasermethoden in der Strömungsmesstechnik. 22. Fachtagung, 9.–11. September 2014, Karlsruhe (Tagungsband). Dt. Ges. für Laser-Anemometrie GALA e.V., Karlsruhe, pp. 36.1–36.5 (online). M. Rauhe: Salvinia-Effekt – Gute Luft unter Wasser. In: LOOKIT. No. 4, 2010, pp. 26–28. External links www.lotus-salvinia.de Video: Das Geheimnis des Südamerikanischen Schwimmfarns Video: Lufthaltende Schiffsbeschichtungen nach biologischem Vorbild zur Reibungsreduktion Nanotechnology Surface science
Salvinia effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,532
[ "Nanotechnology", "Condensed matter physics", "Surface science", "Materials science" ]
42,744,942
https://en.wikipedia.org/wiki/Enumerate%20%28project%29
ENUMERATE is a collaborative project, led by Collections Trust in the United Kingdom and funded by the European Commission, to create "a reliable baseline of statistical data about digitization, digital preservation and online access to cultural heritage in Europe". Cultural institutions increasingly use digital media to disseminate their heritage material. To obtain a useful overview of the current state, a precursor NUMERIC project to gather statistics was carried out between 2007 and 2009. Three reports have been published under the auspices of ENUMERATE. The first ENUMERATE Core survey report was published in May 2012. In 2013 a report on the ENUMERATE Thematic Surveys on Digital Collections in European Cultural Heritage Institutions was published. The second ENUMERATE Core survey report was published in January 2014. Formal partners The partners of the ENUMERATE EC-funded project, collectively known as the ENUMERATE network, are: Collections Trust, UK (project coordinator); Digitaal Erfgoed Nederland (DEN), Netherlands; Stiftung Preußischer Kulturbesitz, Germany; Digibís, Spain; FARO Vlaams Steunpunt voor Cultureel Erfgoed, Belgium; Ministère de la Culture et de la Communication, France; Österreichische Nationalbibliothek, Austria; Narodna in univerzitetna knjižnica (National and University Library), Slovenia; Országos Széchényi Könyvtár (OSZK; National Széchényi Library), Hungary; The European Library (hosted by the Koninklijke Bibliotheek, Netherlands). Footnotes External links Enumerate.eu Information technology Cultural heritage of Europe Research projects
Enumerate (project)
[ "Technology" ]
357
[ "Information and communications technology", "Information technology" ]
42,744,956
https://en.wikipedia.org/wiki/Lorella%20Jones
Lorella Margaret Jones (February 22, 1943 – February 9, 1995) was a professor of physics and director of the Computer-based Education Research Laboratory (CERL) at the University of Illinois at Urbana–Champaign. Jones was interested in the application of computers to physics education and championed the cause of women in physics. She wrote an essay entitled "Intellectual Contributions of Women in Physics" in Women of Science: Righting the Record. Early life Lorella Margaret Jones was born on February 22, 1943, in Toronto to an astronomer and industrial physicist. She grew up with her parents, Donald A. Jones and Florence Shirley Patterson Jones of Urbana, and a sister, Irene Jones of Livermore, California. Aside from her interest in physics, Jones enjoyed hobbies such as gardening and kayaking; she kept a garden in Meadowbrook Park, south of Urbana, where she grew many vegetables that she shared with her students and colleagues. She would also spend a summer month on an island in Lake Vermillion every year. Jones was very much a nature enthusiast as well as a physicist. Education and career She studied at Harvard, concentrating on mathematics, and graduated magna cum laude in 1964. From Harvard she moved on to Caltech, receiving an M.Sc. in 1966 and a Ph.D. in 1968. She became an associate professor of physics at Illinois in 1974, later becoming a full professor. Her research was in high-energy physics, particularly the forces binding quarks into nuclear particles. Her research in theoretical high-energy physics focused on four areas: Regge pole theory, phenomenological models of photomeson production, jet calculus in quantum chromodynamics (QCD), and the use of Grassmann coordinates to describe internal symmetries.
She took a sabbatical in 1981–1982 to work at CERN, becoming a fellow of the American Physical Society in the division of particles and fields in 1982. She became director of the university's Computer-based Education Research Laboratory in 1992, remaining at the University of Illinois for her whole career and publishing a total of 64 papers based on her research. Research in Illinois She investigated the photoproduction of excited mesons with H.W. Wyld. She examined the reality of the A1 state through careful partial-wave analysis of the reaction and confirmed the existence of that axial meson via the so-called Deck mechanism. This led her to work out the consequences of hadronic charm production through a "gluon fusion" model and to predict the principal features of bound quark-antiquark production, especially the energy dependence. She then began research in the domain of partons and QCD. In collaboration with Migneron and others, she derived equations in the jet calculus of partons in which the effective propagators evolved downward rather than upward, as in the Altarelli–Parisi scheme. She subsequently simulated jets through Monte Carlo methods, showing how QCD cascades from quarks and gluons can, in principle, be recognized. She then classified jets through the longitudinal correlations between hard particles produced in hadronic jets. She later became intrigued by the use of hidden Grassmann variables for understanding and classifying particle symmetries. In work with Delbourgo and White, she solved the anharmonic Grassmann oscillator, which is the fermionic analog of the ordinary anharmonic oscillator. She went on to derive Dirac-like equations of motion that incorporated the Grassmann variables in a fundamental way, and obtained quantized mass spectra, demonstrating the usefulness of such ideas. Death Lorella Jones died on February 9, 1995, aged 51, in a nursing home in Champaign, Illinois.
The cause of her death was cancer. References External links Scientific publications of Lorella Jones on INSPIRE-HEP 1943 births 1995 deaths Particle physicists Scientists from Toronto Physics educators American women physicists Computer-based Education Research Laboratory People associated with CERN Radcliffe College alumni California Institute of Technology alumni University of Illinois Urbana-Champaign faculty Fellows of the American Physical Society 20th-century American physicists 20th-century American women scientists
Lorella Jones
[ "Physics" ]
905
[ "Particle physicists", "Particle physics" ]
42,746,040
https://en.wikipedia.org/wiki/Madeleine%20M.%20Joulli%C3%A9
Madeleine M. Joullié (born March 29, 1927) is an American-Brazilian organic chemist. She was the first woman to join the University of Pennsylvania chemistry faculty as well as the first female organic chemist to be appointed to a tenure track position in a major American university. She was one of the first affirmative action officers at the University of Pennsylvania. She has a distinguished record as a teacher of both undergraduate and graduate-level organic chemistry, and as a mentor of students. Joullié is also an active researcher in organic chemistry who has published three textbooks of organic chemistry, more than 18 review articles, and more than 300 scientific papers. Her work in synthesizing organic compounds such as tilorone, furanomycin, and numerous cyclopeptides has led to the development of antibiotic and antiviral drugs. Joullié has received numerous awards, including the 1978 Garvan Medal from the American Chemical Society, in recognition of her accomplishments in teaching and research. Early life Madeleine Joullié was born in Paris, France. Her father, an international businessman, soon moved to Rio de Janeiro, Brazil, where she attended the Lycée Français. The family also lived briefly in São Paulo. There she attended a private school, the . Joullié moved to the United States to study in 1946. She obtained a B.S. degree in chemistry from Simmons College, a women's college in Boston, in 1949. Then she moved to Philadelphia, where she was the only full-time female graduate student in chemistry at the University of Pennsylvania. There weren't even bathrooms for women in the chemistry building. She earned an M.S. from the University of Pennsylvania in 1950 and a Ph.D. in 1953. She worked with Allan R. Day, who inspired Joullié as both a researcher and a teacher. Also at the university, Joullié met Richard E. Prange (1932–2008), a condensed matter theorist in the physics department. They married in 1959. 
Career In 1953, Madeleine Joullié joined the University of Pennsylvania chemistry faculty, the first woman to do so. Originally in a non-tenure-track position, Joullié taught undergraduate organic chemistry five days a week and ran the lab. In her first five years, none of the graduate students would work with her, so she carried out research in collaboration with undergraduates. As more women entered the department, first female and later male graduate students began to work with her. Joullié received a Fulbright scholarship to lecture at the University of Brazil (1965). While there, she wrote a textbook in Portuguese on heterocyclic chemistry. She has also been a visiting professor at Columbia University (1968), CNRS (Grenoble, France, 1987), the University of California at Santa Barbara (1989), and Cambridge, England (1997), but the majority of her career has been spent at the University of Pennsylvania. Joullié became a full professor in 1974. Community involvement Joullié was active in the safety committee at UPenn, helping to identify and enforce safe work guidelines for the chemistry department. In 1970, Joullié and Mildred Cohn worked on the Committee on the Status of Women, which gathered data and documented the second-class status of women at Penn. The percentage of women holding faculty positions was far below the percentage of qualified women with Ph.D.s, and those women who had positions held lower rank, received lower salaries, and waited longer for promotion. The committee developed affirmative action guidelines, supporting the university's efforts to recruit more women and minority faculty. Later in the 1970s, dean Vartan Gregorian appointed her as one of the first affirmative action officers at the University of Pennsylvania. Between 1976 and 1980, Joullié reviewed the hiring and promotions processes of the School of Arts and Sciences, comparing resumes of male and female candidates.
On some occasions, when she felt that qualified women had been ignored in the hiring process, she flatly refused to sign off on new hires. Her effectiveness in the position led Provost Eliot Stellar to appoint her as the chair of the university's Council for Equal Opportunity, overseeing affirmative action in all departments. Of her affirmative action activities, Joullié says: "I served in that role for seven years, without help, extra pay, or teaching relief... It was not a pleasant job, but it did produce results. The School of Arts and Sciences was so successful with their affirmative action program that I was then asked by the provost to chair the Council for Equal Opportunity to oversee the affirmative action programs of all the schools at Penn." Described by professor Helen Davies as "fearless and formidable", Joullié is credited as having played a key role in creating a culture of equality for women at the university. Joullié also helped the American Chemical Society to develop professional guidelines for chemists. Scientific achievements Early in her career, her Ph.D. advisor, Allan R. Day, interested her in heterocyclic compounds, and she did early work with aromatic and heterocyclic scaffolds and fluorinated heterocycles. She also did significant research on heterocyclic ketones in the 1970s, collaborating with Peter Yates of the University of Toronto, and receiving the 1972 ACS Philadelphia Section Award. In the early 1970s, she successfully synthesized tilorone, an interferon inducer which helps to protect cells. Since then, much of Joullié's research has focused on the synthesis of natural products. In 1980, she reported the first asymmetric total synthesis of the antibiotic (+)-furanomycin, the first use of the Ugi 4CC in the synthesis of a non-proteinogenic amino acid. She helped to develop methodologies for aromatic substitution, and introduced the term "chirality transfer". 
Her subsequent work has led to the total synthesis of several natural products, including muscarine, geiparvarin, ascofuranone, furanomycin, and dihydromauritine A. Working with Judah Folkman at Harvard Medical School and Paul B. Weisz at the University of Pennsylvania, Joullié helped to synthesize beta-cyclodextrin sulfate, a ring-shaped sugar molecule that attaches to the walls of growing blood vessels. By capturing and delivering cortisone molecules, it decreases the growth of new capillaries. Chemotherapies can thus target aberrant angiogenesis, the growth of new blood vessels, and restrict the growth of malignant tumours. Joullié's specialized compounds made Folkman's original treatments 100 to 1000 times more potent. Beta-cyclodextrin sulfate is also useful in limiting restenosis, growth of cells on artery walls that can lead to blockages at the site of surgical procedures. Another particularly interesting area of research has involved the Didemnin class of macrocyclic depsipeptide. Derived from a marine tunicate of the family Didemnidae, didemnins exhibit antitumor, antiviral and immunosuppressive qualities. Joullié's asymmetric total synthesis of didemnin B in 1990 was a landmark event leading to fundamental contributions to both the chemistry and biology of this intriguing class of natural products. Didemnin B was the first marine natural product to be used in clinical trials against cancer. Joullié has produced several didemnin analogs. She has also developed probe molecules that can trace didemnins, allowing researchers to more effectively study their biological activities. Another area of research has been ninhydrin analogs. Joullié's lab has developed compounds with enhanced chromogenic and fluorogenic properties, useful in fingerprinting and forensic science. Joullié was asked by the United States Secret Service to help in developing fingerprint reagents. 
Fingerprint chemicals must be non-toxic and can't damage sensitive evidence, as well as meeting other criteria. Joullié, with students Olga Petrovskaia and Diane Hauze, developed and patented a class of compounds called indanediones. Like ninhydrins, indanediones react with amino acids from the oil on people's fingertips. They have the advantages of being cheaper to produce, easier to use, and more sensitive, providing more clarity and sharper contrast. Indanediones are standardly used in the first stage of forensic identification of latent fingerprints. Awards Madeleine M. Joullié has received a substantial number of awards, including the following: John Scott Medal (2015) Edward Leete Award (2009) Arthur C. Cope Senior Scholar Award (2002) from the American Chemical Society Distinguished Achievement Award University of Pennsylvania Graduate Student Associate and Phi Lambda Upsilon (1999) ACS Award for Encouraging Women into Careers in the Chemical Sciences (1998) H. Martin Friedmann Lectureship, Rutgers University (1995) Henry Hill Award (1994) Philadelphia Organic Chemist's Club Award (1994) Second Annual Association of American Women in Science, Philadelphia Chapter Award (1991) Lindback Award for Distinguished Teaching (1991) American Institute of Chemists 34th Annual Scroll Award (1988) American Cyanamid Faculty Award (1984) Garvan Medal (1978) References Further reading External links Joullié Group, University of Pennsylvania A Video interview of Professor Joullié 1927 births French women chemists American women chemists American organic chemists 21st-century American chemists University of Pennsylvania alumni Recipients of the Garvan–Olin Medal University of Pennsylvania faculty Scientists from Paris French emigrants to the United States Simmons University alumni Living people American women academics 21st-century American women scientists Graduate Women in Science members
Madeleine M. Joullié
[ "Chemistry" ]
1,997
[ "Organic chemists", "American organic chemists" ]
42,746,762
https://en.wikipedia.org/wiki/Anthony%20Hilton
Anthony J. W. Hilton (born 4 April 1941) is a British mathematician specializing in combinatorics and graph theory. His current positions are as emeritus professor of Combinatorial Mathematics at the University of Reading and professorial research fellow at Queen Mary College, University of London. Education From 1951 to 1959, he attended Bedford School in Bedford, Bedfordshire, England. From there he attended Reading University, where he earned a bachelor's degree in 1963 and was awarded a PhD in 1967. His dissertation, "Representation Theorems for Integers and Real Numbers", was supervised by David E. Daykin. Work Much of his work has been done in pioneering techniques in graph theory. He has discovered many results involving Latin squares, including one which states that "if cells of an matrix are preassigned with no element repeated in any row or column then the remaining cells can be filled so as to produce a Latin square." Another noteworthy result states that given a k-regular graph with vertices, if then it is 1-factorizable. In 1998, he was awarded the Euler Medal for "a distinguished career in the work he has produced, the people he has trained, and his leadership in the development of combinatorics in Britain." Among the specific achievements cited are the creation of two new techniques for solving long-standing problems. Through the use of edge colorings in the context of embedding graphs, he was able to settle the Evans conjecture and the Lindner conjecture. Through the use of graph amalgamations he was able to show many results, including a method for enumerating Hamiltonian decompositions as well as a conjecture about embedding partial triple systems. References 1941 births Living people 20th-century British mathematicians 21st-century British mathematicians Graph theorists
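The completability result mentioned above (the Evans conjecture: a partial n × n Latin square with at most n − 1 preassigned cells, no element repeated in any row or column, can always be completed) can be illustrated with a small brute-force sketch. The backtracking search below is only an illustration for small n, not Hilton's edge-colouring technique, and the example square is an assumption of mine:

```python
# Brute-force completion of a partial Latin square (entries 0..n-1,
# empty cells marked None). Illustrates the Evans conjecture statement
# for a small case; exhaustive backtracking, so it either finds a
# completion or proves none exists.
from itertools import product

def complete(square):
    """Fill every empty cell; return the completed square, or None."""
    n = len(square)
    for r, c in product(range(n), repeat=2):
        if square[r][c] is None:
            # values already used in this row or column
            used = set(square[r]) | {square[i][c] for i in range(n)}
            for v in range(n):
                if v not in used:
                    square[r][c] = v
                    done = complete(square)
                    if done is not None:
                        return done
                    square[r][c] = None  # undo and try the next value
            return None  # dead end: backtrack
    return square  # no empty cells left

# n = 4 with n - 1 = 3 preassigned cells, no repeats in any row/column.
partial = [[0, None, None, None],
           [None, 1, None, None],
           [None, None, 2, None],
           [None, None, None, None]]
filled = complete(partial)
assert filled is not None  # a completion exists, as the theorem predicts
```

With four or more preassigned cells a completion can fail to exist (the classical example places n cells so that one empty cell has no legal value), which is why the n − 1 bound in the conjecture is sharp.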
Anthony Hilton
[ "Mathematics" ]
361
[ "Mathematical relations", "Graph theory", "Graph theorists" ]
42,747,055
https://en.wikipedia.org/wiki/Octopart
Octopart.com is a search engine for electronic and industrial parts headquartered in La Jolla, CA. It aggregates parts from distributors and manufacturers online, making them easy to search for and purchase. History Octopart was created by three physics grad-school dropouts, Andres Morey, Sam Wurzel, and Harish Agarwal, in 2007. After coming up with the idea for the site and leaving graduate school, Morey and Wurzel worked with Paul Graham and Jessica Livingston's Y Combinator. Octopart works with large distributors. In 2017, Octopart was acquired by Altium Limited. References Y Combinator companies 2007 establishments in California
Octopart
[ "Engineering" ]
141
[ "Electronics companies", "Engineering companies" ]
42,747,411
https://en.wikipedia.org/wiki/Brushy%20Fork%20Coal%20Impoundment
The Brushy Fork Coal Impoundment, also known as the Brushy Fork Coal Sludge Dam, is a large tailings dam on the Brushy Fork near Marfork in western Raleigh County of West Virginia, United States. It is located northwest of Beckley, the seat of Raleigh County. Brushy Fork flows into Little Marsh Fork, which then enters Marsh Fork, which is a tributary of the Coal River. The purpose of the dam is to store a sludge consisting of tailings and waste from a nearby coal mine. In 1995 Massey Energy received a permit to construct the dam. Over the years additional permits to increase the size and storage volume of the dam have been issued in the midst of local and regional opposition to its structural integrity. Currently at approximately in height, it is the tallest dam in the Western Hemisphere. When complete its designed height will be . Waste rock from the coal mining process is used as the dam filler. The dam currently impounds about of waste. This capacity will be increased to upon completion. References Dams in West Virginia Tailings dams Buildings and structures in Raleigh County, West Virginia Mining in West Virginia 1995 establishments in West Virginia
Brushy Fork Coal Impoundment
[ "Technology", "Engineering" ]
237
[ "Tailings dams", "Mining engineering", "Hazardous waste", "Mining equipment" ]
47,297,719
https://en.wikipedia.org/wiki/Millard%20H.%20Alexander
Millard Henry Alexander (born February 17, 1943, Boston, Massachusetts) is an American theoretical chemist. He is Distinguished University Professor at the University of Maryland, with appointments in the Department of Chemistry and Biochemistry and the Institute for Physical Science and Technology. He is the author of over 300 publications and an active researcher in the fields of molecular collision dynamics and theoretical chemistry. Research Alexander's research focus is the quantum-mechanical aspects of molecular collisions, in particular those involving open-shell species. More specifically, Alexander's work has focused on understanding chemical reactions where the Born–Oppenheimer approximation can be violated, by means of nonadiabatic coupling, spin–orbit interactions and conical intersections. Alexander's work is particularly important in understanding the and reactions. Organisational affiliations Alexander is a fellow of the American Physical Society and of the American Association for the Advancement of Science and a member of the International Academy of Quantum Molecular Science. In 2015 he received the Herschbach Medal for contributions to the theoretical study of the dynamics of molecular collisions. Since 2012 Alexander has served as the President of the Telluride Science Research Center. Selected publications . . . . . . References External links Millard Alexander's home page at the University of Maryland Hibridon program suite for inelastic scattering, photodissociation, and weakly-bound clusters 1943 births Living people Fellows of the American Physical Society Members of the International Academy of Quantum Molecular Science Harvard College alumni Theoretical chemists 21st-century American chemists University of Maryland, College Park faculty
Millard H. Alexander
[ "Chemistry" ]
311
[ "Theoretical chemists", "American theoretical chemists" ]
47,298,774
https://en.wikipedia.org/wiki/Penicillium%20raistrickii
Penicillium raistrickii is an anamorph species of fungus in the genus Penicillium which produces griseofulvin, patulin and verruculogen. References Further reading raistrickii Fungi described in 1933 Fungus species
Penicillium raistrickii
[ "Biology" ]
56
[ "Fungi", "Fungus species" ]
47,299,893
https://en.wikipedia.org/wiki/Paramural%20body
Paramural bodies are membranous or vesicular structures located between the cell walls and cell membranes of plant and fungal cells. When these are continuous with the cell wall, they are termed lomasomes, while they are referred to as plasmalemmasomes if associated with the plasmalemma. Function While their function has not yet been studied in great detail, it has been speculated that due to the morphological similarity of paramural bodies to the exosomes produced by mammalian cells, they may perform similar functions such as membrane vesicle trafficking between cells. Current evidence suggests that, like exosomes, paramural bodies are derived from multivesicular bodies. See also Exosome Endosome Golgi apparatus References Cell biology Biochemistry
Paramural body
[ "Chemistry", "Biology" ]
157
[ "Biochemistry", "Cell biology" ]
47,300,509
https://en.wikipedia.org/wiki/%CE%9C%28I%29%20rheology
{{DISPLAYTITLE:μ(I) rheology}} In granular mechanics, the μ(I) rheology is one model of the rheology of a granular flow. Details The inertial number I of a granular flow is a dimensionless quantity defined as I = |γ̇|d/√(P/ρ), where γ̇ij is the shear rate tensor, |γ̇| is its magnitude, d is the average particle diameter, P is the isotropic pressure and ρ is the density. It is a local quantity and may take different values at different locations in the flow. The μ(I) rheology asserts a constitutive relationship between the stress tensor of the flow and the rate of strain tensor: σij = −Pδij + μ(I)P γ̇ij/|γ̇|, where the eponymous μ(I) is a dimensionless function of I. As with Newtonian fluids, the first term −Pδij represents the effect of pressure. The second term represents a shear stress: it acts in the direction of the shear, and its magnitude is equal to the pressure multiplied by a coefficient of friction μ(I). This is therefore a generalisation of the standard Coulomb friction model. The multiplicative term μ(I)P/|γ̇| can be interpreted as the effective viscosity of the granular material, which tends to infinity in the limit of vanishing shear flow, ensuring the existence of a yield criterion. One deficiency of the μ(I) rheology is that it does not capture the hysteretic properties of a granular material. Development The μ(I) rheology was developed by Pierre Jop et al. in 2006. Since its initial introduction, much work has been carried out to modify and improve this rheology model. This model provides an alternative approach to the Discrete Element Method (DEM), offering a lower computational cost for simulating granular flows within mixers. See also Dilatancy (granular material) References Rheology Granularity of materials
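The quantities above can be evaluated directly. The sketch below computes the inertial number I = |γ̇|d/√(P/ρ) and the commonly used fitting form μ(I) = μs + (μ2 − μs)/(1 + I0/I) associated with Jop et al.; the numerical parameter values are illustrative assumptions for glass beads, not authoritative constants.

```python
import math

# Illustrative fit parameters (assumptions for this sketch).
MU_S = math.tan(math.radians(20.9))   # static friction coefficient mu_s
MU_2 = math.tan(math.radians(32.76))  # limiting friction mu_2 at large I
I_0 = 0.279                           # dimensionless fitting constant

def inertial_number(shear_rate, d, P, rho):
    """I = |gamma_dot| d / sqrt(P / rho): local and dimensionless."""
    return shear_rate * d / math.sqrt(P / rho)

def mu_of_I(I):
    """Fitting form mu(I) = mu_s + (mu_2 - mu_s) / (1 + I_0 / I)."""
    return MU_S + (MU_2 - MU_S) / (1.0 + I_0 / I)

def effective_viscosity(shear_rate, d, P, rho):
    """eta = mu(I) P / |gamma_dot|; diverges as the shear rate vanishes,
    which is what guarantees a yield criterion."""
    I = inertial_number(shear_rate, d, P, rho)
    return mu_of_I(I) * P / shear_rate
```

For example, 1 mm grains of density 2500 kg/m3 under 1 kPa pressure at a shear rate of 10/s give an inertial number well below 1, i.e. a quasi-static flow.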
Μ(I) rheology
[ "Physics", "Chemistry" ]
384
[ "Rheology", "Materials", "Particle technology", "Granularity of materials", "Matter", "Fluid dynamics" ]
47,301,692
https://en.wikipedia.org/wiki/Cray%20Urika-XA
The Cray Urika-XA extreme analytics platform, manufactured by supercomputer maker Cray Inc., was an appliance that analyzed the massive amounts of data, usually called big data, that supercomputers collect. It was introduced in 2015 and discontinued in 2017. Organizations that use supercomputers have traditionally used multiple smaller off-the-shelf systems for data analysis. But as organizations see a dramatic increase in the amount of data they collect, everything from research data to retail transactions, they need data analytics systems that can make sense of it and help them use it strategically. In a nod to organizations that lean toward open-source software, the Urika-XA came pre-installed with Cloudera Enterprise Hadoop and Apache Spark. References Further reading Nicole Hemsoth (15 Oct 2014) "Cray Launches Hadoop into HPC Airspace." HPCWire. "The Evolution of Data Analytics." Infographic. Eileen McNulty (22 May 2014). "Understanding Big Data: The Seven V's." Dataconomy. Andy Patrizio (30 Jun 2017). "Cray adds big data software to its supercomputers." NETWORKWORLD. Cray products
Cray Urika-XA
[ "Technology" ]
254
[ "Computing stubs" ]
47,301,819
https://en.wikipedia.org/wiki/Cray%20Urika-GD
The Cray Urika-GD is a graph discovery appliance, a computer application that finds and analyzes relationships and patterns in the data collected by a supercomputer. The Cray Urika-GD generates graphs based on large amounts of data, often from multiple sources, and makes useful connections among those data. Many organizations now have vast stores of information like this, called "big data", that they can analyze and use to improve their operations, products or services. One example of the appliance in use would be a healthcare organization that uses it to find, among its 13 million patient records, information that doctors could use to develop treatment plans. By categorizing records based on illness, age, treatment, and outcome, the appliance can provide insights for treating other patients. "Big data" is also being tapped in professional sports. In 2014, Cray revealed that a Major League Baseball team was using a Urika-GD appliance to graph and analyze its own performance statistics. References External links "Global Supercomputer Leader Cray Inc. Awarded $80 million by King Abdullah University of Science and Technology (KAUST)." Dataconomy. 18 November 2014. "The Evolution of Data Analytics." Infographic. Eileen McNulty (22 May 2014). "Understanding Big Data: The Seven V's." Dataconomy. Cray products
Cray Urika-GD
[ "Technology" ]
291
[ "Computing stubs" ]
47,303,166
https://en.wikipedia.org/wiki/Stapled%20peptide
A stapled peptide is a modified peptide (class A peptidomimetic), typically in an alpha-helical conformation, that is constrained by a synthetic brace ("staple"). The staple is formed by a covalent linkage between two amino acid side-chains, forming a peptide macrocycle. Staples, generally speaking, refer to a covalent linkage of two previously independent entities. Peptides with multiple, tandem staples are sometimes referred to as stitched peptides. Among other applications, peptide stapling is notably used to enhance the pharmacologic performance of peptides. Introduction The two primary classes of therapeutics are small molecules and protein therapeutics. The design of small molecule inhibitors of protein-protein interactions has been impeded by issues such as the general lack of small-molecule starting points for drug design, the typical flatness of the interface, the difficulty of distinguishing real from artifactual binding, and the size and character of typical small-molecule libraries. Meanwhile, the protein therapeutics that lack these issues are bedeviled by another problem, poor cell penetration due to an insufficient ability to diffuse across the cell membrane. Additionally, proteins and peptides are often subject to proteolytic degradation in vivo or if they do enter the cell. Furthermore, small peptides (such as single α-helices) can lose helicity in solution due to entropic factors, which diminishes binding affinity. α-Helices are the most common protein secondary structure and play a key role in mediating many protein–protein interactions (PPIs) by serving as recognition motifs. Because PPIs are frequently misregulated in disease, there is a long-running impetus to create alpha-helical peptides that inhibit disease-state PPIs for clinical applications, as well as for basic science applications. Introducing a synthetic brace (staple) helps to lock a peptide in a specific conformation, reducing conformational entropy.
This approach can increase target affinity, increase cell penetration, and protect against proteolytic degradation. Various strategies have been employed for constraining α-helices, including the non-covalent and covalent stabilization techniques; however, the all-hydrocarbon covalent link, termed a peptide staple, has been shown to have improved stability and cell penetrability, making this stabilization strategy particularly relevant for clinical applications. Invention Staples synthesized using ring-closing metathesis (RCM) are common. This variation of olefin metathesis and its application to stapled peptides was developed by Nobel laureate Robert H. Grubbs and Helen Blackwell in the late 1990s, who used the Grubbs catalyst to cross-link O-allylserine residues in a covalent bond. In 2000, Gregory Verdine and colleagues reported the first synthesis of an all-hydrocarbon cross-link for peptide α-helix stabilization, combining the principles of RCM with α,α-disubstitution of the amino acid chiral carbon and on-resin peptide synthesis. In collaboration with Edward Taylor of Princeton University, Loren Walensky, who was then a post-doc in Verdine's lab, subsequently demonstrated that stapling BH3 peptides enabled the synthetic peptides to retain their α-helical conformation, further demonstrating that these peptides were taken up by cancer cells and bound their physiologic BCL-2 family targets, which correlated with the induction of cell death. It was discovered that the peptides side-stepped the membrane diffusion issue by crossing the membrane through active endosomal uptake, which deposited the peptides inside of the cell. Since this first proof of principle, peptide stapling technology has been applied to numerous peptide templates, allowing the study of many other PPIs using stapled peptides including cancer targets such as p53, MCL-1 BH3, PUMA BH3, Notch, and beta-Catenin, as well as other therapeutic targets ranging from infectious diseases to metabolism. 
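As a rough illustration of the geometry involved, hydrocarbon staples are typically placed at residue pairs spaced i, i+4 (one helical turn) or i, i+7 (two turns) so that both modified side chains project from the same face of the helix. The sketch below enumerates candidate staple positions in a sequence while skipping residues the designer wants to preserve, e.g. those contacting the binding partner; the function name and rules are illustrative assumptions, not a published design protocol.

```python
def staple_candidates(sequence, preserve=frozenset(), spacings=(4, 7)):
    """Enumerate (i, j) residue-index pairs at helix-compatible spacings
    where a hydrocarbon staple could be installed, excluding any index
    in `preserve` (residues needed for target binding)."""
    pairs = []
    for i in range(len(sequence)):
        for s in spacings:
            j = i + s
            if j < len(sequence) and i not in preserve and j not in preserve:
                pairs.append((i, j))
    return pairs
```

A 10-residue sequence, for instance, yields six i, i+4 pairs and three i, i+7 pairs to screen.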
Clinical applications In 2013, Aileron Therapeutics, which was co-founded by Verdine, Walensky and Taylor, completed the first stapled peptide clinical trial with their growth-hormone-releasing hormone agonist ALRN-5281. As of 2019, Aileron Therapeutics is developing another candidate, sulanemadlin (ALRN-6924), in a Phase 2a trial that assesses the combination of sulanemadlin and Pfizer’s palbociclib for the treatment of patients with MDM2-amplified cancers, and a Phase 1b/2 clinical trial to evaluate sulanemadlin as a myelopreservative agent to protect against chemotherapy-induced toxicities. See also Beta-peptide Druggability Non-proteinogenic amino acids Peptide synthesis Peptidomimetic Peptoid References Peptides
Stapled peptide
[ "Chemistry" ]
1,002
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
47,303,276
https://en.wikipedia.org/wiki/Emergency%20response%20system
Emergency response systems are means for emergency response teams to locate and move resources to emergency sites. The Russian Federation ERA-GLONASS is the modern Russian system of emergency response, similar to the European standard eCall/E112. The system is designed for use with the Russian global satellite navigation system GLONASS on behalf of the Government of the Russian Federation. Since 2018, the Russian Federation has been a member of UNECE Regulation 144, which relates to accident emergency call components (AECC), accident emergency call devices (AECD) and accident emergency call systems (AECS). United States Since 2001, authorities have implemented project E911, which tries to automatically associate a location with the origin of calls to 9-1-1 emergency services. In 2006, the Next Generation 9-1-1 (NG 9-1-1) initiative was introduced. The purpose of the initiative is to afford any emergency caller the opportunity to use any communication means for connection to the emergency services operator, which in turn can receive location data from fixed and mobile phones, as well as automatic sensor-activated devices in case of accidents. In 2010, the system was tested and has become widely implemented. Planning is underway for a transition to more digital technology under the National 911 Program, under the supervision of the National Highway Traffic Safety Administration Office of Emergency Medical Services. Several states are deploying response systems for various issues. Georgia has Peer2Peer Warm Lines that offer support from trained specialists to people facing challenges who may not require a full emergency response. Oklahoma equips first responders with tablets that carry crisis de-escalation tools. Florida offers a full care service for mental health crisis intervention that includes treatment from home, drug addiction services and child care. European Union In 2001, countries within the European Union implemented the eCall program.
eCall is an initiative to bring rapid assistance to motorists involved in collisions and is not designed to allow vehicle tracking outside of emergencies. Some European countries equip trucks with similar devices, containing navigational and communication components. In 2005, Germany began installing eCall devices on trucks with a carrying capacity exceeding 12 tonnes. Trucks in Sweden heavier than 3.5 tonnes are fitted with the automatic connection devices. The European Commission's proposals for legislative acts predicted that eCall would be functioning seamlessly in most European vehicles by the end of 2015. The deadlines for implementation will most likely be delayed to the end of 2017 or early 2018, as the adoption procedure of these legislative acts by the European Parliament and the Council is not complete. An eCall-equipped car transmits a 1-1-2 emergency call over the GSM network to the closest radio tower, to ensure the signal reaches the appropriate public safety answering point (PSAP) as quickly as possible. If none of the passengers involved in the collision are able to speak, a minimum data set is sent, including the coordinates of the vehicle. Since 2018, the European Union has been a member of UNECE Regulation 144, which relates to accident emergency call components (AECC), accident emergency call devices (AECD) and accident emergency call systems (AECS). Kazakhstan Kazakhstan has developed an analogue of the ERA-GLONASS system called "EVAK", an emergency call system for use in emergencies and disasters. It operates using signals from the navigation satellite systems GLONASS and GPS. The system was expected to be installed on board passenger vehicles weighing over 2.5 tonnes, buses, trucks and special vehicles for the transport of dangerous goods in 2016, and on all other vehicles in 2017. Since 2018, Kazakhstan has been a member of UNECE Regulation 144, which relates to accident emergency call components (AECC), accident emergency call devices (AECD) and accident emergency call systems (AECS).
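The "minimum data set" idea can be sketched in code. The real eCall MSD is defined by the European standard EN 15722; the structure below is a simplified, illustrative stand-in whose field names are my own assumptions, not the standardized encoding.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class MinimumDataSet:
    """Simplified stand-in for an eCall minimum data set (illustrative)."""
    automatic_activation: bool   # crash sensors vs. a manual button press
    timestamp_utc: int           # seconds since the Unix epoch
    latitude: float              # WGS-84 degrees
    longitude: float             # WGS-84 degrees
    vin: str                     # vehicle identification number

    def encode(self):
        """Flatten to a dict, e.g. for transmission alongside the 112 call."""
        return asdict(self)

# Example payload for a sensor-triggered call (all values hypothetical).
msd = MinimumDataSet(True, 1_700_000_000, 48.137, 11.575, "WVWZZZ1JZXW000001")
```

The point of the fixed structure is that a PSAP can decode the vehicle's coordinates even when no occupant is able to speak.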
References External links ERA-GLONASS Emergency communication Emergency telephone numbers GLONASS N11 codes
Emergency response system
[ "Technology" ]
770
[ "GLONASS", "Wireless locating" ]
47,303,943
https://en.wikipedia.org/wiki/Podpiwek
Podpiwek is a Polish and Lithuanian non-alcoholic beverage (even though it contains a small amount of alcohol, about 0.5%). It is usually made from grain coffee, hops, yeast, water and sugar, which undergo fermentation. Often created as a byproduct during beer production, it was a common drink of women and children. Famous brands Podpiwek kujawski Podpiwek Jędrzej Podpiwek Lubuski Podpiwek Obołoń Podpiwek warmiński See also Kvass Hardaliye Malt beer References Non-alcoholic drinks Polish drinks Fermented drinks Soft drinks Lithuanian drinks
Podpiwek
[ "Biology" ]
141
[ "Fermented drinks", "Biotechnology products" ]
65,615,059
https://en.wikipedia.org/wiki/Iron%20Man%27s%20armor%20%28Marvel%20Cinematic%20Universe%29
Tony Stark has worn different versions of the Iron Man armor throughout the Marvel Cinematic Universe (MCU). He has also built armor for James Rhodes (which became the War Machine armor), the Iron Spider suit for Peter Parker, and Pepper Potts' Rescue armor. In Iron Man (2008), physical armor was built by Stan Winston Studios, with the digital version and other visual effects done by Industrial Light & Magic. Further appearances of the armor in the MCU were mainly created through visual effects. Iron Man comic book artist Adi Granov designed the Mark III, with further armors also being inspired by the armors from the comics. Design and creation Iron Man (2008) director Jon Favreau wanted the film to be believable by showing the eventual construction of the Mark III suit in its three stages. Stan Winston and his company were hired to build metal and rubber versions of the armors. The Mark I design was intended to look like it was built from spare parts: particularly, the back is less armored than the front, as Tony Stark would use his resources to make a forward attack. It also foreshadows the design of Obadiah Stane's Iron Monger armor. A single version was built and was designed to only have its top half worn at times. Stan Winston Studios built a , animatronic version of the Iron Monger suit. The animatronic required five operators for the arm, and was built on a gimbal to simulate walking. A scale model was used for the shots of it being made. The Mark II resembles an airplane prototype with visible flaps. Iron Man comic book artist Adi Granov designed the Mark III with illustrator Phil Saunders. Granov's designs were the primary inspiration for the film's armor designs, and he came on board the film after he recognized his work on Jon Favreau's MySpace page. Saunders streamlined Granov's concept art, making it stealthier and less cartoonish in its proportions, and also designed the War Machine armor, but it was "cut from the script about halfway through pre-production."
He explained that the War Machine armor "was going to be called the Mark IV armor and would have had weaponized swap-out parts that would be worn over the original Mark III armor," and that it "would have been worn by Tony Stark in the final battle sequence." Concerned with the transition between the computer-generated and practical costumes, Favreau hired Industrial Light & Magic (ILM) to create the bulk of the visual effects for the film after seeing Pirates of the Caribbean: At World's End (2007) and Transformers (2007). The Orphanage and The Embassy did additional work. To help with animating the more refined suits, information was sometimes captured by having Downey wear only the helmet, sleeves and chest of the costume over a motion capture suit. For Iron Man 2 (2010), ILM again did the majority of the effects, as it did on the first film. ILM's visual effects supervisor on the film, Ben Snow, said their work on the film was "harder" than their work on the first, stating that Favreau asked more of them this time around. Snow described the process of digitally creating the suits: Because of how form-fitting the Mark V suitcase suit was required to be, the production team researched some of the classic comics armors, since they were seen as essentially variations on muscle suits. One specific aspect of an earlier armor was the color scheme from the Silver Centurion armor. The Mark VI armor was designed by Granov and Saunders to be sleeker than the Mark III, while retaining many of the Mark III's qualities. For The Avengers (2012), Saunders stated that "director Joss Whedon was looking for something that had the 'cool' factor of the suitcase suit" from Iron Man 2, but would be tough enough to survive the alien army from the film's climax. Saunders reworked concepts from the first two films into the Mark VII, a design with "big ammo packets on the arms and a backpack". 
The chest piece, which had been triangular in the Mark VI, was changed back to the classic circular shape of the Mark III. Weta Digital also took over duties for animating Iron Man during the forest duel from ILM. Guy Williams, Weta's visual effects supervisor, said, "We shared assets back and forth with ILM, but our pipelines are unique and it's hard for other assets to plug into it. But in this case, we got their models and we had to redo the texture spaces because the way we texture maps is different." Williams said the most difficult part was re-creating Iron Man's reflective metal surfaces. For Iron Man 3 (2013), Chris Townsend served as visual effects supervisor. The film featured over 2,000 visual effects shots and was worked on by 17 studios: Weta Digital, Digital Domain, Scanline VFX, Trixter, Framestore, Luma Pictures, Fuel VFX, Cantina Creative, Cinesite, The Embassy Visual Effects, Lola, Capital T, Prologue, and Rise FX. Digital Domain, Scanline VFX, and Trixter each worked on separate shots featuring the Mark XLII armor, working with different digital models. The studios shared some of their files to ensure consistency between the shots. For the Mark XLII and Iron Patriot armors, Legacy Effects constructed partial suits that were worn on set. Townsend explained that "Invariably we'd shoot a soft-suit with Robert [Downey Jr.] then we'd also put tracking markers on his trousers. He would also wear lifts in his shoes or be up in a box so he'd be the correct height – Iron Man is 6'5". The art department at Marvel worked closely with a team from Digital Domain, which created realistically-proportioned 3D versions of suits, including textures and lighting, from Marvel's 2D concept art. Those models were then used by Marvel and Weta Digital. 
The heads-up display features of the helmet were inspired by visualization techniques from MRI diagnostic pattern recognition and graph theory, particularly by the connectogram, a circular graph that maps all of the white-matter connections of the human brain. Concept art released in March 2014 for Avengers: Age of Ultron (2015) revealed the inclusion of a "Hulkbuster"-like armor. Iron Man's armor in Spider-Man: Homecoming (2017), the Mark XLVII, is a recolored version of the Mark XLVI armor introduced in Captain America: Civil War (2016); this was done because Sony Pictures did not have the budget to create a new Iron Man suit. Feige requested the color scheme resemble the Ultimate Iron Man armor from the comics. For Avengers: Infinity War (2018), visual effects vendor Framestore created Iron Man's Mark 50 suit, based on the Bleeding Edge armor from the comics, which is made up of singular nanobots which move around his body to form a suit, and was developed alongside Marvel for about two years. List of armors Main armor Iron Legion These armors were created by Stark before the events of Iron Man 3, in which they were introduced, to help in different types of situations he might encounter. They are first referred to as the "Iron Legion" in Iron Man 3 Prelude #2 (April 2013). The first Iron Legion is a set of specialized armors built for various situations that Stark might encounter. Built during his bout of insomnia, the armors are eventually destroyed by Stark because of the friction they cause between him and Pepper Potts. They appeared in Iron Man 3 and consisted of armors Mark VIII through Mark XLI. The second Iron Legion is a set of drones built by Tony Stark in order to aid the Avengers. However, after the creation of the AI Ultron, it builds itself a body from a destroyed drone, and takes control of the rest. Hulkbuster armor Related armors War Machine armor Non-Iron Man armors Avengers Campus Avengers Campus has an exclusive Iron Man armor for Disney Parks, known as the Mark 80.
References External links A Guide on Every Armor Worn by Iron Man in the MCU on Marvel.com Iron Man's Armor Evolution video, from Disney+ Fictional armour Iron Man (film series) Iron Man in other media Marvel Cinematic Universe features Marvel Comics weapons Fiction about nanotechnology Fiction about drones Fictional elements introduced in 2008
Iron Man's armor (Marvel Cinematic Universe)
[ "Materials_science" ]
1,737
[ "Fiction about nanotechnology", "Nanotechnology" ]
65,617,192
https://en.wikipedia.org/wiki/Ying%20Ge
Ying Ge is a Chinese-American chemist who is a Professor of Cell and Regenerative Biology at the University of Wisconsin–Madison. Her research considers the molecular mechanisms that underpin cardiac disease. She has previously served on the board of directors of the American Society for Mass Spectrometry. In 2020 Ge was named on the Analytical Scientist Power List. Early life and education Ge was born in China. She attended Peking University for her undergraduate studies, where she studied chemistry. After graduating in 1997 Ge moved to the United States, where she joined Cornell University as a doctoral student. Here she started to work on mass spectrometry, using electron-capture dissociation to study proteins. She worked under the supervision of Tadhg Begley and Fred McLafferty. After completing her doctorate, Ge worked as a research scientist at Wyeth. Research and career Ge joined the University of Wisconsin–Madison as an assistant scientist, where she oversaw the mass spectrometry programme. She became an Associate Professor in 2015, and full Professor in 2019. Ge develops high-resolution mass spectrometry proteomics to better understand cardiac disease. To image the very large proteins of human heart tissue, Ge combines fourier-transform ion cyclotron resonance (FT–ICR) mass spectrometry with electron-capture dissociation. She has worked to create a top-down disease proteomic platform that allows for the separation, detection and characterisation of the biomarkers of heart damage. Nanoproteomics, a technique developed by Ge and co-workers, makes use of nanoparticles and high resolution mass spectrometry to capture and characterise cardiac troponins, including troponin I. Being able to test for and characterise troponin I would help with the early detection and diagnosis of heart disease. The peptide-functionalised superparamagnetic nanoparticles are combined with top-down mass spectrometry to identify the molecular fingerprints of troponins. 
Rather than just detecting cardiac troponins, which is possible using ELISA-based antibody testing, this higher level of characterisation will allow Ge to identify various forms of modified troponins, allowing a personalised understanding of cardiac disease. Ge has served on the board of the Top-Down Proteomics Consortium, on the editorial board of the Journal of Muscle Research and Cell Motility, and as treasurer of the American Society for Mass Spectrometry (2016-2018). Awards and honours 2016 Georges Guiochon Faculty Fellowship 2018 H. I. Romnes Faculty Fellowship 2019 Analytical Scientist Power List 2020 American Society for Mass Spectrometry Biemann Medal 2020 Analytical Scientist Power List 2021 Human Proteome Organization (HUPO) Clinical and Translational Proteomics Award 2021 Analytical Scientist Power List 2024 Analytical Scientist Power List, ranked #8 in "Human Health Heroes" field Selected publications References American people of Chinese descent University of Wisconsin–Madison faculty Peking University alumni Cornell University alumni Chinese chemists Living people Year of birth missing (living people) Mass spectrometrists Cardiovascular researchers 20th-century Chinese chemists 21st-century American chemists
Ying Ge
[ "Physics", "Chemistry" ]
656
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
65,617,301
https://en.wikipedia.org/wiki/Genotype%E2%80%93phenotype%20map
The genotype–phenotype map is a conceptual model in genetic architecture. Coined in a 1991 paper by Pere Alberch, it models the interdependency of genotype (an organism's full hereditary information) with phenotype (an organism's actual observed properties). Application The map visualises a relationship between genotype and phenotype which, crucially: is of greater complexity than a straightforward one-to-one mapping of genotype to/from phenotype. accommodates a parameter space, along which at different points a given phenotype is said to be more or less stable. accommodates transformational boundaries in the parameter space, which divide phenotype states from one another. accounts for different polymorphisms and/or polyphenisms in populations, depending on the area of parameter space they occupy. See also Genotype–phenotype distinction References Genetics 1991 introductions
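A toy computational sketch can make the many-to-one character of such a map concrete: many genotypes collapse onto a few phenotype states, with boundaries in a one-dimensional parameter space separating the states. The threshold rule below is entirely illustrative, an assumption for this sketch rather than Alberch's model.

```python
from collections import defaultdict
from itertools import product

def phenotype(genotype):
    """Hypothetical rule: the allele 'dose' acts as a position in a
    one-dimensional parameter space, and boundaries between doses 1|2
    and 3|4 separate three discrete phenotype states."""
    dose = sum(genotype)
    if dose <= 1:
        return "A"
    if dose <= 3:
        return "B"
    return "C"

# Enumerate the whole genotype space: length-4 binary sequences.
genotype_space = list(product([0, 1], repeat=4))

# Group genotypes by phenotype: the map is many-to-one, not one-to-one.
classes = defaultdict(list)
for g in genotype_space:
    classes[phenotype(g)].append(g)
```

Mutations inside a class leave the phenotype stable, while a single mutation near a boundary can flip it, mirroring the transformational boundaries described above.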
Genotype–phenotype map
[ "Biology" ]
188
[ "Genetics" ]
65,617,852
https://en.wikipedia.org/wiki/Zlatko%20Tesanovic
Zlatko Boško Tešanović (August 1, 1956 – July 26, 2012) was a Yugoslav-American theoretical condensed-matter physicist, whose work focused mainly on the high-temperature superconductors (HTS) and related materials. His particular research interests were in the areas of theoretical condensed matter physics, revolving primarily around iron- and copper-based high-temperature superconductors, quantum Hall effects (QHE), superconductivity and strongly correlated electron materials. His broad knowledge of condensed matter physics, his deep understanding of the effects of strong magnetic fields, and his talent for exposition were influential. Biography He was born in Sarajevo, former Yugoslavia (present Bosnia and Herzegovina). In 1979, he received a B.Sci. in physics from the University of Sarajevo. He then received a Fulbright Fellowship and attended the University of Minnesota, where he earned a Ph.D. in physics in 1985. He became a naturalized American citizen. He worked as a professor of physics at Johns Hopkins University (JHU) in the Henry A. Rowland Department of Physics and Astronomy in Baltimore from July 1987 until his death on July 26, 2012. Previously, he served as director of the TIPAC Theory Center at JHU. He was a foreign member of the Royal Norwegian Society of Sciences and Letters and a fellow of the APS Division of Condensed Matter Physics (DCMP). He served as a member of the committee to Assess the Current Status and Future Direction of High Magnetic Field Science in the United States, and contributed strongly to it, until his death. Students Among his graduate students are: Lei Xing (Jacob Haimson Professor, Stanford University) Igor F. 
Herbut (Professor, Simon Fraser University) Anton Andreev (Associate Professor, University of Washington) Sasha Dukan (Professor and Chair of Physics, Goucher College) Oskar Vafek (Associate Professor, Florida State University and NHMFL) Ashot Melikyan (Editor, Physical Review B) Andrés Concha (Postdoctoral Fellow, Harvard SEAS) Valentin Stanev (Postdoctoral Fellow, Argonne National Laboratory) Jian Kang (Graduate student, Johns Hopkins University) Works He gave more than 100 invited talks at scientific meetings, including major international conferences, and authored more than 125 scientific papers as well as a book. Honors and awards Fulbright Fellowship, U.S. Institute of International Education (1980) Shevlin Fellowship, University of Minnesota (1983) Stanwood Johnston Memorial Fellowship, University of Minnesota (1984) J. R. Oppenheimer Fellowship, Los Alamos National Laboratory, 1985 (declined) David and Lucile Packard Foundation Fellowship (1988-1994) Inaugural Speaker, J. R. Schrieffer Tutorial Lecture Series, National High Magnetic Field Laboratory (1997) Foreign Member, The Royal Norwegian Society of Sciences and Letters Fellow, The American Physical Society, Division of Condensed Matter Physics He received grants from the Department of Energy, and the National Science Foundation awarded him a post-doctoral fellowship that enabled him to spend two years studying at Harvard University. Death He died on July 26, 2012, at the age of 55 of an "apparent" heart attack at the George Washington University Hospital in Washington, D.C., after collapsing at Reagan National Airport. On March 23, 2013, the Johns Hopkins University Department of Physics and Astronomy organised a memorial symposium as a tribute to him. A number of distinguished speakers were invited to highlight Zlatko's scientific accomplishments. 
See also List of American Physical Society Fellows (2011–) List of theoretical physicists Piers Coleman Alexei Alexeyevich Abrikosov Edward Witten Joseph Polchinski Notes References External links Are iron pnictides new cuprates? by Zlatko Tesanovic — American Physical Society Profile on Blogger — Blogger.com Zlatko Tesanovic: What is the theory of the Fe-pnictides? Curriculum vitae of Dr. Zlatko B. Tešanović 1956 births 2012 deaths Scientists from Sarajevo American string theorists American condensed matter physicists Yugoslav emigrants to the United States Bosniaks of Bosnia and Herzegovina Serbs of Bosnia and Herzegovina Johns Hopkins University faculty Fellows of the American Physical Society Superconductivity Death in Washington, D.C.
Zlatko Tesanovic
[ "Physics", "Materials_science", "Engineering" ]
864
[ "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electrical resistance and conductance" ]
65,618,367
https://en.wikipedia.org/wiki/Cadorna%20Line
The Cadorna Line, officially the Northern Frontier, was the Italian defensive system on the northern border facing Switzerland, designed and built between 1899 and 1918. Its purpose was to protect the Po Valley and its main industrial centres from an attack by France, Germany or Austria-Hungary violating Swiss neutrality. Background In 1862, shortly after the birth of the Kingdom of Italy, the Army General Staff first considered the need to fortify its borders with Switzerland to prevent an invasion through the Alpine passes - the Great St Bernard, the Simplon, the Gotthard, the Spluga, the Maloja, the Bernina, the Stelvio and the Tonale. A plan was developed to build a series of forts and batteries linking the Ossola Valley, Lake Maggiore, Ceresio and Lake Como. Because of the costs involved, the plan was not implemented for a number of years. In 1871 a renewed effort was made to include the plan in Italy's defence budget. However, in 1882 the General Staff Committee declared its opposition to the idea, considering an Austrian violation of Swiss territory unlikely, and a German attack unrealistic. By this time the Triple Alliance had in any case neutralised these threats of invasion. Nonetheless, work on the projects resumed, and carried on haltingly until 1911, when the State Defense Office brought forward a scheme along the Bergamasque Alps and the Ticino salient. On April 18, 1911, the General Staff entrusted the work to the Milan Military Engineering Works Management, who began work on the Mera - Adda barrier with the construction of Fort Montecchio-Lusardi. Work continued intermittently until the outbreak of the Great War and was completed urgently when hostilities began. In September 1915, shortly after Italy entered the First World War, 
General Carlo Porro warned Chief of Staff Luigi Cadorna that an invasion of Lombardy by the Central Powers, through neutral Switzerland, could lead to an attack on the area of Milan and thus on the heartland of Italian industrial production. Apart from a few border guards Italy had only eight battalions of the Territorial Militia on this frontier. This prompted the Italian government to restart the full-scale construction of the defensive line. Cadorna therefore decided to revive the 1882 plan, and ordered the building of an imposing fortified line from the Ossola Valley up to the Bergamasque Alps. It included roads, mule tracks, paths, trenches, artillery positions, observatories, field hospitals, command centers and logistics structures, all built at high altitudes from 600 to over 2,000 meters. The project plan provided for 72 km of trenches, 88 artillery positions (including 11 built in caves), 25,000 square meters of barracks, 296 kilometers of roads and 398 kilometers of mule tracks, at a cost of over 105m lire (about 150m euros today), requiring 40,000 men to build it. This complex of works was never used. The fortifications were garrisoned at the beginning of the war but abandoned after the defeat at Caporetto. The construction of the line The work was contracted out to several companies, including many from Varese, which worked so well that they also obtained orders for the fortifications in the Veneto region. By the declaration of war on Germany, Italy had completed the work, and created a special Command for them. The Italian-Swiss border was divided into 6 sectors: Val d'Aosta: The nineteenth-century Fort Bard was integrated with some positions in the Etroubles basin in order to prevent the passage from the Great St Bernard Pass, but the small likelihood of enemy maneuver in the sector limited the work. 
Toce-Verbano: (Simplon Pass to Lake Maggiore) The Ornavasso barrier was strengthened by providing a final retreat line at the Candoglia quarries in order to take advantage of the natural defense offered by the mountains of the Val Grande. The barrier of the Simplon railway was not modified, because it was assumed that enemy occupation was certain. Half a 75 mm artillery battery (two guns) was assigned to the Iselle cave post, with the task of closing the tunnel in case of emergency. When the risk of invasion passed, this post remained the only one operating until the end of the war. Verbano-Ceresio: (Luino to Porto Ceresio) the defense was built along two lines; initially it was the positions of the Varese entrenched camp that were equipped, and only later was it decided to move up to the Luino-Ponte Tresa line. This stretch of the Cadorna Line passes through the Cinque Vette Park. Ceresio-Lario: (Viggiù to Menaggio) The importance of this area was such that all plans had, as their first objective, the occupation of the entire Mendrisio District of Switzerland up to Capolago. For this reason it was decided to concentrate fire on the Melide dam-bridge, the only means of connection with Lugano. This action would have allowed the easy occupation of Monte Generoso to protect and support the strategic point of the entire sector, the Sighignola. From Porlezza to Menaggio, the massive mountain range south of the valley offered a sufficient natural barrier. S.Lucio-S.Jorio: The occupation of the border barracks was only planned in the event of an offensive. Mera-Adda: This sector used the Orobie Alps as a final defensive line. The Colico barrage was considered insufficient since its location, at the level of the lake, could allow the enemy to fight back from higher ground with artillery stationed on the nearby hills. High positions were then established further up on Monte Legnoncino. 
Design and build The Cadorna Line was innovative - traditional structures such as isolated garrisons, vulnerable to heavy artillery, were abandoned in favor of steel armored domes, semi-permanent field works, barbette posts for mortars, howitzers and cannons, and cave positions for machine guns and medium caliber artillery. Machine gun nests were designed to ensure coordinated covering fire. The designers relied most heavily on trenches. These were very different from the largely improvised structures of the Western Front. The Cadorna Line trenches were designed in great detail, and equipped with parapets, loopholes and shelters. Due to the scarcity of soldiers, the barrages were built along a rearward line that exploited the terrain following the ridges and depressions along the border. The military doctrine of the time still relied on the impact of massed troops rather than on new technologies. Thus the line was built mainly with concrete front-line trenches, accompanied by platforms and niches as vantage points for shooting. The entrenchments were a succession of broken lines, often with sharp angles to ensure the greatest possible protection against the explosion of grenades, and at regular intervals they presented "bell" niches for the shelter of sentries in case of bad weather. Numerous tracts of trenches were equipped with small redoubts, and ladders to allow the infantryman to exit in the event of a counterattack. There were also numerous machine gun positions underground. The gun batteries in the trenches were of three types: barbettes (outdoor and semi-raised positions protected by a wall), protected concrete bunkers, and caves where large-calibre artillery were housed, with magazines and barracks for the garrison. The Line enters service The fortified system was entrusted to the commander of the 5th Army from Varese, Lt. Gen. Ettore Mambretti, who had the task of protecting the left flank of the Italian defensive front. 
Due to the lack of troops, which were almost entirely employed at the front, the posts and barriers were built in more rearward positions, in order to exploit the terrain. The 5th Army would have 4 Army Corps (each of two divisions), two Cavalry Divisions, a division deployed in Valle d'Aosta and 56 medium-caliber batteries. On January 16, 1917, the "Northern Frontier Advanced Occupation Command" (OAFN) was established in Varese, under the 5th Army, aimed at "surveilling local conditions and studying the concrete implementation of the plans developed"; these plans provided for the defense of the border with the support of the allied countries, as was decided during the Third Chantilly Conference in December 1916. Three battle plans were developed by the 5th Army Command. "Plan A" was defensive and assumed French support in the Arona-Gallarate area. "Plan B" involved an offensive "leap" up to the passes of Monte Ceneri and Bernina and occupying the northern border ridges of the Adda river. "Plan C" was for an offensive to eliminate the Ticino salient in Switzerland. Following rumors that Switzerland had entered into a secret pact with Germany to attack Italy, the plans were modified to assume Switzerland was hostile rather than a neutral country that had been invaded from the north. In the first months of 1917 the works were almost complete, but by this time their garrison troops had already been sent to Veneto together with the units of the Territorial Militia. The fortified system then passed under the control of 6 battalions of the Regia Guardia di Finanza. After Caporetto, these 6 battalions were also sent to defend the Piave line and after this the Cadorna Line remained unguarded until the end of the conflict. General Mambretti, dismissed by Cadorna, was placed in charge of the OAFN on 20 July 1917, replacing General Lequio. The command of the 5th Army was dissolved as that of the OAFN was considered more than sufficient. 
In May 1918 Mambretti handed over command to General Novelli. On January 10, 1919, the OAFN was dissolved and the Cadorna Line was abandoned. After the First World War In the thirties the Fascist regime began construction of the Alpine Wall and approved maintenance work on the Cadorna Line. The Cadorna Line was briefly the focus of attention in 1938, when Mussolini thought about invading Switzerland, perhaps to flex his muscles with the Germans who had recently annexed Austria. The "Camicie Nere Como" battalion was sent to the border, but the order was revoked and the invasion abandoned. The only war action on the line was on November 13, 1943, when the first battle of the resistance took place in the bunkers of San Martino in Valcuvia, where fascist government forces defeated a group of partisans led by Colonel Carlo Croce. After the Second World War the works were completely abandoned and mostly neglected. Because of the excellent quality of their construction, many of the Cadorna Line's trenches and structures remain in good physical condition. The trenches of Ornavasso, Cassano Valcuvia and Monte Marzio in the province of Varese are in particularly good condition. In the province of Como the following structures have been restored and can be visited: Fortino Monte Sasso (Fortino di Cavallasca) Monte Bisbino La Crocetta di Menaggio Cardina battery Cave batteries are located at Plan Puitz in Saint-Rhémy-en-Bosses in the Aosta Valley, Monte Orsa near Viggiù, at Monte Piambello, Varese and at Locco Tocco in the province of Lecco. 
See also Alpine Wall Austro-Hungarian fortifications on the Italian border Italian fortifications on the Austro-Hungarian border Further reading Corbella, Roberts: Le fortificazioni della linea Cadorna tra Maggiore e Ceresio, Macchione Editore Viviani, Ambrogio & Corbella, Roberto: La Linea Cadorna Storia e Itinerari Val d'Ossola - Val d'Intelvi - Lago di Como - Valtellina, Macchione Editore Minola, Mauro & Ronco, Beppe: Fortificazioni di montagna Macchione Editore Vaschetto, Diego: Strade e sentieri della linea Cadorna. Itinerari storico-escursionistici dalla Valle d'Aosta alle Alpi Orobie, Edizioni del Capricorno, 2015 References External links short film (in Italian) about the Cadorna Line Italy in World War I Forts in Italy World War I defensive lines
Cadorna Line
[ "Engineering" ]
2,535
[ "Fortification lines", "World War I defensive lines" ]
65,619,168
https://en.wikipedia.org/wiki/Sagittarius%20A%2A%20cluster
The Sagittarius A* cluster is the cluster of stars in close orbit around Sagittarius A*, the supermassive black hole at the center of the Milky Way (in the Galactic Center). The individual stars are often listed as "S-stars", but their names and IDs are not formalized, and stars can have different numbers in different catalogues. One of the most studied stars is S2, a relatively bright star that also passes close by Sgr A*. S4714 is the current record holder of closest approach to Sagittarius A*, approaching almost as close as Saturn gets to the Sun and traveling at about 8% of the speed of light. These figures are approximate. Its orbital period is 12 years, but an extreme eccentricity of 0.985 gives it the close approach and high velocity. List of stars The inferred orbits of stars around the supermassive black hole Sagittarius A* at the Milky Way's center are according to Gillessen et al. 2017, with the exception of S2, which is from GRAVITY 2019, S62, which is from Peißker et al. Jan 2020, and S4711 up to S4715, which are also from Peißker et al., Aug 2020. Here id1 is the star's name in the Gillessen catalog and id2 in the catalog of the University of California, Los Angeles. a, e, i, Ω and ω are standard orbital elements, with a measured in arcseconds. Tp is the epoch of pericenter passage, P is the orbital period in years and Kmag is the K-band apparent magnitude of the star. q and v are the pericenter distance in AU and pericenter speed in percent of the speed of light, and Δ indicates the standard deviation of the associated quantities. References Galactic Center Star clusters
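The quoted pericenter figures for S4714 follow from standard two-body formulas: q = a(1 − e), Kepler's third law for the semi-major axis, and the vis-viva equation for the pericenter speed. The sketch below reproduces them, assuming a Sagittarius A* mass of roughly 4.1 million solar masses (a commonly quoted value, not stated in the text above):

```python
import math

# Rough consistency check of the S4714 numbers, assuming a Sgr A* mass
# of ~4.1e6 solar masses (assumed value, not given in the article text).
M_BH = 4.1e6         # black hole mass in solar masses
C_AU_YR = 63241.077  # speed of light in AU per year

def pericenter_distance(a_au, e):
    """Pericenter distance q = a(1 - e) for a Keplerian orbit."""
    return a_au * (1.0 - e)

def pericenter_speed_frac_c(a_au, e, m_solar):
    """Pericenter speed as a fraction of c via vis-viva:
    v^2 = GM (2/r - 1/a), with GM = 4*pi^2*M in AU^3/yr^2 units."""
    q = pericenter_distance(a_au, e)
    v = 2.0 * math.pi * math.sqrt(m_solar * (2.0 / q - 1.0 / a_au))  # AU/yr
    return v / C_AU_YR

P = 12.0   # orbital period in years (from the text)
e = 0.985  # eccentricity (from the text)
# Kepler's third law in solar-mass / AU / yr units: a^3 = M * P^2
a = (M_BH * P**2) ** (1.0 / 3.0)

q = pericenter_distance(a, e)
v = pericenter_speed_frac_c(a, e, M_BH)
print(f"q = {q:.1f} AU, v = {100 * v:.1f}% of c")
```

With these inputs the pericenter comes out near Saturn's distance from the Sun (about 9.5 AU) and the pericenter speed near 8% of c, consistent with the figures quoted above.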
Sagittarius A* cluster
[ "Astronomy" ]
406
[ "Astronomical objects", "Star clusters" ]
65,620,414
https://en.wikipedia.org/wiki/HD%20116029
HD 116029 is a binary star system about away. The primary star, HD 116029 A, is a subgiant of spectral class K1. At 2.7 billion years, it is younger than the Sun. It is slightly enriched in heavy elements, with 130% of the solar abundance, and does not have detectable flare activity. In 2016 the co-moving binary stellar companion HD 116029 B was detected. It is a red dwarf star of visual magnitude 16. The companion was confirmed in 2017 to be orbiting the primary at a projected separation of 171 AU. Planetary system In 2011 a superjovian planet, HD 116029 b, was discovered on a mildly eccentric orbit around HD 116029 A using the radial velocity method. One more planet on a wider orbit was detected in 2016. Planets b and c orbit in a 2:3 orbital resonance. References Coma Berenices Planetary systems with two confirmed planets Multi-star planetary systems J13203954+2438555 065117 Durchmusterung objects 116029 K-type subgiants M-type main-sequence stars
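A 2:3 resonance means the inner planet's period is close to two-thirds of the outer's, so it completes three orbits for every two of its companion. As a small illustration (the period values below are invented for the example; the actual periods are not given in the text), such a commensurability can be checked numerically:

```python
# Hypothetical check for a near p:q period commensurability.
# The 600 / 900 day periods are made-up example values, not data
# for HD 116029 b and c.
def near_resonance(p_inner, p_outer, p=2, q=3, tol=0.05):
    """True if P_inner / P_outer is within tol of p / q."""
    return abs(p_inner / p_outer - p / q) < tol

print(near_resonance(600.0, 900.0))  # exact 2:3 ratio
print(near_resonance(600.0, 700.0))  # far from 2:3
```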
HD 116029
[ "Astronomy" ]
235
[ "Coma Berenices", "Constellations" ]
65,620,708
https://en.wikipedia.org/wiki/Budget-feasible%20mechanism
In mechanism design, a branch of economics, a budget-feasible mechanism is a mechanism in which the total payment made by the auctioneer is upper-bounded by a fixed, pre-specified budget. Such mechanisms were first presented by Yaron Singer and subsequently studied by several others. References Mechanism design Auction theory
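As a rough illustration of the budget-feasibility constraint (a simplified, non-authoritative sketch of the proportional-share idea used in this literature; the bid values are invented and no claim of truthfulness is made), consider a greedy procurement rule whose total payout never exceeds the budget:

```python
# Toy proportional-share procurement: select sellers greedily by value
# per unit cost while each candidate's cost fits within her proportional
# share of the budget, then pay each winner that share. Total payments
# never exceed the budget, so the rule is budget-feasible by design.
def proportional_share(bids, budget):
    """bids: list of (value, cost) pairs; returns (selected indices, payments)."""
    order = sorted(range(len(bids)),
                   key=lambda i: bids[i][0] / bids[i][1], reverse=True)
    selected, total_value = [], 0.0
    for i in order:
        v, c = bids[i]
        if c <= budget * v / (total_value + v):  # proportional-share test
            selected.append(i)
            total_value += v
    payments = {i: budget * bids[i][0] / total_value for i in selected}
    return selected, payments

bids_example = [(10, 2), (8, 3), (5, 6), (2, 5)]  # hypothetical (value, cost) bids
sel, pay = proportional_share(bids_example, budget=10.0)
print(sel, sum(pay.values()))
```

Here the two high value-per-cost sellers are selected and the payments sum to exactly the budget, never above it; real budget-feasible mechanisms additionally use threshold payments to obtain truthfulness.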
Budget-feasible mechanism
[ "Mathematics" ]
61
[ "Game theory", "Mechanism design", "Auction theory" ]
65,621,570
https://en.wikipedia.org/wiki/Marl%20Chemical%20Park
Marl Chemical Park is an industrial park in Marl, North Rhine-Westphalia, Germany. It is the third largest industrial cluster in Germany and among the largest chemical production facilities in Europe. The site occupies over 6 square kilometers, hosts 100 chemical plants, employs 10,000 people, and produces 4 million metric tons of chemicals annually. Eighteen companies are based in the Park, including primary tenant Evonik Industries AG, which also owns and operates the infrastructure through its subsidiary Infracor GmbH. Originally named Chemische Werke Hüls, the complex was built in 1938 by a consortium led by IG Farben to produce synthetic rubber and other war materials for the Third Reich. By 1942 over 5000 workers' families had relocated into new housing, which transformed Marl into a company town. At the height of World War II, the Germans also used slave laborers and prisoners of war at the plant. Allied bombing heavily damaged the site in mid-1943, although full production had resumed by 1944. Near the end of the war, employees saved the plant from complete destruction under Hitler's Nero Decree and the US Army occupied it in March 1945. After the war, the plant operated under restrictions imposed by the Allied Control Council and by 1953 was turned over to new German owners. New products such as plastics and intermediate chemicals began to be produced. Coal-mining conglomerate RAG AG became majority owner in 2007 and created a new entity, Evonik Industries, with a focus on specialty and fine chemicals. In 2009, Marl Chemical Park received its current name. In 2012 a fire halted production of cyclododecatriene (CDT) for several months. The plant manufactures a substantial proportion of the world supply of CDT, a precursor to Nylon 12, so the stoppage led to a shortage that impacted global production of finished goods, particularly in the automotive industry. Marl Chemical Park is an anchor point on the Ruhr Industrial Heritage Trail and can be visited. 
Location Marl Chemical Park is located on the northern edge of the Ruhr area in the southern foothills of the Münster region. Both the Lippe River and Wesel-Datteln Canal run through the northern part of the site. To the south is Bundesautobahn 52 with a connection to Bundesautobahn 43. In addition to freight rail links with Deutsche Bahn, an alternative connection leads to the Gelsenkirchen-Buer Nord–Marl Lippe railway. The national Ethene Pipeline System running from Gelsenkirchen to Wilhelmshaven travels through the site, and the Rhine-Ruhr Hydrogen Pipeline is owned and operated from the site. Facilities Including Evonik, Marl Chemical Park hosts 18 companies and 100 production plants in over 900 buildings operating through a shared infrastructure. It is the third largest integrated industrial park, known as a Verbund site, in Germany. It is also the largest filling center for hydrogen in Europe. Shared services include: Energy cogeneration: two gas-fired power plants and one coal-fired power station provide 300MW of electrical power in different voltages (110kV, 10kV, 6kV, 500V and 400/230V) and 1000 tons of steam per hour in various pressures (4, 20, 70 and 120 bar). In 2019 construction began on a replacement of the coal station with a new 180MW natural gas facility, to be opened in 2022. Street grid: 55km long and numbered east-west (100, 200, 1200) and south-north (20, 40, 60), giving buildings unique numbers that indicate their position in the facility (i.e. building 145 is near the intersection of streets 100 and 40). Raw materials: via pipeline, rail, truck and ship, such as ethylene, propene, C4 hydrocarbons, benzene, methanol, brine and natural gas. This includes storage areas, high rack and tank storage facilities. Air separation plant: generates liquefied argon and other gases based on the Linde process. 
Internal pipeline network: 1200 km long on 30 km of pipeline bridges transporting reaction intermediates, end products, and various gases including hydrogen, nitrogen, and oxygen. Industrial railway: 100km long, with a freight station with two connections to the Deutsche Bahn; it is one of the largest private electronically monitored train stations in Europe. Wastewater system: 70km long sewer network separated into rain/cooling and dark water channels, processed by two sewage treatment plants before reaching the Lippe River. On the north end of the site is a sludge incineration plant. Fire department: handling hazardous materials, industrial fires and other emergencies. History Construction In 1936, the Nazi government launched a Four Year Plan which identified strategic materials critical to German rearmament, with a goal to make Germany self-sufficient in preparation for war. Replacing natural rubber with synthetic rubber in the manufacture of tires and continuous track for the Wehrmacht became a priority. The solution was Buna-S, a polymer derived from coal, initially developed by Bayer in 1928 and first manufactured commercially by parent company IG Farben in 1937. Prior to World War II Germany had become the world leader in the development of synthetic rubber technology. To build a plant needed for mass production of Buna-S, a new company, Chemische Werke Hüls GmbH, was created as a joint venture between majority owner IG Farben and coal-mining company Hibernia AG, a subsidiary of Prussian state-owned holding company VEBA AG. The plant would use a new electric arc manufacturing method developed in a research alliance with American company Standard Oil of New Jersey in 1935. IG Farben provided patents to the joint venture free of charge, and in return the joint venture was to provide IG Farben all new developments in the technology and proceeds of future sales. 
The factory site, adjacent to the August-Victoria coal mine at Hüls near the village of Marl, was strategically located on the northern edge of the Ruhr industrial basin along the Wesel-Datteln Canal. The Hibernia coking and hydrogenation plants in Scholven, recently completed in 1936, were to the southwest. This created a highly efficient production cycle wherein exhaust gases from Hibernia were piped to Hüls and converted into acetylene and ethylene using the electric arc process. Acetylene was then used to make butadiene, which was polymerized into buna, while ethylene was processed via ethylene oxide into antifreeze and other products. The excess hydrogen produced was returned to Hibernia to make synthetic gasoline from coal liquefaction. The Hüls factory complex was inaugurated on May 9, 1938. Managers and foremen were relocated to Marl exclusively from other IG Farben plants across Germany, such as Ludwigshafen am Rhein, Schkopau and Leverkusen, while skilled workers came from the surrounding Münster area. Housing became critical and workers lived in temporary camps as new homes were built south of the plant. The neighborhood, known as the Bereitschaftssiedlung (literally "standby settlement"), was built by IG Farben architect Clemens Anders in the traditionalist Stuttgart school style favored in the Third Reich. From 1938 to 1942, more than 5,000 employees and their families moved in, transforming Marl into a company town. A Feierabendhaus (social center) was built in 1940 with a company restaurant, cinema, theater, and training school for National Socialist concepts. Robert Ley, director of the German Labor Front, laid the foundation stone. World War II At the outbreak of war, the plant was still being fitted for full production and the first commercial buna bales were delivered on August 29, 1940. By 1942 the plant was producing 50,000 tons of Buna-S annually along with chlorine, solvents, softening agents, resins and other chemicals needed for the war effort. 
In addition to the 5000 German employees, between 10,000 and 15,000 prisoners of war and forced laborers were locked up in 30 camps around Marl to provide workers for the plant and mines which supplied it. Records from 1944 show a special prison camp on the company site controlled by the Gestapo, and Polish workers transferred between Hüls and the Buna plant at Auschwitz. The effects of war reached Hüls in mid-1943. Raw materials had become increasingly difficult to obtain and the plants were targeted by Allied bombing. On June 11, a heavy daylight raid dropped 1,560 bombs which killed 186 people and wounded 752. Another raid, by the USAAF 100th Bomb Group, was carried out on June 22 from 25,000 ft. The site was attacked again in daylight by 235 bombers from the USAAF on June 25, with 16 bombers lost. These raids halted all production for three months. More heavy bombing targeted the Hibernia hydrogenation plants to stop the flow of raw materials; however, the Hüls works managed to reach maximum output again by 1944. On March 29, 1945, a German Army special unit appeared with orders under Hitler's Nero Decree to destroy everything in Hüls. Plant employees and particularly plant director Paul Baumann persuaded the unit to disobey the orders and protect the plant until the arrival of the Americans. The United States 8th Armored Division occupied the factories on March 31, 1945. At the end of the war, the worker population had dropped from over 10,000 to about 500. Postwar Immediately after the war, the site was placed under British administration. On the breakup of owner IG Farben, the Allies initially placed tight limits on what could be produced and had plans to dismantle the plant, although rubber shortages in Europe soon meant that great efforts were made to restart buna production. 
By 1949, the company recognized that its existing synthetic rubber production methods were not competitive in world markets, and American development aid became critical in re-establishing the plant's former importance. In 1953, the works were released from Allied control and ownership converted into a stock corporation. The complex was named Chemische Werke Hüls AG and began manufacturing plastics and raw materials for detergents, adopting a new synthetic rubber process developed by the Americans. During the Wirtschaftswunder, the chemical works were continuously redeveloped with new product lines under the management of VEBA AG. In 1985, the company began trading under the name Hüls AG and had moved away from basic industries towards more complex chemicals. Hüls AG and Degussa AG merged in 1999 to form Degussa-Hüls, and in 2001 Degussa-Hüls and SKW Trostberg AG merged to form the new Degussa AG, the third largest chemical group in Germany. Recent In 2006, Essen-based coal mining conglomerate RAG AG took a controlling interest in the plant. The chemicals, energy and real estate businesses of RAG were then combined to form a new industrial group, Evonik Industries. In 2009, Evonik repositioned itself entirely into specialty chemicals and became owner/operator of the newly named Marl Chemical Park through its subsidiary Infracor GmbH. Today, in addition to Evonik and its affiliates, 17 other companies are based in the Park. Resident companies Evonik Industries and subsidiaries: Nutrition & Care Performance Materials Resource Efficiency Materials Creavis Technology and Infrastructure Logistics Service Catering Services Operations Real Estate CPM Netz TÜV Nord InfraChem Umschlag Terminal Marl Westgas Companies independent of Evonik Air Liquide GmbH Air Products GmbH Alba Group plc & Co. 
KG C+S Chlorgas GmbH Dow Deutschland Anlagengesellschaft mbH Eastman Chemical Company HTF GmbH Goodman Germany GmbH Ineos Solvents Marl GmbH Ineos Styrenics GmbH (before 2005 part of BP) Karl Schmidt Spedition GmbH & Co. KG Linde plc Metro Logistics Germany Natural Energy West GmbH OQ Chemicals GmbH & Co. KG Sasol Germany GmbH Synthomer Deutschland GmbH Vestolit GmbH Products Marl Chemical Park produces 4 million metric tons of chemicals annually. More than 4,000 chemical products are manufactured, the largest quantities being: Acetylene, acrylic acid, alkanolamines, alkylphenols Benzene, butadiene, butane, butanediol, butanol, butene, butyl acetates, butyl acrylate, butyl chloride, butyraldehyde Chlorine, copolyamides, copolyesters, cumene Dichlorobutane, dichloroethane Ethoxylate, ethylbenzene, ethyl chloride, ethylene, ethylene glycol, ethylene oxide, 2-ethylhexanol Formaldehyde Glycols Resins Isobutene Latex MAC/MAS, methanol, methyl chloride, MTBE Sodium hydroxide Polyamides, polyesters, polyethylene glycols, polyoctenamer, polystyrene, propylene, PVC Hydrochloric acid, sulfuric acid, styrene Surfactants, tetrahydrofuran Plasticizers Emergency management The chemical industry in Germany and Austria jointly maintain the Transport-Unfall-Informations- und Hilfeleistungssystem, acronym TUIS (English: Transport Accident Information and Assistance System). Experts can be reached by phone around the clock to provide information on how to handle chemicals in the event of a transport accident. The Marl Chemical Park fire brigade is one of the ten nationwide TUIS emergency call centers and also provides vehicles and equipment. Accidents January 30, 1995: After a previous safety shutdown, a connecting elbow in a reactor at the ethanolamine factory tore off during startup, and about two tons of ammonia and 400 kg of ethanolamine leaked. Since this accident happened after the day shift, only property damage occurred. 
The release of the substances is registered as ZEMA event 9501. July 19, 1998: Operator error in the vinyl chloride plant triggered an unexpected exothermic reaction. This led to the bursting of pipes, the escape of hydrogen chloride and an open fire. The fire brigade was able to protect neighboring systems with cooling, suppress the hydrogen chloride with spray mist and let escaping gases burn off in a controlled manner. There was considerable property damage. The release of the substance is recorded by ZEMA as event 9815. May 28, 1999: A pipe bend in a vinyl chloride plant tore open and a mixture of 1,2-dichloroethane, vinyl chloride and hydrogen chloride leaked out. Six employees were injured, and some emergency personnel also suffered minor injuries. No people were affected outside the Chemical Park. Because of the release of the substances, this was a reportable accident registered as ZEMA event 9918. October 10, 2006: At around 10:40 am there was a deflagration in a production building of the intermediate product factory. As a result, the Marlotherm heat-transfer oil, with which products are heated to approximately 300 °C, ignited. As a result of the oil fire, a huge black column of smoke rose into the sky, clearly visible even in the neighboring towns. After a few hours, the plant fire brigade was able to put out the fire. This incident is recorded by ZEMA as event 0621. 2012 cyclododecatriene plant fire: On March 31, 2012, at around 1:35 p.m., there was damage to the cyclododecatriene (CDT) system of the Evonik company, which was accompanied by a 100-meter-high jet flame and heavy smoke. Residents reported a severe explosion, and a cloud of smoke moved south over the A 2. One worker died at the scene of the accident, another died from serious injuries later in hospital. Measurements by the fire brigade showed no health risk for the population. According to initial investigations, material fatigue is assumed to be the cause. 
The damage stopped production of cyclododecatriene (CDT) for several months. The plant accounted for a substantial proportion of the world's production of CDT, particularly that needed to produce laurolactam, a precursor to the polyamide Nylon 12. This shortage in turn led to concerns about global production of finished goods, particularly in the automotive industry. Other biobased polyamides, not dependent on laurolactam or CDT, have been put forward as possible alternative materials. References External links Evonik Industries website in English Marl Chemical Park in English 1938 establishments in Germany Buildings and structures in Germany destroyed during World War II Buildings and structures in Krefeld Chemical industry in Germany Chemical plants Companies based in North Rhine-Westphalia Industrial buildings completed in 1938 Manufacturing plants in Germany Rubber industry World War II strategic bombing conducted by the United States German Industrial Heritage Trail sites Companies of Nazi Germany
Marl Chemical Park
[ "Chemistry" ]
3,444
[ "Chemical process engineering", "Chemical plants" ]
65,624,066
https://en.wikipedia.org/wiki/Satyavati%20Motiram%20Sirsat
Satyavati Motiram Sirsat (7 October 1925 – 10 July 2010) was an Indian cancer researcher. Early life Sirsat was born in Karachi, but lived in various cities as a girl. Her Gujarati parents were Theosophists, and she attended Kalakshetra, a theosophy-based school in Chennai, run by George Arundale and Rukmini Arundale. She earned a bachelor's degree in microbiology at St. Xavier's College in 1947, and completed doctoral studies in pathology at Tata Memorial Hospital for Cancer in 1958. She pursued further studies in electron microscopy in London in 1958. Career Sirsat established India's first electron microscopy laboratory for the study of cancer. She was the founder and president of the Electron Microscope Society of India. Her research, which focused on oral submucous fibrosis, was published in Nature, Carcinogenesis, Tumori Journal, Journal of Cell Science, Journal of Investigative Dermatology, and other scholarly journals. She served on the editorial boards of other journals, including the Indian Journal of Experimental Biology Education, and Journal of Biosciences. She became a fellow of the Indian Academy of Sciences in 1975. Sirsat retired from research in 1985, became a social worker and medical ethicist, and took an interest in ayurvedic interventions as cancer treatments. She was active in hospice work, and wrote Death, the Final Freedom (1998) about this work. Sirsat advised aspiring scientists, "Be honest to your work and true to yourself. Be disciplined. Never disparage the work of your fellow scientists. Be observant — never distort your log or show records to fit a preconceived theory. Above all, life is to learn — so learn, learn and learn!" Personal life Satyavati Motiram married fellow cancer researcher M. V. Sirsat. She died from cancer in 2010, aged 84. References 1925 births 2010 deaths Indian women scientists Cancer researchers Electron microscopy People from Karachi Gujarati people
Satyavati Motiram Sirsat
[ "Chemistry" ]
414
[ "Electron", "Electron microscopy", "Microscopy" ]
65,625,135
https://en.wikipedia.org/wiki/Androgen%20backdoor%20pathway
The androgen backdoor pathway is responsible for the synthesis of physiologically relevant androgens. This process starts with 21-carbon (C21) steroids, also known as pregnanes, and involves a step called "5α-reduction". Notably, this pathway does not require the intermediate formation of testosterone, hence the term "bypassing testosterone" is sometimes used in the medical literature to describe the hallmark feature of this mode of androgen biosynthesis. This feature is a key distinction from the conventional, canonical androgenic pathway, which necessitates the involvement of testosterone as an intermediate in the synthesis of androgens. These alternate androgen pathways play a crucial role in early male sexual development. In individuals with congenital adrenal hyperplasia due to deficiencies of enzymes such as 21-hydroxylase or cytochrome P450 oxidoreductase, these pathways can activate at any age with increased levels of precursors like progesterone or 17α-hydroxyprogesterone. This activation can lead to symptoms of hyperandrogenism such as acne, hirsutism, polycystic ovarian syndrome, or prostate enlargement. In the canonical pathway, dihydrotestosterone is directly synthesized from testosterone by the enzyme 5α-reductase, primarily in tissues like the prostate gland, hair follicles, and skin. Both pathways rely on 5α-reductase, but in the androgen backdoor pathway, this enzyme acts on C21 steroids (pregnanes), initiating a series of chemical reactions that eventually lead to dihydrotestosterone production. In contrast, in the canonical pathway, 5α-reductase targets the 4,5-double bond in testosterone, producing dihydrotestosterone directly. The backdoor pathway was initially described as a biosynthetic route where 5α-reduction of 17α-hydroxyprogesterone ultimately leads to dihydrotestosterone. Since then, several other pathways have been discovered that lead to 11-oxygenated androgens, which are also physiologically significant. 
Function Androgens that bind to and activate the androgen receptor have numerous physiological functions which can be broadly divided into androgenic (male sexual development) and anabolic (building muscle and bone). The anabolic effects are important in both males and females, although females have lower circulating levels of androgens. The physiologically most important androgens are testosterone (T) and dihydrotestosterone (DHT), which are considered classical androgens because their role in human health was discovered in the 1930s. Much later, in the 2010s, the importance to human health of the 11-oxygenated androgens 11-ketotestosterone (11KT) and 11-ketodihydrotestosterone (11KDHT) was established; both bind and activate the human androgen receptor with affinities, potencies, and efficacies similar to those of testosterone (T) and DHT, respectively, although 11-oxygenated androgens were long known to be the principal androgens of teleost fishes. The main biochemical route to T and DHT is the canonical (classical) pathway that proceeds from pregnenolone (P5). Alternatively, DHT but not T can be produced through a backdoor pathway that proceeds from 17α-hydroxyprogesterone (17OHP) or progesterone (P4). The function of androgen backdoor pathways is to produce physiologically significant androgens in normal conditions where the conventional pathway is insufficient, such as in male early sexual differentiation. Sexual differentiation is a process by which hormones determine anatomic phenotype, mainly the development of the reproductive organs. DHT is the most important androgenic hormone and is a product of both canonical and backdoor pathways. Additionally, 11KDHT but not 11KT can be biosynthesized from the C11-oxy backdoor pathway starting from progesterone (P4). These C11-oxy androgens can contribute to the pathology of congenital adrenal hyperplasia, polycystic ovarian syndrome, and prostate cancer. 
The androgen backdoor route is activated during normal prenatal development and leads to early male sexual differentiation. Dihydrotestosterone synthesized by this route plays a critical role in the development of male sexual characteristics, including the differentiation and maturation of the male external genitalia, the prostate gland, and other male reproductive structures. By bypassing the conventional intermediates (androstenedione (A4) and T), this pathway ensures the timely and appropriate development of male sexual traits in early embryonic and fetal stages. Both canonical and backdoor pathways are essential in normal male embryonic development. A disruption in the backdoor pathway can lead to incomplete or altered male sexual differentiation. This disruption may result in abnormalities or underdevelopment of the male external genitalia, prostate gland, and other male reproductive structures. The specific consequences can vary depending on the nature and extent of the disruption and may lead to conditions such as ambiguous genitalia or other disorders of sexual development (DSD), where the individual's physical and sexual characteristics do not align clearly with typical male development; in male infants this manifests as undervirilization. Undervirilization refers to insufficient development of male characteristics due to below-normal effects of androgens during prenatal development. After birth, it may manifest as markedly underdeveloped male genitalia. The backdoor pathway of DHT biosynthesis from 17OHP to DHT was first described in marsupials and later confirmed in humans. Both the canonical and backdoor pathways of DHT biosynthesis are required for normal development of male genitalia in humans. As such, defects in the backdoor pathway from 17α-hydroxyprogesterone (17OHP) or progesterone (P4) to DHT lead to undervirilization in male fetuses because placental P4 is the precursor of DHT via the backdoor pathway. 
In 21-hydroxylase deficiency or cytochrome P450 oxidoreductase deficiency, even a mild increase in circulating 17OHP levels may activate this pathway, regardless of the patient's age and sex. Mechanism Androgen signaling The androgen response mechanism involves androgens binding to androgen receptors in the cytoplasm, which then move into the nucleus and control gene transcription by interacting with specific DNA regions called androgen response elements. This response mechanism plays a crucial role in male sexual differentiation and puberty, as well as in other tissues and processes, such as the prostate gland (where androgens regulate secretory functions), hair follicles (where they influence hair growth patterns), skin (where they regulate sebum production and the thickening and maturation of the skin), and muscle (where they contribute to the development and maintenance of muscle mass and strength). Different androgens have different effects on androgen receptors because they bind and activate the receptors to different degrees. Physiologically significant androgens are those androgens that have a strong influence on the development and functioning of male sexual characteristics, unlike physiologically insignificant androgens, which have low biological activity or are quickly metabolized into other steroids. Physiologically insignificant androgens do not have a notable influence on the development and functioning of male or female sexual characteristics; they can be products of the metabolism of more active androgens, such as testosterone (T), or of their precursors. Androgen biosynthesis The androgen backdoor pathways are vital for creating androgens from 21-carbon (C21) steroids, known as pregnanes. A 21-carbon steroid is a steroid molecule with 21 carbon atoms; hence, its chemical formula contains C21. For example, the chemical formula of progesterone is C21H30O2. Accordingly, 21-carbon steroids are denoted as C21 steroids, 19-carbon steroids are denoted as C19 steroids, and so on. 
The androgen backdoor pathways occur without the involvement of testosterone (T) and/or androstenedione (A4), which are part of the conventional, canonical (classic) androgenic pathway. In the canonical pathways of androgen biosynthesis, DHT is synthesized from T via 5α-reduction, so that 5α-reduction of T, a C19 steroid, is the last step of the pathway (see Dihydrotestosterone § Biosynthesis). In the backdoor pathways, by contrast, 5α-reduction of C21 steroids is the first step. 5α-reduction is a chemical reaction in which the double bond between carbon atoms 4 and 5 (see § Figure 2) of the steroid molecule is replaced by a single bond, with hydrogen added at carbon 5 in the α orientation; the reaction is catalyzed by the SRD5A1 enzyme (see examples in § Figure 3, denoted by arrows marked "SRD5A1" in the square box). The androgen backdoor pathways can also be activated in pathologic conditions (diseases), such as congenital adrenal hyperplasia (CAH), leading to hyperandrogenism. Biochemistry Canonical biosynthesis In the canonical androgen biosynthesis pathway, dihydrotestosterone (DHT) is synthesized irreversibly from testosterone (T) by the enzyme 5α-reductase, while T is synthesized from androstenediol (A5) or androstenedione (A4), all of which are C19 steroids (androgens). The 5α-reduction of T occurs in various tissues including the genitals (penis, scrotum, clitoris, labia majora), prostate gland, skin, hair follicles, liver, and brain. Around 5 to 7% of T undergoes 5α-reduction into DHT in male adults. Most DHT is produced in peripheral tissues like the skin and liver (called target tissues), whereas most circulating DHT originates specifically from the liver. The testes and prostate gland contribute relatively little to concentrations of DHT in circulation. 
Backdoor biosynthesis What distinguishes the androgen backdoor from the classical pathway is whether 5α-reduction initiates or terminates the pathway. In the backdoor pathway, 5α-reduction of progesterone (P4) or 17α-hydroxyprogesterone (17OHP) occurs at or near the beginning of the pathway, respectively. Conversely, in the classical pathway, 5α-reduction is the final step, where testosterone is converted into dihydrotestosterone (DHT). The backdoor pathway splits into two subpathways at P4, proceeding through either 17OHP or 5α-DHP before merging again at 5α-Pdiol. The biosynthetic intermediate 5α-Pdiol in turn is converted into DHT in two chemical steps. 17OHP subpathway The first step of this pathway is the 5α-reduction of 17OHP to 5α-pregnan-17α-ol-3,20-dione (referred to as 17OHDHP or 17α-hydroxy-dihydroprogesterone). The reaction is catalyzed by SRD5A1. 17OHDHP is then converted to 5α-pregnane-3α,17α-diol-20-one (5α-Pdiol) via 3α-reduction by a 3α-hydroxysteroid dehydrogenase isozyme (AKR1C2 or AKR1C4) or by HSD17B6, which also has 3α-reduction activity. The pathway then proceeds from 5α-Pdiol the same way as the pathway that starts from P4, i.e. 5α-Pdiol → AST → 3α-diol → DHT. The pathway can be summarized as: 17OHP → 17OHDHP → 5α-Pdiol → AST → 3α-diol → DHT. 5α-DHP subpathway The pathway from progesterone (P4) to DHT is similar to that described above from 17OHP to DHT, but the initial substrate for 5α-reductase is P4 rather than 17OHP. Placental P4 in the male fetus is the feedstock, that is, the initial substrate, for the backdoor pathway operating in multiple non-gonadal tissues. The first step in this pathway is 5α-reduction of P4 to 5α-dihydroprogesterone (5α-DHP) by SRD5A1. 5α-DHP is then converted to allopregnanolone (AlloP5) via 3α-reduction by AKR1C2 or AKR1C4. AlloP5 is then converted to 5α-Pdiol by the 17α-hydroxylase activity of CYP17A1. 5α-Pdiol is also known as 17α-hydroxyallopregnanolone or 17OH-allopregnanolone. 
5α-Pdiol is then converted to 5α-androstan-3α-ol-17-one, also known as androsterone (AST), by the 17,20-lyase activity of CYP17A1, which cleaves the side-chain (C17-C20 bond) from the steroid nucleus, converting a C21 steroid (a pregnane) to a C19 steroid (an androstane, i.e. an androgen). AST is 17β-reduced to 5α-androstane-3α,17β-diol (3α-diol) by HSD17B3 or AKR1C3. The final step is 3α-oxidation of 3α-diol in target tissues to DHT by an enzyme that has 3α-hydroxysteroid oxidase activity, such as AKR1C2, HSD17B6, HSD17B10, RDH16, RDH5, and DHRS9. This oxidation is not required in the classical androgen pathway. The pathway can be summarized as: P4 → 5α-DHP → AlloP5 → 5α-Pdiol → AST → 3α-diol → DHT. 11-Oxygenated androgen backdoor biosynthesis There are two known physiologically and clinically significant 11-oxygenated androgens, 11-ketotestosterone (11KT) and 11-ketodihydrotestosterone (11KDHT), which both bind and activate the androgen receptor with affinities, potencies, and efficacies similar to those of testosterone (T) and DHT, respectively. As for 11β-hydroxytestosterone (11OHT) and 11β-hydroxydihydrotestosterone (11OHDHT), the androgenicity of these steroids is a point of research. Some studies suggest that, although 11OHT and 11OHDHT may not possess the significant androgenic activity they were once thought to have, they may still be important precursors to androgenic molecules. The relative importance of the androgens depends on their activity, circulating levels, and stability. The steroids 11β-hydroxyandrostenedione (11OHA4) and 11-ketoandrostenedione (11KA4) have been established as having minimal androgen activity, but remain important molecules in this context since they act as androgen precursors. Still, of all physiologically and clinically significant 11-oxygenated androgens, only 11KDHT (but not 11KT) is biosynthesized via a backdoor pathway. 
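As a purely illustrative aside, the two backdoor subpathways to DHT summarized above can be modeled as ordered (substrate, enzyme, product) steps. The abbreviations and enzyme assignments follow the text; the data model itself, and the enzyme groupings within each step, are only a didactic sketch and not an established representation:

```python
# Didactic sketch of the two backdoor subpathways to DHT described above,
# encoded as ordered (substrate, enzyme, product) steps.

P4_SUBPATHWAY = [
    ("P4",       "SRD5A1",                      "5a-DHP"),    # 5α-reduction
    ("5a-DHP",   "AKR1C2/AKR1C4",               "AlloP5"),    # 3α-reduction
    ("AlloP5",   "CYP17A1 (17α-hydroxylase)",   "5a-Pdiol"),  # 17α-hydroxylation
    ("5a-Pdiol", "CYP17A1 (17,20-lyase)",       "AST"),       # C21 -> C19 cleavage
    ("AST",      "HSD17B3/AKR1C3",              "3a-diol"),   # 17β-reduction
    ("3a-diol",  "3α-HSD oxidase (e.g. RDH16)", "DHT"),       # 3α-oxidation
]

# The 17OHP subpathway merges with the P4 subpathway at 5a-Pdiol.
OHP17_SUBPATHWAY = [
    ("17OHP",    "SRD5A1",                       "17OHDHP"),
    ("17OHDHP",  "AKR1C2/AKR1C4 or HSD17B6",     "5a-Pdiol"),
] + P4_SUBPATHWAY[3:]

def trace(pathway):
    """Verify each step's product is the next step's substrate; return the route."""
    for (_, _, product), (next_substrate, _, _) in zip(pathway, pathway[1:]):
        assert product == next_substrate, f"pathway breaks at {product} -> {next_substrate}"
    return " -> ".join([pathway[0][0]] + [product for _, _, product in pathway])

print(trace(P4_SUBPATHWAY))
# P4 -> 5a-DHP -> AlloP5 -> 5a-Pdiol -> AST -> 3a-diol -> DHT
print(trace(OHP17_SUBPATHWAY))
# 17OHP -> 17OHDHP -> 5a-Pdiol -> AST -> 3a-diol -> DHT
```

Both traces reproduce the pathway summaries given in the text, and the shared tail after 5α-Pdiol makes the merge point of the two subpathways explicit.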
The backdoor pathways to 11-oxygenated androgens can be broadly defined as two Δ4 steroid entry points (17OHP and P4, see Figure 4) that can undergo a common sequence of several transformations: 11β-hydroxylation of 17OHP or P4 by CYP11B1 in the adrenal cortex into 21dF or 11OHP4, respectively, 5α-reduction by SRD5A1/SRD5A2, cleavage of a side-chain (C17-C20 bond) from the steroid nucleus by 17,20-lyase activity of CYP17A1, which converts a C21 steroid to a C19 steroid, 17β-reduction by AKR1C3 (an oxo (=O) functional group at position 17 replaced by a 17β-hydroxyl (−OH) functional group), reversible 11β-reduction/oxidation of the ketone/alcohol (an oxo (=O) functional group or hydroxyl (−OH) functional group, respectively) by HSD11B1/HSD11B2, and reversible 3β-reduction/oxidation of the ketone/alcohol (an oxo (=O) functional group or hydroxyl (−OH) functional group, respectively) by AKR1C2 or AKR1C4. Clinical significance Congenital adrenal hyperplasia In congenital adrenal hyperplasia (CAH) due to deficiency of 21-hydroxylase or cytochrome P450 oxidoreductase (POR), the associated elevated 17OHP levels result in flux through the backdoor pathway to DHT that begins with 5α-reduction of 17OHP. This pathway may be activated regardless of age and sex and cause symptoms of androgen excess. In adult females, excess androgens can cause hirsutism (excessive hair growth), alopecia (hair loss), menstrual irregularities, infertility, and polycystic ovarian syndrome. In adult males, excess androgens can cause prostate enlargement, prostate cancer, and reduced sperm quality. In adults of both sexes, excess androgens can also cause metabolic disturbances, such as insulin resistance, dyslipidemia, hypertension, and cardiovascular disease. In the fetus, an excess of androgens due to elevated fetal 17OHP in CAH may contribute to DHT synthesis, leading to external genital virilization in newborn girls with CAH. 
P4 levels may also be elevated in CAH, leading to androgen excess via the backdoor pathway from P4 to DHT. 17OHP and P4 may also serve as substrates for the synthesis of 11-oxygenated androgens in CAH. Masculinization of female external genitalia in a fetus due to the mother's intake of certain exogenous hormones (so-called progestin-induced virilization) is usually less noticeable than in congenital adrenal hyperplasia (CAH), and unlike CAH, it does not cause progressive virilization. Serum levels of the 11-oxygenated steroids 21-deoxycorticosterone (also known as 11β-hydroxyprogesterone, 11OHP4) and 21-deoxycortisol (21dF) have long been known to be elevated in both non-classical and classical forms of CAH, and liquid chromatography–mass spectrometry profiles that include these steroids have been proposed for clinical applications, including newborn screening. Classical CAH patients receiving glucocorticoid therapy had 11-oxygenated steroid serum levels that were elevated compared to healthy controls. In CAH patients with poor disease control, 11-oxygenated androgens remain elevated for longer than 17OHP, thus serving as a better biomarker for the effectiveness of the disease control. In males with CAH, 11-oxygenated androgen levels may indicate the presence of testicular adrenal rest tumors. Development of the reproductive system In order for the male genitalia to develop properly in humans, both the classical and backdoor pathways are essential as means of DHT biosynthesis. Deficiencies in the backdoor pathway that converts 17OHP or P4 into DHT can result in undervirilization of the male fetus. This undervirilization may occur because placental P4 acts as an important precursor to fetal DHT specifically within the backdoor pathway. Undervirilization refers to an incomplete masculinization of the male fetus. It can have consequences such as ambiguous genitalia or underdeveloped reproductive organs including the penis and testes. 
These conditions may impact fertility and sexual function, and can also affect an individual's overall gender identity later in life. A case study involving five individuals with a 46,XY (male) chromosomal pattern from two families revealed that their DSD, manifesting as atypical genital appearance, was caused by mutations in the AKR1C2 and/or AKR1C4 genes. These genes are exclusively involved in the backdoor pathway of dihydrotestosterone (DHT) production. Mutations in AKR1C3 and in genes involved in the classical androgen pathway were excluded as causes of the atypical genital appearance. Interestingly, their female relatives with a 46,XX chromosomal pattern who had the same mutations exhibited normal physical characteristics and fertility. Although both AKR1C2 and AKR1C4 enzymes are needed for DHT synthesis in a backdoor pathway, the study found that mutations in AKR1C2 alone were sufficient to disrupt it. However, these AKR1C2/AKR1C4 variants leading to DSD are rare and have so far been reported in only those two families. This case study highlights the role of AKR1C2/4 in the alternative androgen pathways. Isolated 17,20-lyase deficiency syndrome due to variants in CYP17A1, cytochrome b5, and POR may also disrupt the backdoor pathway to DHT, as the 17,20-lyase activity of CYP17A1 is required for both classical and backdoor androgen pathways. This rare deficiency can lead to DSD in both sexes, with affected girls being asymptomatic until puberty, when they present with amenorrhea. 11-oxygenated androgens may play important roles in DSDs. 11-oxygenated androgen fetal biosynthesis may coincide with the key stages of cortisol production: at weeks 8–9, 13–24, and from week 31 onwards. In these stages, impaired CYP17A1 and CYP21A2 activity leads to increased ACTH due to cortisol deficiency and to the accumulation of substrates for CYP11B1 in pathways to 11-oxygenated androgens, and could cause abnormal female fetal development (virilization). 
Benign prostatic hyperplasia and prostatitis Androgens are known to play a crucial role in prostate-related conditions such as benign prostatic hyperplasia (BPH), chronic prostatitis/chronic pelvic pain syndrome (CP/CPPS), and prostate cancer. In BPH, 11-oxygenated C21 steroids (pregnanes) have been identified as precursors to androgens. Specifically, steroids like 11β-hydroxyprogesterone (11OHP4) and 11-ketoprogesterone (11KP4) can be converted to 11-ketodihydrotestosterone (11KDHT), an 11-oxo form of DHT with the same potency. These precursors have also been detected in tissue biopsy samples from patients with BPH, as well as in their serum. The relationship between steroid serum levels and CP/CPPS suggests that deficiencies in the enzyme CYP21A2 may contribute to the development of this condition. Non-classical congenital adrenal hyperplasia (CAH) resulting from CYP21A2 deficiency is typically considered asymptomatic in men. However, non-classical CAH could be a comorbidity associated with CP/CPPS. Prostate cancer The backdoor pathway to DHT plays a role in the development of androgen-sensitive cancers, such as prostate cancer. In some cases, tumor cells have been found to possess higher levels of enzymes involved in this pathway, resulting in increased production of DHT. Androgen deprivation therapy (ADT) is a common treatment for prostate cancer, which involves reducing the levels of androgens, specifically T and DHT, in the body. This is achieved with medications that block the production or action of these hormones. While ADT can be effective in slowing the growth of prostate cancer, it also has several drawbacks, one of which is the potential for increased production of P4 and activation of the backdoor pathway of DHT biosynthesis, in which P4 serves as a substrate. 
Normally, this pathway is not very active in healthy adult males, as the majority of DHT is produced through the classical pathway, which involves the direct conversion of T into DHT by one of the SRD5A isozymes. However, when T levels are reduced through ADT, the body may compensate by increasing the production of P4, which can then serve as a substrate for the backdoor pathway. The main drawback is that this increased production of P4 leads to an increase in DHT levels, which fuels the growth of prostate cancer cells. This increased production of P4 and DHT can result in the cancer becoming resistant to ADT and continuing to grow and spread. Additionally, the increased levels of P4 can also cause side effects such as weight gain, fatigue, and mood swings (extreme or rapid changes in mood). In prostate cancer, removal of testicular T through castration (surgical or chemical removal or inactivation of the testicles) helps eliminate the growth-promoting effects of androgens. However, in some cases, metastatic tumors can develop into castration-resistant prostate cancer (CRPC). While castration reduces serum T levels by 90-95%, it only decreases DHT in the prostate gland by 50%. This disparity indicates that the prostate has enzymes capable of producing DHT even without testicular T. In addition to DHT production within the prostate, researchers have found that 11-oxygenated androgens help maintain the total pool of circulating, clinically significant androgens in the body. These 11-oxygenated androgens contribute greatly to reactivating androgen signaling in patients with CRPC. 11-oxygenated androgens make up around 60% of the total active androgen pool in such patients. Unlike those of T and DHT, the levels of 11-oxygenated androgens remain unaffected by castration therapy. 
History The backdoor pathway to DHT biosynthesis was discovered in the early 2000s in marsupials and later confirmed in humans. For this reason, the backdoor pathway of DHT biosynthesis from 17OHP is sometimes called the marsupial pathway. This pathway is also present in other mammals, such as rats, and has been studied in these species to better understand the corresponding pathways in humans. Marsupials, in particular tammar wallabies, are especially useful for studying the processes of sexual differentiation and development in the context of androgen biosynthesis, because sexual differentiation in these species occurs only after birth, with testes beginning to form two days after birth and ovaries only on the eighth day after birth. This feature of post-natal early sexual differentiation allows researchers to study the influence of hormones on the body from the very beginning of the process of sexual differentiation, as well as the pathways of biosynthesis of these hormones. Tammar wallabies are particularly interesting because all these hormones, pathways, and the ways in which hormones affect body features and the growth of different organs can be studied after the organism is born, unlike in other mammals such as rats, where sexual differentiation of the fetus occurs in utero before birth. The discovery of the backdoor pathway to DHT biosynthesis in tammar wallaby pouch young prompted research into identifying and characterizing similar pathways in humans, leading to a better understanding of the regulation, metabolism, and therapeutic targeting of androgen biosynthesis in human health and in diseases related to excessive or insufficient androgen biosynthesis, where the classical androgen pathway could not fully explain the observed conditions in patients. Over the following two decades, several other distinct pathways were discovered: the pathways that lead to the synthesis of 11-oxygenated androgens. 
Below is a brief selection of key events in the history of androgen backdoor pathway research: In 2000, Shaw et al. demonstrated that circulating 3α-diol mediates prostate development in tammar wallaby pouch young via conversion to DHT in target tissues. Tammar wallaby pouch young do not show sexually dimorphic circulating levels of T and DHT during prostate development, which suggests that another androgenization mechanism was responsible. While 3α-diol's androgen receptor binding affinity is five orders of magnitude lower than that of DHT (3α-diol is generally described as inactive at the androgen receptor), it was known that 3α-diol can be oxidized back to DHT via the action of a number of dehydrogenases. In 2003, Wilson et al. demonstrated that 5α-reductase expression in target tissues enabled a novel pathway from 17OHP to 3α-diol without T or A4 as an intermediate. In 2004, Mahendroo et al. demonstrated that an overlapping novel pathway is operating in mouse testes, generalizing what had been demonstrated in the tammar wallaby. The term "backdoor pathway" was coined by Auchus in 2004, describing the 5α-reduction of 17α-hydroxyprogesterone (17OHP) as the first step in a pathway that ultimately leads to the production of dihydrotestosterone (DHT), and was defined as a route to DHT that: (1) bypasses the conventional intermediates androstenedione (A4) and T; (2) involves 5α-reduction of pregnanes to androstanes; and (3) involves the 3α-oxidation of 3α-diol to DHT. The backdoor pathway explains how androgens are produced under certain normal and pathological conditions in humans when the classical androgen pathway cannot fully explain the observed consequences. The clinical relevance of the results published by Auchus in 2004 was demonstrated for the first time in 2012, when Kamrath et al. attributed urinary metabolites to the androgen backdoor pathway from 17OHP to DHT in patients with steroid 21-hydroxylase (encoded by the gene CYP21A2) enzyme deficiency. Barnard et al. 
in 2017 demonstrated metabolic pathways from C21 steroids to 11KDHT that bypass A4 and T, an aspect similar to the backdoor pathway to DHT. These newly discovered pathways to 11-oxygenated androgens were also described as "backdoor" pathways due to this similarity, and were further characterized in subsequent studies. List of figures Schematic diagram of the canonical, backdoor, and 11-oxy backdoor pathways of androgen biosynthesis Numbering of carbon atoms in a steroid molecule The backdoor pathways from progesterone or 17α-hydroxyprogesterone to dihydrotestosterone The backdoor pathways from progesterone or 17α-hydroxyprogesterone to 11-oxygenated androgens See also Late onset congenital adrenal hyperplasia Congenital adrenal hyperplasia due to 21-hydroxylase deficiency References Metabolic pathways Wikipedia articles with sections published in WikiJournal of Medicine
Androgen backdoor pathway
[ "Chemistry" ]
6,760
[ "Metabolic pathways", "Metabolism" ]
65,627,070
https://en.wikipedia.org/wiki/Residential%20Design%20Codes%20%28Western%20Australia%29
The Residential Design Codes (R-Codes) provide uniform residential development standards across all Western Australian local government areas. The R-Codes were first gazetted in 1985, with four subsequent editions published in 1991, 2002, 2008 and 2019. The codes are prepared by the Department of Planning, Lands and Heritage for the Western Australian Planning Commission and implemented via reference in local planning schemes. The R-Codes primarily control residential development by limiting the number of dwellings per site area. Background From the nineteenth century to the mid-twentieth century, residential development in Western Australia was regulated via local government by-laws and development standards under town planning schemes. This led to considerable variation between local governments. In 1964 the Town Planning Department commissioned George Clarke and Donald Gazzard to prepare a uniform residential code known as the “General Residential Codes” (also the GR Codes), which was gazetted in 1966. These codes improved matters, but were still implemented via incorporation into local planning schemes, which allowed local governments to vary provisions. Ultimately the codes did not lead to the level of uniformity desired. Following a series of reviews, a new Residential Planning Code was gazetted in 1985 as a State Planning Policy. This code was incorporated into all local planning schemes via reference and applied uniformly, allowing the state government to update the codes periodically. A performance-based assessment pathway was introduced in 2002. See also ResCode (Victoria) Green Street Joint Venture References Local government areas of Western Australia Building codes Western Australia
Residential Design Codes (Western Australia)
[ "Engineering" ]
291
[ "Building engineering", "Building codes" ]
65,627,170
https://en.wikipedia.org/wiki/Andrew%20Braddock
Andrew Braddock (born 1978) is a member of the unicameral Australian Capital Territory Legislative Assembly, representing the multi-member electorate of Yerrabi since 2020 for the ACT Greens. Early life, education and career before politics Braddock graduated from Griffith University with a Bachelor of Environmental Engineering (Honours) and moved to Canberra in 2002 to join the Public Service. He went on to obtain a Master of Management Studies from the University of New South Wales. Braddock was a public servant and environmental engineer. Political career Braddock stood for election to the ACT Legislative Assembly at the 2016 ACT election as a candidate for the ACT Greens in the new electorate of Yerrabi. The Greens were unsuccessful in winning a seat in Yerrabi, obtaining 7.1 per cent of the vote. In 2019 he ran for the Greens in the federal election for the seat of Fenner, where he obtained 14.42 per cent of the vote, a swing to the Greens of 1.42 per cent. Braddock stood as the ACT Greens lead candidate for Yerrabi at the 2020 ACT election and secured a seat with a primary vote of 10.2 per cent (a swing to the Greens of 3.1%). Parliamentary career Braddock is the ACT Greens Party Whip and ACT Greens spokesperson for Multicultural Affairs, Better Neighbourhoods, Democracy, Integrity and Community Engagement, Police and Emergency Services, Corrections, Workplace Safety and Industrial Relations. References 1978 births Living people Members of the Australian Capital Territory Legislative Assembly Australian Greens members of the Australian Capital Territory Legislative Assembly 21st-century Australian politicians Griffith University alumni University of New South Wales alumni Australian engineers
Andrew Braddock
[ "Chemistry", "Engineering" ]
321
[ "Environmental engineers", "Environmental engineering" ]
65,627,460
https://en.wikipedia.org/wiki/Women%20and%20Families%20for%20Defence
Women and Families for Defence was a Conservative-aligned pressure group originally founded in March 1983 as Women for Defence. It was founded in opposition to the Greenham Common Women's Peace Camp and the Campaign for Nuclear Disarmament, and aimed to oppose arguments in favour of unilateral nuclear disarmament. It was reportedly founded by Lady Olga Maitland, Ann Widdecombe, Virginia Bottomley and Angela Rumbold (who also became vice-chairwoman of the organization). However, Alfred Sherman told the Sunday Times that it was Maitland who 'solely' set up the group, with his help. The Viscount Trenchard, the former Minister for Defence Procurement, became its president. The group had its own magazine, Deter, and received a commendation from the U.S. president, Ronald Reagan. The group held its first public meeting on 1 May 1983 in Trafalgar Square, at which 150 members of the group met, sang "Land of Hope and Glory" and argued in favour of a nuclear deterrent as a precursor to multilateral nuclear disarmament. The group also delivered a petition signed by 13,000 people in response to Western proposals for missile reductions. In 1986, it was expelled from a council that was organising events to mark the International Year of Peace that year. Maitland later turned the group into a general anti-Labour political canvassing group, Women and Families for Canvassing. References 1983 establishments in England Protests in England Politics of the United Kingdom Cold War history of the United Kingdom Cold War organizations Nuclear organizations
Women and Families for Defence
[ "Engineering" ]
315
[ "Nuclear organizations", "Energy organizations" ]
65,627,512
https://en.wikipedia.org/wiki/HD%20108863
HD 108863 is a subgiant star, the primary of a binary star system away, belonging to spectral class K0. Its age is younger than the Sun's at billion years. The primary star is slightly enriched in heavy elements, having 115% of solar abundance. The primary star does not have detectable flare activity. In 2014, a poorly characterized co-moving stellar companion HD 108863 B, likely a main sequence star of spectral class between F6 and G4, was discovered at a projected separation of 16.065 AU. Planetary system In 2011 one superjovian planet, HD 108863 b, on a nearly circular orbit around star HD 108863 was discovered utilizing the radial velocity method. The planet does not transit its host star. References Coma Berenices Planetary systems with one confirmed planet J12301991+2156537 061020 Durchmusterung objects 108863 Binary stars K-type subgiants
HD 108863
[ "Astronomy" ]
200
[ "Coma Berenices", "Constellations" ]
65,627,586
https://en.wikipedia.org/wiki/N%C3%BCzhet%20G%C3%B6kdo%C4%9Fan
Hatice Nüzhet Gökdoğan (14 August 1910 – 24 April 2003) was a Turkish astronomer, mathematician and academic. After studying mathematics and astronomy in France as a young adult, Gökdoğan joined the faculty of Istanbul University in 1934 and completed her PhD. She was elected Dean of the university's Faculty of Science in 1954, becoming the first Turkish woman to serve as a university dean, and she was later made Chair of the astronomy department, significantly expanding her department's capacity and working to improve national and international collaboration between astronomers. Gökdoğan co-founded the Turkish Mathematical Society, the Turkish Astronomy Association and the Turkish University Women's Association. She was Turkey's first national representative at the International Astronomical Union (IAU), and has been credited as Turkey's first female astronomer. Early life and education Nüzhet Gökdoğan was born on 14 August 1910 in Istanbul (then Constantinople). Her mother was named Nebihe Hanım, while her father was Mehmet Zihni Toydemir, a major general. In her late teens, Gökdoğan received a scholarship to study in France; she enrolled in the University of Lyon and in 1932 she completed her undergraduate degree in mathematics. She had a strong interest in astronomy and subsequently studied physics at the University of Paris, where she received a Diplôme d'études supérieures. She then completed an internship at the Paris Observatory. Career Returning to Turkey in 1934, Gökdoğan applied to work at the Kandilli Observatory, but was turned down because the director did not want a woman working there. She instead joined Istanbul University as a faculty member in the Astronomy Department. She was the first woman member of the school's faculty of science.
She completed her PhD three years later, submitting a dissertation entitled Contribution aux recherches sur l'existence d'une matière obscure interstellaire homogène autour du soleil (Contribution to research on the existence of homogeneous interstellar dark matter around the sun). Gökdoğan's dissertation was recorded as the first doctoral thesis completed at Istanbul University's faculty of science. In 1948, Gökdoğan was made full professor at the university, and also co-founded the Turkish Mathematical Society. She served as president of the Turkish Union of Soroptimists in the early 1950s. Upon being elected Dean of Istanbul University's science faculty in 1954, Gökdoğan became the first Turkish woman to serve as a university dean. She was a founding member of the Turkish Astronomy Association that same year, and she served as president of the association for the next two decades. In 1958, she was appointed Chair of the Astronomy Department at Istanbul University, and she held the role for the rest of her time as a faculty member. Gökdoğan worked hard to expand her department, gradually increasing the number of staff from 5 to 18, and she developed a number of new collaborative programs with observatories in France, Italy and Switzerland. She wrote introductory textbooks on astronomy and spectroscopy for students in Turkish high schools. She also co-founded the Turkish University Women's Association, and served as its president more than once. Gökdoğan was a member of the International Astronomical Union (IAU), and in 1961 she became the first national representative of Turkey to the IAU. In August 1961, she represented Turkey as a delegate at an IAU conference in Berkeley, California. During her time as a member in the IAU, she participated in two of its commissions on "theory of stellar atmospheres" and "solar radiation and structure". She organized a number of national and international astronomy symposiums in Turkey. 
One of these events in the late 1970s was credited with solidifying broader interest in building a new national observatory. Gökdoğan retired from Istanbul University in 1980. Personal life Gökdoğan married Mukbil Gökdoğan, who was an architecture professor and former minister of public works. They had two children, both of whom grew up to become university professors. Mukbil died in 1992. Gökdoğan died on 24 April 2003. Legacy A Google Doodle was published on 14 August 2023 celebrating the 113th birthday of Gökdoğan. References 1910 births 2003 deaths Turkish astronomers Academic staff of Istanbul University Istanbul University alumni University of Lyon alumni University of Paris alumni 20th-century Turkish mathematicians 20th-century Turkish women scientists 20th-century Turkish scientists Scientists from Istanbul Women astronomers Women mathematicians 20th-century astronomers Turkish expatriates in France
Nüzhet Gökdoğan
[ "Astronomy" ]
932
[ "Women astronomers", "Astronomers" ]
65,629,397
https://en.wikipedia.org/wiki/Favourable%20conservation%20status%20of%20wolves%20in%20Europe
The favourable conservation status of wolves describes a wolf population that is no longer threatened with extinction and is capable of long-term survival. In Europe the favourable conservation status is defined by the Guidelines for Population Level Management Plans for Large Carnivores. It corresponds to the minimum viable population, which can comprise different numbers of wolves depending on their connectivity with neighbouring populations. According to the IUCN guidelines, at least 1000 adult animals are required for isolated populations. If a wolf population is effectively linked genetically and demographically with other wolf populations, more than 250 mature wolves may be sufficient. Developments Before the Bern Convention of 1979 entered into force in 1982, wolves had been decimated to isolated relict populations in parts of their formerly extensive distribution area in order to end the damage caused by predation on livestock. Populations in Europe have recovered during the four decades of strict protection and are largely in a favourable conservation status. The innate instinctive behaviour of the wolf, with its enormous potential for long-distance migration, favours both its rapid expansion and the genetic connectivity of the various populations. Satellite telemetry has shown that some wolves travel over 1000 kilometres within a few months, so they can colonise new areas relatively quickly. Population genetic analyses by Maris Hindrikson et al. showed a range of 650 to 850 km when using spatial autocorrelation based on three characteristics of genetic diversity. This suggests that the genetic diversity of a wolf population can be influenced by populations up to 850 km away. Population genetics criteria Both the number of individuals and the gene flow between populations determine whether a viable population, and thus a favourable conservation status, exists.
Luigi Boitani defines this as a minimum number of individuals in an area of minimum size that provides sufficient resources for the animals so that the population is not at risk of extinction. According to an IUCN guideline for unspecified animal species, at least 1000 sexually mature individuals are required in an isolated population to ensure its survival and avoid possible inbreeding depression. Connectivity with neighbouring populations means that far fewer individuals are required to avoid inbreeding depression. According to the Guidelines for Population Level Management Plans for Large Carnivores of the Large Carnivore Initiative for Europe, a division of the IUCN, a population of more than 250 adult wolves may be sufficient to be classified as "Least Concern" if the wolf population in question is connected with others in such a way that the immigrants have genetic and demographic effects. "Since the object of any conservation planning should be the entire biological unit, i.e. the population, the guidelines recommend an assessment at population level". Ilka Reinhardt and Gesa Kluth write in the BfN Script 201 for an independent population: "If one assumes the conservation of 95% of the genetic variation ... is taken as the target value, this means that at least 100 reproducing wolf packs correspond to a favourable conservation status." For example, the German-Western Polish population as a sub-population of the Baltic population, with at least 95 packs in Poland west of the Vistula together with five packs belonging to the same sub-population in Germany, would already be in a favourable conservation status, even without the existing exchange with the Baltic and other neighbouring populations. In the monitoring year 2020/21, there were a total of 157 wolf packs registered in Germany.
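The population-size criteria above can be sketched as a simple decision rule. This is an illustrative sketch only: the function name and the reduction of connectivity to a single yes/no flag are assumptions made for the example, not part of the guidelines themselves.

```python
def conservation_status(mature_wolves, connected_to_neighbours):
    """Illustrative sketch of the population-size criteria described above.

    Isolated populations require at least 1000 mature individuals (IUCN
    guideline for unspecified species); populations effectively linked
    genetically and demographically to neighbours may qualify with more
    than 250 mature individuals.
    """
    if connected_to_neighbours:
        viable = mature_wolves > 250
    else:
        viable = mature_wolves >= 1000
    return "favourable" if viable else "unfavourable"

conservation_status(300, True)    # → "favourable"
conservation_status(300, False)   # → "unfavourable"
```

In practice the guidelines treat connectivity as a matter of degree (immigrants must have measurable genetic and demographic effects), so the binary flag here is a deliberate simplification.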
Wolf monitoring is used to determine the extent to which genetic exchange between the various wolf populations or subpopulations is taking place again. Today, immigration of wolves from Poland to Germany, but also return migration to the east, is frequent. Wolves from the Carpathians are migrating into the German-Western Polish population. In Bavaria, there were eight records of wolves migrating from the Alpine population in the period 2009 to 2020. In Baden-Württemberg there were five records of wolves from the Alpine and Italian populations in the period 2015 to 2020. In September 2020, a wolf (GW 1832 m) from the Alps arrived in the Neckar-Odenwald-Kreis. Individual wolves from the Dinarides-Balkans population have also migrated as far as the German Alpine region. In early summer 2020 a male wolf (GW 1706 m) from the Dinaric population was detected at Traunstein. The Eastern, Central, Western and Southern European wolf subpopulations are all parts of a rapidly growing metapopulation. Situation in Europe Henryk Okarma said in November 2020 at a conference of the European Parliament: "Although the range of the wolf has increased at least sevenfold since 2006 and the number of wolves has increased by four to five times, the conservation status ... was still assessed as unfavourable in the most recent report from 2018. Does this mean that we will have to wait until this species is everywhere before designating a favourable conservation status? If the wolf is everywhere, this can cause conflicts and lead to a decrease in the social acceptance of this species and an increase in illegal actions against wolves. Is such a 'favourable conservation status' then our true conservation goal?"
Commitments of the EU member states The EU member states monitor the conservation status of natural habitats with their priority species and set up a monitoring system to record the animal species listed in Annexes II, IV and V as well as illegal and, exceptionally, legal killings. The wolf monitoring records serve as feedback to the IUCN, where entries in the Red List are made in the appropriate categories, and to the European Commission (Natura 2000). The EU member states are obliged to pass on current data to the European Commission so that the latter can adapt the protection status in the Habitats Directive accordingly. A transfer from the list of strictly protected species in Annex IV of the Habitats Directive to the list of protected species in Annex V requires coordination at federal level with neighbouring countries and the approval of the European Commission. The EU plans to make the move from "strictly protected" to "protected" status in March 2025. The Habitats Directive does not prescribe any protection measures for the individual habitats, nor listing in Annex II with Sites of Community Importance, but it requires the guarantee of a favourable conservation status as defined above. The obligation to provide up-to-date data (beyond the usual six-year cycle for species other than large carnivores) derives from Article 16.1.c of the Habitats Directive: "(c) in the interests of public health and public safety, or for other imperative reasons of overriding public interest, including those of a social or economic nature and beneficial consequences of primary importance for the environment;" in this case to save pastoral farming with species-appropriate free-range husbandry, which also fulfils an important function for the preservation of biodiversity in open-land biotopes worthy of protection. References Wolves Wolves Wolves Wolves, Europe
Favourable conservation status of wolves in Europe
[ "Biology" ]
1,403
[ "Biota by conservation status", "Animals", "Animals by conservation status" ]
65,629,505
https://en.wikipedia.org/wiki/Electrolytic%20iron
Electrolytic iron is a form of high purity iron, obtained by electrolysis. It has a purity greater than 99.95%, with trace elements present only at levels of parts per million. Overview To obtain the best qualities that iron has to offer, such as high ductility, greater corrosion resistance and better magnetic characteristics, a chemical process must be used to remove impurities. The most effective process is electrolysis, which takes commercial-grade iron and minimizes the C, S, Mn and other trace element levels, yielding one of the highest grades of iron on the market, known as electrolytic iron. Once the iron is in its purest state, it can then be used as a component in alloys. Alloys with high purity elemental makeups have specifically enhanced properties such as ductility, tensile strength, toughness, fatigue strength, heat resistance and corrosion resistance, in which each element contributes its best inherent properties to the alloy as a whole. Production method of electrolytic iron Smelting is typically classified into two procedures: the wet process and the dry process. Electrolytic iron is produced by the "wet process", since electrolysis requires electric charges to move through a liquid solution. This causes a chemical reaction called electrolytic refining, the result of which is electrolytic iron. An anode (raw material) and a cathode (base plate) are immersed in an electrolyte containing iron ions and other components, and current flows between the anode and the cathode. As a result, iron is deposited on the surface of the cathode due to a difference in ionization tendency, and high purity iron can be obtained. TOHO ZINC CO.,LTD. produces and sells electrolytic iron refined by the wet process on an industrial scale and accounts for the top market share of wet-process high purity iron.
The purity of the iron sold ranges from 99.9% to 99.999%, including the gas components O, N, C and H. Other methods High purity iron is also produced by dry processes; VOD (vacuum oxygen decarburization) and ESR (electro-slag remelting) are known as dry processes. VOD is a process of melting pure iron in a vacuum and degassing it. In the ESR method, pure iron serves as an electrode and is refined as the molten metal drips through the slag. In addition to electrolytic refining, an ion exchange method is also known as a wet process. Application Electrolytic iron is utilized by the aerospace sector in areas where components are safety critical. Landing gear, engine shafts in jet aircraft and gas turbines of generators are areas that require the use of electrolytic iron. It is also used in research and development, special alloys (maraging steel, Ni-base alloys, Ti alloys), sputtering targets, chemicals (etching liquids), etc. In addition, it is used as a raw material for Japanese swords produced using traditional Japanese techniques. References Iron Electrolysis
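The deposition step described above obeys Faraday's first law of electrolysis, which relates the charge passed through the cell to the mass plated onto the cathode. The sketch below is background electrochemistry rather than anything from the article, and the current and time values are illustrative assumptions, not plant data.

```python
# Faraday's first law: m = M * I * t / (z * F), where M is molar mass,
# I current, t time, z the ion charge (2 for Fe2+ in solution), and
# F the Faraday constant.
M_FE = 55.845   # g/mol, molar mass of iron
F = 96485.0     # C/mol, Faraday constant

def iron_deposited_g(current_a, time_s, z=2):
    """Mass of iron (grams) plated onto the cathode for a given charge."""
    moles_electrons = current_a * time_s / F
    return moles_electrons / z * M_FE

iron_deposited_g(100, 3600)  # ≈ 104 g of Fe per hour at 100 A
```

Real cells deposit somewhat less than this ideal because current efficiency is below 100%; the formula gives the theoretical upper bound.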
Electrolytic iron
[ "Chemistry" ]
645
[ "Electrochemistry", "Electrolysis" ]
65,629,795
https://en.wikipedia.org/wiki/Dimethylbutene
Dimethylbutene is an alkene with a molecular formula C6H12. It has the following possible structural isomers: 2,3-Dimethyl-1-butene 3,3-Dimethyl-1-butene 2,3-Dimethyl-2-butene Alkenes
Dimethylbutene
[ "Chemistry" ]
69
[ "Organic compounds", "Alkenes" ]
50,553,816
https://en.wikipedia.org/wiki/SOGIN
SOGIN (the Nuclear Plant Management Company, also called Sogin) is an Italian state-owned enterprise responsible for nuclear decommissioning as well as management and disposal of radioactive waste produced by industrial, research and medical processes. Founded in 1999 following the 1987 Italian referendums on nuclear power, SOGIN was originally part of the state-owned ENEL but became an independent, though still government-owned, company in 2000. The company initially took over the Caorso, Enrico Fermi, Garigliano and Latina nuclear power plants, later adding other sites including ENEA's EUREX. The company has commenced the decommissioning of all the plants and is predicted to complete the work in 2036. The company has been involved in environmental remediation, radioactive waste management and nuclear safety work in Armenia, Bulgaria, China, Czech Republic, France, Kazakhstan, Lithuania, Romania, Russia, Slovakia and Ukraine. SOGIN also undertakes other decontamination work and in 2005 started to help to decommission nuclear submarines of the Russian Navy. History Following the 1987 referendums on nuclear power, the Italian government was required to decommission the country's remaining nuclear plants. SOGIN was conceived as the company to undertake this work. SOGIN was created on 1 November 1999 and took ownership of the closed Caorso, Enrico Fermi, Garigliano and Latina nuclear power plants from the state-owned electricity company, ENEL. Initially, SOGIN was created as a part of the ENEL group, but, following the passing of Legislative Decree no. 79, the so-called Bersani decree of 16 March 1999, which marked the beginning of the liberalization of the Italian electricity sector, it was decided to split the group. On 3 November 2000, the SOGIN shares were transferred to the Ministry of Economy and Finance. In 2003, SOGIN also took responsibility for decommissioning ENEA sites like EUREX, the OPEC research reactors in Cesano, and the ITREC plant in Rotondella.
On 16 September 2004 SOGIN became a corporate group with the acquisition of 60% of the shares in Nucleco SpA (the remaining 40% being owned by ENEA). In 2005, SOGIN acquired the nuclear enrichment plant at Bosco Marengo, and, in 2012, the company started a three-year programme to decontaminate the boxes that had been used to store plutonium-contaminated gloves up to 1986. SOGIN launched the Observatory for the Closure of the Nuclear Cycle with the Fondazione per lo Sviluppo Sostenibile (Foundation for Sustainable Development) in 2014 as an independent monitor of the social, environmental and technical aspects of nuclear sites. SOGIN was originally tasked to completely decommission the Italian nuclear plants by 2019, but it is likely to be 2036 before the task is complete. Decommissioning activity SOGIN is responsible for decommissioning four nuclear power plants, located in Caorso, Garigliano, Latina and Trino, as well as the operations in Bosco Marengo, Casaccia, Rotondella and Saluggia. The process, agreed in 2001, involves the systematic decontamination and deconstruction of the site with the aim that the area can be returned to normal use. Work initially starts with a pre-decommissioning stage, carried out under a protective storage license, where the plant stops operation but no action is taken to dismantle it. When SOGIN took responsibility for Garigliano, the plant, which had not operated since 1982, was nearing completion of this stage. The seven sites under SOGIN's control have all gone through this stage in the process. After SOGIN has completed this task, the site is decontaminated and deconstructed. As well as radioactive material, other hazardous wastes need to be carefully handled, including asbestos insulation. This can take a long time; for example, at Garigliano, the removal of asbestos from the turbine building was complete by 2007 and yet the full decontamination of the reactor building was not complete until 2010.
Only once it has been decontaminated can material safely be removed. Once this stage is complete, SOGIN requests a government license to dismantle the whole site. Full decommissioning then follows, including the removal of all buildings, and the ground is decontaminated. In all, over one million tonnes of material is expected to be recovered from the decommissioning process, of which 95% is non-radioactive. This process can take decades, with estimates for the total decommissioning time varying from 27 years for Garigliano to 32 years for Caorso and Trino. Costs are similarly high, with the total bill for Garigliano expected to reach $432.4 million by the time the site is handed over. The fuel enrichment site at Bosco Marengo was the first to start decommissioning. The process started in 2008 and was completed on 31 December 2021. The first nuclear power plant to gain permission to start full decommissioning was at Trino, the decree being granted by the Ministry of Economic Development on 2 August 2012. This was followed by a decree authorising the decommissioning of Garigliano on 26 September. Caorso and Latina were granted their licenses in 2014, in January and December respectively. Senior management National repository In September 2008, a high-level discussion took place within the Italian government about a central repository for all nuclear waste. This led to, in 2010, SOGIN being given the responsibility for finding a surface site to store nuclear waste. SOGIN projected the repository to be a structure with engineering barriers and natural barriers to store approximately of low and intermediate level waste permanently, and of high level waste temporarily. SOGIN predicted that of this, 60% would come from decommissioned plants. The remainder was to come from scientific research, medical and industrial applications, both waste produced to date and that which was estimated to be generated over the next 50 years.
The creation of the repository was a critical requisite for SOGIN to achieve its decommissioning deadline. The repository was to be hosted in a technology park that also contained research labs which would bring economic benefits to the community, as well as direct payment of compensation administered by SOGIN. Despite this, the search for a repository proved to be difficult. When the first site chosen, the salt mines of Scanzano Jonico, was announced in November 2003, it led to an unprecedented outcry, with over 150,000 people demonstrating against the decision and residents blocking roads and shutting down businesses. This led directly to the regional council declaring the area a denuclearised zone. Subsequent changes in national legislation have been put in place in an attempt to ensure that any future site can only be agreed by the Council of Ministers after review by a panel of scientists. In 2012, the Italian Parliament passed a law that implied that all nuclear waste would be stored in the repository. However, continued controversy and the lack of progress finding a site has meant that, instead, waste is mainly stored in untreated form at the nuclear facilities themselves. This is also unpopular with neighbouring communities, who fear this will become a permanent solution. International activity As a consequence of the difficulty finding a long term solution in the country, waste material has instead been sent abroad, primarily to France and the UK. Initially, up to 2005, shipments were made to BNFL in the UK. In November 2006 the Italian and the French governments agreed to transfer about of spent fuel to France, which led to SOGIN signing a contract with Areva in April 2007. The first shipment under this agreement, of fuel from the Caorso nuclear power plant, was completed in June 2010.
In 2015, SOGIN signed a similar contract with JAVYS (Jadrová a Vyraďovacia Spoločnosť), the Slovak nuclear decommissioning company, to send of waste to be processed at their site in Jaslovské Bohunice. SOGIN signed an agreement with the Radioactive Waste Repository Authority (RAWRA) in the Czech Republic in 2016 covering the storage of nuclear waste, including collaboration to develop a deep geological repository for spent nuclear fuel and high-level waste. As well as its core business of decommissioning Italian nuclear plants, SOGIN undertakes international consultancy in environmental remediation, radioactive waste management and nuclear safety. The company has undertaken projects at Metsamor in Armenia, Belene and Kozloduy in Bulgaria, Dukovany and Temelin in the Czech Republic, Phénix in France, Aktau in Kazakhstan, Ignalina in Lithuania, Cernavodă in Romania, Beloyarsk, Bilibino, Kalinin and Kola in Russia, Bohunice and Mochovce in Slovakia and Khmelnytskyi and Rivne in Ukraine. SOGIN has been actively involved in the G8 Global Partnership programme, launched at the 2002 G8 summit in Kananaskis, to support and accelerate Russia's nuclear disarmament. On 3 August 2005, an agreement was signed between SOGIN and the Ministry of Industry for the company to dismantle Russian nuclear submarines. The programme required a specialist vessel, the Rossita, to be constructed, which was delivered in 2011. In 2014, SOGIN signed an agreement with China General Nuclear Power Group (CGN) to remove parts from the nuclear fuel pool of a Chinese plant. The contract opened the door to the companies sharing expertise on nuclear decommissioning and collaborating on policies and strategies to manage radioactive waste and used fuel in China. Amongst the first projects is a joint study of an innovative process for the minimization, treatment and conditioning of radioactive waste in Italy.
Financial performance See also Nuclear power in Italy References Citations Bibliography Governmental nuclear organizations Nuclear power in Italy Nuclear technology companies of Italy
SOGIN
[ "Engineering" ]
2,059
[ "Governmental nuclear organizations", "Nuclear organizations" ]
50,555,006
https://en.wikipedia.org/wiki/Heart%20of%20Europe%20Bio-Crystallography%20Meeting
The Heart of Europe Bio-Crystallography Meeting (HEC-Meeting for short) is an annual academic conference on structural biology, in particular protein crystallography. Researchers from universities, other research institutions and industry in Austria, the Czech Republic, Germany and Poland meet to present and discuss current topics of their research. The talks are predominantly given by PhD students (doctoral students). An exception is the invited HEC lecture, which is held by a renowned scientist of the research field. The format of the HEC meeting was adopted from the Rhine-Knee Regional Meeting on Structural Biology, which is eleven years older. History of the HEC-Meeting The HEC-Meeting dates back to a 1998 initiative of Manfred Weiss and Rolf Hilgenfeld, who were researchers at the Institute for Molecular Biotechnology (IMB) in Jena and intended to establish a meeting format similar to the Rhine-Knee Regional Meeting on Structural Biology in the New Länder. Both conferences are regional meetings of German scientists together with scientific research groups from the neighbouring countries. Nine groups from Germany (the new states and West Berlin), Poland and the Czech Republic participated in the first HEC-Meeting from 8 to 10 October 1998. Later, groups from Austria and the Old Federal States also participated. Due to the COVID-19 pandemic, no meeting was organized in 2020 and HEC-23 took place as an online meeting. Former HEC-Meetings: References External links Website of HEC-16 at the Attersee, Austria Website of HEC-17 in Berlin-Schmöckwitz, Germany Website of HEC-18 in Kutná Hora, Czech Republic Academic conferences Structural biology
Heart of Europe Bio-Crystallography Meeting
[ "Chemistry", "Biology" ]
340
[ "Biochemistry", "Structural biology" ]
50,555,537
https://en.wikipedia.org/wiki/Thermal%20power%20station%20Regina%20Margherita
The thermal power station Regina Margherita was a large power station for the production of electricity, preserved at the Museo nazionale della scienza e della tecnologia Leonardo da Vinci in Milan, Italy. The station opened in 1895 and was originally installed in the Egidio e Pio Gavazzi silk factory in Desio (Milan), where it operated until 1954. It supplied electricity for lighting and for the operation of 1,800 looms, generating alternating electric current at a voltage of 200 V. History Designed at the Polytechnic University of Milan, it was built by combining a steam engine from the Franco Tosi company of Legnano and a pair of alternators from the Brown Boveri company. The power station opened on November 9, 1895; the ceremony was attended by King Umberto I and Margherita of Savoy, to whom the plant was dedicated. Museum In 1958 Egidio e Pio Gavazzi proposed to donate the power plant to the Museo nazionale della scienza e della tecnologia Leonardo da Vinci. In order to exhibit the large machine, the floor was demolished, a stronger basement was built to support the item, and the technical press consultation room was moved. The Desio plant was then dismantled using maintenance cranes and transported with a Riva lorry to the museum, where it was reassembled by hand, connected to an electric motor coupled with a reduction gear, and set in motion. The furnace and boiler, with their connected steam distribution pipes and pumps, were not transferred to the museum. Description The station consists of two parts: a thermal part, consisting of a steam engine with two horizontal cylinders, and an electric part, consisting of two alternators and two exciter dynamos. There is also an electric control panel and a lighting system with 8 lamps. The machine is activated by an electric motor, connected to it by a chain which encircles a pulley, and it no longer produces current. Technique This machine is an example of a compound steam engine.
Although it relied on the finest nineteenth-century technologies, the "Regina Margherita" was not a cutting-edge piece of machinery. Ten years before it was built, the Englishman Sir Charles Algernon Parsons had already invented the steam turbine. In the latter device the force of the steam acts directly on the blades of the wheel, producing the rotation necessary to operate the alternators. The steam turbine is more efficient than a cylinder-and-piston system because it reduces the energy wasted in transforming reciprocating motion into rotary motion and in transmitting movement through connecting rods, cranks and belts. References Dizionario biografico Dizionario biografico degli italiani 1960- Rome Franco Tosi S.p.A. Franco Tosi Società per Azioni 1876–1956 1956 Legnano (MI) Gavazzi G. Non solo Seta. Storia della Famiglia Gavazzi 2003 Milano Curti O. Un Museo per Milano / Un protagonista racconta gli anni della nascita del Museo della Scienza 2000 Garbagnate Milanese (MI) 1895 establishments in Italy 1954 disestablishments in Italy Buildings and structures completed in 1895 Former power stations in Italy Museo Nazionale Scienza e Tecnologia Leonardo da Vinci Franco Tosi Meccanica History of Milan History of technology
Thermal power station Regina Margherita
[ "Technology" ]
702
[ "Science and technology studies", "History of science and technology", "History of technology" ]
50,556,155
https://en.wikipedia.org/wiki/Tobias%20de%20Boer
Pieter Cornelis Tobias de Boer (21 May 1930 – 2 May 2016) was a Dutch scientist. He was a professor at the Sibley School of Mechanical and Aerospace Engineering of Cornell University. His research interests were in the fields of thermodynamics and fluid mechanics. Career De Boer was born on 21 May 1930 in Leiden, the Netherlands. He studied mechanical engineering at Delft University of Technology, where he obtained his Bachelor's degree and, in 1954, his Master's. He subsequently served in the Dutch Armed Forces for two years. De Boer married in 1956 and a short time later moved with his wife to the United States, where they settled in Maryland. He continued his studies and obtained his doctorate at the University of Maryland in 1962 under Jan Burgers. He subsequently was an assistant professor at the university until 1964. The de Boer family then moved to Ithaca, New York, and he was employed by Cornell University as an assistant professor at the Graduate School of Aeronautical Engineering. In 1968 he became associate professor. In 1972 the Sibley School of Mechanical and Aerospace Engineering was founded, and two years later de Boer became a full professor there. He retired in 2000. At the university, de Boer did research on pulse tube cryocoolers, the physics of shock waves, and Stirling engines, among other topics. In 1988 de Boer was elected a corresponding member of the Royal Netherlands Academy of Arts and Sciences. Personal life De Boer married Joan Lieshout in 1956; the couple had three children. Joan recorded the Dutch text on the Voyager Golden Record. De Boer was a sportsman and especially fond of cycling. When he was 48 he cycled in 24 hours, thereby setting a national record for his age category. He died on 2 May 2016 in the retirement community of Kendal at Ithaca, age 85.
References External links Interview with Tob de Boer by Francis Moon 1930 births 2016 deaths Cornell University faculty Delft University of Technology alumni Dutch emigrants to the United States 20th-century Dutch engineers Fluid dynamicists Members of the Royal Netherlands Academy of Arts and Sciences Scientists from Leiden University of Maryland, College Park alumni
Tobias de Boer
[ "Chemistry" ]
420
[ "Fluid dynamicists", "Fluid dynamics" ]
50,556,787
https://en.wikipedia.org/wiki/Protognathinus
Protognathinus is an extinct genus of stag beetle from the Eocene of Europe, known from the Messel Pit in Germany. The genus is known from the single species P. spielbergi. Etymology The specific name spielbergi refers to American film director Steven Spielberg, whose film Jurassic Park the authors considered to have contributed to the revival of interest in the Earth's ancient past. Description At in total length, Protognathinus is one of the largest known fossil beetles in the superfamily Scarabaeoidea. It is comparable to other large fossil beetles such as Cheirotonus otai and Oryctoantiquus borealis. Like other beetle fossils known from the Messel Pit, Protognathinus fossils retain the color of the exoskeleton. The details of its classification within the family are not well understood, and it is sometimes placed in Lampriminae and sometimes in Lucaninae. Morphological similarities with the Early Cretaceous species Litholamprima qizhihaoi, described from the Yixian Formation, have been noted. See also Maaradactylus spielbergi - a species of pterosaur named after Spielberg References External links † Eocene insects of Europe Fossil taxa described in 2001 Fossil beetle genera Monotypic prehistoric insect genera Monotypic Scarabaeiformia genera Species known from a single specimen
Protognathinus
[ "Biology" ]
273
[ "Individual organisms", "Species known from a single specimen" ]
50,559,168
https://en.wikipedia.org/wiki/Faustovirus
Faustovirus is a genus of giant virus which infects amoebae associated with humans. The virus was first isolated in 2015 and shown to be around 0.2 micrometers in diameter with a double-stranded DNA genome of 466 kilobases predicted to encode 451 proteins. Although classified as a nucleocytoplasmic large DNA virus (NCLDV), faustoviruses share less than a quarter of their genes with other NCLDVs; however, ~46% are homologous to bacterial genes and the remainder are orphan genes (ORFans). Specifically, the gene encoding the major capsid protein (MCP) of faustovirus differs from that of its most closely related giant virus, asfivirus, as well as other NCLDVs. In asfivirus, the gene encoding MCP is a single genomic fragment of ~2,000 base pairs (bp); in faustovirus, however, the MCP is encoded by 13 exons separated by 12 large introns. The exons have a mean length of 149 bp and the introns have a mean length of 1,273 bp. The presence of introns in faustovirus genes is highly unusual for viruses. Replication The replication strategy of faustovirus in amoebae is similar to that of mimivirus. Lasting 18 to 20 hours, the replication cycle begins with the amoeba ingesting individual viral particles through a process known as phagocytosis. About 2 to 4 hours post infection, virus particles are internalized via phagocytic vacuoles and are detected by the host. While the particles appear near the host's nucleus, there is no evidence that the virus is within the nucleus or has an interaction with the nuclear membrane. Similar to the mimivirus, in which a channel is created for particle proteins and DNA to travel through, the faustovirus particles empty their internal compartments into the amoeba's cytoplasm. In both viruses, the fusion leads to an eclipse phase in which the contents of particles become invisible inside the cytoplasm of the host. However, the eclipse phase of the faustovirus is longer than that of the mimivirus, taking place from 4 to 6 hours post infection.
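As a rough consistency check on the exon/intron figures quoted above, the implied span of the faustovirus MCP gene region can be estimated. The counts (13 exons, 12 introns) and mean lengths are from the text; the result is only approximate, since the means are rounded values:

```python
# Approximate span of the faustovirus MCP gene region,
# using the exon/intron counts and mean lengths quoted in the text.
n_exons, mean_exon_bp = 13, 149
n_introns, mean_intron_bp = 12, 1273

coding_bp = n_exons * mean_exon_bp      # ~1,937 bp of coding sequence
intron_bp = n_introns * mean_intron_bp  # ~15,276 bp of introns
total_bp = coding_bp + intron_bp

print(coding_bp, intron_bp, total_bp)   # 1937 15276 17213
```

Note that the estimated coding portion (~1,937 bp) is close to the ~2,000 bp single-fragment MCP gene of asfivirus, consistent with the two genes encoding similar proteins despite the very different gene structures.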
Characterized by a loss of its spherical shape and a decrease in surface area, the amoeba host cell undergoes reorganization, such that at 8 to 10 hours post infection there are new particles in a region forming a donut shape. This region is the viral factory; it is distinct from the nucleus and is surrounded by mitochondria. Between 12 and 18 hours post infection, the virus factory takes up the entirety of the cytoplasm, which is completely filled with new viral particles. At 18 to 20 hours post infection, the viral particles are released through cell lysis. Pathogenicity Faustovirus affects amoebae associated with the human environment, like Vermamoeba vermiformis; this particular amoeba has been found in hospital water networks, drinking water, human stool samples, and contact lenses of keratitis patients, and thus may be a possible carrying agent for viruses. Faustoviruses have been found in sewage water from various geographical locations, such as Senegal, France, Lebanon, and Saudi Arabia. Isolated strains of the virus have been detected in rodents, cattle, febrile and healthy humans, and well water and rivers. Although faustovirus has been found in humans, it is unknown whether it has a pathogenic effect on humans; more research is required to determine the mode of infection and the consequences of infection, if any exist. References External links Nucleocytoplasmic large DNA viruses Asfarviridae
Faustovirus
[ "Biology" ]
762
[ "Virus stubs", "Viruses" ]
50,560,634
https://en.wikipedia.org/wiki/Hemp%20protein
Hemp protein is a plant-derived protein from the cannabis plant and is isolated from hemp seeds (a type of nut). Protein content The protein content of whole hemp seeds can vary between 20 and 25% depending on variety and environmental factors. Processing methods such as dehulling or oil fraction removal can increase the protein concentration in products like dehulled seed or hemp seed meal to over 50%. Hemp seeds are comparable with soybeans in terms of nutrition. They are high in protein, low in carbohydrates, and rich in dietary fiber and unsaturated fatty acids. After the oil is extracted from the hemp seeds, the residual mass is a protein-rich material useful for food processing. The protein in hemp seeds is made up of two highly digestible globular types of proteins, edestin (60–80%) and 2S albumin, with edestin also being rich in the essential amino acids. Amino acid profile Hemp protein is rich in essential amino acids, containing in sufficient quantities all essential amino acids required by humans except lysine, which appears at lower than recommended levels for infants aged up to five years old according to Food and Agriculture Organization (FAO) standards. Still, the overall nutritive value of hemp protein remains relatively good: its content of sulfur-containing amino acids is higher than that of casein or soy, and non-essential amino acids present in hemp protein, such as arginine, provide additional health benefits including cardiovascular support, immune function optimization, and muscle repair. Hemp protein has unique properties that are useful in food processing. Its cysteine-rich amino acid composition and high sulfhydryl (-SH)/disulfide (S-S) ratio are among its distinctive features. These features can facilitate the development of new food materials. Digestibility Hemp protein, when untreated, is more digestible than soy protein.
Heat pre-treatment at temperatures above 80 °C may improve the digestibility of both hemp and soy protein, but in untreated (unheated) form hemp protein is more readily digested than soy protein. Dehulled hemp seeds (also known as hemp nuts, hemp kernels or hemp hearts) have a protein digestibility corrected amino acid score (PDCAAS) of 0.66, with lysine being the limiting amino acid (digestibility of 92.1%). With a gluten content as low as 4.78 ppm, hemp is attracting attention as a gluten-free (<20 ppm) food material. Hemp protein is sold in bulk as a powder in various forms, such as hempseed meal, hemp protein concentrates, and hemp protein isolates. It generally has a greenish hue due to the natural pigments in the hemp plant, but the color can vary depending on the specific processing methods used. Unflavored hemp protein powder, with no additional flavoring added, is commonly available; its natural taste is usually described as earthy or nutty. Functional features Observations of limited enzymatic hydrolysis elicited by trypsin in a controlled environment have shown an increase in hemp protein isolate (HPI) solubility at various pH values and a notable decrease in the recorded emulsifying activity index. Environmental benefits Hemp protein is gaining attention in the context of its environmental benefits. Hemp is being reevaluated as a promising crop in the era of the sustainable development goals (SDGs) due to its sustainable growth characteristics and versatile industrial usability. The entire hemp plant (its leaves, stalks, roots, and seeds) can be used, reducing waste. The stalk is used for fiber production, the leaves and roots for medicine, and the seeds for oil and protein. Hemp has a short cropping period and requires less pesticide and water than cotton, a representative fiber material and food plant, which makes hemp a sustainable choice for cultivation.
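The PDCAAS figure quoted above can be sketched numerically. PDCAAS is the score of the most limiting essential amino acid (relative to a reference amino acid pattern) multiplied by true protein digestibility. In this sketch, the lysine score (~0.717) is back-calculated from the stated PDCAAS of 0.66 and digestibility of 92.1%, not a measured value, and the other scores are illustrative placeholders:

```python
# Sketch of the PDCAAS calculation for dehulled hemp seed.
# Only the digestibility (92.1%) and the final PDCAAS (0.66) come from the text;
# the lysine score is back-computed and the other scores are placeholders.
aa_scores = {
    "lysine": 0.717,              # limiting amino acid (assumed, back-calculated)
    "leucine": 1.05,              # placeholder; scores >= 1 are capped in practice
    "methionine+cysteine": 1.10,  # placeholder
}
true_digestibility = 0.921  # 92.1%, as stated in the text

limiting_score = min(aa_scores.values())
pdcaas = limiting_score * true_digestibility
print(round(pdcaas, 2))  # 0.66
```

The structure of the calculation (limiting score times digestibility, capped at 1.0) is standard; only the input values here are assumptions.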
See also Pea protein Soy protein Protein quality References Bodybuilding supplements Cannabis foods Dietary supplements Hemp products Nutrition Proteins
Hemp protein
[ "Chemistry" ]
860
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
50,561,501
https://en.wikipedia.org/wiki/Transparent%20wood%20composite
Transparent wood composites are novel wood materials which have up to 90% transparency. Some have better mechanical properties than wood itself. They were made for the first time in 1992. These materials are significantly more biodegradable than glass and plastics. Transparent wood is also shatterproof, making it suitable for applications like cell phone screens. History A research group led by Professor Lars Berglund from Swedish KTH University along with a University of Maryland research group led by Professor Liangbing Hu have developed a method to remove the color and some chemicals from small blocks of wood, followed by adding polymers, such as poly(methyl methacrylate) (PMMA) and epoxy, at the cellular level, thereby rendering them transparent. Upon its release between 2015 and 2016, transparent wood drew wide press coverage, with articles in ScienceDaily, Wired, The Wall Street Journal, and The New York Times. In fact, those research groups had rediscovered work by Siegfried Fink, a German forest ecologist, from as early as 1992: with a process very similar to Berglund's and Hu's, Fink had turned wood transparent to reveal specific cavities of the wood structure for analytical purposes. In 2021 researchers reported a way to manufacture transparent wood lighter and stronger than glass that requires substantially smaller amounts of chemicals and energy than previous methods. The thin wood produced with "solar-assisted chemical brushing" is claimed to be lighter and about 50 times stronger than wood treated with previous processes. Process In its natural state, wood is not a transparent material because it scatters and absorbs light. The tannish color in wood is due to its chemical polymer composition of cellulose, hemicellulose, and lignin. The wood's lignin is mostly responsible for the wood's distinctive color. Consequently, the amount of lignin largely determines the wood's opacity; lignin accounts for roughly 80–95% of the light absorption.
To make wood a transparent material, both absorption and scattering need to be reduced during its production. The manufacturing process of transparent wood is based on removing all of the lignin, a step called the delignification process. Delignification process The production of transparent wood by the delignification process varies from study to study. However, the basics are as follows: a wood sample is drenched in heated (80 °C–100 °C) solutions containing sodium chlorite, sodium hypochlorite, or sodium hydroxide/sulfite for about 3–12 hours, followed by immersion in boiling hydrogen peroxide. The lignin is thereby separated from the cellulose and hemicellulose structure, turning the wood white and allowing resin penetration to start. Finally, the sample is immersed in a matching resin, usually PMMA, under high temperature (85 °C) and a vacuum for 12 hours. This process fills the space previously occupied by the lignin and the open wood cellular structure, resulting in the final transparent wood composite. While the delignification process is a successful method of production, it has been limited to laboratory-scale, experimental production of small, thin material unable to meet practical application requirements. However, at the Jiangsu Co-Innovation Center for Efficient Processing and Utilization of Forest Resources in 2018, Xuan Wang and his colleagues developed a new production method of infiltrating a prepolymerized methyl methacrylate (MMA) solution into delignified wood fibers. Using this technique, large transparent wood of any thickness and dimension can be made. Yet in spite of this success in manufacturing, challenges remain with regard to mechanical stability and adjustable optical performance. Properties Wood is a natural growth material that possesses excellent mechanical properties, including high strength, good durability, high moisture content, and high specific gravity.
Wood can be classified into two types, softwood and hardwood. While each type is different (e.g., the longitudinal cells in softwood are shorter than those in hardwood), both types have a similar hierarchical structure, meaning the orientation of the cells is identical in the wood. This anisotropic structure, in which properties take distinct values when measured along different directions, allows the wood to pump ions and water for photosynthesis. Similarly, in transparent wood composites, removing the lignin while maintaining the cellulose fiber tubes allows the wood to become clear; soaked in a glue-like epoxy, it becomes a robust, transparent material: an excellent raw material with high transmittance and enhanced mechanical properties. Researchers have successfully tested an eco-friendly alternative: limonene acrylate, a monomer made by converting limonene into an acrylate. Limonene is a common cyclic terpene that can be extracted from industrial waste, via isomerization of α‐pinene (from wood) or from citrus peel oil. Such bio-based polymers can offer advantages over conventional non‐renewable polymers from fossil resources while retaining high mechanical performance; the resulting material is lightweight, owing to its porous and anisotropic cellulosic structure, and is of great interest for large-scale sustainable nanotechnologies. Succinylation of the delignified wood substrate using succinic anhydride results in a nanostructured and mechanically strong biocomposite. The polymer matrix usually accounts for ≈70 vol%, resulting in nanostructured biocomposites combining an excellent optical transmittance of 90% at 1.2 mm thickness and a remarkably low haze of 30% with high mechanical performance (strength 174 MPa, Young's modulus 17 GPa).
Mechanical properties Transparent wood derives its mechanical properties and performance primarily from its cellulose fiber content and the geometric orientation of the fiber tube cells (radial and tangential), providing the structural base for the design of advanced material applications. One aspect of transparent wood's mechanical properties is the strength of the material. According to Zhu and his colleagues, transparent wood in the longitudinal direction has an elastic modulus of 2.37 GPa and a strength of 45.38 MPa (both of which are lower than for pure PMMA), roughly twice the values perpendicular to the longitudinal direction, 1.22 GPa and 23.38 MPa respectively. They conclude that the longitudinal-to-transverse anisotropy decreased for transparent wood, which they expected, as the presence of the polymer resin suppresses the cavity space. Also, the plastic nature of the transparent wood composite provides advantages compared to brittle materials like glass, meaning it does not shatter upon impact. Optical transmittance and thermal conductivity In transparent wood, the tightly packed and perpendicularly aligned cellulose fibers operate as wideband waveguides for light, with high scattering losses during transmission. This unique light management capacity results in a light propagation effect. By measuring its optical properties with an integrating sphere, Li and her colleagues found that transparent wood exhibits a high transmittance of 90% (lower than for pure PMMA) and a high optical haze of 95%. As a result, transparent wood as an energy-efficient material could be used to decrease daytime lighting energy usage by efficiently guiding sunlight into the house while providing uniform and consistent illumination throughout the day. Similarly, the transparent wood's thermal conductivity is attributed to the alignment of the wood cellulose fibers, which is preserved after lignin removal and polymer infiltration.
Transparent wood has a thermal conductivity of 0.32 W⋅m−1⋅K−1 in the axial direction and 0.15 W⋅m−1⋅K−1 in the radial direction respectively. Based on a study by Céline Montanari of the KTH Royal Institute of Technology in Stockholm, transparent wood that transforms from semi-transparent to transparent when heated could be used to make buildings more energy-efficient by capturing the sun's energy during the day and releasing it into the interior later at night. Future application Although the development of transparent wood composites is still at a lab-scale and prototype level, their potential for energy efficiency and operational savings in the building industry is very promising. An essential advantage of transparent wood is its combination of structural and functional performance for load-bearing structures that combine optical, heat-shielding, or magnetic functionalities. Transparent wood is also being researched for potential use in touch-sensitive surfaces. Glazing system Such is the case in building applications, where artificial light can be replaced by sunlight through a light-transmittance design. Based on research and simulation performed by Joseph Arehart at the University of Colorado Boulder, transparent wood as a replacement for glass glazing systems could reduce space-conditioning energy consumption by 24.6% to 33.3% in medium (climate zone 3C, San Francisco, CA) and large office spaces (climate zone 4C, Seattle, Washington) respectively. These are relevant insights into transparent wood's potential functionality because it shows lower thermal conductivity and better impact strength compared to popular glass glazing systems. Solar cells Another direction for transparent wood applications is as a high-transmittance substrate for optoelectronic devices such as photovoltaic solar cells.
Li and her colleagues at the KTH Royal Institute of Technology studied the high optical transmittance that makes transparent wood a candidate substrate for perovskite solar cells. They concluded that transparent wood has a high optical transmittance of 86% and long-term stability, with a fracture toughness of 3.2 MPa⋅m1/2 compared to 0.7–0.85 MPa⋅m1/2 for glass substrates, which meets the substrate requirements for solar cells. This is relevant to transparent wood's possible applications because it is a suitable and sustainable solution as a substrate for solar cell assembly, with potential in energy-efficient building applications as well as in replacing glass and lowering the carbon footprint of the devices. Transparent wood could transform the material sciences and building industries by enabling new applications such as load-bearing windows. These components could also generate improvements in energy savings and efficiency over glass or other traditional materials. A lot of work and research is still needed to understand the interaction between light and the wood structure, to tune the optical and mechanical properties, and to take advantage of advanced transparent wood composite applications. See also Nanowood Pykrete References Further reading Fink, S. (1992). "Transparent Wood; A New Approach in the Functional Study of Wood Structure". Holzforschung-International Journal of the Biology, Chemistry, Physics and Technology of Wood. 46(5), 403–408. Chicago. Berglund, L., et al. (2018). "Bioinspired Wood Nanotechnology for Functional Materials". Advanced Materials, 30(19), 1704285. Zhu, H., et al. (2014). "Transparent paper: fabrications, properties, and device applications". Energy & Environmental Science, 7(1), 269–287. Materials science Polymer chemistry University of Maryland, College Park research projects Glass Plant products Wood
Transparent wood composite
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,271
[ "nan", "Applied and interdisciplinary physics", "Materials science", "Polymer chemistry" ]
50,562,012
https://en.wikipedia.org/wiki/Aquatic%20toxicology%20databases
Toxicological databases are large compilations of data derived from aquatic and environmental toxicity studies. Data is aggregated from a large number of individual studies in which toxic effects upon aquatic and terrestrial organisms have been determined for different chemicals. These databases are then used by toxicologists, chemists, regulatory agencies and scientists to investigate and predict the likelihood that an organic or inorganic chemical will cause an adverse effect (i.e. toxicity) on exposed organisms. Several such databases have been compiled relating specifically to aquatic toxicology. Utility These databases are invaluable resources in the field of aquatic toxicology because the likelihood that a chemical will cause toxicity is highly variable across the broad spectrum of contaminants in the environment. This is because the likelihood of adverse effects on an organism is dependent on the concentration of that substance in the target tissues of the organism, the physicochemical properties of that chemical and the duration of exposure to the chemical. Tools capable of predicting the toxicity of specific chemicals to particular organisms or groups of organisms are essential to regulators and researchers in the field of toxicology. Available databases In aquatic toxicology multiple databases exist and each generally pertains to a single aspect of aquatic toxicology such as PCBs, tissue residues or sediment toxicity. Other informational and regulatory databases on toxicology in general are maintained by the U.S. EPA, USGS, United States Army Corps of Engineers and the National Oceanic and Atmospheric Administration. In the U.S. there are three major databases pertaining specifically to aquatic toxicology: the Toxicity/Residue Database, the Environmental Residue Effects Database and the ECOTOX database. Toxicity/Residue Database The Toxicity/Residue Database is maintained by the U.S. 
EPA and is a database for the prediction of toxicity of organic and inorganic chemicals to aquatic organisms. The database was developed by the EPA Duluth office and became operational in 1999. It is derived from more than 500 peer-reviewed references and is a collection of their findings on roughly 200 chemicals and 190 species, both marine and freshwater. Data regarding organism response endpoints or effects are measured as the concentration of chemical in the tissue of the test organism at the time at which effects such as lethality, metabolic depression, or increased respiration occur. More than 3,000 effects may be queried through a small downloadable program to gather survival, growth or reproductive endpoint effect data. Environmental Residue Effects Database The Environmental Residue Effects Database (ERED) is a database maintained by the U.S. Army Corps of Engineers that pairs data regarding the bioaccumulation of toxicants in tissue (via tissue residue) to endpoint effects such as mortality, growth, or physiological and biochemical responses. Response data also include low effect detected (LOED) and no effect detected (NOED) concentrations. This database is derived from 2329 peer-reviewed references regarding 413 chemicals. The database covers literature from 1964 to the present and includes more than 15,000 records. This database is updated with 300 or more records every year on average. The ERED database is specific to sediment toxicity and the effects of contaminants in dredged materials on freshwater organisms. It is intended to be used in evaluating the potential for contaminant concentrations in dredged sediment to have unacceptable adverse effects on aquatic organisms. Although the ERED database was designed as a tool for the Army Corps of Engineers to manage adverse effects of dredging, it is widely applicable to sediment toxicity studies and management.
The ECOTOX database ECOTOX is considered to be more comprehensive in that it holds results from toxicity tests of single chemicals on aquatic and terrestrial plants and animals. Data can be found on both freshwater and marine taxa. ECOTOX collects data from previously established EPA databases, TERRATOX, and PHYTOTOX, which individually provide aquatic, terrestrial-species and plant data respectively. Data is largely collected from peer-reviewed literature; however, some data is sourced from grey literature. Using the Quick Database Query function enables searches by chemical, taxonomic name, effect, and publication year. Data from ECOTOX is used to provide reference parameters for current toxicity studies and serves as a regulatory guideline. ECOTOX source data screening Data resulting from toxicity studies that is integrated into the ECOTOX database is subjected to screening and quality assurance criteria developed by the EPA and the Office of Pesticide Programs (OPP). In order for study results to be accepted by the EPA and OPP, the toxicity study must meet the following criteria: The toxic effects are related to single chemical exposure; The toxic effects are on an aquatic or terrestrial plant or animal species; There is a biological effect on live, whole organisms; A concurrent environmental chemical concentration/dose or application rate is reported; and There is an explicit duration of exposure.
In addition to the criteria listed above, the following criteria, which are discussed in further detail in Attachment I, are applied by OPP as a further screen of acceptability:
Toxicology information is reported for a chemical of concern to OPP;
The article is published in the English language;
The study is presented as a full article;
The paper is a publicly available document;
The paper is the primary source of the data;
A calculated endpoint is reported;
Treatment(s) are compared to an acceptable control;
The location of the study (e.g., laboratory vs. field) is reported; and
The tested species is reported and verified.

Regulatory applications

In the United States, the ECOTOX, ERED (sediment), and Toxicity/Residue databases are used by many regulatory agencies, such as state environmental quality agencies and the EPA, to determine regulatory environmental toxicant concentration levels. Under the Clean Water Act (CWA), the EPA has used the ECOTOX database, among other information, to set wastewater toxicant concentration standards for industry as well as water quality standards for all contaminants in surface waters. Under the CWA, individual states must regulate water quality criteria at or below the concentrations set forth by the EPA. Sediment toxicant concentrations, however, are generally not regulated in the same way. The determination of sediment quality criteria and sediment toxicity testing is highly complex and is often regulated by states or a state-run environmental agency. Sediment toxicity evaluations of contaminated sediments are very site-specific, and toxicant effect levels are often much more variable than those of surface waters. For this reason it may be nearly impossible to develop feasible sediment concentration regulations that apply to all aquatic systems or regions. Acceptable concentrations, or sediment quality guidelines, have nevertheless been developed and are used in risk assessments and the management of dredged materials.
"Sediment quality guidelines" (SQGs), as defined at the 2002 Society of Environmental Toxicology and Chemistry (SETAC) Pellston Workshop, are numerical chemical concentrations intended to be protective of biological resources, predictive of adverse effects on those resources, or both. SQGs for assessing sediment quality relative to the potential for adverse effects on sediment-dwelling organisms have been derived using both mechanistic and empirical approaches.

External links

The Toxicity/Residue Database
EPA database
USGS database
United States Army Corps of Engineers database
National Oceanic and Atmospheric Administration database
Office of Pesticide Programs
Topological polymers may refer to a polymeric molecule that possesses unique spatial features, such as linear, branched, or cyclic architectures. It could also refer to polymer networks that exhibit distinct topologies owing to special crosslinkers. When self-assembling or crosslinking in a certain way, polymeric species with a simple topological identity can also demonstrate complicated topological structures on a larger spatial scale. Topological structure, along with chemical composition, determines the macroscopic physical properties of polymeric materials.

Definition

Topological polymers, or polymer topology, could refer to a single polymeric chain with topological information or to a polymer network with special junctions or connections. When the topology of a polymeric chain or network is investigated, the exact chemical composition is usually neglected; the manner of junctions and connections is considered instead. Various topological structures, on one hand, can change the interactions (van der Waals interactions, hydrogen bonding, etc.) between individual polymer chains. On the other hand, topology also determines the hierarchical structures within a polymer network, from the microscopic level (<1 nm) to the macroscopic level (10-100 nm), which eventually affords polymeric materials with completely different physical properties, such as mechanical properties, glass transition temperature, and gelation concentration.

Topological polymer classification

In the early 1950s, Paul J. Flory pioneered theories explaining topology within a polymer network, and structure-property relationships between topology and mechanical properties, such as elasticity, were established soon afterwards. Later, in the 1980s, Bertrand Duplantier developed theories to describe any polymer network topology using statistical mechanics, which could help derive topology-dependent critical exponents in a polymer network.
In the early 2000s, Yasuyuki Tezuka and coworkers were the first to systematically describe a single molecular chain with topological information. Adapted from Tezuka and coworkers' description of a topological polymer chain with more generalized rules, the topology notation rules are introduced first, followed by the three classical classifications, linear, branched, and cyclic polymer topologies, which are classified in a table reorganized and redrawn from Tezuka and coworkers (Copyright 2001, American Chemical Society).

A general polymer chain can be generalized into an undirected graph with nodes (vertices or points) and edges (lines or links), based on graph theory. In a graph-theory topology, two sets of nodes are present: termini and junctions. The quantity 'degree' is the number of edges linked to each node: if the degree of a node is 3 or greater, the node is a junction, while if the degree of a node is 1, the node is a terminus. There are no nodes with a degree of 2, since these can be absorbed into their adjacent nodes. For a given polymer, as long as the topology is fixed, a specific topology notation can be generated using the following rules:

A capitalized letter represents the main topology within a polymer, linear or branched, and a Roman numeral represents the number of rings in the polymer chain: I for one cycle, II for two cycles, and III, IV, V, etc. for three, four, five cycles, and so on. The notation also records the number of nodes in the graph-theory topology, the number of termini, and the number of junctions; since every node is either a terminus or a junction, the number of nodes always equals the number of termini plus the number of junctions. If this notation represents an exclusive topology, there is no need to add more information to specify it. However, if multiple possibilities are present, extra information is needed. i.
For branched topology, a main chain is first selected, and the degree of each junction node along the chain is noted, connected by hyphens. If there is a side chain on any of the main-chain nodes, it is noted in a bracket following the main-chain notation. ii. For monocyclic topology, the outward branches are first identified, with the number of branches at each junction connected by hyphens; then the topology of each branch is identified using the rule in i, in a bracket following the notation. iii. For multicyclic topology, a superscript letter is used to describe internal connections within an existing ring.

Linear

Linear topology is a special topological structure that has exactly two nodes, the termini, and no junction nodes. High-density polyethylene (HDPE) can be regarded as a linear polymer chain with a very small amount of branching; the linear topology is listed below. Linear chains capable of forming intra-chain interactions can fold into a wide range of circuit topologies; examples include biopolymers such as proteins and nucleic acids.

Branched

When side chains are introduced into a linear polymer chain, a branched topology forms. Linear polymers are a special type of branched polymer with zero junction nodes, but the two are cataloged separately to distinguish their special macroscopic properties. Branched and linear polymers with the same molecular weight usually demonstrate different physical properties, because branching generally decreases the van der Waals interactions between polymer chains. Several well-known branched polymers have been synthesized, such as star-shaped polymers, comb polymers, and dendrimers.
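The terminus/junction classification used in the notation rules above can be sketched in a few lines of code. The graph representation and node names here are illustrative only, not part of Tezuka's nomenclature:

```python
# Sketch: classify nodes of a polymer graph by degree.
# Degree-1 nodes are termini; nodes of degree >= 3 are junctions;
# degree-2 nodes never appear, since they are absorbed into edges.
from collections import defaultdict

def classify_nodes(edges):
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    termini = sorted(n for n, d in degree.items() if d == 1)
    junctions = sorted(n for n, d in degree.items() if d >= 3)
    return termini, junctions

# A three-arm star polymer: one central junction, three termini.
star = [("center", "arm1"), ("center", "arm2"), ("center", "arm3")]
termini, junctions = classify_nodes(star)
# termini -> ["arm1", "arm2", "arm3"], junctions -> ["center"]
```

For a linear chain (a single edge between two nodes), the same function returns two termini and no junctions, matching the linear classification above.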
Selected branched topologies are listed below.

Cyclic

Cyclic structures are of topological interest because this topology has no termini, and its physical properties can differ dramatically as a result of that restriction.

Monocyclic

Monocyclic topology is a topological structure with only one cycle in the polymer chain, and it can be coupled with outward branching structures. Selected monocyclic topologies are listed below.

Bicyclic

Bicyclic topology refers to a structure in which two cycles, connected internally or externally, are present in a polymer chain. Selected bicyclic topologies are listed below.

Polycyclic

Similar to monocyclic and bicyclic topologies, polycyclic topologies possess more cycles in a polymer chain and are more synthetically challenging. Selected polycyclic (tricyclic) topologies are listed below.

Polymer network topology

Unlike single-chain polymeric species, polymer network topology is more complicated because of the network's amorphous character, so a simple notation is usually not feasible. To analyze the topology of a network, the crosslinkers, both branched and cyclic, are considered.

Branched crosslinking

Branched crosslinkers are entities that do not form cyclic topologies, and they can be understood in terms of the branched topologies of single chains described above. The 'degree' of a branched crosslinker denotes the theoretical number of polymer strands at its junctions, also known as the branch functionality (f). Combining monomers with different degrees of branch functionality can generate various topological networks with distinct elastic properties. Meanwhile, amphiphilic polymers such as block copolymers, when forming micelle structures, can also be treated as branched crosslinkers with a high degree of branch functionality.
Cyclic crosslinking

Branched crosslinkers should in principle form branched polymer networks, but in practice they can also generate loops and cycles. Cyclic crosslinkers are more sophisticated and show multiple possibilities: loops or cycles can form on a smaller scale between two polymer chains or on a larger scale among multiple polymer strands. Besides, a bicyclic topology is likely to form if two loops are catenated or linked internally or externally. Special cyclic crosslinking is particularly attractive with rotaxanes or catenanes, since cycles are already present in those molecules. The characterization of cyclic topologies within a polymer network is relatively difficult compared with that of branched crosslinkers. Conventional techniques such as rheology and tensile-strength analysis offer semiquantitative insights into polymer topologies. Recently, the development of multiple-quantum nuclear magnetic resonance (NMR) and network disassembly spectrometry (NDS) techniques has provided quantitative characterization of loops and cycles in a polymer network.

Topological polymer/network synthesis

Topological polymer single chain

The synthesis of branched polymers (grafted polymers, comb polymers, star-shaped polymers, and dendrimers) has been well developed using well-known polymerization methodologies such as cationic/anionic polymerization. Unlike branched polymer chain synthesis, the synthesis of cyclic polymers is more challenging. General cyclization involves combination between two fragments or among several fragments. Electrostatic self-assembly and covalent fixation is one of the most effective strategies for synthesizing cyclic topological polymers. The reaction is driven by the electrostatic interactions between telechelic polytetrahydrofurans bearing cyclic ammonium salts and pluricarboxylate counterions.
Upon dilution, the anions and cations self-assemble into a cyclic structure, followed by covalent fixation by heat or another external stimulus, which triggers a ring-opening reaction and closes the chain into a cycle.

Topological polymer network

Polymer networks intrinsically have various spatial features owing to their amorphous character within a three-dimensional network. There are generally two ways to introduce spatially unique entities into a polymer network:
The construction of a topological polymer network using monomer and crosslinker building blocks for spatial uniqueness.
The introduction of crosslinkers that possess topological merits, such as polyrotaxanes, polycatenanes, daisy chains, and so on.

Examples

The topology of a polymer chain or a polymer network is crucial in determining the macroscopic properties of a polymeric material, especially mechanical properties like elasticity and physical properties involving phase transitions. To date, several polymers of topological interest have been developed and used in many applications, such as mechanical elastomers and energy. Below are some representative topological polymers and polymer networks.

Interpenetrating polymer

Interpenetrating polymers are polymer networks involving two or more polymer strands that spatially intertwine with each other to form unique spatial topologies.

Dendrimer

A dendrimer is a special branched polymer with a larger fraction of terminal nodes relative to junction nodes; dendrimers can be used for applications in drug delivery or catalysis.

Polyrotaxane

Polyrotaxane is a polymer chain or a polymer network with mechanically interlocked structures between ring-like molecules and a polymer chain, where both the rings and the linear polymer chain can serve as the crosslinker to form a polymer network.
Artificial intelligence art is visual artwork created or enhanced through the use of artificial intelligence (AI) programs. Artists began to create artificial intelligence art in the mid-to-late 20th century, when the discipline was founded. Throughout its history, artificial intelligence art has raised many philosophical concerns related to the human mind, artificial beings, and what can be considered art in a human-AI collaboration. Since the 20th century, artists have used AI to create art, some of which has been exhibited in museums and won awards. During the AI boom of the early 2020s, text-to-image models such as Midjourney, DALL-E, Stable Diffusion, and FLUX.1 became widely available to the public, allowing non-artists to quickly generate imagery with little effort. Commentary about AI art in the 2020s has often focused on issues related to copyright, deception, defamation, and its impact on more traditional artists, including technological unemployment. Opinions have also been voiced on the possible effect AI-generated art might have on creativity.

History

Early history

The concept of automated art dates back at least to the automata of ancient Greek civilization, where inventors such as Daedalus and Hero of Alexandria were described as having designed machines capable of writing text, generating sounds, and playing music. Early experiments were driven by the idea that computers, beyond performing logical operations, could generate aesthetically pleasing works, offering a new dimension to creativity. The tradition of creative automatons has flourished throughout history, exemplified by Maillardet's automaton, created around 1800 and capable of creating multiple drawings and poems stored in its "cams", the brass disks that hold its memory. Along with this, Ada Lovelace, typically known for her work on the Analytical Engine, began to conceptualize in her notes the idea that "computing operations" could be used to generate music and poems.
This concept resulted in what is now referred to as "The Lovelace Effect," which gives a concrete set of tools for analyzing situations in which a computer's behavior is viewed by users as creative. However, Lovelace also discusses a concept in her notes known as "The Lovelace Objection," where she argues that machines have "no pretensions whatever to originate anything," a direct contradiction of the idea of artificial intelligence and creative machines. In 1950, with the publication of Alan Turing's paper "Computing Machinery and Intelligence", there was a shift from defining intelligence in machines in abstract terms to evaluating whether a machine can mimic human behavior and responses convincingly. Shortly after, the academic discipline of artificial intelligence was founded at a research workshop at Dartmouth College in 1956 and has experienced several waves of advancement and optimism in the decades since. Since its founding, researchers in the field have raised philosophical and ethical arguments about the nature of the human mind and the consequences of creating artificial beings with human-like intelligence; these issues had previously been explored by myth, fiction, and philosophy since antiquity.

1950s to 2000s: Early implementations

Since the founding of AI in the 1950s, artists and researchers have used artificial intelligence to create artistic works. These works were sometimes referred to as algorithmic art, computer art, digital art, or new media art. One of the first significant AI art systems is AARON, developed by Harold Cohen beginning in the late 1960s at the University of California, San Diego. AARON uses a symbolic rule-based approach to generate technical images in the era of GOFAI programming, and it was developed by Cohen with the goal of being able to code the act of drawing. In its earliest form, AARON created abstract black-and-white drawings, which Cohen would later finish by painting them.
Throughout the years, he also began to develop a way for AARON to paint, using special brushes and dyes chosen by the program itself without mediation from Cohen. After years of work, AARON was exhibited in 1972 at the Los Angeles County Museum of Art. From 1973 to 1975, Cohen refined AARON during a residency at the Artificial Intelligence Laboratory at Stanford University. In 2024, the Whitney Museum of American Art exhibited AI art from throughout Cohen's career, including re-created versions of his early robotic drawing machines. Karl Sims has exhibited art created with artificial life since the 1980s. He received an M.S. in computer graphics from the MIT Media Lab in 1987 and was artist-in-residence from 1990 to 1996 at the supercomputer manufacturer and artificial intelligence company Thinking Machines. In both 1991 and 1992, Sims won the Golden Nica award at Prix Ars Electronica for his 3D AI-animated videos using artificial evolution. In 1997, Sims created the interactive installation Galápagos for the NTT InterCommunication Center in Tokyo, in which viewers help evolve 3D animated creatures by selecting which ones will be allowed to live and produce new, mutated offspring. Sims also received an Emmy Award in 2019 for outstanding achievement in engineering development. Eric Millikin has been creating animated films using artificial intelligence since the 1980s, and began posting art on the internet using CompuServe in the early 1980s. In 1999, Scott Draves and a team of several engineers created and released Electric Sheep as a free-software screensaver. Electric Sheep is a volunteer computing project for animating and evolving fractal flames, which are in turn distributed to networked computers that display them as a screensaver. The screensaver used AI to create an infinite animation by learning from its audience. In 2001, Draves won the Fundación Telefónica Life 4.0 prize for Electric Sheep.
2010s: Deep learning

Deep learning, characterized by its multi-layer structure that attempts to mimic the human brain, first came about in the 2010s, causing a significant shift in the world of AI art. During the deep-learning era, the main designs for generative art have been autoregressive models, diffusion models, GANs, and normalizing flows. In 2014, Ian Goodfellow and colleagues at Université de Montréal developed the generative adversarial network (GAN), a type of deep neural network capable of learning to mimic the statistical distribution of input data such as images. The GAN uses a "generator" to create new images and a "discriminator" to decide which created images are considered successful. Unlike previous algorithmic art that followed hand-coded rules, generative adversarial networks could learn a specific aesthetic by analyzing a dataset of example images. In 2015, a team at Google released DeepDream, a program that uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia. The process creates deliberately over-processed images with a dream-like appearance reminiscent of a psychedelic experience. Later, in 2017, a conditional GAN learned to generate the 1,000 image classes of ImageNet, a large visual database designed for use in visual object recognition software research. By conditioning the GAN on both random noise and a specific class label, this approach enhanced the quality of image synthesis for class-conditional models. Autoregressive models were also used for image generation, such as PixelRNN (2016), which autoregressively generates one pixel after another with a recurrent neural network. Immediately after the Transformer architecture was proposed in Attention Is All You Need (2017), it was used for autoregressive generation of images, but without text conditioning.
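The generator/discriminator setup described above can be sketched with a toy one-dimensional "generator" and a logistic "discriminator"; real GANs use deep networks trained by backpropagation, and every function and parameter value here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, w):
    # Toy linear "generator": maps noise z to samples.
    return w * z

def discriminator(x, a, b):
    # Toy logistic "discriminator": score in (0, 1), higher = "looks real".
    return 1.0 / (1.0 + np.exp(-(a * x + b)))

z = rng.normal(size=1000)                 # input noise
real = rng.normal(loc=2.0, size=1000)     # the "dataset" the GAN tries to mimic

fake = generator(z, w=0.5)
d_real = discriminator(real, a=1.0, b=-1.0)
d_fake = discriminator(fake, a=1.0, b=-1.0)

# GAN value function: the discriminator is trained to maximize it,
# the generator to minimize it (the adversarial game).
value = np.mean(np.log(d_real + 1e-9)) + np.mean(np.log(1.0 - d_fake + 1e-9))
```

In training, the two sets of parameters are updated in alternation; at equilibrium the generator's output distribution matches the data distribution and the discriminator can do no better than chance.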
In 2018, an auction sale of artificial intelligence art was held at Christie's in New York, where the AI artwork Edmond de Belamy (a pun on Goodfellow's name) sold for a price almost 45 times higher than its estimate of up to 10,000. The artwork was created by Obvious, a Paris-based collective. Furthermore, the website Artbreeder, launched in 2018, uses the models StyleGAN and BigGAN to allow users to generate and modify images such as faces, landscapes, and paintings. In 2019, Stephanie Dinkins won the Creative Capital award for her creation of an evolving artificial intelligence based on the "interests and culture(s) of people of color." Also in 2019, Sougwen Chung won the Lumen Prize for her performances with a robotic arm that uses AI to attempt to draw in a manner similar to Chung.

2020s: Text-to-image and diffusion models

In the 2020s, text-to-image models, which generate images based on prompts, became widely used, marking yet another shift in the creation of AI-generated artworks. In 2021, using the influential large language generative pre-trained transformer models used in GPT-2 and GPT-3, OpenAI released a series of images created with the text-to-image AI model DALL-E 1. It was an autoregressive generative model with essentially the same architecture as GPT-3. Later in 2021, EleutherAI released the open-source VQGAN-CLIP, based on OpenAI's CLIP model. Diffusion models, generative models used to create synthetic data based on existing data, were first proposed in 2015, but they only surpassed GANs in early 2021. The latent diffusion model was published in December 2021 and became the basis for the later Stable Diffusion (August 2022). In 2022, Midjourney was released, followed by Google Brain's Imagen and Parti, which were announced in May 2022, Microsoft's NUWA-Infinity, and the source-available Stable Diffusion, which was released in August 2022. DALL-E 2, a successor to DALL-E, was beta-tested and released.
Stability AI has a Stable Diffusion web interface called DreamStudio; plugins for Krita, Photoshop, Blender, and GIMP; and the Automatic1111 web-based open-source user interface. Stable Diffusion's main pre-trained model is shared on the Hugging Face Hub. In 2023, Eric Millikin released The Dance of the Nain Rouge, a documentary film created using AI deepfake technology about the Detroit folklore legend of the Nain Rouge. The film is described as "an experimental decolonial Detroit demonology deepfake dream dance documentary." It was awarded the "Best Innovative Technologies Award" ("Premio Migliori Tecnologie Innovative") at the 2024 Pisa Robot Film Festival in Italy and "Best Animation Film" at the 2024 Absurd Film Festival in Italy. Ideogram, released in August 2023, is known for its ability to generate legible text. In 2024, Flux was released; this model can generate realistic images with consistent results and was integrated into Grok, the chatbot used on X (formerly Twitter), and Le Chat, the chatbot of Mistral AI. Flux was developed by Black Forest Labs, founded by the researchers behind Stable Diffusion. Grok later switched to its own text-to-image model, Aurora, in December 2024. Some examples of text-to-video models of the mid-2020s are Runway's Gen-2, Google's VideoPoet, and OpenAI's Sora, which was released in December 2024.

Tools and processes

Imagery

There are many tools available to the artist when working with diffusion models. They can define both positive and negative prompts, and they are also afforded a choice in using (or omitting) VAEs, LoRAs, hypernetworks, IP-Adapter, and embeddings/textual inversions. Variables including CFG scale, seed, steps, sampler, scheduler, denoise strength, upscaler, and encoder are sometimes available for adjustment. Additional influence can be exerted before inference by means of noise manipulation, while traditional post-processing techniques are frequently used after inference.
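The CFG (classifier-free guidance) scale mentioned above controls how strongly sampling is steered toward the positive prompt: at each denoising step, the model's prompt-conditioned and unconditional noise predictions are blended using the guidance weight. A minimal sketch of that blend, with the function name and stand-in arrays being illustrative rather than any particular library's API:

```python
import numpy as np

def cfg_combine(eps_uncond, eps_cond, guidance_scale):
    # Classifier-free guidance: extrapolate from the unconditional
    # prediction toward the prompt-conditioned one.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

rng = np.random.default_rng(seed=42)   # the "seed" setting fixes the noise
eps_u = rng.normal(size=(4, 4))        # stand-in: unconditional prediction
eps_c = rng.normal(size=(4, 4))        # stand-in: prompt-conditioned prediction

guided = cfg_combine(eps_u, eps_c, guidance_scale=7.5)
```

A guidance scale of 1.0 recovers the plain conditional prediction and 0.0 the unconditional one; higher values follow the prompt more literally, typically at the cost of variety.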
Artists can also train their own models. In addition, procedural "rule-based" generation of images using mathematical patterns, algorithms that simulate brush strokes and other painted effects, and deep learning algorithms such as generative adversarial networks (GANs) and transformers have been developed. Several companies have released apps and websites that allow one to forgo all of the options mentioned entirely and focus solely on the positive prompt. There also exist programs that transform photos into art-like images in the style of well-known sets of paintings. There are many options, ranging from simple consumer-facing mobile apps to Jupyter notebooks and web UIs that require powerful GPUs to run effectively. Additional functionality includes "textual inversion," which enables the use of user-provided concepts (like an object or a style) learned from a few images. Novel art can then be generated from the associated word or words (the text assigned to the learned, often abstract, concept) and from model extensions or fine-tuning (such as DreamBooth).

Impact and applications

AI has the potential for societal transformation, which may include enabling the expansion of noncommercial niche genres (such as cyberpunk derivatives like solarpunk) by amateurs, novel entertainment, fast prototyping, increased accessibility of art-making, and greater artistic output per unit of effort, expense, or time, e.g., via generating drafts, draft refinements, and image components (inpainting). Generated images are sometimes used as sketches, low-cost experiments, inspiration, or illustrations of proof-of-concept-stage ideas. Additional functionality or improvements may also relate to post-generation manual editing (i.e., polishing), such as subsequent tweaking with an image editor.
Prompt engineering and sharing

Prompts for some text-to-image models can also include images, keywords, and configurable parameters, such as artistic style, which is often invoked via key phrases like "in the style of [name of an artist]" in the prompt and/or the selection of a broad aesthetic or art style. There are platforms for sharing, trading, searching, forking/refining, and/or collaborating on prompts for generating specific imagery from image generators. Prompts are often shared along with images on image-sharing websites such as Reddit and on AI-art-dedicated websites. A prompt is not the complete input needed to generate an image; additional inputs that determine the generated image include the output resolution, random seed, and random sampling parameters.

Related terminology

Synthetic media, which includes AI art, was described in 2022 as a major technology-driven trend that will affect business in the coming years. Harvard Kennedy School researchers voiced concerns about synthetic media serving as a vector for political misinformation soon after studying the proliferation of AI art on the X platform. Synthography is a proposed term for the practice of generating images that are similar to photographs using AI.

Impact

Bias

A major concern raised about AI-generated images and art is sampling bias within model training data leading to discriminatory output from AI art models. In 2023, University of Washington researchers found evidence of racial bias within the Stable Diffusion model, with images of a "person" corresponding most frequently to images of males from Europe or North America.
Looking further into the sampling bias found within AI training data, in 2017, researchers at Princeton University used AI software to link over 2 million words, finding that European names were viewed as more "pleasant" than African-American names, and that the words "woman" and "girl" were more likely to be associated with the arts instead of science and math, "which were most likely connected to males." Generative AI models typically work from user-entered word-based prompts, especially in the case of diffusion models, and this word-related bias may lead to biased results. Along with this, generative AI can perpetuate harmful stereotypes regarding women. For example, Lensa, an AI app that trended on TikTok in 2023, was known to lighten black skin, make users thinner, and generate hypersexualized images of women. Melissa Heikkilä, a senior reporter at MIT Technology Review, shared the findings of an experiment using Lensa, noting that the generated avatars did not resemble her and often depicted her in a hypersexualized manner. Experts suggest that such outcomes can result from biases in the datasets used to train AI models, which can sometimes contain imbalanced representations, including hypersexual or nude imagery. In 2024, Google chatbot Gemini's AI image generator was criticized for perceived racial bias, with claims that Gemini deliberately underrepresented white people in its results. Users reported that it generated images of white historical figures such as the Founding Fathers, Nazi soldiers, and Vikings as other races, and that it refused to process prompts such as "happy white people" and "ideal nuclear family". Google later apologized for "missing the mark" and took Gemini's image generator offline for updates. This prompted discussions about the ethical implications of representing historical figures through a contemporary lens, leading critics to argue that these outputs could mislead audiences regarding actual historical contexts.
Copyright

Legal scholars, artists, and media corporations have considered the legal and ethical implications of artificial intelligence art since the 20th century. Some artists use AI art to critique and explore the ethics of using gathered data to produce new artwork. In 1985, intellectual property law professor Pamela Samuelson argued that US copyright should allocate algorithmically generated artworks to the user of the computer program. A 2019 Florida Law Review article presented three perspectives on the issue. In the first, artificial intelligence itself would become the copyright owner; to do this, Section 101 of the US Copyright Act would need to be amended to define "author" as a computer. In the second, following Samuelson's argument, the user, programmer, or artificial intelligence company would be the copyright owner. This would be an expansion of the "work for hire" doctrine, under which ownership of a copyright is transferred to the "employer." In the third situation, copyright assignments would never take place, and such works would be in the public domain, as copyright assignments require an act of authorship. In 2022, coinciding with the rising availability of consumer-grade AI image generation services, popular discussion renewed over the legality and ethics of AI-generated art. A particular topic is the inclusion of copyrighted artwork and images in AI training datasets, with artists objecting to commercial AI products using their works without consent, credit, or financial compensation. In September 2022, Reema Selhi, of the Design and Artists Copyright Society, stated that "there are no safeguards for artists to be able to identify works in databases that are being used and opt out." Some have claimed that images generated with these models can bear resemblance to extant artwork, sometimes including the remains of the original artist's signature.
In December 2022, users of the portfolio platform ArtStation staged an online protest against the non-consensual use of their artwork within datasets; this resulted in opt-out services, such as "Have I Been Trained?", increasing in profile, as well as some online art platforms promising to offer their own opt-out options. According to the US Copyright Office, artificial intelligence programs are unable to hold copyright, a decision upheld at the federal district level as of August 2023 that followed the reasoning of the monkey selfie copyright dispute. OpenAI, the developer of DALL-E, has its own policy on who owns generated art. It assigns the right and title of a generated image to the creator, meaning the user who entered the prompt owns the image generated, along with the right to sell, reprint, and merchandise it.

In January 2023, three artists (Sarah Andersen, Kelly McKernan, and Karla Ortiz) filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies are legally required to obtain the consent of artists before training neural nets on their work, and that they infringed on the rights of millions of artists by doing so on five billion images scraped from the web. In July 2023, U.S. District Judge William Orrick was inclined to dismiss most of the lawsuit filed by Andersen, McKernan, and Ortiz, but allowed them to file a new complaint. Also in 2023, Stability AI was sued by Getty Images for using its images in its training data. A tool built by Simon Willison allowed people to search 0.5% of the training data for Stable Diffusion V1.1, i.e., 12 million of the 2.3 billion instances from LAION 2B. Artist Karen Hallion discovered that her copyrighted images had been used as training data without her consent. In March 2024, Tennessee enacted the ELVIS Act, which prohibits the use of AI to mimic a musician's voice without permission.
A month later, in April 2024, Adam Schiff introduced the Generative AI Copyright Disclosure Act which, if passed, would require AI companies to submit the copyrighted works in their datasets to the Register of Copyrights before releasing new generative AI systems.

Deception As with other types of photo manipulation since the early 19th century, some people in the early 21st century have been concerned that AI could be used to create content that is misleading and can damage a person's reputation, such as deepfakes. Artist Sarah Andersen, who previously had her art copied and edited to depict Neo-Nazi beliefs, stated that the spread of hate speech online can be worsened by the use of image generators. Some also generate images or videos for the purpose of catfishing.

AI systems have the ability to create deepfake content, which is often viewed as harmful and offensive. The creation of deepfakes poses a risk to individuals who have not consented to it. This mainly refers to deepfake pornography used as revenge porn, where sexually explicit material is disseminated to humiliate or harm another person. AI-generated child pornography has also been deemed a potential danger to society due to its unlawful nature. To mitigate some deceptions, OpenAI developed a tool in 2024 to detect images generated by DALL-E 3. In testing, this tool accurately identified DALL-E 3-generated images approximately 98% of the time, and is also fairly capable of recognizing images that have been visually modified by users after generation.

After winning the "Creative" open competition at the 2023 Sony World Photography Awards, Boris Eldagsen stated that his entry was actually created with artificial intelligence. Photographer Feroz Khan commented to the BBC that Eldagsen had "clearly shown that even experienced photographers and art experts can be fooled".
Smaller contests have been affected as well; in 2023, a contest run by author Mark Lawrence known as the Self-Published Fantasy Blog-Off was cancelled after the winning entry was allegedly revealed to be a collage of images generated with Midjourney. In May 2023, on social media sites such as Reddit and Twitter, attention was given to a Midjourney-generated image of Pope Francis wearing a white puffer coat. Additionally, an AI-generated image of an attack on the Pentagon went viral as part of a hoax news story on Twitter.

In the days before the March 2023 indictment of Donald Trump as part of the Stormy Daniels–Donald Trump scandal, several AI-generated images allegedly depicting Trump's arrest went viral online. On March 20, British journalist Eliot Higgins generated various images of Donald Trump being arrested or imprisoned using Midjourney v5 and posted them on Twitter; two images of Trump struggling against arresting officers went viral under the mistaken impression that they were genuine, accruing more than 5 million views in three days. According to Higgins, the images were not meant to mislead, but he was banned from using Midjourney services as a result. As of April 2024, the tweet had garnered more than 6.8 million views.

In February 2024, the paper "Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway" was published with AI-generated images. It was later retracted by Frontiers in Cell and Developmental Biology because the paper "does not meet the standards".

Income and employment stability As generative AI image software such as Stable Diffusion and DALL-E continues to advance, the potential problems and concerns that these systems pose for creativity and artistry have risen.
In 2022, artists working in various media raised concerns about the impact that generative artificial intelligence could have on their ability to earn money, particularly if AI-based images started replacing artists working in the illustration and design industries. In August 2022, digital artist R. J. Palmer stated that "I could easily envision a scenario where using AI, a single artist or art director could take the place of 5-10 entry level artists... I have seen a lot of self-published authors and such say how great it will be that they don’t have to hire an artist." Scholars Jiang et al. state that "Leaders of companies like Open AI and Stability AI have openly stated that they expect generative AI systems to replace creatives imminently." A 2022 case study found that AI-produced images created by technology like DALL-E caused some traditional artists to be concerned about losing work, while others use it to their advantage and view it as a tool. AI-based images have become more commonplace in art markets and search engines because AI-based text-to-image systems are trained from pre-existing artistic images, sometimes without the original artist's consent, allowing the software to mimic specific artists' styles. For example, Polish digital artist Greg Rutkowski has stated that it is more difficult to search for his work online because many of the images in the results are AI-generated specifically to mimic his style. Furthermore, some training databases on which AI systems are based are not accessible to the public. The ability of AI-based art software to mimic or forge artistic style also raises concerns of malice or greed. Works of AI-generated art, such as Théâtre D'opéra Spatial, a text-to-image AI illustration that won the grand prize in the August 2022 digital art competition at the Colorado State Fair, have begun to overwhelm art contests and other submission forums meant for small artists. 
The Netflix short film The Dog & the Boy, released in January 2023, received backlash online for its use of artificial intelligence art to create the film's background artwork. In the same vein, Disney released Secret Invasion, a Marvel TV show with an AI-generated intro, on Disney+ in 2023, causing concern and backlash regarding the idea that artists could be made obsolete by machine-learning tools. AI art has sometimes been deemed able to replace traditional stock images. In 2023, Shutterstock announced a beta test of an AI tool that can regenerate partial content of other Shutterstock images. Getty Images and Nvidia have partnered to launch Generative AI by iStock, a model trained on Getty's and iStock's photo libraries using Nvidia's Picasso model.

Power usage Researchers from Hugging Face and Carnegie Mellon University reported in a 2023 paper that generating one thousand 1024×1024 images using Stable Diffusion's XL 1.0 base model requires 11.49 kWh of energy and generates an amount of carbon dioxide roughly equivalent to driving an average gas-powered car for a few miles. Comparing 88 different models, the paper concluded that image-generation models used on average around 2.9 kWh of energy per 1,000 inferences.

Analysis of existing art using AI In addition to the creation of original art, research methods that use AI have been developed to quantitatively analyze digital art collections. This has been made possible by the large-scale digitization of artwork in the past few decades. According to Cetinic and She (2022), using artificial intelligence to analyze already-existing art collections can provide new perspectives on the development of artistic styles and the identification of artistic influences. Two computational methods, close reading and distant viewing, are the typical approaches used to analyze digitized art. Close reading focuses on specific visual aspects of one piece.
Some tasks performed by machines in close reading methods include computational artist authentication and analysis of brushstrokes or texture properties. In contrast, through distant viewing methods, the similarity across an entire collection for a specific feature can be statistically visualized. Common tasks relating to this method include automatic classification, object detection, multimodal tasks, knowledge discovery in art history, and computational aesthetics. Synthetic images can also be used to train AI algorithms for art authentication and to detect forgeries. Researchers have also introduced models that predict emotional responses to art. One such model is ArtEmis, a large-scale dataset paired with machine learning models. ArtEmis includes emotional annotations from over 6,500 participants along with textual explanations. By analyzing both visual inputs and the accompanying text descriptions from this dataset, ArtEmis enables the generation of nuanced emotional predictions. Other forms of art AI has also been used in arts outside of visual arts. Generative AI has been used in video game production beyond imagery, especially for level design (e.g., for custom maps) and creating new content (e.g., quests or dialogue) or interactive stories in video games. AI has also been used in the literary arts, such as helping with writer's block, inspiration, or rewriting segments. In the culinary arts, some prototype cooking robots can dynamically taste, which can assist chefs in analyzing the content and flavor of dishes during the cooking process. See also References Generative artificial intelligence Visual arts Digital art Computer art Art controversies 20th-century introductions Works involved in plagiarism controversies
Artificial intelligence art
[ "Engineering" ]
6,106
[ "Artificial intelligence engineering", "Generative artificial intelligence" ]
60,641,727
https://en.wikipedia.org/wiki/FAM71E2
FAM71E2, also known as Family With Sequence Similarity 71 Member E2, is a protein that, in humans, is encoded by the FAM71E2 gene. Aliases include C19orf16, Protein FAM71E2, Chromosome 19 open reading frame 16, and Putative Protein FAM71E2. The gene is primarily conserved in mammals, but it is also conserved in two reptile species.

Gene Location FAM71E2 is located on the minus strand at 19q13.42 and extends from 55,354,908 bp to 55,363,252 bp. The gene is 8,353 bp long and has 11 exons.

Gene Neighborhood These genes are closest to FAM71E2 on the human genome:
TMEM190: protein with unknown function.
COX6B2: protein that connects the two COX monomers into the physiological dimeric form.
KMT5B: protein with unknown function.
IL11: cytokine with multiple functions.

Transcript mRNA variants Two alternatively spliced mRNA variants are produced during transcription: aAUG10 and bAUG10. Both are validated alternative polyadenylation sites. However, there are no isoforms of FAM71E2.

Stem loops Conserved stem-loop regions were found in both the 5' and 3' UTRs of closely related orthologs. There were no conserved stem loops in distantly related orthologs.

Protein Properties FAM71E2 is 922 amino acids long and has a predicted pI/Mw of about 10/100,000, i.e., an isoelectric point near 10 and a molecular weight near 100,000 Da. The protein has four different domains: DUF3699, PRK14951, PHA03247, and BASP1. The structure consists of eight alpha helices and one beta sheet.

Localization This protein is localized in the nucleus. Localization in the nucleus is conserved in all orthologs.

Gene regulation Promoter The promoter of FAM71E2 is located between 55,363,152 and 55,364,260 on the minus strand and is 1,109 bp long. This promoter was selected based on the gene's main expression in the testes and high CAGE values.

Transcription factor binding sites Multiple transcription factor binding sites were found for FAM71E2. They were selected based on relatedness to potential gene function, such as SOX11 sites and estrogen response elements.
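The pI/Mw figure quoted in the protein properties above comes from sequence-based prediction tools. The following sketch shows how an isoelectric point can be estimated from a sequence via the Henderson-Hasselbalch equation and bisection; the pKa values are common EMBOSS-style textbook approximations (an assumption), and the peptides are made-up toys, not the real FAM71E2 sequence.

```python
# Sketch of sequence-based pI estimation (illustrative only; the pKa
# values are EMBOSS-style approximations, and the peptides are toys,
# not the actual FAM71E2 sequence).

PKA = {  # approximate side-chain pKa values
    'D': 3.9, 'E': 4.1, 'C': 8.5, 'Y': 10.1,   # acidic / deprotonatable
    'H': 6.5, 'K': 10.8, 'R': 12.5,            # basic
}
PKA_NTERM, PKA_CTERM = 8.6, 3.6                # terminal group pKa values

def net_charge(seq, ph):
    """Net charge of a peptide at a given pH (Henderson-Hasselbalch)."""
    pos = 1.0 / (1.0 + 10 ** (ph - PKA_NTERM))   # protonated N-terminus
    neg = 1.0 / (1.0 + 10 ** (PKA_CTERM - ph))   # deprotonated C-terminus
    for aa in seq:
        if aa in ('H', 'K', 'R'):
            pos += 1.0 / (1.0 + 10 ** (ph - PKA[aa]))
        elif aa in ('D', 'E', 'C', 'Y'):
            neg += 1.0 / (1.0 + 10 ** (PKA[aa] - ph))
    return pos - neg

def isoelectric_point(seq):
    """Bisect for the pH where the net charge crosses zero."""
    lo, hi = 0.0, 14.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if net_charge(seq, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(isoelectric_point("MKKRLKAG"), 2))  # lysine/arginine-rich: pI well above 7
print(round(isoelectric_point("MDDEEDAG"), 2))  # acidic: pI well below 7
```

A basic-residue-rich protein like FAM71E2 (predicted pI near 10) sits at the high end of this scale; the exact value depends on which pKa set the prediction tool uses.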
Expression FAM71E2 is primarily expressed in male tissues, particularly the testis. There is also lower expression in the brain, mammary gland, prostate, and thymus. FAM71E2 has also been detected in both breast (mammary gland) tumor and normal tissues.

Metaphase II stage oocytes matured in vivo The graph on the right is from a study analyzing metaphase II stage oocytes matured in vivo. The goal of this study was to identify genes and deduced pathways from human oocytes that can help us understand oogenesis, folliculogenesis, fertilization, and embryonic development. The control consisted of RNA from 10 different normal human tissues: skeletal muscle, kidney, lung, colon, liver, spleen, breast, brain, heart, and stomach. The results from this study indicate that expression of FAM71E2 in oocytes is very low compared to that of normal adult tissue from various parts of the body. The Human Protein Atlas supports these observations, since there was no expression during the earliest phase of development (embryoid body). However, the Human Protein Atlas also showed very minimal expression in the fetus.

Estrogen receptor alpha-silenced MCF7 breast cancer cells This study indicates that there is a very slight decrease in FAM71E2 expression in estrogen receptor knockdown samples. This study may also support the Human Protein Atlas information stating that FAM71E2 has slight expression in breast (mammary gland) tumors.

Neural transcription factor SOX11 depletion effect on mantle cell lymphoma cell line This study was conducted by looking at mantle cell lymphoma cells depleted for the transcription factor SOX11. Interestingly, FAM71E2 is expressed at a higher level in the SOX11-depleted cells than in the control, even though FAM71E2 contains SOX11 transcription factor binding sites. It may be that these binding sites exist but are simply not active. Further research on this topic should be conducted.

Homology Paralogs FAM71E2 has many paralogs, especially within the FAM71 family.
The paralogs are sorted by similarity. The paralogs in the table were selected based on their e-value and relevance to the FAM71 family. E-value range: 0 to 3e−11. Similarity range: 100% to 51%.

Orthologs Interacting proteins There are several proteins predicted to interact with FAM71E2. One protein interaction program predicted that NOTCH2NL, P60369, ALB, and MTUS2 interact with FAM71E2. NOTCH2NL might have a role in the Notch signaling pathway as well as in regulating neutrophil differentiation. P60369 is a hair keratin-associated protein. ALB functions as a regulator of the colloidal osmotic pressure of blood, as well as a major zinc transporter. MTUS2's main function is to bind microtubules. Another protein interaction program predicted that BOD1L2, FAM200A, CCT8L2, OR9G1, and AMPD3 interact with FAM71E2. BOD1L2 may have a role in biorientation via mitotic spindles. CCT8L2 assists in folding proteins after ATP hydrolysis. OR9G1 functions as an odorant receptor. AMPD3 functions in energy metabolism. FAM200A has no known function.

Future research Based on expression data, there are several topics that can be explored to learn more about the exact function of FAM71E2. Further studies should look at expression analysis during developmental stages. There was minimal expression in the fetus in the Human Protein Atlas; another study should be conducted to determine whether this expression is real or the result of an error. The information gathered from the expression analysis of oocytes indicates that FAM71E2 is expressed at a far lower level during embryonic development than in adult humans. This cannot be equated with expression in fetuses, but additional studies should be conducted to determine whether, how much, and at what stage of development expression exists. Additional research should indicate whether FAM71E2 is expressed more, less, or equally in breast tissue versus breast tumors. Further research should also examine SOX11 transcription factor expression. References Proteins Human genes
FAM71E2
[ "Chemistry" ]
1,398
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
60,643,089
https://en.wikipedia.org/wiki/Byte%20Sieve
The Byte Sieve is a computer-based implementation of the Sieve of Eratosthenes published by Byte as a programming language performance benchmark. It first appeared in the September 1981 edition of the magazine and was revisited on occasion. Although intended to compare the performance of different languages on the same computers, it quickly became a widely used machine benchmark.

The Sieve was one of the more popular benchmarks of the home computer era, alongside the Creative Computing Benchmark of 1983 and the Rugg/Feldman benchmarks, mostly seen in the UK in this era. Byte later published the more thorough NBench in 1995 to replace it.

History Origins Jim Gilbreath of the Naval Ocean Systems Center had been considering the concept of writing a small language benchmarking program for some time, desiring one that would be portable across languages, small enough that the program code would fit on a single printed page, and that did not rely on specific features like hardware multiplication or division. The solution was inspired by a meeting with Chuck Forsberg at the January 1980 USENIX meeting in Boulder, Colorado, where Forsberg mentioned an implementation of the sieve written by Donald Knuth.

Gilbreath felt the sieve would be an ideal benchmark, as it avoided indirect tests of arithmetic performance, which varied widely between systems. The algorithm mostly stresses array lookup performance and basic logic and branching capabilities, and does not require any advanced language features like recursion or advanced collection types. The only modification from Knuth's original version was to remove a multiplication by two and replace it with an addition instead. With the original version, machines with hardware multipliers would otherwise run so much faster that differences in the rest of the performance would be hidden.
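The algorithm described above translates directly into modern languages. As an illustrative sketch (not from the Byte article), a Python transcription of the same odd-only flag-array approach looks like this:

```python
# Python transcription of the Byte Sieve (illustrative sketch, not from
# the original article). Element i of the flag array stands for the odd
# number 2*i + 3, and "i + i + 3" keeps the addition-for-multiplication
# substitution described above.

SIZE = 8190

def byte_sieve():
    flags = [True] * (SIZE + 1)
    count = 0
    for i in range(SIZE + 1):
        if flags[i]:
            prime = i + i + 3          # the odd number this slot represents
            # Strike out the odd multiples of prime: a step of `prime`
            # in index space is a step of 2*prime in value space.
            for k in range(i + prime, SIZE + 1, prime):
                flags[k] = False
            count += 1
    return count

print(byte_sieve(), "primes")
```

The array covers odd candidates up to 16383; runs of the published benchmark are conventionally reported as finding 1899 primes, which this transcription reproduces.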
After six months of effort porting it to as many platforms as he had access to, the first results were introduced in the September 1981 edition of Byte, in an article entitled "A High-Level Language Benchmark". Gilbreath was quick to point out the benchmark's limitations. The article provided reference implementations in ten languages, including more popular selections like BASIC, C, Pascal, COBOL, and FORTRAN, and some less well-known examples like Forth, ZSPL, Ratfor, PL/1 and PLMX. Example runs were provided for a variety of machines, mostly Zilog Z80- or MOS 6502-based. The best time was initially 16.5 seconds, turned in by Ratfor on a 4 MHz Z80 machine, but Gary Kildall personally provided a version in Digital Research's prototype version of PL/1 that ran in 14 seconds and set the mark for this first collection. The slowest was Microsoft COBOL on the same machine, which took a whopping 5115 seconds (almost one and a half hours), longer even than interpreted languages like BASIC. A notable feature of this first run was that C, Pascal and PL/1 all turned in roughly similar performance that easily beat the various interpreters.

A second set of tests was carried out on more powerful machines, with Motorola 68000 assembly language turning in the fastest time at 1.12 seconds, slightly besting C on a PDP-11/70 and almost twice as fast as 8086 assembler. Most PDP-11 and HP-3000 times were much slower, on the order of 10 to 50 seconds. Tests on these machines using only high-level languages were led by NBS Pascal on the PDP-11, at 2.6 seconds. UCSD Pascal provided another interesting set of results, as the same program can be run on multiple machines. Running on the dedicated Ithaca InterSystems Pascal-100 machine, a Pascal MicroEngine-based computer, it ran in 54 seconds, while on the Z80 it took 239 seconds, and 516 seconds on the Apple II.

Spread Gilbreath, this time along with his brother Gary, revisited the code in the January 1983 edition of Byte.
This version removed most of the less popular languages, leaving Pascal, C, FORTRAN IV, and COBOL, while adding Ada and Modula-2. Thanks to readers providing additional samples, the number of machines, operating systems and languages compared in the resulting tables was greatly expanded. Motorola 68000 (68k) assembly remained the fastest, almost three times the speed of the Intel 8086 running at the same 8 MHz clock. Using high-level languages, the two were closer in performance, with the 8086 generally better than half the speed of the 68k and often much closer. A wider variety of minicomputers and mainframes was also included, with times that the 68k generally beat, except for the very fastest machines like the IBM 3033 and high-end models of the VAX. Older machines like the Data General Nova, PDP-11 and HP-1000 were nowhere near as fast as the 68k.

Gilbreath's second article appeared as the benchmark was becoming quite common as a way to compare the performance of various machines, not just languages. In spite of his original warning not to do so, it soon began appearing in magazine advertisements as a way to compare performance against the competition, and as a general benchmark.

Byte once again revisited the sieve in August 1983 as part of a whole-magazine series of articles on the C language. In this case the use was more in keeping with the original intent: using a single source code and running it on a single machine to compare the performance of C compilers on the CP/M-86 operating system, on CP/M-80, and for the IBM PC. In spite of Gilbreath's concern in the original article, by this time the code had become almost universal for testing, and one of the articles remarked that "The Sieve of Eratosthenes is a mandatory benchmark". It was included in the Byte UNIX Benchmark Suite introduced in August 1984.

Today New versions of the code continue to appear for new languages; e.g., Rosetta Code and GitHub have many versions available.
It is often used as an example of functional programming, in spite of the common version not actually using the sieve algorithm.

Implementation The provided implementation calculated odd primes only, so the 8191-element array actually represented primes less than 16385. As shown in a sidebar table, the 0th element represented 3, the 1st element 5, the 2nd element 7, and so on.

This is the original BASIC version of the code presented in 1981. The dialect is not specified, but a number of details mean it does not run under early versions of Microsoft BASIC (4.x and earlier), among these the use of long variable names like SIZE and FLAGS. The lack of a line number on the initial REM statement may suggest a minicomputer variety that reads source from a text file, but may also have been a printing error.

REM Eratosthenes Sieve Prime Number Program in BASIC
1 SIZE = 8190
2 DIM FLAGS(8191)
3 PRINT "Only 1 iteration"
5 COUNT = 0
6 FOR I = 0 TO SIZE
7 FLAGS (I) = 1
8 NEXT I
9 FOR I = 0 TO SIZE
10 IF FLAGS (I) = 0 THEN 18
11 PRIME = I+I + 3
12 K = I + PRIME
13 IF K > SIZE THEN 17
14 FLAGS (K) = 0
15 K = K + PRIME
16 GOTO 13
17 COUNT = COUNT + 1
18 NEXT I
19 PRINT COUNT," PRIMES"

And in C, with some whitespace adjustments from the original:

#define true 1
#define false 0
#define size 8190
#define sizepl 8191
char flags[sizepl];
main() {
    int i, prime, k, count, iter;
    printf("10 iterations\n");
    for (iter = 1; iter <= 10; iter++) {
        count = 0;
        for (i = 0; i <= size; i++)
            flags[i] = true;
        for (i = 0; i <= size; i++) {
            if (flags[i]) {
                prime = i + i + 3;
                k = i + prime;
                while (k <= size) {
                    flags[k] = false;
                    k += prime;
                }
                count = count + 1;
            }
        }
    }
    printf("\n%d primes", count);
}

Notes References Citations Bibliography Benchmarks (computing) History of computing Articles with example BASIC code Articles with example C code
Byte Sieve
[ "Technology" ]
1,732
[ "Computing comparisons", "Computer performance", "Benchmarks (computing)", "Computers", "History of computing" ]
60,643,388
https://en.wikipedia.org/wiki/Hertz%20vector
Hertz vectors, or the Hertz vector potentials, are an alternative formulation of the electromagnetic potentials. They are most often introduced in electromagnetic theory textbooks as practice problems for students to solve. There are multiple cases where they have a practical use, including antennas and waveguides. They are nevertheless rarely mentioned in most electromagnetic theory courses, and when they are, they are often not practiced in a manner that demonstrates when they are useful or provide a simpler method for solving a problem than the more commonly practiced methods.

Overview Hertz vectors can be advantageous when solving for the electric and magnetic fields in certain scenarios, as they provide an alternative way to define the scalar potential φ and the vector potential A, which are used to find the fields as is commonly done. Considering cases of electric and magnetic polarization separately for simplicity, each can be defined in terms of the scalar and vector potentials, which then allows the electric and magnetic fields to be found. For cases of just electric polarization the following relations are used:

φ = −∇·Π_e,  A = μ₀ε₀ ∂Π_e/∂t.

And for cases of solely magnetic polarization they are defined as:

φ = 0,  A = μ₀ ∇×Π_m.

To apply these, the polarizations need to be defined so that the form of the Hertz vectors can be obtained. Considering the case of simple electric polarization provides the path to finding this form via the wave equation. Assuming the space is uniform and non-conducting, and that the charge and current distributions arise from the polarization as ρ = −∇·P and J = ∂P/∂t, define a vector Π_e such that φ = −∇·Π_e and A = μ₀ε₀ ∂Π_e/∂t. Using these to solve for the vectors is similar to how the auxiliary fields D and H can be found; however, here the Hertz vectors treat the electric and magnetic polarizations as sources. The Hertz vector potentials from these sources, Π_e for the electric Hertz potential and Π_m for the magnetic Hertz potential, can be derived using the wave equation for each.
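In SI units, these wave equations take the standard form found in conventional treatments of Hertz potentials (a sketch consistent with the potential definitions above, with the polarizations P and M acting as source terms):

```latex
\nabla^2 \boldsymbol{\Pi}_e - \mu_0 \varepsilon_0 \,\frac{\partial^2 \boldsymbol{\Pi}_e}{\partial t^2} = -\frac{\mathbf{P}}{\varepsilon_0},
\qquad
\nabla^2 \boldsymbol{\Pi}_m - \mu_0 \varepsilon_0 \,\frac{\partial^2 \boldsymbol{\Pi}_m}{\partial t^2} = -\mathbf{M}
```

Each polarization drives its corresponding Hertz vector through an inhomogeneous wave equation, which is what makes retarded-potential solutions possible.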
This is simply done by applying the d'Alembert operator to both vectors, and the result is non-zero due to the polarizations that are present. This provides a direct pathway between easily determined properties such as current density and the fields, via the Hertz vectors and their relations to the scalar and vector potentials. These wave equations yield the following solutions for the Hertz vectors:

Π_e(r, t) = (1/4πε₀) ∫ P(r′, t_r)/|r − r′| d³r′,
Π_m(r, t) = (1/4π) ∫ M(r′, t_r)/|r − r′| d³r′,

where P and M should be evaluated at the retarded time t_r = t − |r − r′|/c. The electric and magnetic fields can then be found using the Hertz vectors. For simplicity in observing the relationship between polarization, the Hertz vectors, and the fields, only one source of polarization (electric or magnetic) will be considered at a time. In the absence of any magnetic polarization, the vector Π_e is used to find the fields as follows:

E = ∇(∇·Π_e) − μ₀ε₀ ∂²Π_e/∂t²,  B = μ₀ε₀ ∇×(∂Π_e/∂t).

Similarly, in the case of only magnetic polarization being present, the fields are determined via the previously stated relations to the scalar and vector potentials. For the case of both electric and magnetic polarization being present, the fields become

E = ∇(∇·Π_e) − μ₀ε₀ ∂²Π_e/∂t² − μ₀ ∇×(∂Π_m/∂t),
B = μ₀ε₀ ∇×(∂Π_e/∂t) + μ₀ ∇×(∇×Π_m).

Examples Oscillating dipole Consider a one-dimensional, uniformly oscillating current. The current is aligned along the z-axis in some length of conducting material with an oscillation frequency ω. We will define the polarization vector P in terms of this current, evaluated at the retarded time t_r. Inserting this into the electric Hertz vector equation, and knowing that the length is small, the Hertz vector can be approximated in spherical coordinates. Continuing directly to taking the divergence quickly becomes messy due to the denominator. This is readily resolved by using Legendre polynomials for expanding the potential:

1/|r − r′| = Σ (n = 0 to ∞) (r′ⁿ / rⁿ⁺¹) Pₙ(cos θ),  valid for r > r′.

It is important to note that in the above equation, r and r′ are vectors, while r and r′ are the lengths of those vectors, and θ is the angle between r and r′. The Hertz vector can then be written using this expansion.
Taking the divergence, then the gradient of the result, and finally the second partial derivative with respect to time allows for finding the electric field.

Simulation Using the appropriate conversions to Cartesian coordinates, this field can be simulated in a 3D grid. Viewing the X-Y plane at the origin shows the two-lobed field in one plane we expect from a dipole, and it oscillates in time. The image below shows the shape of this field and how the polarity reverses in time due to the cosine term; however, it does not currently show the amplitude change due to the time-varying strength of the current. Regardless, its shape alone shows the effectiveness of using the electric Hertz vector in this scenario. This approach is significantly more straightforward than finding the electric field in terms of the charges within the infinitely thin wire, especially as they vary with time. This is just one of several examples of when the use of Hertz vectors is advantageous compared to more common methods.

Current loop Consider a small loop of area A carrying a time-varying current I(t). With current flow, a magnetic field perpendicular to the direction of flow will be present, as a result of the right-hand rule. Due to this field being generated in a loop, it is expected that the field would look similar to that of an electric dipole. This can be proven quickly using Hertz vectors. First, the magnetic polarization is determined through its relation to the magnetic moment m. The magnetic moment of a current loop is defined as m = I A n̂, so if the loop lies in the x-y plane and carries the previously defined time-varying current, the magnetic moment is m = I(t) A ẑ. Inserting this into the magnetic polarization, and then into the retarded-time solution for Π_m, the magnetic Hertz vector is found in a simple form. As in the electric dipole example, the Legendre polynomials can be used to simplify the derivatives necessary to obtain the fields.
The electric field is then found through E = −μ₀ ∇×(∂Π_m/∂t). Due to the dependence on the polar angle θ, it is significantly simpler to express the Hertz vector in spherical coordinates by transforming from the single ẑ-component vector to its r̂ and θ̂ components.

Simulation This field was simulated using Python by converting the spherical components to x and y components. The result is as expected. Due to the changing current, there is a time-dependent magnetic field which induces an electric field. Due to its shape, the field appears as if it were a dipole.

See also Dipole antenna References Electromagnetism
Hertz vector
[ "Physics" ]
1,226
[ "Electromagnetism", "Physical phenomena", "Fundamental interactions" ]
60,643,887
https://en.wikipedia.org/wiki/Thermally%20activated%20delayed%20fluorescence
Thermally activated delayed fluorescence (TADF) is a process through which surrounding thermal energy changes the population of excited states of molecular compounds and thus alters light emission. The TADF process usually involves an excited molecular species in a triplet state, which commonly has a forbidden transition to the singlet ground state, termed phosphorescence. By absorbing nearby thermal energy, the triplet state can undergo reverse intersystem crossing (RISC), converting the triplet state population to an excited singlet state, which then emits light to the singlet ground state in a delayed process termed delayed fluorescence. Accordingly, in many cases, TADF molecules show two types of emission: a delayed fluorescence and a prompt fluorescence. This is found for specific organic molecules, but also for selected organo-transition metal compounds, such as Cu(I) complexes. Along with traditional fluorescent molecules and phosphorescent molecules, TADF compounds belong to the three main light-emitting material groups used in organic light-emitting diodes (OLEDs).

History The first evidence of thermally activated delayed fluorescence in a fully organic molecule was discovered in 1961 using the compound eosin. The emission detected was termed "E-type" delayed fluorescence, but the mechanism was not completely understood. In 1986, the mechanism was further investigated and described in detail using aromatic thiones, but a practical use was only identified much later. Application of the TADF mechanism for efficient light generation in OLEDs was proposed in 2008 and subsequently studied by Yersin and coworkers (originally designated the "singlet harvesting mechanism"). Since 2009, the mechanism has been extensively investigated by Chihaya Adachi and coworkers as well as by other research groups. A series of papers was published reporting effective TADF molecular design strategies focusing on different TADF compounds.
Extensive studies of green, orange, and blue emitting OLEDs based on organic TADF materials sparked interest in the TADF field. This mechanism was soon considered a possible high-efficiency alternative to the traditional fluorescent and phosphorescent compounds used in lighting and displays. TADF materials are considered the third generation of OLED emitters, following fluorescent and phosphorescent based devices. Mechanism The steps of the TADF mechanism are displayed in the figure at right (where it is assumed that the ground state is a singlet state, which is usually but not always the case). In the electroluminescent process, which is observed in OLEDs, an electrical excitation leads to population of singlet and triplet states of the TADF molecules. From the singlet state, an allowed transition can occur to the electronic singlet ground state on a time scale of 10 to 100 nanoseconds for organic TADF molecules. This emission represents the prompt fluorescence. On the other hand, from the excited triplet state, the electron can undergo a forbidden de-excitation to the ground state either as a radiative transition, called phosphorescence, or as a non-radiative process. However, this occurs on a much slower time scale, on the order of microseconds to seconds. Thus, usually, thermal activation from the triplet to the excited singlet state, the reverse intersystem crossing, populates the singlet state in a fast process and quenches the triplet state population. As a consequence, delayed fluorescence is observed. Accordingly, when a TADF material becomes electronically excited, it exhibits prompt fluorescence and delayed fluorescence, usually occurring at (almost) the same wavelength. Selected organo-transition metal compounds can show both TADF and relatively fast phosphorescence. In an OLED based on traditional fluorescent materials, only harvesting of the singlet state population is possible. Thus, due to spin statistics, only 25% of the excitation can be exploited.
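The spin-statistics limit just described can be put into rough numbers. The sketch below assumes a unit photoluminescence quantum yield and a ~20% light out-coupling factor, which is a typical assumed value for a planar OLED stack rather than a measured number from any specific device:

```python
# Back-of-the-envelope estimate of the external quantum efficiency (EQE)
# of an OLED, as the product of the fraction of electrically generated
# excitons that can emit, the emission quantum yield, and out-coupling.
def external_quantum_efficiency(exciton_fraction, pl_quantum_yield, outcoupling):
    """EQE ~ (usable exciton fraction) x (emission yield) x (out-coupling)."""
    return exciton_fraction * pl_quantum_yield * outcoupling

# conventional fluorescent emitter: only the 25% singlet excitons emit
fluorescent = external_quantum_efficiency(0.25, 1.0, 0.20)
# ideal TADF emitter: RISC recycles the 75% triplet excitons as well
tadf = external_quantum_efficiency(1.00, 1.0, 0.20)
print(f"fluorescent ~{fluorescent:.0%}, TADF ~{tadf:.0%}")  # ~5% vs ~20%
```

These rough figures match the roughly 5% and 22% EQE values quoted for fluorescent and TADF devices without out-coupling enhancement.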
On the other hand, both phosphorescent and TADF materials have the ability to harvest the excitation from both singlet and triplet states, theoretically allowing these materials to convert close to 100% of the electrically generated excitons, giving them a large advantage over traditional fluorescent-based materials. However, due to light out-coupling losses in OLED devices, the external quantum efficiency (EQE) is, without employing specific out-coupling enhancement strategies, substantially lower, lying roughly around 5% and 22%, respectively. Spin statistics Electronic states of materials used in light emitting devices typically contain some type of spin coupling. In phosphorescent materials, for example, heavy transition metals are used to take advantage of spin-orbit coupling. In most TADF materials, the excited and ground state electrons couple to have not only a combined total spin quantum number S, but also a combined z-component of the spin Sz. When this spin coupling phenomenon is considered, in a random situation, three possible electron combinations of total spin S=1 and one combination of total spin S=0 occur. This corresponds to the observed 75% triplet states and 25% singlet states generated under electrical excitation. Factors affecting TADF Several key kinetic properties of TADF materials determine their ability to efficiently generate light through delayed fluorescence, while minimizing thermal loss pathways. The rates of intersystem crossing, referred to as kISC, and of reverse intersystem crossing, given by kRISC, both determined by spin-orbit coupling, should be as fast as possible. In particular, kRISC should be faster than the rates of non-radiative triplet relaxation pathways. Most non-radiative triplet pathways like triplet-triplet annihilation, triplet quenching, or thermal decay occur on the order of microseconds or longer, which usually is long compared to the kRISC time. Thus, singlet state population is faster.
Another key property is the energy difference between the singlet and triplet state energy levels, called ΔEST. In particular, as kRISC depends exponentially on this energy gap, it should be small, that is, smaller than a few times the available thermal energy (≈25.6 meV at room temperature), to effectively allow for fast reverse intersystem crossing. Minimization of this energy gap is thus considered one of the most important strategies in synthesizing potential TADF materials. The most effective strategies employed so far are to synthesize molecules with donor and acceptor moieties spaced apart or twisted with respect to each other. This effectively reduces the energy gap ΔEST. Moreover, the TADF decay time, representing another key parameter, should be as short as possible in order to reduce unwanted chemical reactions during excited state population. This represents a further challenge and requires specific molecular design strategies. When the ground state is not a singlet state, different strategies of improving TADF performance may exist that have no counterpart in the singlet ground state case. For example, in the doublet copper(II) porphyrin molecule, the emissive state is a doublet state formed by antiferromagnetic coupling between a triplet excited porphyrin ligand and a ground state Cu(II) ion, and a quartet state formed by their ferromagnetic coupling lies slightly below the emissive state. In this case, the doublet-quartet gap ΔEDQ is mainly determined by the distance between the ligand and the metal, rather than the distance between the donor and the acceptor (both the porphyrin ligand in this case). Chemical structure The chemical structures of many commonly used TADF materials reflect the requirement to minimize the ΔEST by displaying a twisted structure.
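The exponential sensitivity of the reverse intersystem crossing rate to ΔEST described above can be illustrated with a simple Boltzmann estimate. The prefactor below is a hypothetical placeholder, not a measured value; only the comparison between the two gaps is meaningful:

```python
# Boltzmann-type estimate of the reverse intersystem crossing rate,
# k_RISC ~ A * exp(-dE_ST / (k_B * T)). The prefactor A is arbitrary.
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def k_risc(delta_e_st_ev, temperature_k=298.0, prefactor=1e7):
    """Relative RISC rate for a given singlet-triplet gap (in eV)."""
    return prefactor * math.exp(-delta_e_st_ev / (K_B * temperature_k))

small_gap = k_risc(0.025)  # ~25 meV: a well-designed TADF emitter
large_gap = k_risc(0.250)  # ~250 meV: a conventional fluorophore
print(small_gap / large_gap)  # RISC faster by several orders of magnitude
```

This is why reducing ΔEST from a few hundred meV to a few tens of meV turns an essentially frozen triplet population into one that is efficiently recycled into the emissive singlet state.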
One of the most commonly used and successful TADF materials, 2,4,5,6-tetra(9H-carbazol-9-yl)isophthalonitrile (4CzIPN), contains this type of structure, as the bottom and top carbazole groups can be viewed as flat and coplanar while the bottom left and bottom right carbazole groups can be thought of as coming into and out of the page. This type of molecule contains electron donating and electron accepting moieties showing small orbital overlap between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO). As a consequence, a small singlet-triplet splitting, small ΔEST, results. Many highly efficient TADF materials contain multiple carbazole groups as electron donors and, for example, incorporate electron acceptors like triazines, sulfoxides, benzophenones, and spiro-based groups. The table below shows several examples of these compounds that have been reported to yield high efficiencies and relatively small ΔEST. In a recent design strategy, electron donating and accepting moieties are separated by two bridges, leading to the DSH molecule. In this situation, very small orbital overlap between HOMO and LUMO results. Thus, an ultra-small energy gap ΔEST between the lowest excited singlet and triplet states of only about 1 meV is obtained. For this specific molecule, an ultra-short emission decay time, lying in the sub-microsecond time range, is attained. An OLED device fabricated with this material shows an EQE of ≈19%. A number of organo-transition metal complexes, for example, based on Cu(I), Ag(I), Au(I), and Au(III) metal centers, also exhibit distinct TADF behavior. In particular, Cu(I) compounds synthesized with different ligands display a wide range of ΔEST values, extending from around 33 to 160 meV. The depicted Cu(I) compound shows an example. Systematic photophysical and theoretical studies of a large number of Cu(I) compounds have resulted in a detailed understanding of TADF properties.
In particular, it has been shown that the TADF decay time is not only given by the energy gap ΔEST, but also by the rate of the transition from the singlet excited state to the singlet ground state. Moreover, variation of spin-orbit coupling, as realized by chemical modification, makes it possible to modify the ISC and reverse ISC rates as well as to tune in phosphorescence in addition to TADF emission. Also noteworthy are recent investigations of two-coordinate carbene-M(I)-amide complexes with M(I) = Cu(I), Ag(I), and Au(I). These compounds exhibit short-lived TADF at high emission quantum yield. Even robust materials for OLEDs showing long operational device lifetime (LT90 > 1000 hours at 1000 cd m−2) were reported. Applications Organic LEDs The vast majority of research on TADF-based materials is focused on improving the efficiency and lifetime of TADF-based OLEDs. Organic light-emitting diodes (OLEDs) have provided an alternative to traditional liquid-crystal displays (LCDs) due to their improved contrast, response time, wider viewing angle, and the possibility of fabricating flexible displays. Most OLEDs that are currently commercially available employ phosphorescent organo-transition metal emitters, belonging to the second generation of OLED emitters. They have the advantage of high operational lifetime for red and green emission colors; however, poor lifetimes are still found for blue emitter materials. Thus, for generation of blue light, traditional organic molecules are applied. Frequently, it is considered that TADF-based OLEDs may represent the third generation of OLEDs. However, the vast majority of research on TADF-based materials is still focusing on improving efficiency, operational device lifetime, and color purity, though the first OLED displays that use TADF emitters are already on the market. These devices are based on TADF emitters combined with color-pure fluorescent organic emitters. The TADF materials efficiently harvest all electrically generated excitons.
They represent donors for efficient radiationless energy transfer to fluorescent acceptors, which finally emit light. The corresponding mechanism was named hyperfluorescence. Fluorescence imaging TADF-based materials have a unique advantage in some imaging techniques because their emission lifetimes are longer than those of materials that show only prompt fluorescence. For instance, the TADF-exhibiting molecule ACRFLCN shows a strong sensitivity towards triplet oxygen, making it an effective molecular oxygen sensor. The fluorescein derivative DCF-MPYM has shown success in the field of bioimaging, as its long lifetime allows time-resolved fluorescence imaging in living cells. These tailored organic compounds are especially promising in bioimaging applications because of their low cytotoxicity compared to traditional compounds like lanthanide complexes. Mechanoluminescence TADF compounds can also be synthesized to exhibit a tunable color change based on the macroscopic particle size in powder form. In these compounds, a color shift of the light emission can occur upon mechanical grinding, a phenomenon termed mechanoluminescence. Specifically, asymmetric compounds with diphenyl sulfoxide and phenothiazine moieties have been synthesized displaying linearly tunable mechanochromism due to a combination of fluorescence and TADF. The compound named SCP shows dual emission peaks in its photoluminescence spectrum and changes from a green to a blue color through mechanical grinding. Challenges Research on TADF materials has provided impressive results, and devices made with these compounds have already achieved good device performance with high quantum efficiencies. However, the synthesis and application of TADF materials still has multiple challenges to overcome before they become commercially viable. Likely the biggest hurdle is the difficulty in producing blue light emitting TADF molecules with a reasonable operational lifetime.
Fabrication of blue light emitting OLEDs with a long operational lifetime is a challenge not only for TADF, but also for phosphorescent materials. This is due to degradation pathways associated with the high energy of blue light. Another difficulty in producing efficient TADF materials is the lack of sufficient knowledge concerning detailed structure-property relations for rational molecular design. However, the combination of donating and accepting groups and the twisted or bridged molecular structure types already provide good fundamental starting points for new material concepts. See also Fluorescence Phosphorescence OLED Light-emitting diode Singlet state Triplet state Intersystem crossing References External links TADF OLED emitters, introduction and market status What is thermally activated delayed fluorescence? Lighting Optical diodes Organic electronics Light-emitting diodes Flexible electronics Electronic display devices
Thermally activated delayed fluorescence
[ "Engineering" ]
2,978
[ "Electronic engineering", "Flexible electronics" ]
60,645,723
https://en.wikipedia.org/wiki/LAMOST%20J112456.61%2B453531.3
LAMOST J112456.61+453531.3 (unofficial abbreviation J1124+4535) is a magnitude 13.98 star in the constellation Ursa Major, below the "bowl" of the Big Dipper. It is located approximately 60,000 light-years from Earth. Initial observations of J1124+4535 by the Large Sky Area Multi-Object Fibre Spectroscopic Telescope showed low amounts of magnesium, and later, the Subaru Telescope confirmed the low amounts of magnesium and also found high amounts of europium. J1124+4535 also lacks the chemical signature shared by stars that formed together from the same parent interstellar cloud, indicating that it did not form alongside its neighbors and must instead have formed outside the Milky Way. The star most likely arrived through the collision of a dwarf galaxy with the Milky Way some 5 to 9 billion years ago. The remnants of the destroyed galaxy can still be seen as the most visible streams of the galactic halo. References Extragalactic stars Ursa Major
LAMOST J112456.61+453531.3
[ "Astronomy" ]
222
[ "Ursa Major", "Constellations" ]
60,645,788
https://en.wikipedia.org/wiki/ALG1-CDG
ALG1-CDG is an autosomal recessive congenital disorder of glycosylation caused by biallelic pathogenic variants in ALG1. The first cases of ALG1-CDG were described in 2004, and the causative gene was identified at the same time. This disorder was originally designated CDG-IK, under earlier nomenclature for congenital disorders of glycosylation. Clinically, individuals with ALG1-CDG have developmental delay, hypotonia, seizures and microcephaly. Fewer than 60 cases of ALG1-CDG have been confirmed in published literature. ALG1-CDG can be suspected based on clinical findings, and abnormal serum transferrin glycosylation test results. Confirmation of the diagnosis can be performed based on sequence analysis of ALG1. The analysis of ALG1 is complicated by the presence of a pseudogene. There are no specific treatments for ALG1-CDG, and most care consists of managing symptoms. References Congenital disorders of glycosylation Autosomal recessive disorders
ALG1-CDG
[ "Chemistry" ]
228
[ "Congenital disorders of glycosylation", "Carbohydrate chemistry" ]
60,646,773
https://en.wikipedia.org/wiki/%E2%84%93-adic%20sheaf
In algebraic geometry, an ℓ-adic sheaf on a Noetherian scheme X is an inverse system consisting of Z/ℓⁿ-modules in the étale topology, with transition maps inducing isomorphisms after reduction of the coefficients. Bhatt–Scholze's pro-étale topology gives an alternative approach. Motivation The development of étale cohomology as a whole was fueled by the desire to produce a 'topological' theory of cohomology for algebraic varieties, i.e. a Weil cohomology theory that works in any characteristic. An essential feature of such a theory is that it admits coefficients in a field of characteristic 0. However, constant étale sheaves with no torsion have no interesting cohomology. For example, if X is a smooth variety over a field, then the étale cohomology of the constant sheaf Q vanishes in all positive degrees. On the other hand, the constant torsion sheaves Z/ℓⁿ do produce the 'correct' cohomology, as long as ℓ is invertible in the ground field. So one takes a prime ℓ for which this is true and defines ℓ-adic cohomology as the inverse limit of the étale cohomology groups with Z/ℓⁿ coefficients. This definition, however, is not completely satisfactory: As in the classical case of topological spaces, one might want to consider cohomology with coefficients in a local system of Q_ℓ-vector spaces, and there should be a category equivalence between such local systems and continuous Q_ℓ-representations of the étale fundamental group. Another problem with the definition above is that it behaves well only when the ground field is separably closed. In this case, all the groups occurring in the inverse limit are finitely generated and taking the limit is exact. But if the ground field is for example a number field, the cohomology groups will often be infinite and the limit not exact, which causes issues with functoriality. For instance, there is in general no Hochschild-Serre spectral sequence relating these limit cohomology groups to the Galois cohomology of the ground field. These considerations lead one to consider the category of inverse systems of sheaves as described above.
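In symbols, the definition and the resulting cohomology can be sketched as follows (a standard formulation; the notation varies between sources):

```latex
% An l-adic sheaf is an inverse system of Z/l^n-sheaves whose transition
% maps become isomorphisms after reducing the coefficients:
\[
  \mathcal{F} = (\mathcal{F}_n)_{n \ge 1}, \qquad
  \mathcal{F}_{n+1} \otimes_{\mathbb{Z}/\ell^{n+1}} \mathbb{Z}/\ell^{n}
    \xrightarrow{\ \sim\ } \mathcal{F}_n .
\]
% l-adic cohomology is then the inverse limit over the torsion levels,
% with Q_l-coefficients obtained by inverting l afterwards:
\[
  H^i(X, \mathbb{Z}_\ell) := \varprojlim_n H^i_{\mathrm{\acute{e}t}}\bigl(X, \mathbb{Z}/\ell^n\bigr),
  \qquad
  H^i(X, \mathbb{Q}_\ell) := H^i(X, \mathbb{Z}_\ell) \otimes_{\mathbb{Z}_\ell} \mathbb{Q}_\ell .
\]
```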
One has then the desired equivalence of categories with representations of the fundamental group (for Z_ℓ-local systems, and when X is normal for Q_ℓ-systems as well), and the issue in the last paragraph is resolved by so-called continuous étale cohomology, where one takes the derived functor of the composite functor of taking the limit over global sections of the system. Constructible and lisse ℓ-adic sheaves An ℓ-adic sheaf is said to be constructible if each term of the system is constructible. lisse if each term is constructible and locally constant. Some authors (e.g., those of SGA 4) assume an ℓ-adic sheaf to be constructible. Given a connected scheme X with a geometric point x, SGA 1 defines the étale fundamental group of X at x to be the group classifying finite Galois coverings of X. Then the category of lisse ℓ-adic sheaves on X is equivalent to the category of continuous representations of the étale fundamental group on finite free Z_ℓ-modules. This is an analog of the correspondence between local systems and continuous representations of the fundamental group in algebraic topology (because of this, a lisse ℓ-adic sheaf is sometimes also called a local system). ℓ-adic cohomology The ℓ-adic cohomology groups are inverse limits of étale cohomology groups with certain torsion coefficients. The "derived category" of constructible ℓ-adic sheaves In a way similar to that for ℓ-adic cohomology, the derived category of constructible Q_ℓ-sheaves is defined essentially as a limit construction; one author writes "in daily life, one pretends (without getting into much trouble) that is simply the full subcategory of some hypothetical derived category ..." See also Fourier–Deligne transform References Exposé V, VI of External links Mathoverflow: A nice explanation of what is a smooth (ℓ-adic) sheaf? Number theory learning seminar 2016–2017 at Stanford Algebraic geometry
ℓ-adic sheaf
[ "Mathematics" ]
800
[ "Fields of abstract algebra", "Algebraic geometry" ]
60,648,192
https://en.wikipedia.org/wiki/Xiangfeng%20wu
Xiangfeng wu () were wind surveying instruments used to gather and measure the direction of the wind in ancient China. History Prior to the invention of the xiangfeng wu, the ancient Chinese used pieces of silk or cloth hung on a pole to measure wind direction. Epigraphic evidence of a weather crow was discovered in 1972 on a wall painting in a tomb dating to the Eastern Han dynasty. The Sanfu huangtu (三輔黃圖, Description of the Three Districts in the Capital), a 3rd-century book written by Miao Changyan about the palaces at Chang'an, describes a copper bird-shaped wind vane situated on a tower roof for the measurement of wind direction. The xiangfeng wu consisted of copper sheets fixed on top of a pole, which would revolve to indicate the direction of the blowing wind. Xiangfeng wu were first used in meteorological observatories and were later installed in government towers and private houses. See also Weather vane References Chinese inventions Han dynasty Measuring instruments Meteorological instrumentation and equipment Wind
Xiangfeng wu
[ "Technology", "Engineering" ]
215
[ "Meteorological instrumentation and equipment", "Measuring instruments" ]
60,648,742
https://en.wikipedia.org/wiki/Glycerol%20and%20potassium%20permanganate
The chemical redox reaction between potassium permanganate and glycerol is often used to demonstrate the powerful oxidizing property of potassium permanganate, especially in the presence of organic compounds such as glycerol. The exothermic (heat producing) reaction between potassium permanganate (KMnO4), a strong oxidizing agent, and glycerol (C3H5(OH)3), a readily oxidised organic substance, is an example of an experiment sometimes referred to as a "chemical volcano". Explanation Potassium permanganate (KMnO4) is a dark violet colored powder. Its reaction with glycerol (commonly known as glycerin or glycerine) (C3H5(OH)3) is highly exothermic, resulting rapidly in a flame, along with the formation of carbon dioxide and water vapour: 14 KMnO4(s) + 4 C3H5(OH)3(l) → 7 K2CO3(s) + 7 Mn2O3(s) + 5 CO2(g) + 16 H2O(g). Crystalline potassium permanganate (KMnO4) is placed in an evaporating dish. A depression is made at the center of the permanganate powder and glycerol liquid is added to it. The white smoke-like vapor produced by the reaction is a mixture of carbon dioxide gas and water vapor. Since the reaction is highly exothermic, initial sparking occurs, followed by a lilac- or pink-colored flame. When energy or heat is added to electrons, their energy level increases to an excited state. This state is short-lived, and once the electrons release the energy, they return to their normal energy levels. During this process the energy is visibly observed as light. When the reaction is complete, it leaves behind a grayish solid with green regions. Gallery See also Carbon snake Sugar snake References Chemistry classroom experiments Articles containing video clips
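The stoichiometric coefficients of the equation above can be checked by tallying each element on both sides. The following sketch does exactly that for the reaction as written:

```python
# Verify the element balance of
# 14 KMnO4 + 4 C3H5(OH)3 -> 7 K2CO3 + 7 Mn2O3 + 5 CO2 + 16 H2O
from collections import Counter

def atoms(formula_counts, coeff):
    """Scale a per-formula-unit element tally by a stoichiometric coefficient."""
    return Counter({el: n * coeff for el, n in formula_counts.items()})

# element tallies per formula unit
KMnO4    = {"K": 1, "Mn": 1, "O": 4}
glycerol = {"C": 3, "H": 8, "O": 3}   # C3H5(OH)3 = C3H8O3
K2CO3    = {"K": 2, "C": 1, "O": 3}
Mn2O3    = {"Mn": 2, "O": 3}
CO2      = {"C": 1, "O": 2}
H2O      = {"H": 2, "O": 1}

left  = atoms(KMnO4, 14) + atoms(glycerol, 4)
right = atoms(K2CO3, 7) + atoms(Mn2O3, 7) + atoms(CO2, 5) + atoms(H2O, 16)
print(left == right)  # True: every element balances on both sides
```

Each side carries 14 K, 14 Mn, 12 C, 32 H, and 68 O atoms, confirming that the equation is balanced.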
Glycerol and potassium permanganate
[ "Chemistry" ]
424
[ "Chemistry classroom experiments" ]
60,649,128
https://en.wikipedia.org/wiki/Decyl%28triphenyl%29phosphonium
Decyl(triphenyl)phosphonium (DTPP) is the organophosphorus cation with the formula C10H21P(C6H5)3+. It is a lipophilic quaternary phosphonium cation. It forms the basis for many mitochondrial-targeted drugs, including MitoQ, MitoE, and SkQ. It binds to the mitochondrial matrix by insertion into the inner membrane. DTPP itself can cause mitochondrial swelling in kidney tissue, an action possibly related to increased membrane permeability. References Further reading Quaternary phosphonium compounds Organophosphorus compounds
Decyl(triphenyl)phosphonium
[ "Chemistry" ]
135
[ "Functional groups", "Organic compounds", "Organophosphorus compounds", "Organic compound stubs", "Organic chemistry stubs" ]
60,650,636
https://en.wikipedia.org/wiki/Iron%20aluminide
Iron aluminides are intermetallic compounds of iron and aluminium - they typically contain ~18% Al or more. Good oxidation and sulfidation resistance, with strength comparable to steel alloys, and low cost of materials have made these compounds of metallurgical interest - however, low ductility and issues with hydrogen embrittlement are barriers to their processing and use in structural applications. Overview The high corrosion resistance of iron alloys containing more than 18% aluminium was first noted in the 1930s. Their tensile strength compares favorably with steels, whilst utilizing only common elements; however, they have low ductility at room temperature, and strength drops off substantially over 600 °C. The alloys also have good sulfide and oxidation resistance, good wear resistance, and lower density than steels. Peak strength and hardness are reached at the Fe3Al stoichiometric region. Although Al gives corrosion resistance via an oxide film surface, reaction (with water) may also give rise to embrittlement via hydrogen produced in the reaction between Al and H2O. Chromium (2-6%) improves room temperature ductility. In 1996, Kamey said the mechanism was not fully understood, but offered a hypothesis that it could reduce hydrogen embrittlement via its ability to stabilise the FeAl phase. Other explanations have included that chromium could facilitate slipping via crystal dislocations, and that it could contribute to surface passivation and prevent embrittling water reactions. A disordered alloy (designated FAPY) containing ~16% Al, ~5.4% Cr plus ~0.1% Zr, C, and Y, with ~1% Mo showed much improved ductility, only dropping substantially under ~200 °C (cf. 650 °C for Fe3Al); this alloy is also cold workable. Phases Below ~18-20% (atomic) Al the aluminium exists as a solid solution in iron. Above this concentration the FeAl (B2 phase) and Fe3Al (DO3 phase) compounds exist, with the caesium chloride (CsCl) and α-bismuth trifluoride (BiF3) crystal structures respectively.
Above ~550 °C the Fe3Al phase is transformed into FeAl (and Fe). Above ~50% Al (atomic), Fe5Al8, FeAl2, Fe2Al5, and Fe4Al13 are also known - the Al rich phases show high brittleness. Preparation The reaction between Al and Fe to generate iron aluminide is exothermic. Production from direct melting of Al and Fe is economical, but any water in the charge produces issues with the generation of hydrogen, which shows solubility in the iron aluminide, leading to gas voids. Blowing with argon or vacuum melting alleviates this. Large grain size is greatly deleterious to ductility, especially with Fe3Al, and is encountered in cast iron aluminides. Coatings of iron aluminide can be prepared by chemical vapor deposition onto iron. Creep Resistance The high corrosion resistance of FeAl alloys makes them desirable for high temperature applications in corrosive environments. However, FeAl alloys have intrinsically low creep strength at high temperatures because of the high diffusivity of the B2 structure. In order to be used as a high temperature alloy, FeAl must be treated to increase its creep resistance. The two most common methods to increase the creep resistance of FeAl are solid solution strengthening and precipitation hardening. Solid solution strengthening was shown to decrease the steady state creep rate and the power law exponent of FeAl by increasing the concentration of other transition metals in the alloy. While this did increase the creep strength of the material, it is still limited by the ductility of FeAl, as the strengthened alloy fractured after just 0.3% strain. Precipitation hardening in FeAl is commonly achieved with two different types of precipitates: oxide particles and carbides. 5 nm Y based oxide particles have been shown to increase the creep resistance of FeAl at temperatures up to 800 °C. Similarly, Ti based carbides have been shown to have high creep resistance at low stresses, consistent with the precipitation strengthening mechanism.
While precipitation strengthening is excellent at increasing creep resistance, the stability of the precipitates at high temperatures is a limiting factor. Carbides can dissolve into the FeAl, and oxide particles can coarsen at temperatures over 1000 °C. As a result, FeAl alloys have not been effectively strengthened for applications that require temperatures higher than 1000 °C, and different strategies will be needed to further increase the possible operating temperature. Uses Potential uses for iron aluminides include: electrical heating elements, piping and other work for high temperature processes, including piping for coal gasification and superheater and re-heater tubes. It has also been suggested as a structural material for lunar use. Thanks to the good combination of mechanical and oxidation properties, iron aluminide has been successfully used as a binder phase for tungsten carbides. Also, replacing cobalt in conventional WC-Co cermets with FeAl in the laser cladding process improved oxidation and wear properties. References External links Aluminides Iron compounds Ferrous alloys
Iron aluminide
[ "Chemistry" ]
1,077
[ "Intermetallics", "Ferrous alloys", "Alloys", "Aluminides" ]
60,651,424
https://en.wikipedia.org/wiki/NGC%204260
NGC 4260 is a barred spiral galaxy in the constellation Virgo. It was discovered by William Herschel on April 13, 1784. Gallery References External links Barred spiral galaxies Virgo (constellation) 4260 07361 Astronomical objects discovered in 1788 Discoveries by William Herschel 039656
NGC 4260
[ "Astronomy" ]
60
[ "Virgo (constellation)", "Constellations" ]
44,200,907
https://en.wikipedia.org/wiki/Perylenetetracarboxylic%20dianhydride
Perylenetetracarboxylic dianhydride (PTCDA) is an organic dye molecule and an organic semiconductor. It is used as a precursor to a class of molecules known as Rylene dyes, which are useful as pigments and dyes. It is a dark red solid with low solubility in aromatic solvents. The compound has attracted much interest as an organic semiconductor. Structure PTCDA consists of a perylene core to which two anhydride groups have been attached, one at either side. It occurs in two crystalline forms, α and β. Both have the P21/c monoclinic symmetry and a density of ca. 1.7 g/cm3, which is relatively high for organic compounds. Their lattice parameters are: Self-assembly and films Use The main industrial use of PTCDA is as a precursor to Rylene dyes. References Perylene dyes Vat dyes Organic semiconductors
Perylenetetracarboxylic dianhydride
[ "Chemistry" ]
200
[ "Semiconductor materials", "Molecular electronics", "Organic semiconductors" ]
44,201,078
https://en.wikipedia.org/wiki/Cassette%20mutagenesis
Cassette mutagenesis is a type of site-directed mutagenesis that uses a short, double-stranded oligonucleotide sequence (gene cassette) to replace a fragment of target DNA. It uses complementary restriction enzyme digest ends on the target DNA and gene cassette to achieve specificity. It is different from methods that use a single oligonucleotide in that a single gene cassette can contain multiple mutations. Unlike many site-directed mutagenesis methods, cassette mutagenesis also does not involve primer extension by DNA polymerase. Mechanism First, restriction enzymes are used to cleave near the target sequence on DNA contained in a suitable vector. This step removes the target sequence and everything between the restriction sites. Then, the synthetic double-stranded DNA containing the desired mutation, with ends that are complementary to the restriction digest ends, is ligated in place of the sequence removed. Finally, the resultant construct is sequenced to check that the target sequence contains the intended mutation. Usage The use of a synthetic gene cassette allows total control over the type of mutation that can be generated. When studying protein functions, cassette mutagenesis can allow a scientist to change individual amino acids by introducing different codons or omitting codons. By including the Shine-Dalgarno (SD) sequence and the first few codons of a gene, a scientist can easily and dramatically affect the expression level of a protein by altering these regulatory sequences. Limitations To use this method, the sequence of the target region and nearby restriction sites must be known. Since restriction enzymes are used, for this method to be useful, the restriction sites flanking the target DNA have to be unique in the gene/vector system so that the gene cassette can be inserted with specificity. The length of the sequence flanked by the restriction sites is also a limiting factor due to the use of synthetic gene cassettes.
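The mechanism described above (cut at two unique recognition sites, then ligate in a synthetic cassette) can be sketched in a toy model. The sequences below are invented for illustration only, and a real digest leaves sticky ends, which this blunt string model ignores; the EcoRI (GAATTC) and BamHI (GGATCC) recognition sequences, however, are the real ones:

```python
# Toy model of cassette replacement: everything between two unique
# recognition sites is swapped for a synthetic cassette. Real cloning
# works on sticky-ended fragments; here we just splice strings.
def cassette_replace(vector, site1, site2, cassette):
    """Replace the fragment between two unique recognition sites."""
    assert vector.count(site1) == 1 and vector.count(site2) == 1, \
        "sites must be unique for the cassette to insert with specificity"
    start = vector.index(site1) + len(site1)  # keep site1 itself
    end = vector.index(site2)                 # keep site2 itself
    return vector[:start] + cassette + vector[end:]

vector = "AAAGAATTCTTTCCCGGATCCAAA"  # toy plasmid with EcoRI and BamHI sites
mutated = cassette_replace(vector, "GAATTC", "GGATCC", "NNN")
print(mutated)  # target fragment TTTCCC swapped for the 3-bp cassette
```

The uniqueness assertion mirrors the limitation discussed above: if either site occurred twice, the cassette could ligate into the wrong position.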
Advantages Since one gene cassette can contain multiple mutations, less total oligonucleotide synthesis and purification is needed. Compared to mutagenesis methods that require the synthesis of double-stranded DNA using a single-stranded template (1-30% in vitro in M13), the efficiency of the ligation of the oligodeoxynucleotide cassette is close to 100%. The high efficiency of the mutagenesis means mutants can be screened directly by sequencing. Once the vector is set up with flanking restriction sites, all manipulations (i.e., mutagenesis, sequencing, expression) can be performed in the same plasmid. References Genetics techniques Molecular genetics Mutagenesis Protein engineering
Cassette mutagenesis
[ "Chemistry", "Engineering", "Biology" ]
518
[ "Genetics techniques", "Molecular genetics", "Genetic engineering", "Molecular biology" ]
44,201,934
https://en.wikipedia.org/wiki/West%20Africa%20Network%20for%20Peacebuilding
The West Africa Network for Peacebuilding (WANEP) is a leading regional peacebuilding organisation founded in 1998 in response to the civil wars that plagued West Africa in the 1990s. Over the years, WANEP has succeeded in establishing strong national networks in every member state of ECOWAS, with over 550 member organisations across West Africa. WANEP places special focus on collaborative approaches to conflict prevention and peacebuilding, working with diverse actors from civil society, governments, intergovernmental bodies, women's groups and other partners to establish a platform for dialogue, experience sharing and learning, thereby complementing efforts at ensuring sustainable peace and development in West Africa and beyond. History and founders WANEP was founded by two distinguished Africans: Samuel Gbaydee Doe and Emmanuel Habuka Bombande. Sam Gbaydee Doe is a peacebuilding and development professional from Liberia. He began his academic career at the University of Liberia, where he studied economics with the intention of becoming a banker, and later proceeded to Eastern Mennonite University for his MA in conflict transformation and Bradford University in the U.K. for a PhD. It was at EMU that the dream of establishing WANEP began, when his path crossed with John Paul Lederach and Emmanuel Bombande. Emmanuel Habuka Bombande is a Ghanaian peacebuilding practitioner with proven expertise in conflict transformation, peacebuilding and development. After his social science degree from the Kwame Nkrumah University of Science and Technology in Kumasi, Ghana, Bombande proceeded to Eastern Mennonite University, USA, for a master's degree in conflict transformation. It was there that he met Sam Doe, a meeting that led to the birth of WANEP.
The story of WANEP cannot be completely told without mentioning the invaluable contributions of Professor John Paul Lederach, especially during the formation and nurturing stages of the institution, which he described as the "beginning of a new era of peace, healing, reconciliation and hope in West Africa." He was instrumental in bringing together and mentoring the two co-founders of WANEP. Based on Sam Doe's background in working with trauma during the Second Liberian Civil War and in establishing youth dialogue organizations, and Bombande's efforts with Hizkias Assefa at the Nairobi Peace Initiative in resolving the Konkomba-Nanumba conflict in northern Ghana, WANEP focused on both lessening existing conflicts and preventing future outbreaks. By directing efforts toward the grassroots, the organization helped local leaders and citizens resolve conflicts without outside intervention. Leadership Samuel Gbaydee Doe served as the executive director of WANEP from its formation in 1998 until 2004. In 2004, the then director of programs, Emmanuel Bombande, was appointed executive director. In 2015, Dr. Chukwuemeka Eze, formerly the program director, took over from Bombande as executive director, while Levinia Addae-Mensah is the deputy executive director/program director. Programs and Organizations WANEP has a niche in training CSOs, government and other practitioners in peacebuilding and conflict prevention. This niche has worked well in supporting peace and averting conflict in many situations. WANEP implements programs to equip youth, business owners, women, traditional leaders and state agencies with the requisite skills in conflict prevention and supports them in finding local solutions or interventions to their unique conflict situations. WANEP emphasizes ownership and a bottom-up approach to peacebuilding practice, which enables the national networks to carry out interventions that reflect the peculiarities of human security issues in their various countries.
The national networks represent the WANEP brand and work in member states, while the Regional Secretariat engages at the strategic level with ECOWAS, the AU, the UN and other intergovernmental, governmental and non-governmental organizations. Strategic Focus WANEP's key areas of focus include, but are not limited to, the following: Early Warning and Response (WARN) Nonviolence and Peace Education (NAPE) Capacity-Building & Development Engendering Women in Peacebuilding Programs (WIPNET) Dialogue and Mediation Civil Society Coordination and Democratic Governance Gender Research and Publication Key Impact/Achievements Under the WIPNET program, WANEP facilitated the Liberian Women Mass Action for Peace, initiated by Liberian women in 2003. The women's peace activism in Liberia under the WIPNET program led to the eventual ceasefire in the Liberian war. The outcome of the Liberian peace process and the role of WIPNET have been acknowledged all over the world and contributed to the Nobel Peace Prize awarded to Ellen Johnson Sirleaf and Leymah Gbowee (former coordinator of the WIPNET program at WANEP Liberia). The mass action for peace is credited with urging Charles Taylor's government and the Liberians United for Reconciliation and Democracy (LURD) into a ceasefire and leading to the end of the conflict. WANEP also facilitated the creation of the 'Peace Hut' in Liberia to facilitate the reconciliation process and provide an environment for victims to meet and reconcile with offenders. This concept was replicated in Cote d'Ivoire. President Ellen Johnson Sirleaf has cited WIPNET as being one of the key factors supporting women's prominent role in peacebuilding in Liberia. In 2015, WANEP signed an MOU with the African Union Commission to provide support to the Commission's Peace and Security Department in the implementation of the AU Peace and Security Architecture (APSA), including the gender mainstreaming of the architecture.
The work of WANEP in establishing an early warning system has been instrumental in helping to resolve multiple conflicts at early stages. In 2014 this effort was officially recognized by Cote d'Ivoire's permanent representative to the United Nations, Youssoufou Bamba, in his official remarks at the Informal Interactive Dialogue on the Responsibility to Protect ("Fulfilling our collective responsibility: International assistance and the responsibility to protect"), September 8, 2014: "In this regard, allow me to note the remarkable work done by the Ivorian section of the West Africa Network for Peacebuilding, WANEP-CI, which has set up an independent early warning system, in particular through the dissemination of monthly reports of the collection of information relating to human security, and which naturally aims to support actions to prevent conflicts and promote peace in Côte d'Ivoire." WANEP's experience in peacebuilding has been sought for replication in East and Central Africa. The unique structure of WANEP contributed to its choice as the civil society implementing partner for the operationalization of the ECOWAS Early Warning Mechanism (ECOWARN). In September 2013 WANEP was also recognized for its work with ECOWAS to promote peace education in selected secondary schools across the region. These studies will impart new ways of thinking and new ways of viewing conflict, thereby building new structures and cultural practices in society that deepen peaceful coexistence. The ECOWAS/WANEP relationship was the first example of a civil society and intergovernmental partnership not only in West Africa but in Africa generally.
The partnership has been highlighted as a best practice of building alliances with CSOs in conflict prevention and is a reference point. Affiliations The organization works with several regional partners including the Economic Community of West African States, the African Union's Economic, Social, and Cultural Council (ECOSOCC), and the United Nations Economic and Social Council (ECOSOC). WANEP is also a member of the Peace and Security cluster of ECOSOCC representing West Africa, and is the West Africa regional representative of the Global Partnership for the Prevention of Armed Conflict (GPPAC) and the focal point for African CSOs on the AU-EU Joint Strategy (JAES). Strategic Partnerships In 2002, WANEP entered into a historic partnership with the Economic Community of West African States (ECOWAS) in the implementation of a regional early warning and response system referred to as ECOWARN. In 2004, WANEP and ECOWAS signed a Memorandum of Understanding (MOU) which has consistently been renewed. This inter-governmental structure acts as a regional early warning and response system. The WANEP-ECOWAS partnership gives non-governmental organizations a path into Track 1 diplomacy efforts early in conflicts. Inspired by its desire to contribute and commit to the sustenance of peace during elections, WANEP, in collaboration with the United States Agency for International Development (USAID), is implementing a five-year project (2015 to 2019) tagged "Mitigating Election Violence in West Africa through National Early Warning Systems (NEWS)" in target countries of West Africa, namely Burkina Faso, Cote d'Ivoire, Niger, Ghana and Sierra Leone. Through this project, WANEP set up and operationalized an election situation room during elections in those countries. WANEP also partnered with other key stakeholders to run similar rooms in countries such as Benin, The Gambia and Nigeria.
WANEP is working in partnership with the Kofi Annan International Peacekeeping Training Centre (KAIPTC) in Accra, Ghana to run the West Africa Peacebuilding Institute (WAPI). References WANEP Publications and Reports (www.wanep.org/resource page) International organizations based in Africa Conflict (process) Dispute resolution West Africa Peace organizations Organizations established in 1998 Peacebuilding International development in Africa
West Africa Network for Peacebuilding
[ "Biology" ]
1,860
[ "Behavior", "Aggression", "Human behavior", "Conflict (process)" ]
44,203,314
https://en.wikipedia.org/wiki/Conical%20roof
A conical roof or cone roof is a cone-shaped roof that is circular at its base and terminates in a point. Distribution Conical roofs are frequently found on top of towers in medieval town fortifications and castles, where they may either sit directly on the outer wall of the tower (sometimes projecting beyond it to form eaves) or form a superstructure above the fighting platform or terrace of the tower. The latter necessitated the use of spouts to lead the water away over the top of the walls (e.g. as at Andernach's Alter Krahnen). In this case the cone roof was surrounded by a defensive wall, a parapet or a battlement. Such conical roofs were usually constructed using a timber-framed support structure covered with slate; more rarely they were made of masonry. A small circular turret or tourelle with a conical roof is called a pepperpot or pepperbox turret. Present Today, conical roofs are more often used in rural areas either for circular or small square buildings. They are difficult to construct but use locally available materials. Conical roofs are widely used in Armenian and Georgian church architecture. A key feature of the Solomon Islands Parliament Building is its conical roof. See also List of roof shapes References Architecture Roofs
Conical roof
[ "Technology", "Engineering" ]
249
[ "Structural engineering", "Structural system", "Construction", "Roofs", "Architecture" ]
44,203,774
https://en.wikipedia.org/wiki/Piromelatine
Piromelatine (Neu-P11) is a multimodal sleep drug under development by Neurim Pharmaceuticals. It is an agonist at melatonin MT1/MT2 and serotonin 5-HT1A/5-HT1D receptors. Neurim is conducting a phase II randomized, placebo-controlled trial of cognitive and sleep effects in Alzheimer's disease. Results of a phase II trial on insomnia in 120 adults were announced in 2013, finding that piromelatine 20/50 mg improved sleep over 4 weeks versus placebo. Phase 1A/1B studies in 2011 showed safe, dose-dependent improvement in sleep. Pre-clinical studies showed antinociceptive, antihypertensive and cognitive benefits in rat disease models of pain, hypertension, and Alzheimer's disease. Antidepressant and anti-anxiety effects were also demonstrated in animal models. See also List of investigational sleep drugs References 5-HT1A agonists 5-HT1D agonists Melatonin receptor agonists Hypnotics Tryptamines 4-Pyrones
Piromelatine
[ "Chemistry", "Biology" ]
234
[ "Hypnotics", "Behavior", "Drug discovery", "Melatonin receptor agonists", "Sleep" ]
44,205,022
https://en.wikipedia.org/wiki/Lattice%20light-sheet%20microscopy
Lattice light-sheet microscopy is a modified version of light sheet fluorescence microscopy that increases image acquisition speed while decreasing damage to cells caused by phototoxicity. This is achieved by using a structured light sheet to excite fluorescence in successive planes of a specimen, generating a time series of 3D images which can provide information about dynamic biological processes. It was developed in the early 2010s by a team led by Eric Betzig. According to an interview conducted by The Washington Post, Betzig believes that this development will have a greater impact than the work that earned him the 2014 Nobel Prize in Chemistry for "the development of super-resolution fluorescence microscopy". Setup of Lattice Light-sheet Fluorescence Microscopy Lattice light sheet microscopy is a novel combination of techniques from Light sheet fluorescence microscopy, Bessel beam microscopy, and Super-resolution microscopy (specifically structured illumination microscopy, SIM). In lattice light sheet microscopy, very similarly to light sheet microscopy, the illumination of the sample occurs perpendicular to the image detection. Initially the light sheet is formed by stretching the linearly polarized circular input beam with a pair of cylindrical lenses along the x axis and then compressing it with an additional pair of lenses along the z axis. This modification creates a thin sheet of light that is then projected onto a binary ferroelectric spatial light modulator (SLM). The SLM is a device that spatially varies the waveform of a beam of light. The light that is reflected back from the SLM is used to eliminate unwanted diffraction. Diffraction is eliminated by the transform lens that creates a Fraunhofer diffraction pattern from the reflected light at an opaque mask containing a transparent annulus. Optical lattices are two or three dimensional interference patterns, which here are produced by the transparent annular ring. 
The mask is conjugate to the x and z galvanometers. This quality of the microscope is important for the dithered mode of operation, where the light sheet must be oscillated within the x axis. The lattice light-sheet microscope has two modes of operation: In the dithered mode, the light sheet is rapidly scanned along the x axis and only one image is recorded per Z plane, at normal diffraction-limited resolutions. The second mode of operation is the structured illumination microscopy (SIM) mode. SIM is a technique where a grid pattern of excitation light is superimposed on the sample and rotated in steps between the capture of each image. These images are then processed via an algorithm to produce a reconstructed image beyond the diffraction limit built into our optical instruments. Theory Lattice light sheet microscopy can be viewed as an improvement of Bessel beam light sheet microscopes in terms of axial resolution (also termed resolution in z). In Bessel beam light sheet microscopes, a non-diffracting Bessel beam is first created and then dithered in the x direction to produce a sheet. However, the side lobes of a Bessel beam carry as much energy as the central spot, resulting in illumination outside the depth of field of the observation objective. Lattice light sheet microscopy aims at reducing the intensity of the outer lobes of the Bessel beams by destructive interference. To do so, a two-dimensional lattice of regularly spaced Bessel beams is created. Then, destructive interference can be triggered by carefully tuning the spacing between the beams (that is, the period of the lattice). Practically, the lattice of interfering Bessel beams is engineered by a spatial light modulator (SLM), a liquid-crystal device whose individual pixels can be switched on and off to display a binary pattern. Due to the matrix nature of the SLM, the generated pattern contains many unwanted frequencies.
Thus, these are filtered out by means of an annulus placed in a plane conjugated with the back focal plane of the objective (Fourier domain). Finally, to obtain a uniform intensity at the sample rather than a lattice, the sheet is dithered using a galvanometer oscillating in the x direction. Improvements On Other Methods Lattice light-sheet microscopy combines high resolution and clarity at high image acquisition speed, without damaging samples through photobleaching. Photobleaching is a major and highly common problem in fluorescence microscopy wherein fluorescent tags lose their ability to emit photons upon repeated excitation. Unlike in common fluorescence microscopes, samples in a lattice light-sheet microscope experience photobleaching at a drastically reduced rate compared to conventional techniques (in conventional techniques, this results in an image signal that gets weaker over the course of multiple excitations). This allows for longer exposures without loss of signal, which in turn allows video to be captured over longer periods of time. The lattice method also has the ability to resolve 200 to 1000 planes per second, an extremely fast imaging rate that allows continuous video capture. This capture rate is one order of magnitude faster than Bessel beam excitation, and two orders of magnitude faster than spinning disk confocal microscopy. These two advantages combine to allow researchers to take very detailed movies over long periods of time. Limitations Lattice light sheet microscopy is limited to transparent and thin samples to achieve good image quality. Image quality degrades with imaging depth. This phenomenon occurs due to sample-induced aberrations, and it has been proposed that imaging samples beyond 20 to 100 μm will require adaptive optics.
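The lattice construction described in the Theory section can be sketched numerically: a row of Bessel-beam field profiles is summed coherently, the intensity is taken, and the dithering step is modeled as an average along x. This is a rough illustration under stated assumptions; the wavenumber, beam count, and lattice period below are illustrative values, not the instrument's actual parameters.

```python
# Numerical sketch of a dithered lattice of interfering Bessel beams.
# Parameter values are illustrative, not those of the actual instrument.
import numpy as np
from scipy.special import j0  # Bessel function of the first kind, order 0

k_r = 2 * np.pi        # radial wavenumber (arbitrary units)
x = np.linspace(-6, 6, 601)
z = np.linspace(-6, 6, 601)
X, Z = np.meshgrid(x, z)

def lattice_intensity(period, n_beams=7):
    """Coherent sum of Bessel-beam field profiles spaced `period` apart in x."""
    centers = (np.arange(n_beams) - n_beams // 2) * period
    field = sum(j0(k_r * np.hypot(X - c, Z)) for c in centers)
    return np.abs(field) ** 2

# Dithered sheet: average the lattice intensity along the x (dither) axis,
# leaving the axial (z) confinement profile of the resulting light sheet.
sheet = lattice_intensity(period=2.0).mean(axis=1)
print("brightest z index:", int(np.argmax(sheet)))
```

Tuning `period` changes how the side lobes of neighboring beams interfere, which is the knob the text describes for suppressing out-of-focus excitation.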
Resolution SIM: 150 nm by 230 nm xy resolution, 280 nm z resolution Dithered: 230 nm by 230 nm xy resolution, ~370 nm z resolution Contrast Because the excitation band is ~1.0 micron in width, and the focal depth of the detection objective is ~1.1 microns deep, the majority of illuminated molecules are in the focal plane. Depth into sample Imaging beyond 20–100 microns in depth is theorized to be possible through combining lattice light-sheet technology with adaptive optics. Applications Lattice light sheet microscopy is useful for in-vivo cellular localization and super resolution. Lattice light sheets' confined excitation band keeps nearly all illuminated cells in focus. The reduction of large, out of focus spots allow precise tracking of individual cells at a high molecular density, a capability unattainable through previous microscopy methods. Consequently, lattice light sheet is being used for a number of dynamic cellular interactions. The decrease in phototoxicity has created opportunities to study the subcellular processes of embryos without damaging their living tissues. Studies have examined and quantified the extent of the highly variable growth patterns of microtubules throughout mitosis. Dictyostelium discoideum (slime mold) cells were imaged during their rapid chemotactic movement toward one another and the initial contact. The aggregation of T cell and target cells was observed, along with the subsequent formation of the immunological synapse. The advancements of the lattice sheet method revealed three-dimensional movement patterns of actin as well as lamellipodial protrusion in these interactions. The increase in imaging speed also allowed the observation of fast moving neutrophils through the extracellular matrix in another study. 
The technique, along with chemical and genetic manipulation techniques, was used at Harvard Medical School, in cooperation with other institutions, to capture for the first time a live image of a virus (one engineered to carry SARS-CoV-2 spike proteins) infecting a cell by injecting its genetic material into the cell's endosome. Future work The technique is being actively developed at the Janelia Research Campus of the Howard Hughes Medical Institute. Eric Betzig has stated that his goal is to combine his work on microscopy to develop a "high-speed, high-resolution, low-impact tool that can look deep inside biological systems." Penetration deeper than 20–100 μm may be achieved by combining lattice light-sheet microscopy with adaptive optics. References Fluorescence techniques Cell imaging Laboratory equipment Microscopy Optical microscopy
Lattice light-sheet microscopy
[ "Chemistry", "Biology" ]
1,631
[ "Optical microscopy", "Cell imaging", "Fluorescence techniques", "Microscopy" ]
44,206,598
https://en.wikipedia.org/wiki/S2242
S2242 is an experimental antiviral agent that is an inhibitor of herpes virus replication. References Antiviral drugs Purines Diols
S2242
[ "Biology" ]
32
[ "Antiviral drugs", "Biocides" ]
44,210,335
https://en.wikipedia.org/wiki/Asset%20health%20management
Asset health management (AHM) is the field of study concerned with managing the "health" of an asset or assets. This often includes methods to establish asset health and to decide the appropriate actions for managing it. It also covers health at end of life, to ensure the asset's full life is used efficiently. Asset health management includes many different methods, whose intended scope and approach can overlap. The field has become difficult to discuss because the same acronym is used to describe multiple different approaches, and the same approach appears under different names. Asset health management can be considered a subset of asset management. Management of multiple assets There is often also additional work to manage the health of multiple assets within the same framework, sometimes referred to as fleet health management and falling within the study of fleet management. Although it is common to need to manage the health of multiple assets, they are not always vehicles and are frequently of mixed type. When resources are constrained, deciding how best to manage the health of assets is a challenging management problem. It is rare that assets can be managed in an unconstrained way, as resources are always limited by the need to use them efficiently. Asset health management relevant standards A collection of some standards which are often used to manage the health of assets; this is not intended to be an exhaustive list. Open O&M MSG-3 Asset health management examples A short list is provided to illustrate the many methods that constitute some form of asset health management method or philosophy. IVHM Built-in_self-test Built-in_test_equipment HUMS References Asset management Industrial engineering
Asset health management
[ "Engineering" ]
354
[ "Maintenance", "Mechanical engineering", "Industrial engineering" ]
49,044,066
https://en.wikipedia.org/wiki/Phlebia%20radiata
Phlebia radiata, commonly known as the wrinkled crust, is a common species of crust fungus in the family Meruliaceae. It is widespread in the Northern Hemisphere. It grows as a wrinkled, orange to pinkish waxy crust on the decaying wood of coniferous and deciduous trees, in which it causes a white rot. The fungus was first described scientifically in 1821 by Elias Magnus Fries. Description The fruitbody of Phlebia radiata is resupinate—flattened against its substrate like a crust. It is wrinkled, orange to pinkish in color, and has a waxy texture. It is circular to irregular in shape, reaching a diameter up to , although neighbouring fruitbodies may be fused together to form larger complexes up to in diameter. The soft texture of the flesh hardens when the fruitbody becomes old. The fungus is inedible. In mass, the spores are white. Microscopic examination reveals additional spore details: they are smooth, allantoid (sausage-shaped) to elliptical, and inamyloid, measuring 3.5–7 by 1–3 μm. Similar species include Botryobasidium vagum, Meruliporia incrassata, Piloderma bicolor, and Serpula lacrymans. Habitat and distribution Phlebia radiata is a saprophytic species, and causes a white rot in the wood it colonizes, fallen logs and branches of both coniferous and hardwood trees. References Fungi described in 1821 Fungi of Asia Fungi of Europe Fungi of North America Inedible fungi Meruliaceae Taxa named by Elias Magnus Fries Fungus species
Phlebia radiata
[ "Biology" ]
336
[ "Fungi", "Fungus species" ]
49,044,208
https://en.wikipedia.org/wiki/Tubaria%20furfuracea
Tubaria furfuracea, commonly known as the scurfy twiglet or totally tedious tubaria, is a common species of agaric fungus in the family Tubariaceae. It was first described by Christiaan Hendrik Persoon in 1801 as a species of Agaricus. French mycologist Claude-Casimir Gillet transferred it to the genus Tubaria in 1876. Description The mushroom cap is 1–4 cm wide, orange-brown, convex to flat and depressed, with small marginal patches of veil which disappear with age or rain; its odor is mild. The gills are brown and adnate to slightly decurrent. The stalk is 1–5 cm tall and 2–4 mm wide. The spores are pale reddish-brown, elliptical, and smooth. This species is considered inedible. Similar species Similar species include T. confragosa, Galerina marginata, and Psilocybe cyanescens. References External links Tubariaceae Fungi described in 1801 Fungi of Europe Fungi of North America Inedible fungi Taxa named by Christiaan Hendrik Persoon Fungus species
Tubaria furfuracea
[ "Biology" ]
232
[ "Fungi", "Fungus species" ]
49,045,617
https://en.wikipedia.org/wiki/Global%20Environmental%20Change
Global Environmental Change is a scientific journal publishing peer-reviewed research on environmental change that was established in 1990. It is published by Elsevier. The editors-in-chief are Dabo Guan and Harini Nagendra. The journal had an impact factor of 10.466, according to Journal Citation Reports, ranking it 4th out of 265 journals in the category environmental sciences. References External links Global Environmental Change, Volume 72, January 2022 Elsevier academic journals Academic journals established in 1990 English-language journals Environmental science journals
Global Environmental Change
[ "Environmental_science" ]
108
[ "Environmental science journals", "Environmental science journal stubs" ]
49,045,837
https://en.wikipedia.org/wiki/Spatial%20ability
Spatial ability or visuo-spatial ability is the capacity to understand, reason, and remember the visual and spatial relations among objects or space. Visual-spatial abilities are used in everyday tasks such as navigation, understanding or fixing equipment, understanding or estimating distance and measurement, and performing on a job. Spatial abilities are also important for success in fields such as sports, technical aptitude, mathematics, natural sciences, engineering, economic forecasting, meteorology, chemistry and physics. Not only do spatial abilities involve understanding the outside world, but they also involve processing outside information and reasoning with it through representation in the mind. Definition and types Spatial ability is the capacity to understand, reason and remember the visual and spatial relations among objects or space. There are four common types of spatial abilities: spatial or visuo-spatial perception, spatial visualization, mental folding and mental rotation. Each of these abilities has unique properties and importance to many types of tasks, whether in certain jobs or everyday life. For example, spatial perception is defined as the ability to perceive spatial relationships with respect to the orientation of one's body despite distracting information. Mental rotation, on the other hand, is the mental ability to manipulate and rotate 2D or 3D objects in space quickly and accurately. Lastly, spatial visualization is characterized as complicated multi-step manipulations of spatially presented information. These abilities are mediated and supported by an additional spatial cognitive factor known as spatial working memory. Spatial working memory is the ability to temporarily store a certain amount of visual-spatial memories under attentional control in order to complete a task. This cognitive ability mediates individual differences in the capacity for higher-level spatial abilities such as mental rotation.
Spatial perception Spatial perception is defined as the ability to perceive spatial relationships with respect to the orientation of one's body despite distracting information. It consists of being able to perceive and visually understand outside spatial information such as features, properties, measurement, shapes, position and motion. For example, when navigating through a dense forest, one is using spatial perception and awareness. Another example is that when trying to understand the relations and mechanics inside a car, one relies on spatial perception to understand its visual framework. Tests that measure spatial perception include the rod and frame test, where subjects must place a rod vertically while viewing a frame tilted at an angle of 22 degrees, or the water-level task, where subjects have to draw or identify a horizontal line in a tilted bottle. Spatial perception is also very relevant in sports. For example, a study found that cricket players who were faster at picking up information from briefly presented visual displays were significantly better batsmen in an actual game. A 2015 study published in the Journal of Vision found that soccer players had higher perceptual ability for body kinematics, such as processing multitasking crowd scenes involving pedestrians crossing a street or complex dynamic visual scenes. Another study, published in the Journal of Human Kinetics, on fencing athletes found that achievement level was highly correlated with spatial perceptual skills such as visual discrimination, visual-spatial relationships, visual sequential memory, narrow attentional focus and visual information processing. A review published in the journal Neuropsychologia found that spatial perception involves attributing meaning to an object or space, so that their sensory processing is actually part of semantic processing of the incoming visual information.
The review also found that spatial perception involves the human visual system in the brain and the parietal lobule, which is responsible for visuomotor processing and visually goal-directed action. Studies have also found that individuals who played first-person shooter games had better spatial perceptual skills, such as faster and more accurate performance in a peripheral detection and identification task while simultaneously performing a central search. Researchers suggested that, in addition to enhancing the ability to divide attention, playing action games significantly enhances perceptual skills like top-down guidance of attention to possible target locations. Mental rotation Mental rotation is the ability to mentally represent and rotate 2D and 3D objects in space quickly and accurately, while the object's features remain unchanged. Mental representations of physical objects can help with problem solving and understanding. For example, Hegarty (2004) showed that people manipulate mental representations for reasoning about mechanical problems, such as how gears or pulleys work. Similarly, Schwartz and Black (1999) found that doing mental simulations such as pouring water improves people's ability to answer questions about the amount of tilt required for containers of different heights and widths. In the field of sports psychology, coaches for a variety of sports (e.g. basketball, gymnastics, soccer or golf) have encouraged players to use mental imagery and manipulation as one technique for performance in their game (Jones & Stuth, 1997). Recent research (e.g., Cherney, 2008) has also demonstrated evidence that playing video games with consistent practice can improve mental rotation skills, for example improvements in women's scores after practice with a game that involved a race within a 3-D environment. The same effects have been seen with action video games such as Unreal Tournament, as well as with the popular mainstream game Tetris.
Jigsaw puzzles and Rubik's Cube are also activities that involve a high level of mental rotation and can be practiced to improve spatial abilities over time. Mental rotation is also unique and distinct from the other spatial abilities because it involves brain areas associated with motor simulation. Spatial visualization Spatial visualization is characterized as complicated, multi-step manipulation of spatially presented information. It involves visual imagery, the ability to mentally represent the visual appearance of an object, and spatial imagery, which consists of mentally representing spatial relations between the parts or locations of objects or their movements. Spatial visualization is especially important in the domains of science and technology. For example, an astronomer must mentally visualize the structure of a solar system and the motions of the objects within it. An engineer mentally visualizes the interactions of the parts of a machine or building that they are assigned to design or work with. Chemists must be able to understand formulas, which can be viewed as abstract models of molecules with most of the spatial information deleted; spatial skills are important in restoring that information when more detailed mental models of the molecules are needed. Spatial visualization also involves imagining and working with visual details of measurement, shapes, motion, features and properties through mental imagery, and using these spatial relations to arrive at an understanding of a problem. Whereas spatial perception involves understanding the external world via the senses, spatial visualization is understanding achieved internally, through mental imagery in one's mind. Another critical spatial visualization ability is mental animation: mentally visualizing the motion and movement of components within a system. 
It is an ability highly crucial in mechanical reasoning and understanding; for example, mental animation in mechanical tasks can involve mentally deconstructing a pulley system into smaller units and animating them according to the corresponding sequence or laws of the mechanical system. In short, mental animation is mentally imagining how mechanical objects work by analyzing the motion of their smaller parts. Mental folding is a complex spatial visualization skill that involves folding 2D patterns or material into 3D objects and representations. Compared to other spatial abilities, mental folding has received relatively little research and study. In comparison to mental rotation, mental folding is a non-rigid spatial transformation, which means that features of the manipulated object end up changing, unlike in mental rotation. In rigid manipulations the object itself is not changed, only its spatial position or orientation, whereas in non-rigid transformations like mental folding the object and its shape are changed. Mental folding tasks usually require a series of mental rotations to sequentially fold the object into a new one. A classic mental folding test is the paper folding task, which is similar to origami; origami likewise requires mental folding, assessing the ability to fold a 2D sheet of paper enough times to create a 3D figure. Visual penetrative ability is the least common spatial visualization task; it involves the ability to imagine what is inside an object based on its outside features. Spatial working memory Spatial working memory is the ability to temporarily store visual-spatial memories under attentional control in order to complete a task. This cognitive ability mediates individual differences in the capacity for higher-level spatial abilities, such as mental rotation. Spatial working memory involves storing large amounts of short-term spatial memories in relation to the visuo-spatial sketchpad. 
It is used in the temporary storage and manipulation of visual-spatial information, such as memorizing the shapes, colours, locations or motion of objects in space. It is also involved in tasks that consist of planning spatial movements, like planning one's route through a complex building. The visuospatial sketchpad can be split into separate visual, spatial and possibly kinaesthetic (movement) components. Its neurobiological function is also correlated with the right hemisphere of the brain. Sex differences in humans In an extensive review of research into sex differences, Maccoby and Jacklin reported that males generally perform better on spatial ability tasks than do females, in congruence with other research findings. They also found that practice leads to rapid enhancement of spatial ability in both sexes. Vocational applications Researchers have found that spatial ability plays an important role in earning advanced educational credentials in science, technology, engineering, and mathematics (STEM). Studies indicate that the probability of earning an advanced degree in STEM increases with the level of one's spatial ability. For example, a 2009 study published in the Journal of Educational Psychology found that 45% of those with STEM PhDs had been in the top levels of spatial ability within a group of 400,000 participants who were followed for 11 years from the 12th grade, while fewer than 10% of those with STEM PhDs had been below the top quarter in spatial ability during adolescence. The researchers concluded that spatial ability is an important factor in achieving advanced educational success in STEM. Spatial visualization is especially important in science and technology. For example, an astronomer must visually imagine the structure of a solar system and the paths of the bodies within it. 
An engineer must visually imagine the motions of the parts of a machine or building that they are assigned to work with. Chemists must be able to understand formulas, which are essentially abstract models meant to represent the spatial dynamics of molecules, and thus spatial skills are important in visualizing the molecular models implied by the formulas. Spatial manipulation ability is also important in the field of structural geology, for visually imagining how rocks change through time, such as the migration of a magma body through the crust or the progressive folding of a stratigraphic succession. Another spatial visualization skill, known as visual penetrative ability, is important in geology because it requires geologists to visualize what is inside a solid object based on past knowledge. Current literature also indicates that mathematics involves visuo-spatial processing. Studies have found that students gifted in mathematics, for instance, perform better in spatial visualization than non-gifted students. A 2008 review published in the journal Neuroscience & Biobehavioral Reviews found evidence that visuo-spatial processing is intuitively involved in many aspects of processing numbers and calculating in mathematics. For example, the meaning of a digit in a multi-digit number is coded using spatial information, given its relation to its position within the number. Another study found that numerical estimation might rely on integrating different visual-spatial cues (diameter, size, location, measurement) to infer an answer. A study published in 2014 also found evidence that mathematical calculation relies on the integration of various spatial processes. Another study, published in 2015 in the journal Frontiers in Psychology, found that numerical processing and arithmetic performance may rely on visual perceptual ability. A 2007 study published in the journal Cognitive Science also found that spatial visualization ability is crucial for solving kinematic problems in physics. 
Overall, current literature indicates that spatial abilities, specifically mental rotation, are crucial for achieving success in various fields of chemistry, engineering and physics. See also Aphantasia Mechanical aptitude Motor imagery Raven's progressive matrices Spatial cognition Spatial contextual awareness Spatial memory References External links Overview-Visual Spatial skills Recognizing Spatial Intelligence Cognitive science Cognitive tests Visual thinking Vision Spatial cognition
Spatial ability
[ "Physics" ]
2,468
[ "Spacetime", "Space", "Spatial cognition" ]
49,045,892
https://en.wikipedia.org/wiki/Zener%20ratio
The Zener ratio is a dimensionless number that is used to quantify the anisotropy of cubic crystals. It is sometimes referred to as the anisotropy ratio and is named after Clarence Zener. Conceptually, it quantifies how far a material is from being isotropic (a value of 1 means an isotropic material). Its mathematical definition is a_r = 2C44 / (C11 − C12), where C11, C12 and C44 refer to elastic constants in Voigt notation. Cubic materials Cubic materials are special orthotropic materials that are invariant with respect to 90° rotations about the principal axes, i.e., the material is the same along its principal axes. Due to these additional symmetries, the stiffness tensor can be written in terms of just three independent material properties, C11, C12 and C44. The inverse of this matrix is commonly written in terms of E, the Young's modulus, G, the shear modulus, and ν, the Poisson's ratio. The ratio can therefore be thought of as the relation between the shear modulus C44 of the cubic material and its isotropic equivalent, (C11 − C12)/2. Universal Elastic Anisotropy Index The Zener ratio is only applicable to cubic crystals. To overcome this limitation, a 'Universal Elastic Anisotropy Index (AU)' was formulated from variational principles of elasticity and tensor algebra. The AU is now used to quantify the anisotropy of elastic crystals of all classes. Tensorial Anisotropy Index The Tensorial Anisotropy Index AT extends the Zener ratio to fully anisotropic materials and overcomes the limitation of the AU, which is designed for materials exhibiting the internal symmetries of elastic crystals, something not always observed in multi-component composites. It takes into consideration all 21 coefficients of the fully anisotropic stiffness tensor and covers the directional differences among the stiffness tensor groups. 
It is composed of two major parts, one referring to components that exist in a cubic tensor and the other to those in an anisotropic tensor, so that their sum gives the total index. The first component includes the modified Zener ratio and additionally accounts for directional differences in the material, which exist, for instance, in orthotropic materials. The second component of the index covers the influence of stiffness coefficients that are nonzero only for non-cubic materials, and remains zero otherwise. Each component is built from the coefficient of variation of a stiffness group, accounting for directional differences of material stiffness. In cubic materials each stiffness component in groups 1–3 has an equal value, and thus the expression reduces directly to the Zener ratio for cubic materials. The second component of the index is non-zero for complex materials or composites with few or no symmetries in their internal structure; in such cases the remaining stiffness coefficients, joined in three groups, are not null. See also Anisotropy Orthotropic material Linear elasticity References Crystallography Orientation (geometry) Elasticity (physics)
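The Zener ratio is simple to evaluate numerically. The sketch below uses approximate textbook elastic constants for copper, which are an assumption for illustration rather than values given in this article:

```python
# Zener anisotropy ratio for a cubic crystal, from the three independent
# stiffness constants C11, C12, C44 (Voigt notation): a_r = 2*C44 / (C11 - C12).
def zener_ratio(c11, c12, c44):
    return 2.0 * c44 / (c11 - c12)

# Approximate room-temperature elastic constants for copper, in GPa
# (illustrative textbook values, not taken from this article).
c11, c12, c44 = 168.4, 121.4, 75.4
print(round(zener_ratio(c11, c12, c44), 2))  # → 3.21: copper is strongly anisotropic

# An isotropic material satisfies C44 == (C11 - C12) / 2, so the ratio is 1.
print(zener_ratio(100.0, 50.0, 25.0))  # → 1.0
```

A value far from 1, as for copper, signals that the single-crystal shear response depends strongly on direction.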
Zener ratio
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
594
[ "Physical phenomena", "Elasticity (physics)", "Deformation (mechanics)", "Materials science", "Crystallography", "Topology", "Space", "Condensed matter physics", "Geometry", "Spacetime", "Orientation (geometry)", "Physical properties" ]
49,046,201
https://en.wikipedia.org/wiki/Bovista%20pila
Bovista pila, commonly known as the tumbling puffball, is a species of puffball fungus in the family Agaricaceae. A temperate species, it is widely distributed in North America, where it grows on the ground on road sides, in pastures, grassy areas, and open woods. There are few well-documented occurrences of B. pila outside North America. B. pila closely resembles the European B. nigrescens, from which it can be reliably distinguished only by microscopic characteristics. The egg-shaped to spherical puffball of B. pila measures up to in diameter. Its white outer skin flakes off in age to reveal a shiny, bronze-colored inner skin that encloses a spore sac. The spores are more or less spherical, with short tube-like extensions. The puffballs are initially attached to the ground by a small cord that readily breaks off, leaving the mature puffball to be blown about. Young puffballs are edible while their internal tissue is still white and firm. B. pila puffballs have been used by the Chippewa people of North America as a charm, and as an ethnoveterinary medicine for livestock farming in western Canada. Taxonomy The species was described as new to science in 1873 by Miles Joseph Berkeley and Moses Ashley Curtis, from specimens collected in Wisconsin. In their short description, they emphasize the short pedicels (tube-like extensions) on the spores, and indicate that these pedicels—initially about as long as the spore is wide—soon break off. According to the nomenclatural authority MycoBank, taxonomic synonyms (i.e., having different type specimens) include Pier Andrea Saccardo's 1882 Bovista tabacina, Job Bicknell Ellis and Benjamin Matlack Everhart's 1885 Mycenastrum oregonense, and Andrew Price Morgan's 1892 Bovista montana. William Chambers Coker and John Nathaniel Couch called B. pila "the American representative of B. nigrescens in Europe", referring to their close resemblance. 
Bovista pila is commonly known as the tumbling puffball, referring to the propensity of detached puffballs to be blown about by the wind. The specific epithet pila is Latin for "ball". Description B. pila has an egg-shaped to roughly spherical fruit body measuring up to in diameter. The thin (0.25 millimeter) outer tissue layer (exoperidium) is white to slightly pink. Its surface texture, initially appearing as if covered with minute flakes of bran (furfuraceous), becomes marked with irregular, crooked lines (rivulose). The exoperidium flakes off in maturity to reveal a thin, inner peridium (endoperidium). The color of this shiny inner skin, splotched with darker areas, resembles the metallic colors of bronze and copper. Bovista pila puffballs are attached to the ground by a small cord (a rhizomorph) that typically breaks off when the puffball is mature. The interior flesh, or gleba, comprises spores and surrounding capillitial tissue. Initially white and firm with tiny, irregularly shaped chambers (visible with a magnifying glass), the gleba later becomes greenish and then brown and powdery as the spores mature. In age, the upper surface of the puffball cracks and tears open. The resilient texture of the inner peridium enables the puffball to maintain its ball-like shape after it has detached from the ground. As the old puffballs get blown around, spores get shaken out of the tears. The spores of Bovista pila are spherical, smooth (when viewed with a light microscope), and measure 3.5–4.5 μm. They have thick walls and very short pedicels. Basidia (spore-bearing cells) are club-shaped, measuring 8–10.5 by 14–18 μm. They are usually four-spored (rarely, some are three-spored), with unequal length sterigmata between 4 and 7.4 μm. The capillitia (sterile fibers interspersed among the spores) tend to form loose balls about 2 mm in diameter. The main, trunk-like branches of the capillitia are up to 15 μm in diameter, with walls that are typically 2–3 μm thick. 
Similar species Characteristics typically used to identify Bovista pila in the field include its relatively small size, the metallic lustre of the endoperidium, and the presence of rhizomorphs. B. plumbea is similar in appearance, but can be distinguished by its typically smaller fruit body and the blue-gray color of its inner coat. Unlike B. pila, B. plumbea is attached to the ground by a mass of mycelial fibers known as a sterile base. Microscopically, B. plumbea has larger spores (5–7 by 4.5–6.0 μm), with long pedicels (9–14 μm). Another lookalike is the European B. nigrescens, which can most reliably be distinguished from B. pila by its microscopic characteristics. The spores of B. nigrescens are oval rather than spherical, rougher than those of B. pila, and have a hyaline (translucent) pedicel about equal in length to the spore diameter (5 μm). The puffball Disciseda pila was named for its external resemblance to B. pila. Found in Texas and Argentina, it has much larger, warted spores that measure 7.9–9.4 μm. Habitat and distribution Bovista pila is found in corrals, stables, roadsides, pastures and open woods. The puffballs fruit singly, scattered, or in groups on the ground. It is also known to grow in lawns and parks. The puffball spore cases are persistent and may overwinter. Fruiting occurs throughout the mushroom season. Bovista pila is widely distributed in North America (including Hawaii). There are few well-documented occurrences of B. pila outside North America. Hanns Kreisel recorded it from Russia, in what is now known as the Sakha Republic. The puffball has been tentatively identified from the Galápagos Islands, and has been collected from Pernambuco and São Paulo, Brazil. The South American material, however, has grayish-yellow coloration in the gleba, which may be indicative of not yet fully matured specimens. 
This renders identification of this material tentative, as unripe material may have different microscopic characteristics from mature material. Although the puffball has been reported from both the European part of Turkey as well as Anatolia, reports without supporting microscopic or macroscopic information are viewed with skepticism. Uses Edible when the interior gleba is still firm and white, Bovista pila puffballs have a mild taste and odor. The puffball was used by the Chippewa people of North America as a charm, and medicinally as a hemostat. In British Columbia, Canada, it is used by livestock farmers who are not allowed to use conventional drugs under certified organic programs. The spore mass of the puffball is applied to bleeding hoof trimming 'nicks', and then wrapped with breathable first-aid tape. It is also similarly used on bleeding areas resulting from disbudding, and wounds resulting from sternal abscesses. References External links Agaricaceae Edible fungi Fungi described in 1873 Fungi of North America Fungi of Brazil Taxa named by Miles Joseph Berkeley Fungi of Oceania Fungi of the Galápagos Islands Fungi without expected TNC conservation status Fungus species
Bovista pila
[ "Biology" ]
1,620
[ "Fungi", "Fungus species" ]
49,046,722
https://en.wikipedia.org/wiki/Bovista%20colorata
Bovista colorata is a species of puffball fungus in the family Agaricaceae. It is found in eastern North America and northwestern South America. The puffball was first described as Lycoperdon coloratum by Charles Horton Peck in 1878, from collections made in Sand Lake, New York. Hanns Kreisel transferred it to the genus Bovista in 1964. The golden to orange-yellow fruitbodies are in diameter. Its spores are spherical, measuring 3.5–5 μm in diameter. References External links Fungi described in 1878 Fungi of North America Fungi of South America Taxa named by Charles Horton Peck Fungus species
Bovista colorata
[ "Biology" ]
135
[ "Fungi", "Fungus species" ]
49,047,722
https://en.wikipedia.org/wiki/Rhizopogon%20parvisporus
Rhizopogon parvisporus is a small, truffle-like fungus in the family Rhizopogonaceae. Found in Canada, it was described as new to science in 1962 by Constance Bowerman, from collections made in Newfoundland. Description The roughly spherical to irregularly shaped fruitbodies of the fungus measure in diameter when fresh, although they tend to shrink when dry. They have a hard, wrinkled surface that is yellowish brown or lighter in color. The peridium is 300–570 μm thick. The spores have the shape of narrow ellipsoids, and rarely exceed 5 μm in length. They often contain two oil droplets, but occasionally have three or four. Habitat and distribution The fungus is only known from Fort Smith (Northwest Territories), and Newfoundland. In the former location, it was found along a riverbank in spruce woods, while in the latter it grew on mossy slopes in thickets of alder and fir. References External links Fungi of Canada Rhizopogonaceae Fungi described in 1992 Fungi without expected TNC conservation status Fungus species
Rhizopogon parvisporus
[ "Biology" ]
224
[ "Fungi", "Fungus species" ]
49,048,168
https://en.wikipedia.org/wiki/Haitinger%20Prize
The Haitinger Prize of the Austrian Academy of Sciences was founded in 1904 by the chemist and factory director, Ludwig Camillo Haitinger (1860–1945), who created the award in honor of his father, Karl Ludwig Haitinger. From 1905 to 1943 it was awarded every year, for "studies in chemistry and physics that proved to be of great practical use for industrial applications". The prize was awarded for the last time in the year 1954. Winners 1905 Friedrich Hasenöhrl for electromagnetic theory 1906 F. Ratz Rudolf Scheuble for candles which burn in color 1907 Robert Kremann for research on esters 1908 Marian Smoluchowski for theoretical investigation of Brownian motion 1909 F. Haiser F. Wenzel 1910 Anton Skrabal for research on kinetic reactions of potassium permanganate 1911 Gustav Jaumann for authoring the corotational rates known as “Jaumann derivatives” 1912 Albert Defant for atmospheric physics and weather research Wilhelm Schmidt for research on microclimatology 1913 Franz Faltis for research on opiates, particularly morphine Otto Hönigschmid for measurement of atomic mass 1914 Karl Przibram for studies on the electrical charge of fog particles 1915 Heinrich Mache for absolute measurement method of radioactivity 1916 Emil Abel for catalysis research 1917 Felix Ehrenhaft for photophoresis and effects on the interaction of light with particles 1918 Wolfgang Joseph Pauli (the father of the Nobel laureate Wolfgang Ernst Pauli) for his research on the chemistry of colloids. 
1919 Max Bamberger Julius Zellner 1920 Erwin Schrödinger for fundamentals of color theory Hans Thirring for studies on general relativity 1921 Alfons Klemenc for studies on electrochemistry 1922 Alois Zinke for condensed ring systems Anton Kailan for research on radium and ultraviolet radiation 1923 Adolph Smekal for research on quantum theory of dispersion 1924 Franz Aigner for underwater sound navigation Gerhard Kirsch for research on nuclear physics and geologic time measurement 1925 Robert Kremann for the discovery of electrolyte effect of alloys Ludwig Moser for quantitative rules for metals 1926 Georg Stetter for using electronics to measure the energy of nuclear particles 1927 Moritz Kohn for organic chemistry J. Lindner for organic chemistry 1928 Karl Wilhelm Friedrich Kohlrausch for the law of independent migration of ions 1929 Fritz Feigel for his techniques in analytical chemistry L. Schmid for organic chemistry 1931 Ewald Schmidt for research on radioactivity 1932 Otto Redlich for research on the properties of water and aqueous solutions 1933 Elizabeth Rona for her method of extracting polonium Berta Karlik for her work on luminescence 1935 Joseph Mattauch for development of the Mattauch isobar rule 1936 Otto Kratky for studies on colloidal particles 1937 Marietta Blau and Hertha Wambacher for the identification of alpha-particles and protons 1939 Herbert Haberlandt for luminescence of fluorites 1947 Berta Karlik for her discovery of Astatine See also List of chemistry awards List of physics awards References Sources Awards established in 1904 Organisations based in Vienna Austrian Academy of Sciences Chemistry awards Physics awards 1904 establishments in Austria-Hungary
Haitinger Prize
[ "Technology" ]
659
[ "Science and technology awards", "Chemistry awards", "Physics awards" ]
49,051,352
https://en.wikipedia.org/wiki/Kepler-395c
Kepler-395c is a potentially habitable exoplanet 616 light-years away in the constellation of Cygnus. Habitability and Properties It orbits an M-type star. Its radius is 1.32 ± 0.09 times that of Earth. It orbits at 0.177 AU with an orbital period of 34.9893 days. Because of its proximity to its star, it is likely to be tidally locked, with one side always facing the star and the other always facing away; one side would be blistering hot and the other bitterly cold. In between those hostile zones, however, there would be a sliver of habitability. If the planet has a thick enough atmosphere, the sliver may even be global. See also Kepler-186e Kepler-186f Kepler-438b Kepler-442b Kepler-296e Kepler-62e Kepler-69c Kepler-395b References Exoplanets discovered by the Kepler space telescope Exoplanets discovered in 2014 Transiting exoplanets Cygnus (constellation)
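As a rough consistency check, the quoted semi-major axis and orbital period can be combined via Kepler's third law in solar units to infer the host star's mass. The result is an inference from the article's numbers, not a measured value:

```python
# Kepler's third law in solar units: M_star [M_sun] ≈ a^3 [AU^3] / P^2 [yr^2].
# The semi-major axis and period below are the values quoted in the article.
a_au = 0.177        # semi-major axis (AU)
p_days = 34.9893    # orbital period (days)

p_years = p_days / 365.25
m_star = a_au**3 / p_years**2
print(round(m_star, 2))  # → 0.6 (solar masses)
```

A mass of roughly 0.6 solar masses sits near the upper end of the M-dwarf range, so the inferred value is broadly consistent with the article's M-type classification.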
Kepler-395c
[ "Astronomy" ]
227
[ "Cygnus (constellation)", "Constellations" ]
49,051,429
https://en.wikipedia.org/wiki/The%20Devil%27s%20Farmhouse
The Devil's Farmhouse, also known in Maltese as Ir-Razzett tax-Xitan, and officially as Ir-Razzett Tax-Xjaten (The Farmhouse of the Devils or The Devils' Farmhouse), is an 18th-century farmhouse in Mellieħa, Malta. The farmhouse comprises two unconnected buildings. The original purpose of the buildings was to function as stables and a horse-riding school (Cavalerizza). At one point the buildings were converted into farmhouses by different farmers and underwent some structural changes. A Maltese myth claims that the farmhouse was built by the devil, a tale from which it derives its historic name. It is a national monument and is in a dilapidated state. History The Devil's Farmhouse was built in the 18th century, during the rule of the Knights of Malta, to be used as a horse stable. It is found in Ta' Randa, very close to L-Għar ta' Zamberat (Zamberat's Cave). The farmhouse stands in isolation, away from urban development. A myth attributes the building of the farmhouse to the devil (or devils) in a single day (or three days). The only architectural feature that gives the impression of relating to demonic icons is the pair of enclosed staircases leading to the roof of the stable, which are suggested to appear as two horns. There is also a traditional carnival song (or poem), named Il-Karnival, that mentions Ta' Randa and the devil; very roughly translated, its text announces the feast of the devil at the site. Architecture The building has a simple and modest vernacular architecture, with slit windows that function as ventilators, and waterspouts. It has no inscriptions or symbols to shed further light on its use, apart from Roman numerals that were inscribed when it was converted to a farmhouse. These are found on the walls and woodwork, and record the sale of different types of vegetables by farmers. 
The features of the building are examples of traditional Maltese architecture, including roofs built with limestone slabs and animal feeding mangers. Despite the conversion to a farmhouse, the building clearly does not look like it was originally meant to be one, as it is not a traditional Maltese farmhouse. This, together with the position of the mangers, indicates that the building was built for horses. Some of its characteristics might suggest that it was a cow farm, but the high roof of one of the buildings makes that unlikely as an original purpose, and it would be speculative to say that it served as one at some point. It features two separate, unconnected buildings, which may have been built during different periods. At the site, in front of the farmhouse, stand two traditional giren, which were built for bird hunting. The building was used by the knights as a hunting lodge and as a horse-riding school (Cavalerizza) to keep their horses inside. Other later additions inside the building are the wooden beams that were introduced to support the limestone slabs. The farmhouse is in a dilapidated state and in need of restoration. Some of the roofs have already collapsed, while others are expected to collapse. Cultural Heritage The farmhouse is a national monument of architectural significance. The Malta Environment and Planning Authority scheduled it as a Grade 1 National Monument, protecting it from being demolished, altered or further developed, but allowing the reconstruction of damaged parts. The building is listed in the National Inventory of the Cultural Property of the Maltese Islands (NICPMI). 
Gallery Further reading References 18th-century establishments in Malta Abandoned buildings and structures Architectural controversies Buildings and structures completed in the 18th century Defunct schools in Malta Farmhouses in Malta Hunting lodges in Malta Limestone buildings in Malta Mellieħa National Inventory of the Cultural Property of the Maltese Islands The Devil in legend Vernacular architecture in Malta
The Devil's Farmhouse
[ "Engineering" ]
785
[ "Architectural controversies", "Architecture" ]