43,506,102
https://en.wikipedia.org/wiki/Brain%20painting
Brain painting is a non-invasive P300-based brain-computer interface (BCI) that allows painting without the use of muscular activity. The technology combines electroencephalography, signal-processing algorithms and visual stimulation on a monitor to detect where users focus their attention, allowing them to voluntarily trigger commands in a painting application. The research project aims at assisting people afflicted with locked-in syndrome due to neurological or neuromuscular disease (e.g. amyotrophic lateral sclerosis, ALS), who are severely restricted in communicating with their environment and therefore cut off from the possibility of creative expression. History Brain painting was co-developed by Andrea Kübler from the University of Würzburg (Germany) and Adi Hoesle. After development and testing, Brain Painting first appeared in 2010 in the general and scientific press, with a report evaluating healthy and locked-in participants. Supported since 2012 by the EU project "BackHome" (FP7-ICT-288566), the BCI has been adapted for independent home use and installed in the homes of locked-in artists: Heide Pfützner in 2012 and Jürgen Thiele in 2013. Long-term evaluation by a locked-in end user showed high satisfaction with the system. After successful crowdfunding support, the artist Heide Pfützner had an exhibition in summer 2013 in Easdale, Scotland, and from July to December 2014 in Würzburg (Germany). References External links Brain–computer interface Human–computer interaction Painting techniques
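The attention-detection step described above (deciding which flashing item the user is focusing on) can be illustrated with a minimal epoch-averaging sketch. This is a schematic reconstruction, not the actual Brain Painting software: the item names, window bounds, and signal values below are invented for the demo, and a real P300 speller would use many electrodes, preprocessing, and a trained classifier rather than a raw peak score.

```python
# Schematic P300 target selection by epoch averaging.
# All names and numbers are illustrative, not from the real system.

def average_epochs(epochs):
    """Average a list of equal-length EEG epochs sample by sample."""
    n = len(epochs)
    length = len(epochs[0])
    return [sum(e[i] for e in epochs) / n for i in range(length)]

def select_command(epochs_per_item, window):
    """Pick the flashed item whose averaged epoch shows the largest
    positive deflection inside the assumed P300 window (lo..hi)."""
    lo, hi = window
    scores = {}
    for item, epochs in epochs_per_item.items():
        avg = average_epochs(epochs)
        scores[item] = max(avg[lo:hi])
    return max(scores, key=scores.get)

# Toy demo: the "brush" item carries a simulated P300 bump,
# the "eraser" item only a flat baseline.
flat = [[0.0] * 100 for _ in range(10)]
bump = [[0.5 if 28 <= i < 35 else 0.0 for i in range(100)] for _ in range(10)]
chosen = select_command({"brush": bump, "eraser": flat}, window=(25, 40))
```

Averaging over repeated flashes is what makes the weak P300 response stand out from background EEG noise, which is why P300 spellers repeat each stimulus many times.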
Brain painting
Engineering
320
27,733,408
https://en.wikipedia.org/wiki/Celestial%20Sphere%20Woodrow%20Wilson%20Memorial
The grounds of the Palais des Nations (seat of the United Nations Office at Geneva) contain many fine objects donated by member states of the United Nations, private sponsors and artists. The Celestial Sphere (also known as the Armillary Sphere) in the Ariana Park of the Palais des Nations is the best known of these. The huge Celestial Sphere, over four meters in diameter, is the chef d'oeuvre of the American sculptor Paul Manship (1885–1966). It was donated in 1939 by the Woodrow Wilson Foundation to what was then the League of Nations building. Known also as the Woodrow Wilson Memorial Sphere of the Palais des Nations, it is today a symbol of Geneva International and of Geneva as the centre of dialogue and peace. History Contacted in late 1935 by the Board of the Woodrow Wilson Foundation, Manship was asked to provide an idea for a memorial to President of the United States Woodrow Wilson as the founding father of the League of Nations. At that time the Palais des Nations was still under construction. The first idea for Manship's contribution to the new buildings was to have him design two doors to the Assembly Hall from the Halle des Pas Perdus. Both the artist and the donor, the Woodrow Wilson Foundation, rejected this idea because doors would not be suitable for a memorial. Manship then proposed a large-scale version of the present celestial sphere, which he had developed after years of study. It is based upon several earlier versions, including the Aero Memorial in Philadelphia, Pennsylvania. It differs from these in that the Sphere is supported upon the backs of four tortoises, taken from his models for the gates of the Bronx Zoo in New York, which in turn rest upon a stepped socle bearing a cast representation of the Chinese "celestial sea" (Hai Shui Jiang Ya). The tortoises may therefore be thought to represent the Chinese tortoise of immortality (Ao), an auspicious symbol from Tang times on.
Other zodiac signs come from the world's major civilizations, both past and present. Manship described this sphere in the following words: In a letter written by Ham Armstrong to Arthur Sweetser dated 30 June 1935, we read that the building committee considered the Celestial Sphere, which they had seen in Paris, superb, not only in originality of conception, but in delicacy of execution and in spirituality of meaning. However, two obstacles were foreseen: first, that it would cost more than the budget available and, second, that it would be difficult to obtain the approval of the committees in New York and Geneva for anything so novel and non-utilitarian. Nonetheless, Manship's proposal for a monumental celestial sphere was accepted and a commission for the project was awarded to him in April 1936. Process In the spring of 1936, immediately after the approval by the committee, Manship began working on a large-scale model in wax. At his atelier, he gathered a team of sculptors and other artists to work on the various aspects of the design. The team included such famous names as Angelo Colombo, Giuseppe Massari, and Richard Pousette-Dart, the renowned painter who collaborated with Herbert Kammerer on the sphere's lettering. The original plaster moulds, executed by Flitzer, were ready in 1938 and were sent to the Bruno Bearzi Atelier in Florence for casting. Bearzi cast the sphere's elements from these plaster moulds using a cire perdue (lost-wax) process, in a high-tin bronze/zinc alloy with added lead. The constellations were originally gilded, with chrome-silvered stars. The meridians and architectural elements of the composition have been variously nielloed. The Celestial Sphere measures 410 cm in diameter and weighs some 5,800 kg. The spherical frame is adorned with constellations and stars. The Sphere represents 85 constellations and shows stars of the first four magnitudes. The constellations are gilded and the 840 stars are silvered.
As his signature, it bears Manship's self-portrait with his tools, in profile, hidden among the constellations. A place for the Celestial Sphere One of the main difficulties was to find a location for the sphere. Even though Manship designed it for the Court of Honour in front of the Assembly Hall, the question was raised in 1937 whether this space should be left completely open for a full panorama. When neither the Woodrow Wilson Foundation nor the artist wanted to hear of a change in 1938, it was decided to put the sphere in the middle of the Park, not too close to the building and not too close to the trees. The sphere was placed in a small reservoir that would reflect the image of the sphere and the building in the water. The sphere was installed in its present location, in the Cour d'Honneur of the Ariana Park of the Palais des Nations, by the Bearzi Atelier in August 1939. The official inauguration of what has become a United Nations symbol took place in September 1939. The sphere is equipped with a motor. In the words of the artist it was designed "so that it would rotate slowly" around an axis pointed toward the Pole Star, and it was intended to be illuminated at night. Concerns Dysfunctional rotation system and illumination Due to the outbreak of the Second World War, the rotation motor of the Celestial Sphere was used for several months only. In the files of the Woodrow Wilson Foundation, the following brief description was found: "A complex silence and solitude reigned; the great ceremony of dedication, with the 30th Assembly in session, had become impossible: only an occasional chance visitor and a few especially interested Americans watched the Italians putting the great sphere, representative of universal comity, into its place of high honour." The rotation motor of the Celestial Sphere was not used during 1940–1945 and ceased to function in the early 1960s. Deteriorating conditions The sphere began to have significant problems as early as 1942.
The alloy used by the Bearzi Atelier contracted so sharply during the winter that a considerable amount of water could and did enter the hollow constellations. The freezing of that water caused the metal to crack. Several of the constellations had to be repaired as early as 1942–43, and at least one cover of a meridian had to be replaced after falling off. "Weep holes" were drilled in all the constellations at that time to allow the water to drain out. The socle, which bears the whole of the 5,800 kg weight, has cracked. Large areas of corrosion and uneven natural patina can be seen. The 840 chrome-plated stars, once present in four sizes, have been widely lost. The sphere cage is at the limit of its weight-bearing load. Metal fatigue, cracks and corrosion have increasingly added to its deterioration. Symbol of Peace - Pax Universalis Today the Celestial Sphere stands in the Cour d'Honneur of the Palais des Nations, itself an important landmark of the City of Geneva. It serves as a vivid reminder that despite all cultural and religious differences we are inhabitants of one and the same planet, the Earth. The time has come to think in terms of a Pax Universalis rather than of other Paxes, and one of the contributors to a Pax Universalis is an action-oriented dialogue, based on common human values and the ideals of the United Nations. Gallery References Jean-Claude Pallas (2001). Histoire et architecture du Palais des Nations, 1924-2001: l'art déco au service des relations internationales, Nations Unies, pp. 48, 65, 100, 111, 354. Franklin Delano Roosevelt, Edgar Burkhardt Nixon, Donald B. Schewe (1979). Franklin D. Roosevelt and foreign affairs, second series, January 1937-August 1939. Ernest William Watson, Arthur Leighton Guptill (1951). American Artist, Watson-Guptill Publications. Janis C. Conner, Joel Rosenkranz, David Finn (1989). Rediscoveries in American sculpture: studio works, 1893-1939.
(2006). Encyclopedia Americana, Scholastic Library Publishing, p. 264. I. Dembinski (2009). International Geneva Yearbook 2008, Dominique Dembinski-Goumard, p. 341. Harry Rand (1989). Paul Manship, Smithsonian Institution Press, pp. 124–126. (1949). United Nations World, UN World Inc., p. 63. Albert Picot (1965). Le rayonnement international de Genève, Editions du Griffon. Laure De Gonneville (2009). Suisse 2009, Edition Petit Futé. (2006). Geneva - centre for new dialogue among civilizations, UN Special Magazine, No. 652 (www.unspecial.org). (2008). Pax Universalis Aeternaque, UN Special Magazine, No. 671 (www.unspecial.org). Christian David and Evelina Rioukhina (2010). The Celestial Sphere Woodrow Wilson Memorial, UN Special (magazine), No. 699. Tom Armstrong (1976). 200 years of American sculpture, Whitney Museum of American Art. (1985). Paul Manship: changing taste in America: 19 May to 18 August 1985, Minnesota Museum of Art, Landmark Center. (2000). Booklet "The Dutch 17th Century in Etchings" for the exhibition of Rembrandt at the United Nations by Museum Geelvinck Hinlopen Huis, with the project proposals by Maecenas World Patrimony Foundation (www.maecenasworldpatrimony.org), "Contribute to the Cycle of Life – the restoration of the Armillary Sphere", Geneva. Alastair Duncan (1986). American art deco, Abrams. Carol Hynning Smith (1987). Drawings by Paul Manship: the Minnesota Museum of Art collection, Minnesota Museum of Art.
External links Genève tourisme La Genève internationale Peace monuments in Switzerland UN Special magazine Maecenas World Patrimony Foundation Monuments and memorials in Switzerland League of Nations Tourist attractions in Geneva Buildings and structures in Geneva Ancient Greek astronomy Historical scientific instruments Astronomical instruments Hellenistic engineering Works by Paul Manship Sculptures of turtles Sculptures of birds Sculptures of fish Sculptures of snakes United Nations art collection Peafowl in art Animal sculptures Sculptures of women in Switzerland Sculptures of dragons Sculptures of sheep Sculptures of cattle
Celestial Sphere Woodrow Wilson Memorial
Astronomy
2,101
21,626,418
https://en.wikipedia.org/wiki/Preview%20%28computing%29
Preview is a computing function that displays a document, page, or film before it is produced in its final form. In the case of printed material this is known as "print preview". Content preview Previewing allows users to see the current state of a work in progress before it is rendered into its final form, letting them visualize the end product and correct possible errors early. Previewing is especially important for markup-language editing software such as web development applications. Web development applications like Adobe Dreamweaver and most HTML editors have a "preview in browser" feature. Although browsers generally produce similar results, each browser and browser version can display HTML pages somewhat differently, so previewing in the browser lets authors check how a page will appear in each target browser. Video editing applications also have a preview feature for inspecting the current product during editing. Final Cut Pro's interface has two preview windows, the "Viewer window" and the "Canvas window". The Viewer window lets users preview clips and decide which ones to use; users can also make changes to clips in this window. The Canvas window shows a preview of the current project's rendered output. Many interactive websites and online forums allow users to preview their content before submitting it. This is particularly useful on sites with complex, non-WYSIWYG markup, where it provides an opportunity to identify and correct errors and formatting problems before saving the content. Print preview Print preview is a function that lets users see the pages that are about to be printed, showing exactly how they will look on paper. By previewing the layout without actually printing, users can catch and fix possible errors before they begin actual printing.
Most applications have a print preview feature, and some, like Adobe Photoshop and Microsoft Office, automatically open a print preview when the "Print" menu is selected. This feature is useful for making sure that the layout is the way the user expects before the actual printing. Microsoft Word's Print Preview lets users zoom in and out of the document or show multiple pages in one window. Graphics tools like Adobe Photoshop's Print Preview let users position and scale the image before printing. Many web browsers also have a print preview feature so that users can see how a website's contents will be laid out on paper. Internet Explorer's Print Preview helps prevent accidents like printing ten pages where one would do, or printing a page with an unwanted background color; it also allows adjusting the paper size, margins, and page orientation of the web page. Mozilla Firefox has print preview built in as well. Safari lets users preview a web page when Print is clicked: in the print dialog, the Preview button opens the Preview application, which displays the print preview of the page. References User interfaces
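The forum-preview idea above can be sketched in a few lines: render the user's markup to HTML for display without saving it, so stray HTML and formatting mistakes are visible before submission. The markup dialect below (double asterisks for bold, single for italic) is a made-up minimal example, not any specific site's syntax.

```python
import html
import re

def render_preview(markup: str) -> str:
    """Render a tiny forum-style markup to HTML for preview.
    Raw HTML is escaped first (so user-typed tags display literally),
    then **bold** and *italic* spans are converted."""
    out = html.escape(markup)
    out = re.sub(r"\*\*(.+?)\*\*", r"<b>\1</b>", out)
    out = re.sub(r"\*(.+?)\*", r"<i>\1</i>", out)
    return out

# The user sees this rendered HTML before deciding to submit.
preview = render_preview("Use **bold** for <em>emphasis</em>")
```

Because the preview runs the exact same renderer the site would use on save, what the user approves is what gets published.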
Preview (computing)
Technology
590
5,977,457
https://en.wikipedia.org/wiki/Musical%20clock
A musical clock is a clock that marks the hours of the day with a musical tune. They can be considered elaborate versions of striking or chiming clocks. Elaborate large-scale musical clocks with automatons are often installed in public places and are widespread in Japan. Unlike conventional electronic musical clocks, these clocks play pre-recorded music samples instead of using programmed sound synthesis. One of the earliest known domestic musical clocks was constructed by Nicholas Vallin in 1598; it currently resides in the British Museum in London. Description The music on mechanical clocks is typically played from a spiked cylinder on bells, organ pipes, or bellows. On electric clocks such as quartz clocks, the music is usually generated by an electronic sound module. Most quartz musical clocks use either FM synthesis or sample-based synthesis to produce high-fidelity, complex music, similar to the sound-generation methods of electronic musical instruments. Pipe organ clock The pipe organ clock is a clock that chimes with a small pipe organ built into the unit. An example is one made by Markwick Markham for the Turkish market, circa 1770. Popularity in Japan In Japan, aside from the extensive popularity of large-scale musical clocks installed in public facilities, electronic musical wall clocks have been popular novelty items since the late 1990s. They are mostly collected for their aesthetic and decorative value, especially those with elaborate movements and advanced music generation. Most of these clocks are manufactured by Seiko and Rhythm. See also Automaton clock Music by CPE Bach for musical clock References External links Clock designs Mechanical musical instruments
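As a rough illustration of the FM-synthesis approach mentioned above, the sketch below generates one decaying, chime-like note: a carrier sine wave whose phase is modulated by a second sine wave, under an exponential envelope. The carrier/modulator settings are arbitrary illustrative values, not those of any actual clock movement.

```python
import math

def fm_chime(freq, mod_ratio, index, duration, rate=8000):
    """Generate one FM-synthesized note as floats in [-1, 1].
    Output: sin(2*pi*f*t + index * sin(2*pi*f*mod_ratio*t)),
    shaped by an exponential decay so it rings like a struck bell."""
    n = int(duration * rate)
    samples = []
    for i in range(n):
        t = i / rate
        env = math.exp(-3.0 * t)  # decay envelope
        phase = (2 * math.pi * freq * t
                 + index * math.sin(2 * math.pi * freq * mod_ratio * t))
        samples.append(env * math.sin(phase))
    return samples

# A half-second 440 Hz note with a slightly inharmonic modulator,
# which gives the metallic, bell-like timbre FM is known for.
note = fm_chime(440.0, 1.4, 2.0, duration=0.5)
```

Non-integer modulator ratios produce inharmonic partials, which is why FM is well suited to bell and chime sounds on inexpensive sound chips.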
Musical clock
Physics,Technology
318
77,598,920
https://en.wikipedia.org/wiki/IRC%20Cloud
IRC Cloud is a cloud-based IRC client that is accessed via a web browser or through Android and iOS apps. IRC Cloud was founded by Richard Jones and James Wheare. See also Comparison of IRC clients Further reading IRC all the way down (ZNC + IRCCloud + Quassel) References IRC clients
IRC Cloud
Technology
70
3,195,358
https://en.wikipedia.org/wiki/KEMA
KEMA (Keuring van Elektrotechnische Materialen te Arnhem) NV, established in 1927, is a global energy consultancy company headquartered in Arnhem, Netherlands. It offers management consulting, technology consulting and services across the energy value chain, including business and technical consultancy, operational support, measurement and inspection, and testing and certification services. On 22 December 2011, DNV acquired 74.3% of KEMA's shares, creating a global consulting and certification company with 2,300 experts located in over 20 countries. On 12 September 2013, DNV and GL merged into DNV GL, becoming the world's leading ship and offshore classification society. As DNV GL the company continued to issue KEMA certificates from its laboratories. On 30 December 2019, the property of KEMA B.V. was transferred from DNV GL to CESI S.p.A. The acquisition included all the high-voltage testing, inspection and certification activities carried out at the KEMA-owned laboratories in Arnhem (the Netherlands) and Prague (Czech Republic). The transaction was completed on 2 March 2020 with the acquisition of the Chalfont laboratory (USA). The KEMA testing and inspection facilities include the world's largest high-power laboratory, with a short-circuit power of 10,000 MVA, and the world's first laboratory capable of testing ultra-high-voltage components for super grids, as well as the Flex Power Grid Laboratory for advanced testing of smart-grid components. History KEMA was founded in 1927 as the Dutch electricity industry's Arnhem-based test house, providing electrical safety testing and certification activities. Over the span of eighty years the company grew into a risk management company actively providing independent applied research and consultancy services via an international network of subsidiaries and agencies.
On 22 December 2011, DNV acquired 74.3% of KEMA's shares, creating a global consulting and certification company within the cleaner energy, sustainability, power generation, transmission and distribution sectors. On 12 September 2013, DNV and GL merged into DNV GL. On 30 December 2019, the property of KEMA B.V. was transferred from DNV GL to CESI S.p.A., and the transaction was completed on 2 March 2020 with the acquisition of the Chalfont laboratory (USA). References External links https://www.cesi.it/ https://www.cesi.it/testing-certification-inspection/ Certification marks Organizations established in 1927 Product-testing organizations Laboratories in the Netherlands Commercial laboratories
KEMA
Mathematics
578
65,252,533
https://en.wikipedia.org/wiki/PokerGO
PokerGO is an over-the-top content platform based in Las Vegas, Nevada. PokerGO was launched in 2017 as a subscription-based streaming service, offering poker-centric online streaming. The content offered on PokerGO includes poker tournaments, along with cash game-oriented shows. As of February 2021, PokerGO's library of content includes over 2,400 videos totaling over 3,800 hours. Content PokerGO includes shows, tournament replays, and cash games. Other media includes episodes, live streams, and recap videos. Live-streamed events can be accessed afterward as on-demand videos. Poker tournaments and cash games High Stakes Poker High Stakes Poker is a cash game poker television program featuring a mix of professional and amateur poker players playing high-stakes No-Limit Hold'em with buy-ins ranging from $100,000 to $500,000. The show debuted in January 2006 and initially ran for seven seasons until May 2011. In February 2020, PokerGO announced that it had acquired the High Stakes Poker brand and the show's assets. In December 2020, a new season of High Stakes Poker aired on PokerGO and included returning players Tom Dwan, Phil Hellmuth, Brandon Adams, and Phil Ivey, while also introducing new players to High Stakes Poker including Jason Koon, Jean-Robert Bellande, Bryn Kenney, Doug Polk, Michael Schwimer, and Chamath Palihapitiya. There are nine seasons of High Stakes Poker and 126 episodes, and the current hosts are Nick Schulman and A.J. Benza. Poker After Dark Poker After Dark is a poker television program that follows the development of one table of poker players over a period of time. The seasons are split into weeks, each given a theme based on the players involved. Poker After Dark originally began under a No-Limit Hold'em sit-n-go format before evolving to cash games that also featured different game variations such as Pot-Limit Omaha, 2-7 Triple Draw, H.O.R.S.E., and Short Deck.
The original series would see one table play over five episodes, with the sixth episode being a director's cut. The show was acquired in 2017 and rebooted as a live-stream show for four seasons before returning to the episodic format for season 12. World Series of Poker The World Series of Poker (WSOP) is a series of tournaments of most major poker variants that has been held annually in Las Vegas since 1970. The WSOP expanded to Europe in 2007, and to Asia Pacific in 2012. PokerGO acquired the global television and digital media rights to the WSOP in 2017. The agreement included the expansion of programming in a shared deal that saw live coverage on both ESPN and PokerGO. In 2020, WSOP Classic was added to the PokerGO library, including footage from both the WSOP Main Event and WSOP bracelet events from 2003 to 2010. Super High Roller Bowl The Super High Roller Bowl is a recurring high-stakes No-Limit Hold'em poker tournament held at various locations around the world since 2015. After beginning in Las Vegas, the Super High Roller Bowl has expanded to Macau, London, the Bahamas, Australia, and Russia, as well as having an online event on partypoker in 2020. U.S. Poker Open The U.S. Poker Open is a poker tournament series hosted in the PokerGO Studio since 2018. The series features a variety of events at different buy-in amounts and crowns a series champion each year. Previous series champions include Stephen Chidwick (2018) and David Peters (2019). Poker Masters Held since 2017, the Poker Masters is a poker tournament series hosted in the PokerGO Studio. The series awards a Purple Jacket to the overall champion, and winners include Steffen Sontheimer (2017), Ali Imsirovic (2018), and Sam Soverel (2019). In 2020, the event moved online to partypoker, and Alexandros Kolonias won the championship.
Additional poker tournaments and cash games Additional poker tournament and cash game coverage available on PokerGO includes: High Stakes Duel High Stakes Feud PokerGO Cup Dolly's Game Rob's Home Game Friday Night Poker Super High Roller Cash Game British Poker Open Australian Poker Open World Poker Tour Partypoker MILLIONS Aussie Millions ARIA High Roller Series Super High Roller Celebrity Shootout Doubles Poker Championship Face the Ace Poker Central Celebrity Shootout Original programming Pokerography Pokerography is a biopic series that tells the stories behind players and outlines their lives and poker careers. There are 23 episodes of Pokerography. Some of the players featured include Antonio Esfandiari, Chris Moneymaker, Jennifer Tilly, Mike Sexton, and Phil Hellmuth. Super High Roller Club Presented by Ali Nejad, Super High Roller Club is a six-part series that gives viewers a glimpse into the lives of poker players as they tell stories from the felt and from life. The players involved include Brandon Adams, Nick Schulman, Farah Galfond, Antonio Esfandiari, Phil Hellmuth, and Daniel Negreanu. Real Talk Real Talk is a roundtable talk show hosted by Remko Rinkema, in which he discusses a wide variety of topics with poker players. The players involved include Scott Blumstein, Liv Boeree, Matt Berkey, Kane Kalas, Bryn Kenney, Byron Kaverman, Justin Bonomo, Isaac Haxton, Maria Ho, Jason Koon, Greg Merson, and Mike Sexton. The Big Blind The Big Blind is a poker trivia show. Host Jeff Platt tests three contestants each week on their knowledge of Las Vegas, casinos, gambling, and poker. Legends of the Game Legends of the Game is a six-part docuseries providing a closer look at legendary gamblers and poker's most defining characters, such as Benny Binion, Stu Ungar, and David "Chip" Reese. Dead Money Dead Money follows poker professional Matt Berkey as he prepares to play the 2016 Super High Roller Bowl.
Although predominantly a cash game player, Berkey is set to play his biggest buy-in poker tournament at $300,000, and Dead Money gives viewers a unique look into his strategy and decision-making on the way to his finishing in fifth place for $1,100,000. Additional programming Additional programming available on PokerGO includes: The Championship Run Stories from the Felt Deep Issues Beyond the Rail Major Wager INSIDERS: Super High Roller Bowl 2018 2020 Hindsight Hand Histories Inside Poker Poker Nights Tell Tale Chasing Hearts Grinders Device support and technical details Devices PokerGO can be accessed via a web browser on personal computers, tablets, or mobile phones, while PokerGO apps are available on various platforms, including Apple iPhone, Apple iPad, Apple TV, Android mobile and tablet devices, Roku devices, and Amazon Fire TV. PokerGO is available worldwide except in China. Service plans PokerGO launched with a two-tier subscription model: monthly and annual. PokerGO is now on a three-tier subscription model: monthly, quarterly, and annual. PokerGO Studio PokerGO announced in April 2018 that it would be building a 10,000-square-foot PokerGO Studio at the ARIA Resort & Casino. The studio has space for nine poker tables and a capacity of up to 300 people. It includes space for fans and spectators, a full-service bar, a lounge for seating, and the flexibility to host a wide variety of events. The PokerGO Studio opened in May 2018 with the filming of Poker After Dark. Open House week featured two nights of $100/$200 No-Limit Hold'em cash games with players including Daniel Negreanu, Maria Ho, Brandon Adams, Eli Elezra, Bill Perkins, Dan Shak, Mike Matusow, Matt Berkey, Tom Marchese, Antonio Esfandiari, and Phil Hellmuth. The first tournament series at the PokerGO Studio was held later that month, with the 2018 Super High Roller Bowl attracting 48 players.
Justin Bonomo defeated Daniel Negreanu heads-up to win the $5,000,000 first-place prize and his second Super High Roller Bowl title. Poker The PokerGO Studio is the home for PokerGO-owned live events, from poker tournaments to cash games. The Super High Roller Bowl, U.S. Poker Open, and Poker Masters tournament series have been held inside the PokerGO Studio since 2018, and the new PokerGO shows High Stakes Duel and High Stakes Feud were filmed there. It has also hosted World Poker Tour final tables, including the WPT Bobby Baldwin Classic and WPT Five Diamond World Poker Classic. The PokerGO Studio also hosts cash games including Poker After Dark, High Stakes Poker, Friday Night Poker, Dolly's Game, and Rob's Home Game. Poker After Dark originally filmed at South Point Casino, Golden Nugget, and ARIA Resort & Casino before relocating to the PokerGO Studio during Season 9. High Stakes Poker originally filmed at the Golden Nugget, The Palms, South Point Casino, and the Bellagio Resort & Casino before relocating to the PokerGO Studio when the show relaunched for Season 8 after a nearly 10-year hiatus. Other poker events held inside the PokerGO Studio include the Global Poker Awards. References Streaming media systems Gambling in the United States Internet streaming services Poker companies 2017 establishments in Nevada Companies based in Las Vegas Entertainment companies established in 2017 Internet television streaming services Subscription video on demand services
PokerGO
Technology
1,937
4,731,927
https://en.wikipedia.org/wiki/Saha%20Institute%20of%20Nuclear%20Physics
The Saha Institute of Nuclear Physics (SINP) is an institution of basic research and training in the physical and biophysical sciences located in Bidhannagar, Kolkata, India. The institute is named after the famous Indian physicist Meghnad Saha. Previous Directors Gautam Bhattacharyya Ajit Mohanty Bikas Chakrabarti Milan K. Sanyal (2009 to 2014) Bikash Sinha Manoj K. Pal D. N. Kundu B. D. Nag Chowdhury Meghnad Saha See also Education in India List of colleges in West Bengal Education in West Bengal References External links Research institutes in West Bengal Research institutes in Kolkata Homi Bhabha National Institute Physics research institutes University of Calcutta affiliates Research institutes established in 1949 Nuclear research institutes 1943 establishments in India
Saha Institute of Nuclear Physics
Engineering
166
76,729,401
https://en.wikipedia.org/wiki/Kesteven%2075
Kesteven 75, abbreviated as Kes 75 and also called SNR G029.7-00.3, G29.7-0.3 and 4C -03.70, is a supernova remnant located in the constellation Aquila. Morphology Kesteven 75 is a supernova remnant of composite morphology. In the radio band, it shows a shell or partial shell about 90 arcseconds in radius, with a central nebula of about 25 × 35 arcseconds. The complete absence of emission in the eastern part indicates a strong density gradient in the interstellar medium into which Kesteven 75 expands. The central component has a flat radio spectrum with substantial polarization, which is characteristic of a pulsar wind nebula (plerion). The expansion speed of this plerion is approximately 1,000 km/s. X-ray observations with the ASCA and Chandra observatories also show the composite nature of Kesteven 75, its morphology being very similar to that at radio frequencies. The emission from the shell of Kesteven 75 is mainly concentrated in two regions, to the southeast and southwest. Likewise, a "jet"-torus structure, common in young plerions, has been identified. According to Chandra observations, the spectra of the shell can be explained by a two-temperature thermal model, possibly associated on the one hand with the shocked ambient material, and on the other with ejecta that has been reheated by the reverse shock. The infrared emission from the shell is spatially correlated with the X-ray emission, suggesting that the dust particles are collisionally heated by the hot, X-ray-emitting gas. This dust reaches a temperature of 140 K upon collision with a hot and relatively dense plasma. Kesteven 75 has also been detected in the gamma-ray region between 20 and 200 keV with the INTEGRAL space observatory, and between 0.3 and 5 TeV with the H.E.S.S. telescope system.
Progenitor Based on the high velocity of the ejecta and the low density implied by the initially estimated distance (much greater than currently accepted), it has been suggested that Kesteven 75 comes from a type Ib/c supernova explosion. However, subsequent studies have proposed that Kesteven 75 is probably the result of a more common type IIP supernova, where the plerion expands into an asymmetric nickel bubble. The parent star is thought to have a mass between 8 and 12 solar masses. Stellar remnant Kesteven 75 houses the X-ray pulsar PSR J1846−025, discovered in 2000, which provides energy to the plerion. It is a highly energetic stellar remnant (Ė = 8.3 × 10^36 erg/s) whose rotation period is 325 ms. In 2006, it was detected that this pulsar was in an "active" or "flaring" state, which led to changes in its spectrum, as well as in the morphology of the associated plerion; fourteen years later, activity was detected again in PSR J1846−025. However, the energetic and spectral properties of this pulsar firmly distinguish it from a magnetar. Its inferred magnetic field (Bs = 4.9 × 10^13 G) is the largest among objects of this type, and is probably responsible for the activity intervals discovered, suggesting a transition to a magnetar state. Distance Although historically estimates of the distance to Kesteven 75 have varied significantly between 5,000 and 21,000 parsecs, recent analyses based on H I observations place this supernova remnant at 5,800 ± 500 parsecs from Earth. On the other hand, Kesteven 75 is a very young supernova remnant, with an age of less than 840 years. The study of the expansion of the plerion over 10 years has constrained the age of this remnant to 480 ± 50 years. Consequently, Kesteven 75 contains the youngest plerion in our galaxy. See also List of supernova remnants External links Kesteven 75 at SIMBAD References Supernova remnants Aquila (constellation)
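As a rough consistency check on the figures quoted above (a central nebula of roughly 30 arcseconds, a distance of about 5,800 parsecs, and a plerion expansion speed of about 1,000 km/s), a free-expansion age can be sketched in a few lines. This is an illustrative order-of-magnitude estimate only, not a calculation from the cited studies:

```python
# Free-expansion age estimate for the Kesteven 75 plerion, using the
# values quoted above. Rough order-of-magnitude sketch only.
PC_IN_KM = 3.0857e13        # kilometres per parsec
SECONDS_PER_YEAR = 3.156e7  # seconds in one year

distance_pc = 5800            # H I distance estimate quoted above
angular_radius_arcsec = 30    # ~mean of the 25 x 35 arcsec central nebula
expansion_speed_kms = 1000    # quoted plerion expansion speed

# Small-angle conversion from angular size to physical radius.
radius_pc = distance_pc * angular_radius_arcsec / 206265.0
radius_km = radius_pc * PC_IN_KM

# Age assuming constant (free) expansion: t = r / v.
age_years = radius_km / expansion_speed_kms / SECONDS_PER_YEAR
print(f"radius ~ {radius_pc:.2f} pc, free-expansion age ~ {age_years:.0f} yr")
```

The result, on the order of 800 years, is consistent with the upper limit of 840 years quoted above; the tighter 480 ± 50 year figure comes from directly measured proper expansion rather than this constant-speed assumption.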
Kesteven 75
Astronomy
866
8,664,961
https://en.wikipedia.org/wiki/E-Group
E-Groups are unique architectural complexes found among a number of ancient Maya settlements. They are central components of the settlement organization of Maya sites and, like many other civic and ceremonial buildings, could have served for astronomical observations. These sites have been discovered in the Maya Lowlands and other regions of Mesoamerica and have been dated to the Middle Preclassic to Terminal Classic periods. It has been a common opinion that the alignments incorporated in these structural complexes correspond to the sun's solstices and equinoxes. Recent research has shown, however, that the orientations of these assemblages are highly variable, but pertain to alignment groups that are widespread in the Maya area and materialized mostly in other types of buildings, recording different agriculturally significant dates. Origin of the name E-Groups are named after "Group E" at the Classic period site of Uaxactun, which was the first one documented by Mesoamerican archaeologists. At Uaxactun, the Group E complex consists of a long terraced platform with three supra-structures arranged along a linear axis oriented north-south. The two smaller outlying structures flank the larger central temple. A stairway leads down to a plaza formed by Uaxactun's Pyramid E-VII. Three stelae immediately front the E-Group, and a larger stela is located midway between Group E and Pyramid E-VII. Each of the four stairways incorporated into the complex (the main central one and three leading up to each supra-structure) bears two side masks (for a total of 16). A small platform, often a tiered structure, is located on the western part of the plaza, opposite the central of the three supra-structures. 
From a point of observation on Pyramid E-VII, the three structures have the following orientations: North structure (Temple E-I) – in line with the sunrise at the summer (June) solstice South structure (Temple E-III) – in line with the sunrise at the winter (December) solstice Central structure (Temple E-II) – in line with the sunrise at the equinoxes (September and March) As revealed by excavation reports, however, these alignments could not have been observationally functional, because they connect architectural elements from different periods. Distribution in Mesoamerica E-Group structures are found at a number of sites across the Maya area, particularly in the lowlands region. The oldest-known E-Groups coincide with the earliest Maya ceremonial sites of the Preclassic period, indicative of the central role played by astronomical and administrative concerns in the very beginnings of Maya ceremonial construction and planning. The oldest documented E-Group in the Yucatán Peninsula is found at the site of Seibal. However, many earlier E Groups have been found in the Olmec region, the western Maya Lowlands and along the Pacific coast in Chiapas. Construction of E-groups continued through the Classic period, with examples including the Lost World Pyramid at Tikal in the Petén Basin of northern Guatemala, and Structure 5C-2nd at Cerros, in Belize. Caracol, also in Belize and the site that defeated Tikal during the Middle Classic, has a large-scale E-Group located in the western portion of its central core. Significance Astronomical Use E-groups have been widely theorized to have served as astronomical observatories. In this role, E-groups would have been useful for farmers who needed to schedule agricultural activities throughout the varying seasons. They were also hypothesized to serve as timekeeping tools for trading purposes. The leading theory held that E-groups were useful for observing solar zeniths, as the sun's path was significant to Maya culture. 
Research has found that E-groups were not precise in their astronomical measurements, indicating that their use was more symbolic than observational. Mesoamerican Ball Game The Mesoamerican Ball Game has been associated with E-groups. Certain E-groups, such as Seibal, have ball game imagery indicating the game was played by people of that site. In addition, sites like Tikal included ball courts near their E-groups. Public Spaces Viewsheds were one architectural aspect constructed at locations containing Middle Preclassic E-Groups, which were mostly located in the Central Maya Lowlands. This discovery indicated that large plazas and other similar architectural structures demonstrate a visible community. It was observed that settlers of this region intentionally spaced these monuments apart from one another as a method of defining different groups. Additionally, recent evidence suggests that these different community spaces were civic. Directionality In the E-Group found within Chan's Central Group, researchers discovered that the directionality of the E-Group buildings was linked not only to cosmological beliefs, but also to maximizing the agricultural capabilities of the community. For example, the east and west buildings were correlated with the sun's natural cycle, while the north and south buildings were correlated with the sun's positions at midday and in the underworld, respectively. The builders believed the sun also passed through the underworld when it could not be seen by the naked eye, while the sun's position at midday (north) referred to the sun shining on the heavens, exemplifying supreme power. This data collection was completed using LiDAR technology. History 1924–1954 Frans Blom is credited with the discovery of the first E Group in 1924 while working in Uaxactún, Guatemala, in the northeast of the Maya Lowlands. The site has been dated to the Preclassic Maya period. 
The E Group he identified was an open plaza defined in the west by a pyramid and in the east by a platform supporting three north–south oriented buildings. Blom posited that the assemblage was an astronomical observatory based on the observation that when viewed from the western pyramid, the three eastern buildings marked the position of the sun at sunrise on the equinoxes and solstices. From the western radial structure of the E Group, sunrise during the summer solstice could be seen above the northern structure, while sunrise during the winter solstice could be observed above the southern structure. In 1928, Oliver Ricketson theorized that the sunrises during the equinoxes could be observed over the central eastern structure. In 1943, Karl Ruppert published his discovery of 13 more E Group structures in the Maya Lowlands. He also identified 6 more structures that were similar to Blom's original discovery but had slight differences. In addition to these, Thompson had already unknowingly excavated two E Groups. In total during this time period 25 E Groups were identified at 22 different sites – most within a 110 km radius of Uaxactún. At this point only 4 E Groups had been excavated. 1955–1984 During this time period 10 additional E Groups were reported, with 4 more being excavated. Arlen Chase excavated the Cenote E Group in 1983, which led him to define two styles of E Group. The first is the Cenote style, which dates back to around 1000 BCE and is characterized by a long eastern platform supporting one larger central building. The second is the Uaxactún style, with a shorter eastern platform supporting three smaller structures. In 1980, Marvin Cohodas began discussing the relationship of E Groups to the celebration of agricultural cycles, an idea that was further investigated by James Aimers (1993:171–179), as well as Travis Stanton and David Freidel (2003). 
Cohodas also began to discuss the notion that the E Group related to origin places for the sun and moon. 1985–2016 142 additional E Groups were discovered during this period, many located in the Southeast Petén. By this point 34 E Groups had been excavated in total. Anthony Aveni and Horst Hartung (1988, 1989) examined Uaxactún's Group E complex more closely to test the theory that it functioned as an astronomical observatory, with their results indicating that it likely did. Juan Pedro Laporte (2001:141) conducted a survey of 177 sites in the Southeast Petén and found that 85% had an E Group assemblage. Laporte (2001:142) noted that E groups were the largest open public space at most sites, hinting further at their central role in the community. In 2003, analysis of the alignments of 40 E Groups showed them to be observatories (Aveni et al. 2003:162, Table 1). The analysis also showed a shift from solstice dating to zenith passage dating – a sign of influence from Teotihuacan at around 250–500 CE. Other sites were aligned with the 20-day Winals (Maya months). This demonstrates that the particular design of a site's E Group was aligned with the values of the people that inhabited the site. There is still an ongoing debate about whether E Groups had other ritual purposes that were more important than astronomical observations; however, it is likely that both uses were important and should continue to be researched. Current Research Current research on E Groups has produced many important findings. The first of these is that early E Groups were made by clearing the landscape to bedrock and then shaping the bedrock into something with building-like features. This bedrock was later encased by E Group reconstruction fills. The shaping of bedrock is a common practice and an important motif found across ancient America. A second result has come from the analysis of varying E Group sizes and locations. One E Group variant found in Belize (Robin et al. 
2012) is small enough, and within close enough proximity to a residential complex, that it can be inferred the E Group was used by a single family. This is in contrast to the Uaxactún E Group, which would have been used by the whole populace. Researchers would like to use E Groups to study population density and societal structure further; however, extensive later occupation has made this difficult. Finally, it has been discovered that most E Groups are placed strategically along crucial Mesoamerican trade routes. This calls for further investigation into the purpose of E Groups and whether they might have served some economic purposes. Pseudo E-Groups In 2006, archaeologist Thomas Guderjan conducted research on what he called "Pseudo E-Groups." This term refers to a regional variant of E-Groups, mainly found in the Eastern Peten during the Late Classic period. These sites mainly consist of two buildings joined by a mutual substructure. Additionally, Pseudo E-Groups lack a western building that acts as an observatory. This difference is only correlated with the E-Groups of the Eastern Peten. To date, there are four known Pseudo E-Groups: Blue Creek, Chan Chich, San Jose, and Quam Hill. Notes References Maya architecture Buildings and structures in Mesoamerica Ancient astronomical observatories Solstices
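The solstitial sunrise alignments described for Uaxactún above can be sketched with the standard flat-horizon sunrise-azimuth formula, cos A = sin(declination) / cos(latitude). The latitude value used here is an approximation for Uaxactún, and the sketch ignores refraction and horizon elevation, so it is illustrative only:

```python
# Idealised sunrise azimuths at the solstices and equinoxes for a site
# near Uaxactun's latitude, using the flat-horizon formula
# cos(A) = sin(dec) / cos(lat). Illustrative sketch only.
import math

def sunrise_azimuth(latitude_deg, declination_deg):
    """Azimuth of sunrise, measured clockwise from true north,
    assuming a flat horizon and no atmospheric refraction."""
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    return math.degrees(math.acos(math.sin(dec) / math.cos(lat)))

LAT = 17.4          # approximate latitude of Uaxactun (assumed)
OBLIQUITY = 23.44   # solar declination at the solstices

june = sunrise_azimuth(LAT, +OBLIQUITY)      # north of due east
equinox = sunrise_azimuth(LAT, 0.0)          # exactly due east (90 deg)
december = sunrise_azimuth(LAT, -OBLIQUITY)  # south of due east
print(f"June {june:.1f} deg, equinox {equinox:.1f} deg, December {december:.1f} deg")
```

At this low latitude the solstice sunrises fall only about 25 degrees either side of due east, which is why the three eastern temples of a Group E complex can span the full annual swing of sunrise positions.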
E-Group
Astronomy
2,222
76,043,809
https://en.wikipedia.org/wiki/Magnesium%20selenide
Magnesium selenide is an inorganic compound with the chemical formula MgSe. It contains magnesium and selenium in a 1:1 ratio. It belongs to the II-VI family of semiconductor compounds. Structure Three crystal structures for MgSe have been experimentally characterized. The rock-salt structure is considered to be the most stable crystal structure that has been observed in bulk samples of MgSe, and a cubic lattice constant of 0.55 nm was deduced for this structure. Although attempts at preparing pure zincblende MgSe have been unsuccessful, the lattice constant of zincblende MgSe has been extrapolated from epitaxial thin films of zincblende MgxZn1−xSySe1−y and MgxZn1−xSe grown on gallium arsenide, the latter of which was prepared with a high magnesium content (up to 95% Mg, i.e., Mg0.95Zn0.05Se). There is good agreement between these and other extrapolations that the lattice constant of pure zincblende MgSe is 0.59 nm. The wurtzite structure of MgSe has been observed, but it is unstable and slowly converts to the rock-salt structure. NiAs- and FeSi-type crystal structures of MgSe are predicted to form by subjecting the rock-salt crystal structure to extremely high pressures. Electronic properties Both rock-salt and zincblende MgSe are semiconductors. On the basis of different extrapolations, a room temperature bandgap of 4.0 eV has been recommended for zincblende MgSe. A room temperature bandgap of 3.9 eV was determined for rock-salt MgSe. Preparation Thin films of amorphous, wurtzite and rock-salt MgSe have been prepared by vacuum deposition of Mg and Se at cryogenic temperatures, followed by heating and annealing. Compound semiconductor alloys of MgSe, such as MgxZn1−xSe, have been prepared by molecular beam epitaxy. Reactions Samples of pure MgSe and Mg-rich MgxZn1−xSe (x > 0.7) readily react with water and oxidize in air. References Magnesium compounds Selenides II-VI semiconductors
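The extrapolation described above is typically done with Vegard's law, i.e. a linear fit of lattice constant against alloy composition extended to x = 1. The alloy data points in this sketch are illustrative placeholders, not measurements from the cited work; only the endpoint values (ZnSe ≈ 0.567 nm and the extrapolated MgSe ≈ 0.59 nm mentioned above) reflect the text:

```python
# Vegard's-law extrapolation of the zincblende MgSe lattice constant from
# Mg(x)Zn(1-x)Se alloy data. The (x, a) pairs below are hypothetical
# placeholders consistent with a_ZnSe ~ 0.567 nm and a_MgSe ~ 0.590 nm.

def linear_fit(xs, ys):
    """Least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    return m, mean_y - m * mean_x

# Hypothetical (Mg fraction x, lattice constant in nm) pairs on Vegard's line.
x_mg = [0.20, 0.50, 0.80, 0.95]
a_nm = [0.567 + 0.023 * x for x in x_mg]

slope, intercept = linear_fit(x_mg, a_nm)
a_mgse = slope * 1.0 + intercept  # extrapolate to x = 1 (pure MgSe)
print(f"extrapolated a(MgSe) ~ {a_mgse:.3f} nm")  # ~0.590 nm
```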
Magnesium selenide
Chemistry
468
38,960,533
https://en.wikipedia.org/wiki/Psychoactive%20Substances%20Act%202013
The Psychoactive Substances Act 2013 is a law in New Zealand. The purpose of the Act is to regulate the availability of psychoactive substances in New Zealand to protect the health of, and minimise harm to, individuals who use psychoactive substances. The law seeks to make manufacturers test and prove their products are low-risk before they can be sold. Testing is expected to cost manufacturers $1–2 million. There is also a $180,000 application fee. A later addition to the law, Section 4(f), specified that "animals must not be used in trials for the purposes of assessing whether a psychoactive product should be approved." This may mean that, in practice, approval will be difficult or impossible. So far, no manufacturing licenses have been applied for. The Act was brought in as a reaction to widespread concerns over the 2005 deregulation, or decriminalisation, of selling psychoactive substances in New Zealand with the introduction of section 62 in the Misuse of Drugs Amendment Act 2005 and the Misuse of Drugs (Restricted Substances) Regulations 2008. These laws made psychoactive substances such as party pills and legal highs available in New Zealand in a relatively new experimental market aimed at decriminalising the production and sale of recreational drugs. References External links Text of the Psychoactive Substances Act Psychoactive Substances Bill a ‘game-changer’ Government press release New regulatory regime for psychoactive substances Ministry of Health background info Statutes of New Zealand Drug control law 2013 in New Zealand law Drug policy of New Zealand
Psychoactive Substances Act 2013
Chemistry
310
40,514,692
https://en.wikipedia.org/wiki/CyberTel%20Cellular
CyberTel Cellular was an early St. Louis-based cellular telecommunications company. CyberTel's first cellular tower in St. Louis became operational on July 16, 1984. The cellular phone in use at that time was the Motorola DynaTAC 8000X, costing $3,995. In October 1988, CyberTel, which offered paging services under the name BeepCall, acquired Minnesota Communications Corporation to effectively double its paging services, according to David Bayer, president and owner of CyberTel. Acquisition CyberTel was acquired by Ameritech in 1991 for $512 million; Ameritech operated the company under the name CyberTel Cellular and Paging until the fall of 1994, when the name was changed to Ameritech Cellular. References External links Mobile technology Defunct companies based in Missouri
CyberTel Cellular
Technology
171
7,345,405
https://en.wikipedia.org/wiki/Kendall%27s%20notation
In queueing theory, a discipline within the mathematical theory of probability, Kendall's notation (or sometimes Kendall notation) is the standard system used to describe and classify a queueing node. D. G. Kendall proposed describing queueing models using three factors written A/S/c in 1953 where A denotes the time between arrivals to the queue, S the service time distribution and c the number of service channels open at the node. It has since been extended to A/S/c/K/N/D where K is the capacity of the queue, N is the size of the population of jobs to be served, and D is the queueing discipline. When the final three parameters are not specified (e.g. M/M/1 queue), it is assumed K = ∞, N = ∞ and D = FIFO. First example: M/M/1 queue An M/M/1 queue means that the time between arrivals is Markovian (M), i.e. the inter-arrival time follows an exponential distribution of parameter λ. The second M means that the service time is Markovian: it follows an exponential distribution of parameter μ. The last parameter is the number of service channels, which is one (1). Description of the parameters In this section, we describe the parameters A/S/c/K/N/D from left to right. A: The arrival process A code describing the arrival process. The codes used include: M for a Markovian (memoryless) process, i.e. Poisson arrivals with exponentially distributed inter-arrival times; D for deterministic (fixed) inter-arrival times; Ek for an Erlang distribution with k phases; and G (or GI) for a general distribution. S: The service time distribution This gives the distribution of the service time of a customer. Common notations include: M for exponentially distributed (Markovian) service times; D for deterministic service times; Ek for an Erlang distribution with k phases; and G for a general distribution. c: The number of servers The number of service channels (or servers). The M/M/1 queue has a single server and the M/M/c queue c servers. K: The number of places in the queue The capacity of the queue, or the maximum number of customers allowed in the queue. When the number is at this maximum, further arrivals are turned away. If this number is omitted, the capacity is assumed to be unlimited, or infinite. Note: This is sometimes denoted c + K where K is the buffer size, the number of places in the queue above the number of servers c. N: The calling population The size of the calling source. 
The size of the population from which the customers come. A small population will significantly affect the effective arrival rate, because, as more customers are in the system, there are fewer free customers available to arrive into the system. If this number is omitted, the population is assumed to be unlimited, or infinite. D: The queue's discipline The service discipline or priority order in which jobs in the queue, or waiting line, are served: common examples are FIFO (first in, first out), LIFO (last in, first out), SIRO (service in random order), and priority ordering. Note: An alternative notation practice is to record the queue discipline before the population and system capacity, with or without enclosing parentheses. This does not normally cause confusion because the notation is different. References Mathematical notation Single queueing nodes
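The M/M/1 example above has well-known closed-form steady-state results (textbook formulas, not stated in the article itself): utilisation ρ = λ/μ, mean number in system L = ρ/(1 − ρ), and mean sojourn time W = 1/(μ − λ). A minimal Python sketch:

```python
# Closed-form steady-state metrics for an M/M/1 queue: Poisson arrivals
# at rate lam, exponential service at rate mu, a single server.
def mm1_metrics(lam, mu):
    """Return (utilisation, mean number in system, mean sojourn time)."""
    if lam >= mu:
        raise ValueError("queue is unstable unless lam < mu")
    rho = lam / mu        # fraction of time the server is busy
    L = rho / (1 - rho)   # mean number of customers in the system
    W = 1 / (mu - lam)    # mean time a customer spends in the system
    return rho, L, W

rho, L, W = mm1_metrics(lam=2.0, mu=3.0)
print(f"utilisation={rho:.2f}, mean in system={L:.2f}, mean sojourn={W:.2f}")
# utilisation=0.67, mean in system=2.00, mean sojourn=1.00
```

Note that the results satisfy Little's law, L = λW (here 2.0 = 2.0 × 1.0).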
Kendall's notation
Mathematics
593
10,182,656
https://en.wikipedia.org/wiki/14%20Eridani
14 Eridani is a star in the equatorial Eridanus constellation. It has an apparent visual magnitude of 6.143 and is moving closer to the Sun with a radial velocity of around −5 km/s. The measured annual parallax shift is , which provides an estimated distance of about 121 light years. Proper motion studies indicate that this is an astrometric binary. The visible component has a stellar classification of , which indicates it has the spectrum of an F-type main-sequence star with mild underabundances of iron and methylidyne. It is 1.4 billion years old with 1.3 times the mass of the Sun and 1.5 times the Sun's radius. The star is radiating 3.87 times the luminosity of the Sun from its photosphere at an effective temperature of 6,719 K. The system has been detected as a source of X-ray emission. References F-type main-sequence stars Eridanus (constellation) Durchmusterung objects Eridani, 14 020395 015244 0988 Astrometric binaries
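The distance quoted above follows from the standard parallax relation d [pc] = 1 / p [arcsec]. Since the measured parallax value is omitted in the text, the ~27 mas figure below is not a quoted measurement but simply the value implied by the stated distance of about 121 light years:

```python
# Annual-parallax-to-distance conversion of the kind used above.
LY_PER_PARSEC = 3.2616  # light years in one parsec

def distance_ly(parallax_mas):
    """Distance in light years from an annual parallax in milliarcseconds."""
    parsecs = 1000.0 / parallax_mas  # d [pc] = 1 / p [arcsec]
    return parsecs * LY_PER_PARSEC

# ~27 mas is the parallax implied by the quoted ~121 ly distance.
print(f"{distance_ly(27.0):.0f} ly")  # ~121 ly
```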
14 Eridani
Astronomy
229
24,201,740
https://en.wikipedia.org/wiki/C6H5NO3
{{DISPLAYTITLE:C6H5NO3}} The molecular formula C6H5NO3 (molar mass: 139.11 g/mol, exact mass: 139.0269 u) may refer to: 3-Hydroxypicolinic acid 4-Nitrophenol
C6H5NO3
Chemistry
67
20,769,112
https://en.wikipedia.org/wiki/Asymmetric%20membrane%20capsule
The asymmetric membrane capsule is an example of a single core osmotic delivery system, consisting of a drug-containing core surrounded by an asymmetric membrane made with a non-disintegrating polymer (cellulose acetate, ethylcellulose, etc.). References Dosage forms
Asymmetric membrane capsule
Chemistry
65
40,797,274
https://en.wikipedia.org/wiki/Nexus%205
Nexus 5 (code-named Hammerhead) is an Android smartphone sold by Google and manufactured by LG Electronics. It is the fifth generation of the Nexus series, succeeding the Nexus 4. It was unveiled on October 31, 2013 and served as the launch device for Android 4.4 "KitKat", which introduced a refreshed interface, performance improvements, greater Google Now integration, and other changes. Much of the hardware is similar to the LG G2 which was also made by LG and released earlier that year. The Nexus 5 received mostly positive reviews, praising the device's balance of overall performance and cost in comparison to other "flagship" phones, along with the quality of its display and some of the changes introduced by Android 4.4. The Nexus 5 was followed by the Nexus 6 in October 2014, although the Nexus 6 is a higher-end phablet and not a direct successor, with the Nexus 5 and Nexus 6 sold alongside each other for several months. Google ended production of the Nexus 5 in December 2014, but sales of the black Nexus 5 continued until March 11, 2015. Google released the Nexus 5X in September 2015 (alongside the higher-end Nexus 6P), with a similar design and price as the original Nexus 5. History The device was unveiled on October 31, 2013; it was made available for pre-order from Google Play Store the same day, sold in a black color with either 16 or 32 GB of internal storage. Initial pricing was set at $349 for the 16 GB model, and $399 for the 32 GB version. This was much lower than comparable smartphones, which would cost around $649. Google released two additional color options in February 2014, white and red, which had identical pricing and hardware. Google ended production of the Nexus 5 in December 2014, following the release of the Nexus 6. The red and white models were removed from the Google Play Store the same month, while the black model remained available until March 11, 2015. 
Hardware The exterior of the Nexus 5 is made from a polycarbonate shell with similarities to the 2013 Nexus 7, unlike its predecessor, which uses a glass-based construction. Three exterior colors are available: black, white and red. Its hardware shares similarities with the LG G2; it is powered by a 2.26 GHz quad-core Snapdragon 800 processor with 2 GB of RAM, either 16 or 32 GB of internal storage, and a 2300 mAh battery. The Nexus 5 uses a 4.95-inch (marketed as 5-inch) 445 PPI 1080p IPS display, and includes an 8-megapixel rear-facing camera with optical image stabilization (OIS), and a 1.3-megapixel front-facing camera. The Nexus 5 supports LTE networks where available, unlike the Nexus 4, which unofficially supported LTE only on Band 4 via a hidden software option but was not formally approved or marketed for any LTE use. There are two variants of the Nexus 5, with varying support for cellular frequency bands; one is specific to North America (LG-D820), and the other is designed for the rest of the world (LG-D821). Notable new hardware features include two new composite sensors: a step detector and a step counter. These new sensors allow applications to easily track steps when the user is walking, running, or climbing stairs. Both sensors are implemented in hardware for low power consumption. Like its predecessors, the Nexus 5 does not have a microSD card slot, but it features a multi-color LED notification light. There are two grilles on the lower edge of the Nexus 5: one is for the mono speaker and the other is for the microphone. 
The device also shipped with Google Now Launcher, a redesigned home screen which allows users to quickly access Google Now on a dedicated page, and allows voice search to be activated on the home screen with a voice command. Unlike the stock home screen, Google Now Launcher is not a component of Android itself; it is implemented as part of the Google Search application. Until 26 February 2014, when it was released on Google Play Store for selected Android 4.4 devices, Google Now Launcher was exclusively shipped by default on the Nexus 5, and was not enabled in Android 4.4 updates for any other Nexus device. While an update to the Google Search application containing Google Now Launcher (which itself was tweaked to improve compatibility with other devices as well) was publicly released shortly after the Nexus 5's release, the launcher itself could not at the time be enabled without installing a second shim application. Hangouts, which now supports text messaging, is used as the default text messaging application. In December 2013, the Nexus 5 began receiving the Android 4.4.1 update, which introduced HDR+ and fixed auto focus, white balance and other camera issues. HDR+ takes a burst of shots with short exposures, selectively aligning the sharpest shots and averaging them using computational photography techniques. Short exposures avoid blur and blown-out highlights, and averaging multiple shots reduces noise. HDR+ is similar to lucky imaging used in astrophotography. HDR+ is processed on the Qualcomm Hexagon DSP. The update also fixed low speaker volume output in certain applications. The Android 4.4.2 update followed a few days later, providing further bugfixes and security improvements. In early June 2014, the Nexus 5 received the Android 4.4.3 update, which included dozens of bug fixes, while another update in mid-June 2014, Android 4.4.4, included a fix for an OpenSSL man-in-the-middle vulnerability. 
A developer preview of the Android 5.0 "Lollipop" system image was released for the Nexus 5 after the annual Google I/O conference held on June 26, 2014. The release version of Android 5.0 "Lollipop" was made available on November 12, 2014, in the form of factory operating system images. On December 15, 2014, the Android 5.0.1 "Lollipop" update began rolling out to the Nexus 5 with build number LRX22C; the update was listed as "miscellaneous Android updates." In March 2015, the Nexus 5 began receiving the Android 5.1 "Lollipop" update, which addressed performance issues and made other user interface tweaks; however, it was known to introduce certain camera issues. In June 2015, Google made the Android 5.1.1 "Lollipop" update available for the Nexus 5, aiming to fix the bugs that were not fixed by the Android 5.1 update. In May 2015, a developer preview of Android Marshmallow was made available for the Nexus 5. In November 2015, the Nexus 5 started receiving the Android 6.0 "Marshmallow" update across the world, and it became one of the first devices to get the Android 6.0.1 Marshmallow update in December 2015. In August 2016, Google confirmed that the Nexus 5 would not receive an official Android 7.0 Nougat update, meaning that Android 6.0.1 Marshmallow is the last officially supported Android version for the device. The Nexus 5's Snapdragon 800 has sufficient processing power to run Android 7.0 Nougat, as shown by successful tests with the Android N Developer Preview program (indeed the Snapdragon 800 is more powerful than the Snapdragon 410, which does support Nougat), and unofficial custom Nougat ROMs have been created for the Nexus 5. The Nexus 5 can also run other mobile operating systems such as Ubuntu Touch. Reception The Nexus 5 received mostly positive reviews; critics felt that the device provided a notable balance between performance and pricing. 
Although noting nuances with its display, such as its color reproduction and low maximum brightness in comparison to competitors, CNET praised software features such as the new phone dialer interface and Google Now integration on the home screen, but did not believe KitKat provided many major changes over its predecessors. Hangouts as the default text messaging app was criticized for its user interface and for attempting to force the use of a Google service, while the quality of photos taken with the device was described as being "great, but it didn't particularly blow me away." In conclusion, the Nexus 5 was given a 4 out of 5, concluding that "to even compare this $400 phone to those that cost upward of $650 unlocked (like the Samsung Galaxy S4, HTC One, and Apple iPhone 5S) speaks volumes about the Nexus 5's massive appeal and affordability", and that the device "extends the allure of the Nexus brand to anyone simply looking for an excellent yet inexpensive handset." Engadget was similarly positive in its review of the Nexus 5, but noted that the device's LTE support was lacking, as its supported bands are segregated across two model variants, and does not support Verizon Wireless. While praised for its crisp quality, it was noted that the display did not render colors (particularly black and white) as richly as the Galaxy S4, but added that "if you think saturated colors are overrated anyway, you're going to love the display here." While considered an improvement over the Nexus 4, the device's battery life was criticized for not being as good as its competitors, and the quality of its camera was considered to be inconsistent. In conclusion, Engadget argued that "whether or not it was the company's intent, Google is sending a message to smartphone makers that it's possible to make high-quality handsets without costing consumers the proverbial arm and leg. Now we just wait and see if that message will be warmly received." 
Compared to the LG G2, which was released earlier and shares the same manufacturer and much of the same hardware, the Nexus 5 has a lower-quality rear camera and a smaller battery to hit a cheaper price point. However, the Nexus 5 has been touted as a clean Android software alternative with the added advantage of running the latest Android 4.4 "KitKat". DPReview praised Android 4.4.1's HDR+, which brought substantial improvements in speed and prioritized faster shutter speeds with higher ISOs, reducing blur. The computational photography increased dynamic range and reduced noise. However, HDR+ resulted in less detail and more noise in low-light scenes. The Nexus 5 still held up well against newer devices, as it performed faster than the follow-up Nexus 6 released a year later, while the Nexus 6 (at an MSRP of US$650 versus US$350) was also much more expensive. Since the release, a portion of users have reported frequent crashes of the camera app that reportedly prevented use of the camera unless the phone was restarted. It is unknown whether this problem has been fixed. See also Comparison of Google Nexus smartphones References External links LG Electronics smartphones Android (operating system) devices Mobile phones introduced in 2013 Discontinued flagship smartphones Google Nexus Ubuntu Touch devices
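The burst-averaging principle behind HDR+ described above (averaging several aligned short exposures to suppress random sensor noise) can be illustrated with a small simulation. The signal and noise figures are arbitrary values chosen only to show the roughly √N noise reduction, not real sensor data:

```python
# Noise reduction from burst averaging, the principle behind HDR+:
# averaging N aligned short exposures lowers random noise by ~sqrt(N).
import random
import statistics

random.seed(42)
TRUE_SIGNAL = 100.0  # "scene brightness" of one pixel (arbitrary units)
NOISE_SIGMA = 10.0   # per-frame random sensor noise
N_FRAMES = 16        # frames in the burst
N_PIXELS = 5000      # independent trials

def capture():
    """One noisy exposure of the pixel."""
    return TRUE_SIGNAL + random.gauss(0, NOISE_SIGMA)

single = [capture() for _ in range(N_PIXELS)]
burst = [statistics.mean(capture() for _ in range(N_FRAMES))
         for _ in range(N_PIXELS)]

# A 16-frame average should show about 1/4 of the single-frame noise.
print(f"single-frame noise ~ {statistics.stdev(single):.1f}")
print(f"16-frame average noise ~ {statistics.stdev(burst):.1f}")
```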
Nexus 5
Technology
2,350
639,553
https://en.wikipedia.org/wiki/Eug%C3%A8ne%20Charles%20Catalan
Eugène Charles Catalan (30 May 1814 – 14 February 1894) was a French and Belgian mathematician who worked on continued fractions, descriptive geometry, number theory and combinatorics. His notable contributions included discovering a periodic minimal surface in the space ℝ³; stating the famous Catalan's conjecture, which was eventually proved in 2002; and introducing the Catalan numbers to solve a combinatorial problem. Biography Catalan was born in Bruges (now in Belgium, then under Dutch rule even though the Kingdom of the Netherlands had not yet been formally instituted), the only child of a French jeweller by the name of Joseph Catalan, in 1814. In 1825, he traveled to Paris and learned mathematics at École Polytechnique, where he met Joseph Liouville (1833). In December 1834 he was expelled along with most of the students in his year as part of a crackdown by the July Monarchy against republican tendencies among the students. He resumed his studies in January 1835, graduated that summer, and went on to teach at Châlons-sur-Marne. Catalan came back to the École Polytechnique, and, with the help of Liouville, obtained his degree in mathematics in 1841. He went on to Charlemagne College to teach descriptive geometry. He was politically active and strongly left-wing, which led him to participate in the 1848 Revolution; he had an animated career and also sat in France's Chamber of Deputies. Later, in 1849, Catalan was visited at his home by the French police, who were searching for illicit teaching material; however, none was found. The University of Liège appointed him chair of analysis in 1865. In 1879, still in Belgium, he became a journal editor, publishing Paul-Jean Busschop's theory as a footnote after having refused it in 1873, letting Busschop know that it was too empirical. In 1883, he worked for the Belgian Academy of Science in the field of number theory. He died in Liège, Belgium, where he had held a chair. 
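The Catalan numbers mentioned above have the closed form Cn = (1/(n+1))·C(2n, n) and also satisfy the convolution recurrence that arises when counting bracketings. A minimal Python sketch (illustrative, not part of the article):

```python
from math import comb

def catalan(n: int) -> int:
    # Closed form: C_n = binom(2n, n) / (n + 1); the division is always exact.
    return comb(2 * n, n) // (n + 1)

def catalan_rec(n: int) -> int:
    # Convolution recurrence: C_0 = 1, C_m = sum_{i=0}^{m-1} C_i * C_{m-1-i},
    # the form in which the numbers arise from combinatorial dissections.
    c = [1] * (n + 1)
    for m in range(1, n + 1):
        c[m] = sum(c[i] * c[m - 1 - i] for i in range(m))
    return c[n]

print([catalan(n) for n in range(8)])  # [1, 1, 2, 5, 14, 42, 132, 429]
```

Both routes produce the same sequence; the closed form is the one usually associated with Catalan's work.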
Work He worked on continued fractions, descriptive geometry, number theory and combinatorics. He gave his name to a unique surface (a periodic minimal surface in the space ℝ³) that he discovered in 1855. Before that, he had stated the famous Catalan's conjecture, which was published in 1844 and was eventually proved in 2002 by the Romanian mathematician Preda Mihăilescu. He introduced the Catalan numbers to solve a combinatorial problem (although these were actually discovered a century earlier by the Chinese astronomer Minggatu). Selected publications Théorèmes et Problèmes Géométrie élémentaire, Brussels, 2nd edition 1852, 6th edition 1879 Éléments de géométrie, 1843, 2nd printing 1847 Traité élémentaire de géométrie descriptive, 2 volumes 1850, 1852, 3rd edition 1867/1868, 5th edition 1881 Nouveau manuel des aspirants au baccalauréat ès sciences, 1852 (12 editions published) Solutions des problèmes de mathématique et de physique donnés à la Sorbonne dans les compositions du baccalauréat ès sciences, 1855/56 Manuel des candidats à l'École Polytechnique, 2 volumes, 1857–58 Notions d'astronomie, 1860 (6 editions published) Traité élémentaire des séries, 1860 Histoire d'un concours, 1865, 2nd edition 1867 Cours d'analyse de l'université de Liège, 1870, 2nd edition 1880 Intégrales eulériennes ou elliptiques, 1892 See also Catalan pseudoprime Catalan's triangle Catalan–Dickson conjecture Catalan–Mersenne number conjecture Catalan beta function Fermat–Catalan conjecture Fuss–Catalan number References External links Catalan 1814 births 1894 deaths Scientists from Bruges Academic staff of the University of Liège 19th-century Belgian mathematicians Corresponding members of the Saint Petersburg Academy of Sciences Combinatorialists Number theorists 19th-century French mathematicians
Eugène Charles Catalan
Mathematics
790
1,721,284
https://en.wikipedia.org/wiki/Method%20of%20exhaustion
The method of exhaustion is a method of finding the area of a shape by inscribing inside it a sequence of polygons whose areas converge to the area of the containing shape. If the sequence is correctly constructed, the difference in area between the nth polygon and the containing shape will become arbitrarily small as n becomes large. As this difference becomes arbitrarily small, the possible values for the area of the shape are systematically "exhausted" by the lower bound areas successively established by the sequence members. The method of exhaustion typically required a form of proof by contradiction, known as reductio ad absurdum. This amounts to finding an area of a region by first comparing it to the area of a second region, which can be "exhausted" so that its area becomes arbitrarily close to the true area. The proof involves assuming that the true area is greater than the second area, proving that assertion false, assuming it is less than the second area, then proving that assertion false, too. History The idea originated in the late 5th century BC with Antiphon, although it is not entirely clear how well he understood it. The theory was made rigorous a few decades later by Eudoxus of Cnidus, who used it to calculate areas and volumes. It was later reinvented in China by Liu Hui in the 3rd century AD in order to find the area of a circle. The first use of the term was in 1647 by Gregory of Saint Vincent in Opus geometricum quadraturae circuli et sectionum. The method of exhaustion is seen as a precursor to the methods of calculus. The development of analytical geometry and rigorous integral calculus in the 17th–19th centuries subsumed the method of exhaustion so that it is no longer explicitly used to solve problems. An important alternative approach was Cavalieri's principle, also termed the method of indivisibles, which eventually evolved into the infinitesimal calculus of Roberval, Torricelli, Wallis, Leibniz, and others. 
Euclid Euclid used the method of exhaustion to prove the following six propositions in the 12th book of his Elements. Proposition 2: The area of circles is proportional to the square of their diameters. Proposition 5: The volumes of two tetrahedra of the same height are proportional to the areas of their triangular bases. Proposition 10: The volume of a cone is a third of the volume of the corresponding cylinder which has the same base and height. Proposition 11: The volume of a cone (or cylinder) of the same height is proportional to the area of the base. Proposition 12: The volume of a cone (or cylinder) that is similar to another is proportional to the cube of the ratio of the diameters of the bases. Proposition 18: The volume of a sphere is proportional to the cube of its diameter. Archimedes Archimedes used the method of exhaustion as a way to compute the area inside a circle by filling the circle with a sequence of polygons with an increasing number of sides and a corresponding increase in area. The quotients formed by the area of these polygons divided by the square of the circle radius can be made arbitrarily close to π as the number of polygon sides becomes large, proving that the area inside the circle of radius r is πr², π being defined as the ratio of the circumference to the diameter (C/d). He also provided the bounds 3 + 10/71 < π < 3 + 10/70 (giving a range of 1/497) by comparing the perimeters of the circle with the perimeters of the inscribed and circumscribed 96-sided regular polygons. 
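Archimedes' side-doubling procedure can be sketched numerically. Starting from regular hexagons inscribed in and circumscribed about a circle of unit diameter, each doubling replaces the circumscribed perimeter by a harmonic mean and the inscribed perimeter by a geometric mean; this is an algebraic restatement of his successive root extractions, offered here as an illustration rather than his literal method:

```python
from math import sqrt

def archimedes_bounds(doublings: int):
    # Perimeters for a circle of unit diameter:
    # inscribed regular hexagon = 3, circumscribed regular hexagon = 2*sqrt(3).
    a, b = 2 * sqrt(3), 3.0          # a: circumscribed, b: inscribed
    for _ in range(doublings):
        a = 2 * a * b / (a + b)      # harmonic mean -> circumscribed 2n-gon
        b = sqrt(a * b)              # geometric mean -> inscribed 2n-gon
    return b, a                      # lower and upper bounds on pi

lower, upper = archimedes_bounds(4)  # four doublings: 96-sided polygons
print(lower, upper)                  # roughly 3.1410 and 3.1427
```

Four doublings reproduce the 96-gon bounds, consistent with Archimedes' 3 + 10/71 < π < 3 + 10/70.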
Other results he obtained with the method of exhaustion included The area bounded by the intersection of a line and a parabola is 4/3 that of the triangle having the same base and height (the quadrature of the parabola); The area of an ellipse is proportional to a rectangle having sides equal to its major and minor axes; The volume of a sphere is 4 times that of a cone having a base of the same radius and height equal to this radius; The volume of a cylinder having a height equal to its diameter is 3/2 that of a sphere having the same diameter; The area bounded by one spiral rotation and a line is 1/3 that of the circle having a radius equal to the line segment length; Use of the method of exhaustion also led to the successful evaluation of an infinite geometric series (for the first time); See also The Method of Mechanical Theorems The Quadrature of the Parabola Trapezoidal rule Pythagorean Theorem References Volume Euclidean geometry Integral calculus History of mathematics 5th century BC in Greece
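The geometric-series evaluation listed above underlies the quadrature of the parabola, which sums 1 + 1/4 + 1/4² + ⋯ = 4/3. A small numerical check (illustrative only, not from the article):

```python
def partial_sum(k: int) -> float:
    # Sum of the first k+1 terms of 1 + 1/4 + 1/4^2 + ...
    return sum((1 / 4) ** i for i in range(k + 1))

# The remainder after the term (1/4)^k is exactly (1/4)^k / 3, so the
# partial sums "exhaust" the limit 4/3 just as the inscribed triangles
# exhaust the parabolic segment.
for k in (0, 1, 5, 10):
    print(k, partial_sum(k))
```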
Method of exhaustion
Physics,Mathematics
983
72,231,981
https://en.wikipedia.org/wiki/Smart%20Energy
Smart Energy is an international, peer-reviewed open-access multi-disciplinary scientific journal focused on the energy transition to upcoming smart renewable energy systems. The journal was established in 2021 and is published by Elsevier. The editor-in-chief is Brian Vad Mathiesen (Aalborg University). The journal emphasizes that work advancing the UN Sustainable Development Goals is welcomed, specifically "Affordable and clean energy". Abstracting and indexing The journal is abstracted and indexed in Scopus, and the Directory of Open Access Journals. References External links Energy and fuel journals Academic journals established in 2021 English-language journals Elsevier academic journals Creative Commons Attribution-licensed journals Continuous journals
Smart Energy
Environmental_science
142
24,485,417
https://en.wikipedia.org/wiki/Neo-Inositol
The chemical compound neo-inositol is one of the nine stereoisomers of cyclohexane-1,2,3,4,5,6-hexol, the "inositols". Its formula is C6H12O6; the six carbon atoms form a ring, and each of them is bonded to a hydrogen atom and a hydroxyl group (–OH). If the ring is assumed horizontal, three consecutive hydroxyls lie above the respective hydrogens, and the other three lie below them. Like the other stereoisomers, neo-inositol is considered a carbohydrate, specifically a sugar alcohol (to distinguish it from the more familiar ketose and aldose sugars, like glucose). It occurs in nature, but only in small amounts; usually much smaller than those of myo-inositol, the most important stereoisomer. Chemical and physical properties Crystal structure neo-Inositol crystallizes in the triclinic system. The cell parameters are a = 479.9 pm, b = 652.0 pm, c = 650.5 pm, α = 70.61°, β = 69.41°, γ = 73.66°, Z = 1. The cell volume is 0.176 nm³. The ring has the chair conformation with puckering parameter Q = 60.9 pm. Synthesis neo-Inositol can be obtained from para-benzoquinone via conduritol intermediates. Natural occurrence and biological roles Small amounts of neo-inositol can be detected in human urine. See also allo-Inositol cis-Inositol D-chiro-Inositol L-chiro-Inositol epi-Inositol muco-Inositol scyllo-Inositol References Inositol
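The quoted cell volume can be checked against the cell parameters using the standard triclinic formula V = abc·√(1 − cos²α − cos²β − cos²γ + 2·cosα·cosβ·cosγ). A quick sketch (not part of the article):

```python
from math import cos, radians, sqrt

def triclinic_volume(a, b, c, alpha, beta, gamma):
    # Standard triclinic unit-cell volume; lengths in any unit, angles in degrees.
    ca, cb, cg = (cos(radians(x)) for x in (alpha, beta, gamma))
    return a * b * c * sqrt(1 - ca**2 - cb**2 - cg**2 + 2 * ca * cb * cg)

# Cell parameters quoted above (lengths in pm, angles in degrees)
v_pm3 = triclinic_volume(479.9, 652.0, 650.5, 70.61, 69.41, 73.66)
print(v_pm3 / 1e9)  # in nm^3; close to the stated 0.176 nm^3
```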
Neo-Inositol
Chemistry
403
320,873
https://en.wikipedia.org/wiki/Rocketdyne
Rocketdyne is an American rocket engine design and production company headquartered in Canoga Park, in the western San Fernando Valley of suburban Los Angeles, in southern California. Rocketdyne was founded as a division of North American Aviation in 1955 and was later part of Rockwell International from 1967 until 1996 and Boeing from 1996 to 2005. In 2005, Boeing sold the Rocketdyne division to United Technologies Corporation, becoming Pratt & Whitney Rocketdyne as part of Pratt & Whitney. In 2013, Rocketdyne was sold to GenCorp, Inc., which merged it with Aerojet to form Aerojet Rocketdyne. History After World War II, North American Aviation (NAA) was contracted by the Defense Department to study the German V-2 missile and adapt its engine to Society of Automotive Engineers (SAE) measurements and U.S. construction details. NAA also used the same general concept of separate burner/injectors from the V-2 engine design to build a much larger engine for the Navaho missile project (1946–1958). This work was considered unimportant in the 1940s and funded at a very low level, but the start of the Korean War in 1950 changed priorities. NAA had begun to use the Santa Susana Field Laboratory (SSFL) high in the Simi Hills around 1947 for the Navaho's rocket engine testing. At that time the site was much further away from major populated areas than the early test sites NAA had been using within Los Angeles. Navaho ran into continual difficulties and was canceled in 1958 when the Chrysler Corporation Missile Division's Redstone missile design (essentially an improved V-2) had caught up in development. However the Rocketdyne engine, known as the A-5 or NAA75-110, proved to be considerably more reliable than the one developed for Redstone, so the missile was redesigned with the A-5 even though the resulting missile had much shorter range. 
As the missile entered production, NAA spun off Rocketdyne in 1955 as a separate division, and built its new plant in the then small Los Angeles suburb of Canoga Park, in the San Fernando Valley near and below its Santa Susana Field Laboratory. In 1967, NAA, with its Rocketdyne and Atomics International divisions, merged with the Rockwell Corporation to form North American Rockwell, becoming in 1973 Rockwell International. Thor, Delta, Atlas Rocketdyne's next major development was its first all-new design, the S-3D, which had been developed in parallel to the V-2 derived A series. The S-3 was used on the Army's Jupiter missile design, essentially a development of the Redstone, and was later selected for the competitor Air Force Thor missile. An even larger design, the LR89/LR105, was used on the Atlas missile. The Thor had a short military career, but it was used as a satellite launcher through the 1950s and 60s in a number of different versions. One, Thor Delta, became the baseline for the current Delta series of space launchers, although since the late 1960s the Delta has had almost nothing in common with the Thor. Although the original S-3 engine was used on some Delta versions, most use its updated RS-27 design, originally developed as a single engine to replace the three-engine cluster on the Atlas. The Atlas also had a short military career as a deterrent weapon, but the Atlas rocket family descended from it became an important orbital launcher for many decades, both for the Project Mercury crewed spacecraft, and in the much-employed Atlas-Agena and Atlas-Centaur rockets. The Atlas V is still in manufacture and use. NASA Rocketdyne also became the major supplier for NASA's development efforts, supplying all of the major engines for the Saturn rocket, and potentially, the huge Nova rocket designs. Rocketdyne's H-1 engine was used by the Saturn I booster main stage. 
Five F-1 engines powered the Saturn V's S-IC first stage, while five J-2 engines powered its S-II second stage, and one J-2 the S-IVB third stage. By 1965, Rocketdyne built the vast majority of United States rocket engines, excepting those of the Titan rocket (built by Aerojet), and its payroll had grown to 65,000. This sort of growth appeared to be destined to continue in the 1970s when Rocketdyne won the contract for the RS-25 Space Shuttle Main Engine (SSME), but the rapid downturn in other military and civilian contracts led to downsizing of the company. North American Aviation, largely a spacecraft manufacturer tied almost entirely to the Space Shuttle, merged with the Rockwell Corporation in 1967 to form the North American Rockwell company, which became Rockwell International in 1973, with Rocketdyne as a major division. Downsizing During continued downsizing in the 1980s and 1990s, Rockwell International shed several parts of the former North American Rockwell corporation. The aerospace entities of Rockwell International, including the former NAA and Rocketdyne, were sold to Boeing in 1996. Rocketdyne became part of Boeing's Defense division. In February 2005, Boeing reached an agreement to sell what was by then referred to as "Rocketdyne Propulsion & Power" to Pratt & Whitney of United Technologies Corporation. The transaction was completed on August 2, 2005. Boeing retained ownership of Rocketdyne's Santa Susana Field Lab. GenCorp, Inc. purchased Pratt & Whitney Rocketdyne in 2013 from United Technologies Corporation, and merged it with Aerojet to form Aerojet Rocketdyne. Facilities and operations Canoga Park, California Rocketdyne maintained division headquarters and rocket engine manufacturing facilities at Canoga Park from 1955 until 2014. North American Aviation's rocket development activities began with engine tests near Los Angeles Airport. 
In 1948, NAA began testing liquid rocket engines within the Simi Hills, which would later become the Santa Susana Field Laboratory. The company sought a location for a manufacturing plant near the Simi Hills testing site. In 1954, North American Aviation purchased 56 acres of land within the current Warner Center area, then deeded the property to the Air Force. The Air Force, in turn, designated the site Air Force Plant No. 56 and contracted with Rocketdyne to build and operate the facility. NAA completed construction of the main manufacturing building and designated Rocketdyne as a new company division in November 1955. Rocketdyne's success resulted in the addition of buildings within a growing footprint. At its peak, the Rocketdyne Canoga facility comprised some 27 different buildings over 119 acres of land, including over one million square feet of manufacturing area plus 516,000 square feet of office space. The Canoga plant grew into areas both east and southeast of the original location. In 1960, Rocketdyne opened a headquarters building at the southeast corner of Victory Boulevard and Canoga Avenue. A pedestrian tunnel underneath Victory Boulevard east of Canoga Avenue provided access between buildings to the south (including the headquarters) and those located to the north of the street. (The tunnel was removed in 1973.) The Canoga plant shrank over time via piecemeal property sales and building demolitions into the 2000s. With the completion of the Apollo program in 1969, Rocketdyne ended the leases of several facilities and returned the headquarters offices to the Canoga Main building. In 1973, Rocketdyne repurchased the Air Force Plant No. 56 property, thereby ending the government designation. The Space Shuttle program ended in 2011, and further reductions followed. 
Pratt and Whitney retained ownership of the Canoga property when Rocketdyne was sold to Aerojet in 2013; the remaining property measured roughly 47 acres with buildings and structures comprising a total of 770,000 square feet. Rocketdyne played a key role in the United States space program and the development of propulsion systems. Ten years after being established, the Canoga plant produced the vast majority of the United States' liquid rocket engines (except those of the Titan rocket, which were built by Aerojet). Through the end of the twentieth century, Rocketdyne engines powered the Saturn program and every major space program in the United States. Six specific periods of liquid rocket engine development and manufacturing programs took place at the Canoga plant: Atlas (1954-late 1960s), Thor (1961-1975), Jupiter (1955-1962), Saturn (1961-1975), Apollo (1961-1972), Space Shuttle (1981-2011). Key rocket engine technologies were advanced at the Rocketdyne Canoga plant: gimbaling of rocket engines, introduction of engine injector baffle plates for improved combustion stability, tubular regenerative cooling, the "stage and a half" engine configuration first used on Atlas, thrust chamber ignition using pyrophoric chemicals, and electrically controlled starting sequences. Aerojet Rocketdyne moved its office and manufacturing operations to the DeSoto campus in 2014. Demolition and site clearing of the former Rocketdyne facility in Canoga Park commenced in August 2016. As of February 2019, the future land use of the site had not been announced. McGregor, Texas Rocketdyne's Solid Propulsion Operations business unit was engaged in the development, testing and production of solid rocket engines at McGregor, Texas for nearly twenty years. The Rocket Fuels Division of Phillips Petroleum Company began using the former Bluebonnet Ordnance Plant in 1952. In 1958, Phillips and Rocketdyne entered a partnership to form Astrodyne Incorporated. 
In 1959, Rocketdyne purchased full ownership of the company and renamed it Solid Propulsion Operations (later designated the Solid Rocket Division). The purchase led Rocketdyne to invest in facilities and research at McGregor, diversifying into other propellant types and rocket engines. Notably, Rocketdyne installed a facility capable of testing engines having up to three million pounds of thrust. The Solid Propulsion Operations initially used ammonium nitrate-based propellants in the manufacture of gas generators used to start aircraft jet engines, turbopumps of the Rocketdyne H-1 rocket engine, and Jet Assisted Take Off (JATO) rocket engines. Ullage motors were developed for the Saturn V space vehicle. The group also built solid propellant boosters providing for the zero-length launching of North American F-100 Super Sabre and Lockheed F-104 Starfighter aircraft. The motor provided a takeoff thrust of 130,000 lbf for 4 seconds, accelerating the aircraft to 275 miles per hour at 4 g before separating and dropping away from the jet. In 1959, the group began using ammonium perchlorate oxidizer combined with carboxyl-terminated polybutadiene (CTPB) binder to produce solid propellants marketed under the trade name "Flexadyne." For the next nineteen years, Rocketdyne used the formulation in the production of solid rocket motors for three major missile systems: the AIM-7 Sparrow III, AGM-45 Shrike, and the AIM-54 Phoenix. Rocketdyne transferred operation of the McGregor plant to Hercules Inc. in 1978. A portion of the former Bluebonnet Ordnance Plant is now used by SpaceX as their Rocket Development and Test Facility. Neosho, Missouri A rocket engine manufacturing plant was operated by Rocketdyne over a twelve-year period at Neosho, Missouri. The plant was constructed by the U.S. Air Force within a 2,000-acre portion of Fort Crowder, a decommissioned World War II training base. 
The Rocketdyne division of North American Aviation operated the site, employing approximately 1,250 workers beginning in 1956. The plant primarily produced the MA-5 booster, sustainer and vernier rocket engines, H-1 engines and components for the F-1 and J-2 rocket engines. The P4-1 (a.k.a. LR64) engine was also manufactured for the AQM-37A target drone. The engines and components were evaluated at an on-site test area located approximately one mile from the plant. Rocketdyne closed the plant in 1968. The plant has been used by several different companies for the refurbishment of jet aircraft engines. The citizens of Neosho have placed a commemorative monument dedicated to the men and women of Rocketdyne Neosho "whose tireless efforts and relentless pursuit of quality resulted in the world's finest liquid rocket engines." Nevada Field Laboratory Rocketdyne established and operated a 120,000-acre rocket engine test and development facility near Reno, Nevada from 1962 until 1970. The Nevada Field Laboratory had three active open-air test facilities and two administrative areas. The test facilities were used for the Gemini and Apollo space programs, the annular aerospike engine and the early (proposal-stage) development of the Space Shuttle main engine. Power generation In addition to its primary business of building rocket engines, Rocketdyne has developed power generation and control systems. These included early nuclear power generation experiments, radioisotope thermoelectric generators (RTG), and solar power equipment, including the main power system for the International Space Station. In the Boeing sale to Pratt & Whitney, the Power Systems division of Rocketdyne was transferred to Hamilton Sundstrand, another subsidiary of United Technologies Corporation. 
List of engines Some of the engines developed by Rocketdyne are: Rocketdyne A1 to A6 (LOX/Alcohol) Used on Redstone Rocketdyne A7 (LOX/Alcohol) Used on Jupiter-C Rocketdyne 16NS-1,000 Rocketdyne Kiwi Nuclear rocket engine Rocketdyne M-34 Rocketdyne MA-2 Rocketdyne MA-3 Rocketdyne Megaboom modular sled rocket Rocketdyne P Rocketdyne LR64 Rocketdyne LR70 Rocketdyne LR79 family: XLR83-NA-1 - Navaho G-26 XLR89-1 - Atlas A LR79-7 - Thor, Delta, Thor-Able, Thor-Agena A, Thor Agena B, Thor Agena D, Thor-Burner S-3D - Jupiter XLR89-1 - Atlas A, B, C XLR71-NA-1 - Navaho II B-2C - Atlas A XLR89-5 - Atlas D S-3 - Juno II, Saturn A-2 MB-3-1 - Delta A, B, C, Thor Ablestar LR89-5 - Atlas E, F H-1 - Saturn I/IB MB-3 Press Mod - Sea Horse LR89-7 - Atlas LV-3C, Atlas Agena, Atlas Centaur, Atlas F/Agena D, Atlas H, Atlas G, Atlas I MB-3-3 - H-I RZ.2 - Europa H-1c - Saturn IB-A, IB-B H-1b - Saturn B-1, Saturn A-2, Saturn IB, Saturn IB-C, Saturn IB-CE, Saturn IB-D, Saturn INT-11, Saturn INT-12, Saturn INT-13, Saturn INT-14, Saturn INT-15. RS-27 - N-I, N-II, Delta 1000, Delta 4000, Delta 5000, Delta 2000, Delta 3000 MB-3-J - N RS-27A - Delta 6925, Delta 6920-8, Delta 6925-8, Delta 6920-10, Delta 8930 RS-27C - Barbarian MDD, Delta 7925 RS-56-OBA - Atlas II, IIA, IIAS Rocketdyne LR-101 Vernier engine used by Atlas, Thor and Delta Rocketdyne LR105 family: S-4 - Super-Jupiter XLR105-5 - Atlas Able, Atlas B, Atlas C, Atlas LV-3C, Atlas D, Atlas-Agena, Atlas LV-3B LR105-3 LR105-5 - Atlas LV-3C, Atlas E, Atlas Agena B, Atlas F, Atlas Agena D, Atlas Centaur D, Atlas SLV-3 LR105-7 - Atlas Agena D, Atlas F/Agena D, Atlas H, Atlas G, Atlas I RS-56-OSA - Atlas II, IIA, IIAS Rocketdyne Aeolus Rocketdyne XRS-2200, linear aerospike engine, tested for X-33 Rocketdyne RS-2200, linear aerospike engine, intended for Venturestar Rocketdyne E-1 (RP-1/LOX) Backup design for the Titan I Rocketdyne F-1 (RP-1/LOX) Used by the Saturn V. 
Rocketdyne H-1 (RP-1/LOX) Used by the Saturn I and IB Rocketdyne J-2 (LH2/LOX) Used by both the Saturn V and Saturn IB. Rocketdyne RS-25 Space Shuttle Main Engine (SSME) (LH2/LOX) The main engine for the Space Shuttle, also used on the Space Launch System Rocketdyne RS-27A (RP-1/LOX) Used by the Delta II/III and Atlas ICBM Rocketdyne RS-56 (RP-1/LOX) Used by the Atlas II first stage Rocketdyne RS-68 (LH2/LOX) Used by the Delta IV first stage Rocketdyne XLR46-NA-2, intended for the North American NA-247 interceptor proposal Gallery See also Rocketdyne engines Aerojet Rocketdyne Pratt & Whitney Rocketdyne Atomics International Division Santa Susana Field Laboratory References External links Rocketdyne internet archives (unofficial) GenCorp, Inc.: Rocketdyne Acquisition presentation 01 Rocketry Aerospace companies of the United States Former defense companies of the United States Technology companies based in Greater Los Angeles Manufacturing companies based in Los Angeles Canoga Park, Los Angeles Simi Hills North American Aviation Boeing mergers and acquisitions Aerojet Rocketdyne Holdings United Technologies American companies established in 1955 Manufacturing companies established in 1955 Technology companies established in 1955 Manufacturing companies disestablished in 2005 Technology companies disestablished in 2005 1955 establishments in California 2005 disestablishments in California Defunct manufacturing companies based in Greater Los Angeles History of the San Fernando Valley 1967 mergers and acquisitions 1996 mergers and acquisitions 2005 mergers and acquisitions
Rocketdyne
Engineering
3,784
30,268,552
https://en.wikipedia.org/wiki/Society%20of%20Biological%20Inorganic%20Chemistry
The Society of Biological Inorganic Chemistry is a learned society established to advance research and education in the field of biological inorganic chemistry. It holds training courses, workshops and conferences to facilitate exchange of information between scientists involved in the research and teaching of biological inorganic chemistry. Its official journal is the Journal of Biological Inorganic Chemistry. The society was founded in 1995, following discussions within the Steering Committee of the European Science Foundation program "The Chemistry of Metals in Biological Systems". The first president was C. David Garner (1995–1998). Later presidents were Elizabeth C. Theil (1998–2000), Alfred X. Trautwein (2000–2002), Harry B. Gray (2002–2004), Fraser Armstrong (2004–2006), and Jose J. G. Moura (2010–2012). External links Scientific organizations established in 1995 Chemistry societies
Society of Biological Inorganic Chemistry
Chemistry
177
60,546
https://en.wikipedia.org/wiki/Unique%20factorization%20domain
In mathematics, a unique factorization domain (UFD) (also sometimes called a factorial ring following the terminology of Bourbaki) is a ring in which a statement analogous to the fundamental theorem of arithmetic holds. Specifically, a UFD is an integral domain (a nontrivial commutative ring in which the product of any two non-zero elements is non-zero) in which every non-zero non-unit element can be written as a product of irreducible elements, uniquely up to order and units. Important examples of UFDs are the integers and polynomial rings in one or more variables with coefficients coming from the integers or from a field. Unique factorization domains appear in the following chain of class inclusions: commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ Euclidean domains ⊃ fields. Definition Formally, a unique factorization domain is defined to be an integral domain R in which every non-zero element x of R which is not a unit can be written as a finite product of irreducible elements pi of R: x = p1 p2 ⋅⋅⋅ pn with n ≥ 1, and this representation is unique in the following sense: if q1, ..., qm are irreducible elements of R such that x = q1 q2 ⋅⋅⋅ qm with m ≥ 1, then m = n, and there exists a bijective map φ : {1, ..., n} → {1, ..., n} such that pi is associated to qφ(i) for i = 1, ..., n. Examples Most rings familiar from elementary mathematics are UFDs: All principal ideal domains, hence all Euclidean domains, are UFDs. In particular, the integers (also see Fundamental theorem of arithmetic), the Gaussian integers and the Eisenstein integers are UFDs. If R is a UFD, then so is R[X], the ring of polynomials with coefficients in R. Unless R is a field, R[X] is not a principal ideal domain. By induction, a polynomial ring in any number of variables over any UFD (and in particular over a field or over the integers) is a UFD. The formal power series ring K[[X1, ..., Xn]] over a field K (or more generally over a regular UFD such as a PID) is a UFD. On the other hand, the formal power series ring over a UFD need not be a UFD, even if the UFD is local. 
For example, if R is the localization of k[x, y, z]/(x² + y³ + z⁷) at the prime ideal (x, y, z), then R is a local ring that is a UFD, but the formal power series ring R[[X]] over R is not a UFD. The Auslander–Buchsbaum theorem states that every regular local ring is a UFD. The ring ℤ[e^(2πi/n)] is a UFD for all integers 1 ≤ n ≤ 22, but not for n = 23. Mori showed that if the completion of a Zariski ring, such as a Noetherian local ring, is a UFD, then the ring is a UFD. The converse of this is not true: there are Noetherian local rings that are UFDs but whose completions are not. The question of when this happens is rather subtle: for example, for the localization of k[x, y, z]/(x² + y³ + z⁷) at the prime ideal (x, y, z), both the local ring and its completion are UFDs, but in the apparently similar example of the localization of k[x, y, z]/(x² + y³ + z⁵) at the prime ideal (x, y, z) the local ring is a UFD but its completion is not. Let k be a field of any characteristic other than 2. Klein and Nagata showed that the ring k[X1, ..., Xn]/(Q) is a UFD whenever Q is a nonsingular quadratic form in the Xs and n is at least 5. When n ≤ 4, the ring need not be a UFD. For example, k[X, Y, Z, W]/(XY − ZW) is not a UFD, because the element XY equals the element ZW, so that XY and ZW are two different factorizations of the same element into irreducibles. The ring is a UFD, but the ring is not. On the other hand, the ring is not a UFD, but the ring is. Similarly the coordinate ring ℝ[X, Y, Z]/(X² + Y² + Z² − 1) of the 2-dimensional real sphere is a UFD, but the coordinate ring of the complex sphere is not. Suppose that the variables Xi are given weights wi, and F(X1, ..., Xn) is a homogeneous polynomial of weight w. Then if c is coprime to w and R is a UFD and either every finitely generated projective module over R is free or c is 1 mod w, the ring R[X1, ..., Xn, Z]/(Z^c − F(X1, ..., Xn)) is a UFD. Non-examples The quadratic integer ring ℤ[√−5] of all complex numbers of the form a + b√−5, where a and b are integers, is not a UFD because 6 factors as both 2×3 and as (1 + √−5)(1 − √−5). These truly are different factorizations, because the only units in this ring are 1 and −1; thus, none of 2, 3, 1 + √−5, and 1 − √−5 are associate. 
It is not hard to show that all four factors are irreducible as well, though this may not be obvious. See also Algebraic integer. For a square-free positive integer d, the ring of integers of Q(√−d) will fail to be a UFD unless d is a Heegner number. The ring C[[z]] of formal power series over the complex numbers is a UFD, but the subring of those that converge everywhere, in other words the ring of entire functions in a single complex variable, is not a UFD, since there exist entire functions with an infinity of zeros, and thus an infinity of irreducible factors, while a UFD factorization must be finite, e.g.: sin(πz) = πz (1 − z²/1²)(1 − z²/2²)(1 − z²/3²) ⋅⋅⋅ Properties Some concepts defined for integers can be generalized to UFDs: In UFDs, every irreducible element is prime. (In any integral domain, every prime element is irreducible, but the converse does not always hold. For instance, the element z in K[x, y, z]/(z² − xy) is irreducible, but not prime: it divides xy = z², but it divides neither x nor y.) Note that this has a partial converse: a domain satisfying the ACCP is a UFD if and only if every irreducible element is prime. Any two elements of a UFD have a greatest common divisor and a least common multiple. Here, a greatest common divisor of a and b is an element d that divides both a and b, and such that every other common divisor of a and b divides d. All greatest common divisors of a and b are associated. Any UFD is integrally closed. In other words, if R is a UFD with quotient field K, and if an element k in K is a root of a monic polynomial with coefficients in R, then k is an element of R. Let S be a multiplicatively closed subset of a UFD A. Then the localization S−1A is a UFD. A partial converse to this also holds; see below. Equivalent conditions for a ring to be a UFD A Noetherian integral domain is a UFD if and only if every height 1 prime ideal is principal. Also, a Dedekind domain is a UFD if and only if its ideal class group is trivial. In this case, it is in fact a principal ideal domain.
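The quadratic-integer non-example above, 6 = 2×3 = (1 + √−5)(1 − √−5) in Z[√−5], can be checked with a short computation (a sketch, not from the article), using the multiplicative norm N(a + b√−5) = a² + 5b²:

```python
# Checking the Z[√-5] non-example numerically: represent a + b√-5 as (a, b).
# The norm N(a + b√-5) = a^2 + 5b^2 is multiplicative, so irreducibility can be
# read off from norms: a proper factor of an element of norm 4, 6 or 9 would
# need norm 2 or 3, and a^2 + 5b^2 never equals 2 or 3 (b != 0 forces norm >= 5,
# b == 0 forces a perfect square).

def norm(a, b):
    """Field norm of a + b*sqrt(-5)."""
    return a * a + 5 * b * b

def mul(x, y):
    """(a + b√-5)(c + d√-5) = (ac - 5bd) + (ad + bc)√-5."""
    (a, b), (c, d) = x, y
    return (a * c - 5 * b * d, a * d + b * c)

# 6 = 2 * 3 and 6 = (1 + √-5)(1 - √-5):
assert mul((2, 0), (3, 0)) == (6, 0)
assert mul((1, 1), (1, -1)) == (6, 0)

# The four factors have norms 4, 9, 6, 6; none splits into norm-2/3 pieces,
# so all four are irreducible, and the two factorizations are genuinely different.
print([norm(a, b) for (a, b) in [(2, 0), (3, 0), (1, 1), (1, -1)]])  # [4, 9, 6, 6]
```

The norm argument is the standard way to verify irreducibility here, since it reduces a question about ring elements to ordinary integer arithmetic.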
In general, for an integral domain A, the following conditions are equivalent: A is a UFD. Every nonzero prime ideal of A contains a prime element. A satisfies the ascending chain condition on principal ideals (ACCP), and the localization S−1A is a UFD, where S is a multiplicatively closed subset of A generated by prime elements. (Nagata criterion) A satisfies ACCP and every irreducible is prime. A is atomic and every irreducible is prime. A is a GCD domain satisfying ACCP. A is a Schreier domain, and atomic. A is a pre-Schreier domain and atomic. A has a divisor theory in which every divisor is principal. A is a Krull domain in which every divisorial ideal is principal (in fact, this is the definition of UFD in Bourbaki). A is a Krull domain and every prime ideal of height 1 is principal. In practice, (2) and (3) are the most useful conditions to check. For example, it follows immediately from (2) that a PID is a UFD, since every prime ideal is generated by a prime element in a PID. For another example, consider a Noetherian integral domain in which every height one prime ideal is principal. Since every prime ideal has finite height, it contains a height one prime ideal (induction on height) that is principal. By (2), the ring is a UFD. See also Parafactorial local ring Noncommutative unique factorization domain Citations References Ring theory Algebraic number theory Factorization
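In the prototypical UFD Z, the greatest common divisor and least common multiple described under Properties can be read directly off the unique prime factorizations, by taking the minimum and maximum of the exponents. A minimal illustration (not from the article):

```python
# Illustration: in the UFD Z, gcd and lcm come from taking the min/max of the
# exponents in the unique prime factorizations.
from collections import Counter

def factorize(n):
    """Trial-division prime factorization of n >= 2; returns {prime: exponent}."""
    f, p = Counter(), 2
    while p * p <= n:
        while n % p == 0:
            f[p] += 1
            n //= p
        p += 1
    if n > 1:
        f[n] += 1
    return f

def gcd_lcm(a, b):
    """gcd uses min of exponents, lcm uses max, over all primes appearing."""
    fa, fb = factorize(a), factorize(b)
    g = l = 1
    for p in set(fa) | set(fb):
        g *= p ** min(fa[p], fb[p])
        l *= p ** max(fa[p], fb[p])
    return g, l

# 360 = 2^3 * 3^2 * 5 and 84 = 2^2 * 3 * 7:
print(gcd_lcm(360, 84))  # (12, 2520)
```

In a general UFD the same min/max-of-exponents recipe works, with the product of primes replaced by a product of pairwise non-associate irreducibles times a unit.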
Unique factorization domain
Mathematics
1,818
43,511,798
https://en.wikipedia.org/wiki/Dan%20Meredith
Dan Meredith, also known as Dan Blah, is an Internet freedom supporter, journalist, technologist, and media activist. He is currently chief technologist at Reset, a privately funded non-profit funding organization. He was a founding Director of the Open Technology Fund, a U.S. Government-funded program created in 2012 at Radio Free Asia to support global Internet freedom, privacy-enhancing technologies, and Internet censorship circumvention technologies. Meredith joined Al Jazeera's Transparency Unit, led by Clayton Swisher, in 2011, where he increased communication security between investigative field journalists and their sources. He was an early member of the Open Technology Institute, led by Sascha Meinrath, which he joined in 2009. While at OTI, Meredith was involved with the "Internet in a Suitcase" project, a U.S. Department of State-funded effort to create ad hoc mesh wireless technologies; collaborated with Philadelphia community organizers to secure US$11.8 million from the federal Broadband Technology Opportunities Program; and worked on network neutrality court cases Hart v. Comcast and Comcast Corp. v. FCC with Robb Topolski, who discovered Comcast blocking BitTorrent traffic in 2007. Meredith was a co-founder and senior network engineer of the CUWiN Foundation, a non-profit launched in 2000 that aimed to develop "decentralized, community-owned networks that foster democratic cultures and local content". He was an active Indymedia volunteer throughout the mid-2000s at the Champaign-Urbana Independent Media Center (UCIMC) and its low-power FM radio station, Radio Free Urbana WRFU-LP. Meredith joined the Linux Foundation's Core Infrastructure Initiative as an inaugural appointee to its Advisory Board in 2014. References People in information technology Year of birth missing (living people) Living people
Dan Meredith
Technology
362
12,044,574
https://en.wikipedia.org/wiki/FYVE%2C%20RhoGEF%20and%20PH%20domain%20containing
FYVE, RhoGEF and PH domain containing (FGD) is a gene family consisting of: FGD1 FGD2 FGD3 FGD4 Type 1 is associated with Aarskog-Scott syndrome. See also Guanine nucleotide exchange factor RhoGEF References External links Gene families
FYVE, RhoGEF and PH domain containing
Chemistry,Biology
71
40,448,087
https://en.wikipedia.org/wiki/Alnylam%20Pharmaceuticals
Alnylam Pharmaceuticals, Inc. is an American biopharmaceutical company focused on the discovery, development and commercialization of RNA interference (RNAi) therapeutics for genetically defined diseases. The company was founded in 2002 and is headquartered in Cambridge, Massachusetts. In 2016, Forbes included the company on its "100 Most Innovative Growth Companies" list. History The company is a spin-off from the Max Planck Institute for Biophysical Chemistry. In 2002, Alnylam was founded by scientists Phillip Sharp, Paul Schimmel, David Bartel, Thomas Tuschl, and Phillip Zamore, and by investors Christoph Westphal and John Kennedy Clarke; John Maraganore was the founding CEO. The company was named after Alnilam, a star in Orion's belt. The spelling was modified to make it unique. In 2003, the firm merged with the German pharmaceutical company Ribopharma AG. The newly formed company also received $24.6 million in funding from private-equity firms. On February 27, 2004, Alnylam Pharmaceuticals filed for an IPO. The company raised $26 million and began trading as ALNY on the Nasdaq stock exchange. In 2005, the company partnered with Medtronic to develop drug-device combinations to treat neurodegenerative disorders, and in 2006 with Biogen Idec to develop treatments for progressive multifocal leukoencephalopathy. In 2007, it entered into a nonexclusive alliance with Hoffmann-La Roche, in which Alnylam received $331 million in exchange for access to its technology platform. It also partnered with Ionis Pharmaceuticals to found the company Regulus Therapeutics, focused on microRNA therapeutics. In 2009, the company formed alliances with Cubist Pharmaceuticals and Kyowa Hakko Kirin to market a drug targeted at respiratory syncytial virus. In 2010, it expanded its previous collaboration with Medtronic to include the CHDI Foundation in its Huntington's disease-focused research.
In 2011, it partnered with GlaxoSmithKline to develop RNAi technology enhancing vaccine production. The company entered into a 10-year alliance with Monsanto in 2012 to develop biotech solutions for the farming industry by developing natural molecules for crop protection. In 2012, it formed a partnership with Sanofi Genzyme to develop a treatment for transthyretin-mediated amyloidosis, a hereditary disease, in Asia. In February 2013, it formed a partnership with The Medicines Company to develop a drug to treat a genetic form of high cholesterol. In July 2013, during a Phase I trial, Alnylam demonstrated statistically significant reduction of a protein called transthyretin, or TTR, and demonstrated human efficacy with intravenous and subcutaneous modes of administration. In 2014, Sanofi Genzyme acquired a 12 percent stake in Alnylam and increased its rights to several of the company's drugs for $700 million. In a separate transaction Alnylam announced that it had purchased Merck & Co.'s Sirna Therapeutics for $25 million cash and $150 million in stock. In 2015, the company had $41 million in revenue and a market cap of $5.2 billion. In 2016, the company purchased land in Norton, Massachusetts to build a manufacturing facility. In October 2016 the Phase III clinical trial of the company's lead product, revusiran, was halted due to increased deaths in the drug arm of the trial, and the company said it was terminating development of the compound. On October 10, 2018, Alnylam appointed Margaret Hamburg to its board of directors. Previously, from May 2009 to April 2015, she had served as Commissioner of the U.S. Food and Drug Administration (FDA). In February 2020, Alnylam appointed former Sanofi CEO Olivier Brandicourt to its board of directors. In 2021, it was announced that Maraganore would step down as CEO, to be succeeded by the company's chief operating officer, Yvonne Greenstreet, on January 1, 2022.
In December 2021, Alnylam submitted a clinical trial authorisation (CTA) application to the Medicines and Healthcare products Regulatory Agency in the United Kingdom to initiate a Phase 1 study of ALN-APP, an investigational RNAi therapeutic targeting amyloid precursor protein (APP) for the treatment of Alzheimer's disease and cerebral amyloid angiopathy. On December 22, 2021, Novartis announced that the US Food and Drug Administration (FDA) approved Leqvio (inclisiran), a small interfering RNA (siRNA) therapy to lower low-density lipoprotein cholesterol. Leqvio is indicated in the United States as an adjunct to diet and maximally tolerated statin therapy for the treatment of adults with clinical atherosclerotic cardiovascular disease (ASCVD) or heterozygous familial hypercholesterolemia (HeFH) who require additional lowering of LDL-C. The effect of Leqvio on cardiovascular morbidity and mortality is being explored in clinical trials currently underway. Novartis obtained global rights to develop, manufacture and commercialize Leqvio under a license and collaboration agreement with Alnylam Pharmaceuticals. Alnylam is not yet profitable and continues to post losses; its GAAP operating loss was around $650 million in late 2020. Alnylam expects to become profitable in 2022 or 2023. In July 2023, Roche partnered with Alnylam Pharmaceuticals in a deal worth $2.8 billion for the development of a hypertension drug. Products In 2016, Alnylam Pharmaceuticals had 18 potential treatments in various development stages in genetic medicine, cardiometabolic disease and hepatic infectious disease. In late 2016, the company's lead candidate in phase III studies was patisiran, a treatment targeting transthyretin (TTR) for the treatment of TTR-mediated amyloidosis (ATTR), in patients with the compromised nervous system condition of familial amyloidotic polyneuropathy (FAP).
In August 2018, under the brand name Onpattro, patisiran received U.S. regulatory approval to treat polyneuropathy in patients with hereditary ATTR amyloidosis. FDA on patisiran In September 2023, the FDA raised doubts about the efficacy of patisiran for treating cardiomyopathy associated with transthyretin-mediated amyloidosis (ATTR-CM). The FDA's Cardiovascular and Renal Drugs Advisory Committee meeting was scheduled for September 13, 2023. Although the APOLLO-B study met key endpoints, the FDA questioned the clinical significance of the results, particularly for patients not on background therapy with tafamidis. The FDA sought the committee's input on the clinical meaningfulness of the results and on the patient populations for patisiran use, potentially challenging the dominance of Pfizer's tafamidis in ATTR-CM treatment. A decision on Alnylam's application was expected by October 8, 2023. Patisiran was previously approved in 2018 for hereditary ATTR amyloidosis polyneuropathy, becoming the first RNA interference therapeutic approved by the FDA. References External links Alnylam on ClinicalTrials.gov Pharmaceutical companies of the United States Biotechnology companies of the United States Biopharmaceutical companies Companies listed on the Nasdaq Therapeutic gene modulation Health care companies based in Massachusetts Companies based in Cambridge, Massachusetts Biotechnology companies established in 2002 Pharmaceutical companies established in 2002 2002 establishments in Massachusetts 2004 initial public offerings
Alnylam Pharmaceuticals
Biology
1,567
58,507,200
https://en.wikipedia.org/wiki/Gearspace
Gearspace is a website and forum dedicated to audio engineering. Gearspace is one of the largest resources for pro audio information, with over 1.6 million monthly visitors from 218 countries. Originally established in 2002 as Gearslutz, the site rebranded in March 2021. History In 2002, Julian Standen and Meg Lee Chin, both musicians and audio engineers, created the site, which is widely regarded as a top online resource for music production knowledge and discussion. The site has been described as the "best place … for help with your interface, DAW, signal path, or just about anything else." In 2018, the website was ranked by Alexa.com as the 7,360th most popular website in the world. In 2020, it had over 1.6 million monthly visitors from 218 countries. Behringer Lawsuit In mid-2017, Music Tribe, the parent company of music equipment manufacturer Behringer, pursued legal action against synthesizer manufacturer Dave Smith Instruments (DSI) and a number of the website's forum participants, including a DSI employee, for defamation over various statements made in forum discussions that alleged that Behringer copies other companies' products and exhibits other questionable business practices. Name change On January 6, 2021, a forum user started an online petition at Change.org encouraging the website to change its name from Gearslutz. Site co-founder Standen announced later the same month that the site would be undergoing a name change, stating "the word-play pun in the name has gotten old and it is now time to move forward". On March 29, 2021, Standen confirmed that the site would be renamed "Gearspace.com". References British music websites Audio engineering
Gearspace
Engineering
346
74,550,668
https://en.wikipedia.org/wiki/Bed%20rotting
Bed rotting is a phrase from social media wherein a person stays in bed for an entire day without engaging in daily activities and chores. This concept emphasizes taking time to rest, recharge, and enjoy leisure activities like watching TV, reading, or scrolling through social media without the pressure to be productive. On February 13, 2024, Dictionary.com announced that it added "bed rotting" along with more than 1,700 new or updated definitions to reflect recent online trends. It was defined as "the practice of spending many hours in bed during the day, often with snacks or an electronic device, as a voluntary retreat from activity or stress." Background Many who partake in bed rotting commonly spend their time on their smartphone or reading a book. The behavior may have a negative impact on individuals experiencing depression, and can itself be a symptom of depression. While some see it as a way to prioritize mental health and combat burnout, commentators advise balancing it with other activities to maintain overall well-being. The trend has gained traction on social media, where users share their "bed rotting" experiences, celebrating the art of doing nothing in a cozy, comfortable setting. Response Some observers have interpreted this as a reaction to stress and/or anxiety. Lifehacker has described bed rotting as "an aspect of JOMO". See also Bed-ins for peace Couch potato Generation Z Bedrest Bedridden References Sleep Self-care 2020s fads and trends Depression (mood) 2020s neologisms Culture of beds
Bed rotting
Biology
310
34,633,252
https://en.wikipedia.org/wiki/Ruina%20montium
Ruina montium (Latin, "wrecking of mountains") was an ancient Roman mining technique described by Pliny the Elder (Natural History 33.21), who served as procurator in Spain. It is thought to draw on the principle of Pascal's barrel. Miners would excavate narrow cavities down into a mountain and then fill them with water; the resulting hydrostatic pressure was large enough to fragment thick rock walls. See also Hushing Hydraulic mining Las Médulas Mountaintop removal mining References Hydrostatics Industry in ancient Rome Mining techniques
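The hydrostatic principle behind the technique can be illustrated numerically with P = ρgh, the gauge pressure at depth h in a fluid column, which is independent of how narrow the shaft is (the essence of Pascal's barrel). The 100 m depth below is an assumed illustrative figure, not from Pliny:

```python
# Sketch: hydrostatic pressure at the bottom of a narrow water-filled shaft.
# P = rho * g * h depends only on depth, not on the shaft's width, so even a
# slim cavity full of water exerts large pressure on the surrounding rock.

RHO_WATER = 1000.0   # kg/m^3, density of water
G = 9.81             # m/s^2, gravitational acceleration

def hydrostatic_pressure(depth_m):
    """Gauge pressure in pascals at the given depth of water."""
    return RHO_WATER * G * depth_m

p = hydrostatic_pressure(100.0)            # assumed 100 m shaft
print(f"{p:.0f} Pa ≈ {p / 101325:.1f} atm")  # 981000 Pa ≈ 9.7 atm
```

Roughly ten atmospheres of pressure acting over the walls of a deep shaft is consistent with the account of water pressure fragmenting rock.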
Ruina montium
Chemistry
116
49,760,156
https://en.wikipedia.org/wiki/USA-266
USA-266, also known as GPS IIF-12, GPS SVN-70 and NAVSTAR 76, is an American navigation satellite which forms part of the Global Positioning System. It was the twelfth of twelve Block IIF satellites to be launched. Launch Built by Boeing and launched by United Launch Alliance (ULA), USA-266 was launched at 13:38 UTC on 5 February 2016, atop an Atlas V 401 launch vehicle, vehicle number AV-057. The launch took place from Space Launch Complex 41 at the Cape Canaveral Air Force Station, and placed USA-266 directly into semi-synchronous orbit. Orbit As of March 2016, USA-266 was in an orbit with a perigee of , an apogee of , a period of 717.9 minutes, and 55.01° of inclination to the equator. It is used to broadcast the PRN 32 signal, and operates in slot 5 of plane F of the GPS constellation. The satellite has a design life of 12 years and a mass of . It is currently in service following commissioning on 9 March 2016. References Spacecraft launched in 2016 GPS satellites USA satellites Spacecraft launched by Atlas rockets
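The quoted orbital figures can be sanity-checked against the "semi-synchronous" description using Kepler's third law. This sketch assumes standard textbook values for Earth's gravitational parameter and the sidereal day (neither is from the article):

```python
# Sketch: check that the quoted 717.9-minute period matches a semi-synchronous
# orbit (half a sidereal day) and recover the semi-major axis from Kepler's
# third law, T = 2*pi*sqrt(a^3 / mu)  =>  a = (mu * (T / 2*pi)^2)^(1/3).
import math

MU_EARTH = 398600.4418        # km^3/s^2, Earth's gravitational parameter (assumed)
SIDEREAL_DAY_S = 86164.0905   # s, one sidereal day (assumed)

period_s = 717.9 * 60.0                      # period quoted in the article
half_sidereal_min = SIDEREAL_DAY_S / 2.0 / 60.0

a_km = (MU_EARTH * (period_s / (2.0 * math.pi)) ** 2) ** (1.0 / 3.0)

# Half a sidereal day is ~718.0 min, and the semi-major axis comes out near
# the well-known GPS value of ~26,560 km.
print(round(half_sidereal_min, 1), round(a_km))
```

The close match between 717.9 minutes and half a sidereal day is what "semi-synchronous" means: the satellite completes two orbits per sidereal day, repeating its ground track daily.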
USA-266
Technology
243
24,632,769
https://en.wikipedia.org/wiki/Mycena%20austrofilopes
Mycena austrofilopes is a species of mushroom in the family Mycenaceae. It has been found growing in leaf litter under Eucalyptus trees in Victoria, Australia. References External links Fungi described in 1997 Fungi of Australia austrofilopes Taxa named by Cheryl A. Grgurinovic Fungus species
Mycena austrofilopes
Biology
63
52,923,863
https://en.wikipedia.org/wiki/Glasdegib
Glasdegib, sold under the brand name Daurismo, is a medication for the treatment of newly diagnosed acute myeloid leukemia (AML) in adults older than 75 years or those who have comorbidities that preclude use of intensive induction chemotherapy. It is taken by mouth and is used in combination with low-dose cytarabine. The recommended dose of glasdegib is 100 mg orally once daily on days 1 to 28 in combination with cytarabine 20 mg subcutaneously twice daily on days 1 to 10 of each 28-day cycle in the absence of unacceptable toxicity or loss of disease control. The most common adverse reactions are anemia, fatigue, hemorrhage, febrile neutropenia, musculoskeletal pain, nausea, edema, thrombocytopenia, dyspnea, decreased appetite, dysgeusia, mucositis, constipation, and rash. It is a small-molecule inhibitor of sonic hedgehog signaling; the sonic hedgehog protein is overexpressed in many types of cancer. It inhibits the sonic hedgehog pathway receptor smoothened (SMO), as do most drugs in its class. History Glasdegib was approved for medical use in the United States in December 2018. FDA approval was based on a multicenter, open-label, randomized study (BRIGHT AML 1003, NCT01546038) that included 115 subjects with newly diagnosed AML who met at least one of the following criteria: a) age 75 years or older, b) severe cardiac disease, c) baseline Eastern Cooperative Oncology Group performance status of 2, or d) baseline serum creatinine >1.3 mg/dL. Subjects were randomized 2:1 to receive glasdegib, 100 mg daily, with LDAC 20 mg subcutaneously twice daily on days 1 to 10 of a 28-day cycle (N=77) or LDAC alone (N=38) in 28-day cycles until disease progression or unacceptable toxicity. The trial was conducted in the United States, Canada and Europe. Efficacy was established based on an improvement in overall survival (date of randomization to death from any cause).
With a median follow-up of 20 months, median survival was 8.3 months (95% CI: 4.4, 12.2) for the glasdegib + LDAC arm and 4.3 months (95% CI: 1.9, 5.7) for the LDAC alone arm, with a hazard ratio (HR) of 0.46 (95% CI: 0.30, 0.71; p=0.0002). Glasdegib was granted priority review and orphan drug designation by the U.S. Food and Drug Administration (FDA). It was granted orphan drug designation by the European Medicines Agency (EMA) in October 2017. Glasdegib was approved for medical use in the European Union in June 2020. References External links Antineoplastic drugs Benzimidazoles Nitriles Orphan drugs Drugs developed by Pfizer Piperidines Teratogens Ureas
Glasdegib
Chemistry
656
54,773,609
https://en.wikipedia.org/wiki/Letters%20on%20Sunspots
Letters on Sunspots (Istoria e Dimostrazioni intorno alle Macchie Solari) was a pamphlet written by Galileo Galilei in 1612 and published in Rome by the Accademia dei Lincei in 1613. In it, Galileo outlined his recent observation of dark spots on the face of the Sun. His claims were significant in undermining the traditional Aristotelian view that the Sun was both unflawed and unmoving. The Letters on Sunspots continued the work of Sidereus Nuncius, and was the first work in which Galileo publicly declared his belief that the Copernican system was correct. Previous observations of sunspots Galileo was not the first person to observe sunspots. The earliest apparent reference to them appears in the I Ching of ancient China, while the earliest recorded observation is also Chinese, dating to 364 BC. Around the same time, the first European mention of sunspots is found, by Theophrastus. There were reports from Islamic and European astronomers of sunspots in the early ninth century; those occurring in 1129 were recorded by both Averroes and John of Worcester, whose drawings of the phenomenon are the earliest surviving today. Johannes Kepler observed a sunspot in 1607 but, like some earlier observers, believed he was watching a transit of Mercury. The sunspot activity of December 1610 was the first to be observed using the newly invented telescope, by Thomas Harriot, who sketched what he saw but did not publish it. In 1611 Johannes Fabricius saw them, and published a pamphlet entitled De Maculis in Sole Observatis, which Galileo was not aware of before he wrote the Letters on Sunspots. Critical dialogue with Scheiner When the Jesuit Christoph Scheiner first observed sunspots in March 1611, he ignored them until he saw them again in October. Then, under the pseudonym Apelles latens post tabulam (Apelles hiding behind the painting), he presented his description and conclusions about them in three letters to the Augsburg banker and scholar Mark Welser.
Scheiner wanted to remain anonymous to avoid involving the Jesuit order and the church generally in an area of controversy. Welser published them on his own presses, sent copies to astronomers around Europe, and invited them to reply. It was Welser's invitation which prompted Galileo to reply with two letters, arguing that the sunspots were not satellites, as Scheiner ('Apelles') maintained, but were features either on the Sun's surface or just above it. In the meantime, Scheiner sent Welser two further letters on the subject, and after he had read Galileo's first letter, he responded with a sixth of his own. These later letters were different in tone from the first three, as they hinted that Galileo was claiming credit for having discovered the phases of Venus, when in fact proper credit was due to others. They also implied that Galileo had copied Scheiner's helioscope in order to do his research. Having published Scheiner's first three letters under the title Tres Epistolae de Maculis Solaribus ("Three Letters on Solar Spots"), Welser now published his second three, also in 1612, as De Maculis Solaribus et Stellis circa Iovem Errantibus Accuratior Disquisitio ("A More Accurate Disquisition Concerning Solar Spots and Stars Wandering around Jupiter"). Having read these second three letters, Galileo replied with a third of his own, much sharper and more polemical in tone than his earlier ones. Welser declined to publish Galileo's letters, perhaps because of the sarcastic tone they took towards Apelles, although the reason he gave Galileo was the exorbitant cost of producing all the illustrations Galileo wanted. Censorship by the Inquisition Publishing the Letters on Sunspots was a major financial and intellectual venture for the Accademia dei Lincei, and it was only the fourth title it had decided to issue.
Federico Cesi paid for the publication himself, and wanted to strike a careful balance between introducing extraordinary new ideas and avoiding causing offence to people who might find those views problematic. This was consistent with the Accademia's project of acting as a centre for the dissemination of radical new scientific ideas, issued with the agreement of the Church authorities. Cesi tried to persuade Galileo to avoid an aggressive or polemical tone in his letters, to avoid antagonising the Jesuits (Scheiner's identity behind the pseudonym 'Apelles' was already suspected), but having read Scheiner's apparent accusations of bad faith in his later letters, Galileo did not heed his advice. Indeed, the published version of his Letters on Sunspots contained a preface by Angelo de Filiis which uncompromisingly asserted Galileo's primacy in discovering sunspots. The text was presented for censorship to the Roman Inquisition in order to obtain permission to print. The censors assigned were Cesare Fidelis, Luigi Ystella, Tommaso Pallavicini and Antonio Bucci. Ensuring the book was ready to print was a collaborative process involving the censors, Galileo, Cesi and others in working on the text until it was acceptable to the Inquisition, and the censors were well acquainted with the leading figures of the Accademia. Antonio Bucci, for example, was a physician who had previously been involved in reviewing work by Giambattista della Porta, also published by Cesi. In the case of Letters on Sunspots his critical support appears to have been helpful in ensuring that publication was not prevented by influential Dominicans of the Sacred Palace. Indeed, in his comments Bucci praised Galileo's work, with which he was already familiar, as he had been invited to take part in the Accademia's discussions about it before the manuscript was presented for censorship. The censors insisted that Galileo remove from his text any reference to scripture or claims for divine guidance. 
Thus the pamphlet was to have opened with a quotation from Matthew 11:12 'The kingdom of heaven suffers violence, and men of violence take it by force.' The censors objected that this could be understood to mean that astronomers wanted to overpower theology. It was therefore amended to 'Already the minds of men assail the heavens, and the more valiant conquer them.' Further on in the text Galileo's claim that 'divine goodness' had led him to advocate the system of Copernicus was struck out, and replaced with 'favourable winds'. Galileo's text referred to the idea that the heavens were immutable as 'erroneous and repugnant to the indubitable truth of Scripture.' The censors insisted that this, like all other mentions of Scripture, be removed. Galileo wanted to claim divine inspiration for his findings and show how they accorded with Holy Writ; the censors wanted to keep unusual new ideas at a safe distance from core tenets of the faith. With these amendments Galileo was authorised to take his book to print. Half of the printed edition of 1400 copies of Letters on Sunspots contained both the Apelles Letters and Scheiner's illustrations as well as Galileo's replies. The other half contained Galileo's work only. The total cost of the book was 258.70 scudi, of which 44 scudi was the cost of the illustrations and tables and 6 scudi was the cost of engraving the frontispiece. Galileo's First Letter - 4 May 1612 Galileo describes how he has observed sunspots for eighteen months. His key conclusions were that sunspots were real and not merely optical illusions, and that they were not static, but moved. The sunspots had a single motion, moving across the Sun in a uniform fashion. Galileo argued that the Sun was a perfect sphere and that it rotates about its own centre. The Sun carries these spots until they disappear from view at its rim in about one lunar month.
Scheiner's view that the spots were satellites prompts Galileo to comment on the phases of Venus and how they supported a heliocentric view. He develops his argument to show that sunspots were not permanent and did not have a regular pattern of movement as they would if they were heavenly bodies – they were nothing like the moons of Jupiter that he had himself discovered and described in Sidereus Nuncius. 'The sun, turning on its axis, carries them around without necessarily showing us the same spots, or in the same order, or having the same shape.' He noted the parallels between sunspots and clouds over the Earth, but did not assert that they were made of the same material. His comment on 'Apelles' (the pseudonym of Scheiner) was: 'It seems to me therefore that Apelles has a free, and not a servile mind; he is well able to grasp true teaching; and now, prompted by the strength of so many new ideas, he is beginning to listen and to assent to true and sound philosophy, especially as regards the arrangement of the universe. But he is not yet able to detach himself completely from the fantasies he absorbed in the past, to which his intellect sometimes returns and lends assent by force of long-established habit.' Much of Galileo's first letter is devoted to demonstrating weaknesses in Scheiner's arguments – inconsistencies, false analogies, and unlikely conclusions from the observations he had made. Responding to points in Apelles' first letter Apelles says the sunspots move from east to west, when he should have said they moved from west to east. This is not in fact a disagreement about the direction of the spots, but a reminder of the conventions used by astronomers to describe them. From the point of view of the Earth, the sunspots appear to move from east to west, but astronomers describe celestial movement from the 'highest' (i.e. furthest away from the Earth) point of their cycles.
Apelles has not conclusively demonstrated that the spots cannot be on the surface of the Sun, simply by asserting that because it is bright, it cannot have dark parts. Apelles is wrong to say that sunspots are much darker than dark spots on the Moon; the spots are in fact not as dark as the area immediately around the Sun which is most strongly illuminated by it, and this area is itself so bright that the Moon would be invisible if we tried to observe it in that position. Responding to points in Apelles' second letter Apelles discusses the transit of Venus but is wrong about the size of the planet relative to the Sun; it is so much smaller than Apelles suggests that it may not even be possible for observers to see it making its transit, meaning that the lack of a definite sighting of the transit does not necessarily prove anything. (Scheiner had argued that since a transit of Venus was predicted but not seen, this must mean that Venus had passed behind the Sun, thereby lending support to Tycho Brahe's view that Venus, like all planets apart from the Moon, orbited around the Sun). Responding to points in Apelles' third letter Apelles reports that sunspots took around fifteen days to pass across the face of the Sun, and that he never saw the same spots re-emerge on the eastern limb of the Sun fifteen days after they disappeared on the western limb. He concludes that they could not therefore be features carried around the Sun on its surface by a regular rotation. Galileo responds that this would be the case if Apelles had shown that the spots were solid bodies, whereas it is obvious to observers that they are changing shape as they move around the Sun. He therefore says that Apelles has not proved that they could not be on the surface of the Sun. Apelles' arguments are inconsistent.
When considering his failure to observe a transit of Venus, he concludes that Venus must be behind the Sun (which was possible in Tycho Brahe's model of the universe but impossible in Ptolemy's); however when discussing parallax, in a later part of his argument, he claims that Venus only displays a small parallax (required in Ptolemy's system but impossible in Brahe's). Apelles argues that the spots are not in any of the 'orbs' of the Moon, Venus or Mercury; but according to Galileo these 'orbs', like deferents and epicycles, were only theoretical devices of 'pure astronomers' and not actual physical entities. 'Philosophical astronomers' have no interest in such concepts but are concerned with trying to understand how the universe actually works. Apelles does not even argue consistently on the basis of his assumptions that these orbs and other suppositional devices actually exist, for he says first that if the spots were phenomena in the 'orbs' of the Moon, Venus or Mercury (which only appear to us to be on the face of the Sun) then they would have to move with the motion of those planets. However having concluded that the spots are in the 'orb' of the Sun, he maintains that they do not move with the motion of the Sun, but independently of it. Galileo then offers a different explanation to the one Apelles had suggested for the fact that as the sunspots approach the limb of the Sun in their rotation, they grow thinner. Apelles had included a diagram in his third letter to demonstrate how he believed this could be explained in terms of the spots being small moons, which went through phases. Galileo maintained that this was doubtful. As the dark area of sunspots approaches the limb of the Sun, it appears from observation that the area of darkness reduces from the side facing away from the centre of the Sun – i.e. that the spots are actually getting thinner. If they were moons, the area of darkness would diminish from the side facing the centre of the Sun.
Galileo points out inconsistencies in the arguments by Apelles which, in one place, would mean the spots had to be very close to the Sun, and, in another part, that they must be far away from it. The differences in speed between spots moving near the Sun's equator and those further away from it argue for their being on the surface, as the larger the notional 'orb' outside the Sun the spots might be carried on, the less visible this difference in speed would be. Galileo considers the possible 'essence' or substance of the sunspots, and says he does not believe there is yet any way of knowing it. He shows however that of all the things we observe on Earth, it is clouds that share the most characteristics with sunspots. Whatever they may be made of, they are certainly not 'stars' as Apelles suggests, since, as he himself shows, they cannot be observed making regular orbits of the Sun. Apelles had tried to make the case that the sunspots were similar to two phenomena Galileo had discovered, the moons of Jupiter and the rings of Saturn. Galileo responds that there is no comparison in either case; the moons of Jupiter (Medicean Stars) move with an absolute regularity he has already described, while Saturn simply bears no comparison to the description Apelles provides of it. (Here Galileo provides two simple in-line sketches to show what he means). Galileo assures his reader that he can confirm, after long observation, that Saturn never changes its shape, as Apelles claims, and never will. Mercury, the planet closest to the Sun, completes its transit in about six hours; it makes no sense to propose that spots on some 'orb' which is much closer to the Sun than Mercury would take around fifteen days to complete theirs. Likewise planetary orbits appear constant in their speed, whereas Apelles has shown that sunspots move rapidly in the centre of the Sun but more slowly at its edges. 

Galileo's Second Letter – 14 August 1612
Galileo's second letter restates the key propositions from his first letter, and is otherwise mostly concerned with geometric proofs that the spots are on the surface of the Sun rather than above it. To accompany these proofs Galileo provides 38 detailed illustrations, which allow the reader to see how his observations relate to his calculations. Further observations confirm what Galileo originally believed – that the spots were on or very close to the Sun, which carries them round as it rotates on its own axis. He notes that as sunspots approach the limit of their movement across the visible field of the Sun, at the point where they are seen 'sideways on' from Earth, they sometimes appear as thin as a thread; if, as Apelles maintained, the spots were satellites, they would be clearly set apart from the surface of the Sun at this point. The apparent acceleration of the spots as they approach the centre of the Sun and their slower speeds towards the edges, are perfectly consistent with a circular rotation on the surface. The growth in apparent size of the gaps between spots as they approach the centre, and their apparent diminution towards the edges of the Sun, likewise confirm this. He uses a geometrical diagram to demonstrate the effects of foreshortening, showing how if the sunspots were removed from the surface of the Sun by even a twentieth part of its diameter, there would be a very observable difference in the visible foreshortening effect. The apparent distance to the observer from C to F is seven times smaller than the actual distance on the surface of the Sun from C to H; however, if the spots are just a small way above the surface of the Sun, the apparent distance from C to F corresponds to the actual distance from R to N, which is less than a third the length of C to H.
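Galileo's proportions here can be checked with elementary trigonometry. The sketch below is a modern illustration rather than Galileo's own computation: it assumes a unit solar radius, picks an angle of 73.5° from the central meridian (chosen so that, as in Galileo's diagram, the surface arc CH comes out about seven times its apparent projection CF), and uses his figure of a twentieth of the diameter for the supposed height above the surface.

```python
import math

R = 1.0           # solar radius (arbitrary units)
lift = R / 10     # a twentieth of the diameter above the surface

# A spot ON the surface, at 73.5 degrees from the central meridian:
# its true arc from the limb (CH) is about seven times its apparent,
# projected distance from the limb (CF).
phi = math.radians(73.5)
CH = R * (math.pi / 2 - phi)      # true arc from the limb
CF = R * (1 - math.sin(phi))      # apparent (projected) distance from the limb
print(CH / CF)                    # ~7

# The same apparent distance CF, for a spot carried on a sphere lifted
# a twentieth of the diameter above the surface (radius 1.1 R):
r = R + lift
psi_limb = math.asin(R / r)       # angle at which such a spot slips behind the limb
psi = math.asin((R - CF) / r)     # angle at apparent distance CF from the limb
RN = r * (psi_limb - psi)         # true arc corresponding to the same CF
print(RN / CH)                    # ~0.31 -- less than a third, as Galileo states
```

The foreshortening in the two cases differs so markedly that measured spot separations can discriminate a spot on the surface from one even slightly above it.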
Thus by measuring the differences in the apparent distances between spots as they move across the Sun, it is possible to know with certainty whether the foreshortening corresponds to the proportion CF:CH or to some other proportion. The changes in apparent distance observed leave no doubt on this question. He uses a second diagram to demonstrate that the gaps between sunspots can be seen right up to the point where they disappear at the limb of the Sun. This, he says, means they must be low against the Sun and thin, rather than high above its surface and thick. Galileo then counters a number of arguments that might be put forward to show that sunspots are an effect in the Earth's atmosphere. These were not arguments Apelles had advanced; rather, he had also argued against them. Galileo's points were made for the sake of completeness, although, as he argues 'it is not necessary to waste time in re-examining every other conceivable position [for the sunspots], for anyone will immediately encounter manifest impossibilities and contradictions himself, so long as he has understood the phenomena I have recounted above.' He says that because the spots change shape, it is difficult to be certain whether some complete a full revolution and reappear in changed form after disappearing round to the dark side of the Sun for fourteen or fifteen days. However he believes that this does in fact happen. 'I am inclined to this belief upon seeing a very large one appear and grow continuously while the visible hemisphere turns; since it is credible that it was generated long before its arrival, so it is reasonable to believe that it can last after its departure, such that its duration will be much longer than the time of half a revolution of the Sun. Therefore, some spots can doubtless, or rather necessarily, be seen twice by us.'
He considers arguments about the natural inclination of bodies for different kinds of motion in order to judge whether the spots are on the surface of the Sun or in its atmosphere, and concludes that the regularity of sunspot motion argues that they 'originate in a solid and firm body where the motion of the whole and of the parts is a single one.' (However, in his Third Letter he argued, against Scheiner, that 'there is no one so simple as to grant that the Sun is hard and immutable'). He describes his method of observing and recording sunspots, devised by Benedetto Castelli. This is by way of explaining to the reader that the thirty-eight illustrations which follow are highly accurate (i.e. unlike Scheiner's). His last main point addresses those who say that his ideas and observations contradict Aristotle. 'If he argued for the immutability of the heavens because in times past no alteration whatsoever had been seen in them, it is entirely credible that if vision had demonstrated to him the things that it makes manifest to us, he would have arrived at the opposite conclusion. And I will further say that I think I contradict Aristotle's doctrine much less... with the supposition of mutable celestial material, than do those who would prefer to treat it as inalterable, because I am sure that he was never as certain of the conclusion of inalterability as he was of the notion that all human discourse must defer to evident experience.' He adds a postscript to say that while he was undertaking his observations, a sunspot appeared which was so large it could be seen with the naked eye between 19 and 21 August 1612. This is included in his series of illustrations.

Galileo's Third Letter – 1 December 1612
While Galileo's First and Second Letters had been written in response to Scheiner's Tres Epistolae, his Third Letter responded to Accuratior Disquisitio.
Galileo was angry to see that once again Scheiner was making claims about the moons of Jupiter, since he regarded them as his own discovery. To demonstrate the falsehood of Scheiner's assertion that the moons of Jupiter were 'wandering stars', unpredictable in their movement, as well as to display his own clear superiority in observation and calculation of celestial movements, Galileo appended a complete set of Ephemerides for the Jovian moons to his third letter. Galileo shows the critical flaws in Scheiner's geometry, his understanding of the authorities he cites, his reasoning, his observations and indeed his own drawings.

Introduction
Galileo says there is no point in speculating about the 'essence' of sunspots, or indeed of other things, but since writing his last letter he has spent time thinking about the uniform motion of the sunspots within a specific band around the Sun's surface. He asks, in passing, 'is there not still a controversy over whether the Earth itself remains immobile, or wanders?', which is an oblique reference to the idea, required by Copernicus' model of the universe, that the Earth must rotate on its own axis every day. Lastly, he humorously compares scholars who insist that every detail of Aristotle's writing must be true, whether it corresponds with reality or not, with those artists who draw portraits of people in fruit and vegetables. 'As long as these oddities are offered as jokes, they are nice and pleasing... but if someone, perhaps because he had consumed all his studies in a similar style of painting, then wanted to draw the general conclusion that every other method of imitating was imperfect and blameworthy, surely Cigoli and other celebrated painters would laugh at him.'

Venus, sunspots and use of authorities
Galileo takes up once again the question of whether there is any relation between the transit of Venus and sunspots.
He criticises 'Apelles' for setting out a long and complex demonstration of the movement of Venus across the face of the Sun, when it was superfluous to his purpose. He criticises him further for giving a mistaken estimate of Venus's size as it crosses the Sun, and for supporting this estimate with learned authorities from the past who did not have telescopes. Furthermore, Galileo argues, some of the ancient astronomers, including Ptolemy, made more cogent arguments than 'Apelles' suggests. Galileo notes that 'Apelles' has shifted his view on sunspots since his first letter. At first he insisted they were all spherical, like little moons; now he says they are irregular in shape, forming and dissolving. He previously said that the spots were at various distances from the Sun, wandering between it and Mercury, but he no longer maintains this view. 'Apelles' argues that the hardness and solidity of the Sun means that the fluid spots cannot be on its surface; but citing the authority of the ancients to confirm the Sun's solidity is pointless, since they had no idea of its structure; in any case the evidence of the spots themselves suggests the very opposite to the traditional view of the Sun's hardness. He agrees with 'Apelles' view that the spots are not chasms or pools on the Sun's surface, but nobody had ever argued that they were.

The movement of sunspots
A large portion of the Third Letter is taken up with disproving Apelles' assertion that he had observed spots passing across the Sun at different speeds – one, on the diameter, taking sixteen days, and another, at a higher latitude, in just fourteen. (If sunspots moved at differential speeds, this tended to suggest they were moons moving independently of the Sun itself). Galileo says that in his own observations he has never seen this differential rate of movement, but that spots always move at a constant speed relative to each other.
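The force of this point can be seen in a short calculation. The sketch below is a modern reconstruction under simple assumptions, not Galileo's own construction: a spot at a given latitude is carried on a uniformly rotating sphere of radius `r`, and we compute the fraction of one rotation it spends projected in front of the Sun's visible disk.

```python
import math

R = 1.0  # solar radius (arbitrary units)

def transit_fraction(lat_deg, r):
    """Fraction of one rotation that a spot at the given latitude,
    carried on a sphere of radius r >= R, spends projected in front
    of the Sun's visible disk."""
    lat = math.radians(lat_deg)
    s = R**2 - (r * math.sin(lat))**2
    if s <= 0:
        return 0.0  # the spot's circle never crosses the visible disk
    phi_max = math.asin(min(1.0, math.sqrt(s) / (r * math.cos(lat))))
    return phi_max / math.pi

# On the surface, spots at every latitude cross the disk in exactly
# half a rotation -- no differential transit times, as Galileo observed.
print(transit_fraction(0, R), transit_fraction(30, R))      # both ~0.5

# Carried on a sphere lifted well above the surface, transit times
# would differ visibly with latitude -- contrary to observation.
print(transit_fraction(0, 1.5 * R), transit_fraction(30, 1.5 * R))  # ~0.23 vs ~0.17
```

On the surface every latitude yields exactly half a rotation, so spots at different latitudes cross the disk in equal times, as Galileo reports; only a sphere detached well above the surface would produce latitude-dependent transit times of the kind Apelles claimed.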
First Galileo demonstrates that points on two different sunspot trajectories at two different latitudes produce lines which maintain a constant proportion with each other at any point in the rotation. Next he shows that the larger the sphere on which sunspots appear, the less differential there is in their transit times at the same two latitudes. Finally, he shows that for a spot moving along the diameter of the Sun to take sixteen days while another spot at a latitude 30° higher took only fourteen, the diameter of the Sun would need to be more than twice as great as observed. From this he concludes that Apelles is simply wrong, and it is not possible for one spot to traverse the Sun in sixteen days, while another takes only fourteen. Now Galileo turns to Apelles' illustrations of sunspots, and begins to use them to show how his arguments about sunspot motion are false. He recalls how Apelles depicts them coming into view, foreshortened, before appearing at their full width. He then demonstrates that for the spots Apelles had observed to change in apparent size as they did, they would need to be on the face of the Sun, because if they were even a short distance above its surface the foreshortening effect would be remarkably different. Galileo challenges Apelles' assertion that he had seen different spots moving at different speeds; particularly that he had seen spots on the Sun's diameter rotate more rapidly than those at higher latitudes. This, he says, is contradicted not only by observation but by Apelles' own statement in another place in his work that spots in the middle of the Sun remain longer than those passing nearer its limb. Finally, Apelles' own illustrations clearly show spots transiting the Sun in around 14 days, and nothing in his illustrations supports his contention that some take 16, and others 9.

Observations on other planets
Having disproved Apelles' arguments on sunspots, Galileo addresses a number of his other errors.
He briefly responds to Apelles' views on extraterrestrial life; then disposes of the idea that the Moon is translucent. He then returns to Apelles' analogy between sunspots and the moons of Jupiter, where he notes that Apelles has subtly moved from arguing that sunspots are like planets, to arguing that planets are like sunspots. 'Carried away by the desire to maintain what he had originally said, and unable to accommodate the spots exactly to the properties once associated with the other stars, [Apelles] has accommodated the stars to the properties that we know belong to the spots.' To dispense once and for all with Apelles' claim that the moons of Jupiter 'appear and disappear', Galileo provides predictions for their positions for the next two months to prove the regularity of their motions. To demonstrate that natural philosophy must always be led by observation and not try to fit new facts into preconceived frameworks, Galileo comments that the planet Saturn had recently and surprisingly changed its appearance. In his First Letter, he had argued that Saturn never changes its shape, and never will. Now, he agrees, it has changed shape. He does not try to prove his earlier views right in spite of new facts, but makes cautious predictions about how its appearance may change in future. Galileo concludes his remarks by criticising those who doggedly adhere to Aristotle's views, and then, drawing together all he has said about sunspots, the moons of Jupiter, and Saturn, ends with the first explicit endorsement of Copernicus in his writings: I think it is not the act of a true philosopher to persist – if I may say so – with such obstinacy in maintaining Peripatetic conclusions that have been found to be manifestly false, believing perhaps that if Aristotle were here today he would do likewise, as if defending what is false, rather than being persuaded by the truth, were the better index of perfect judgement... [and] I say to your Lordship that this star too [i.e. 
Saturn] and perhaps no less than the emergence of the horned Venus, agrees in a wondrous manner with the harmony of the great Copernican system, to whose universal relations we see such favourable breezes and bright escorts directing us.

Significance of Letters on Sunspots

Ideas
The common belief until Galileo's time was that the heavens beyond the Moon were both perfect and unchanging. Many of the arguments between Scheiner and Galileo were about things observed in the skies that appeared to be changing, and what the nature and significance of that change was. Although the behaviour of sunspots was the main topic of their debate, they also touched on other disputes, such as the phases of Venus and the moons of Jupiter. In a letter to Federico Cesi, Galileo said: 'I have finally concluded, and I believe I can demonstrate necessarily, that they [i.e. the sunspots] are contiguous to the surface of the solar body, where they are continually generated and dissolved, just like clouds around the earth, and are carried around by the sun itself, which turns on itself in a lunar month with a revolution similar [in direction] to those other of the planets... which news will be I think the funeral, or rather the extremity and Last Judgement of pseudophilosophy.... I wait to hear the spoutings of great things from the Peripatetics to maintain the immutability of the skies.'

'Flaws' in the Sun
The cosmology of Galileo's time, based on Aristotle's Physics, held that the Sun was 'perfect' and unflawed. Only with the invention of the telescope was it possible for sunspots to be systematically observed. Many who had never seen them found the idea of them morally and philosophically repugnant. Those who could see them, like Scheiner, wanted to find an explanation for them within the Aristotelian system. Galileo's arguments in Letters on Sunspots were intended to demonstrate that these claims were false; and if they were false, Aristotelian assumptions about the universe could not be true.

Moons of Jupiter
Galileo had discovered the moons of Jupiter in 1610. Scheiner argued that what appeared to be spots on the Sun were in fact clusters of small moons, thereby trying to deploy one of Galileo's own discoveries as an argument for the Aristotelian model. In his Letters on Sunspots Galileo showed how sunspots were nothing like the moons of Jupiter, and the comparison was false. Scheiner claimed that the sunspots, with their irregular movements, were like the moons of Jupiter whose positions were similarly hard to predict. To counter this argument, Galileo published tables of predictions for the future position of the moons of Jupiter, so that astronomers could easily distinguish the regular, predictable movements they followed from the ephemeral and irregular sunspots.

Rotation of the Sun
Showing that the Sun rotated had two effects. Firstly, it showed that the traditional Aristotelian model of the universe must be wrong, because that model assumed that the Sun had only a diurnal (daily) motion around the Earth, and not a rotation on its own axis. Secondly, it showed that there was nothing necessarily unusual about rotation of a body in space. In the Aristotelian system, night and day were explained by the Sun moving round a static Earth. For Copernicus' system to work, there had to be an explanation for why half the Earth was not in permanent daylight, and the other in permanent darkness, as it completed its annual motion around the Sun. This explanation was that the Earth rotated on its own axis once every day. However it was very difficult to prove that the Earth was rotating, so to show that the Sun rotated made the Copernican model at least more plausible. While the rotation of the Sun did not prove Copernicus right, it proved his opponents wrong and made his ideas more likely to be true.

Phases of Venus
In the Letters on Sunspots Galileo responded to claims by Scheiner about the phases of Venus, which were an important question in the astronomy of the time. There were different schools of thought about whether Venus had phases at all – to the naked eye, none were visible. In 1610, using his telescope, Galileo had discovered that Venus, like the Moon, had a full set of phases, but only in Letters on Sunspots did he commit this finding to publication. The fact that there was a full phase of Venus (similar to a full moon) when Venus was in the same direction in the sky as the Sun meant that at a certain point in its orbit, Venus was on the other side of the Sun to the Earth. This indicated that Venus went around the Sun, and not around the Earth. This provided important evidence in support of the Copernican model of the universe.

Copernicus
At least as early as 1597, Galileo had concluded that the Copernican model of the universe was correct but had not publicly advocated this position. In Sidereus Nuncius Galileo included in his dedication to the Grand Duke of Tuscany the words 'while all the while with one accord they [i.e. the planets] complete all together mighty revolutions every ten years round the centre of the universe, that is, round the Sun.' In the body of the text itself, he stated briefly that in a forthcoming work, 'I will prove that the Earth has motion', which is an indirect allusion to the Copernican system, but that is all. Copernicus is not mentioned by name. It is at the end of the Third Letter that Galileo explicitly declares his belief in the Copernican system.

Movement of the Sun
Galileo remarks in one passage that the Sun might not be revolving, but in another he states more definitely that the Sun does have a motion, and wonders what causes it. Here he establishes a connection between cosmology and mechanics. Galileo wrote, 'I seem to have observed that physical bodies have physical inclination to some motion.'
Letters on Sunspots is also the first of his works to mention the concept of inertia, which would later become Newton's First Law of Motion.

Language
While Scheiner wrote his letters in Latin, Galileo's reply was in Italian. Scheiner did not speak Italian, so Welser had to have Galileo's letters translated into Latin so he could read them. This was not the first time Galileo had published in Italian, and Galileo was not the first natural philosopher to publish in Italian (for example Lodovico delle Colombe's account of the 1604 supernova was in Italian, as was Galileo's reply). However Letters on Sunspots was the first book the Accademia dei Lincei published in Italian. Galileo later said of his preference for Italian over Latin: 'I wrote in Italian because I wished everyone to be able to read what I wrote.... I see young men.... who, although furnished.... with a decent set of brains, yet not being able to understand things written in gibberish [i.e. Latin], take it into their heads that in these crabbed folios there must be some grand hocus-pocus of logic and philosophy much too high up for them to think of jumping at. I want them to know, that as nature has given eyes to them, just as well as to philosophers, for the purpose of seeing her works, she has also given them brains for examining and understanding them.' Scheiner's lack of Italian hindered his response to Galileo in 1612 when they corresponded through Welser; it also meant that when Galileo published Il Saggiatore in 1623, which accused Scheiner of plagiarism, Scheiner was unaware of this until he happened to visit Rome the following year.

Use of diagrams and illustrations
Most readers of the time did not have a telescope, so could not see sunspots for themselves – they relied on descriptions and illustrations to make clear what they looked like. For this reason the quality and number of illustrations was essential in building public understanding.
Scheiner's book of letters had contained illustrations of sunspots which were mostly 2.5 cm in diameter, leaving little space for detail and portraying sunspots as solid, dark entities. Scheiner himself had described them as 'not terribly exact' and 'drawn without precise measurement'. He also indicated that his drawings were not to scale, and the spots in his illustration had been drawn disproportionately large 'so that they would be more conspicuous.' A reader looking at these illustrations might be inclined to agree with Scheiner's view that sunspots were probably planets. Although the sunspots were constantly changing position, Scheiner presented his observations over a period of six weeks in a single fold-out plate. All of his figures are small except for the observations in the top left corner. He admitted to his readers that other factors such as variations in the weather, lack of time, or other impediments may have reduced the accuracy of his drawings. Scheiner also showed the formation of spots in different orientations. Over consecutive days the configurations of the spots were sometimes linear, but the orientations became more complex with time, until there was no obvious pattern. For Galileo to persuade his readers that sunspots were not planets but a much more transient and nebulous phenomenon, he needed illustrations which were larger, more detailed, more nuanced, and more 'natural.' Letters on Sunspots carried 38 engravings of sunspots, providing a visual narrative of the Sun's appearance from 2 June – 8 July 1612, with some additional illustrations from August. This extensive visual representation, with its large scale and high-quality reproduction, allowed readers to see for themselves how sunspots waxed and waned as the Sun rotated. The impact of this series of illustrations was to create a near-photographic sense of reality.
This sense undermined the claims made by Scheiner before any argument was mounted to refute them. Galileo and Prince Cesi selected Matthaeus Greuter to create the sunspot illustrations. Originally from Strasbourg and a convert from Protestantism, Greuter moved to Rome and set up as a printer specialising in work for the Jesuit order. His work ranged from devotional images of saints through to mathematical diagrams. This connection with the Jesuits may have recommended him as one whose involvement in a publication would perhaps ease its path through censorship; in addition his craftsmanship was outstanding, and he devised a novel etching technique specifically to make the sunspot illustrations as realistic as possible. Galileo drew sunspots by projecting an image of the Sun through his helioscope onto a large piece of white paper, on which he had already used a compass to draw a circle. He then sketched the sunspots in as they appeared projected onto his sheet. To make his illustrations as realistic as possible, Greuter reproduced them at full size, even with the mark of the compass point from Galileo's original. Greuter worked from Galileo's original drawings, laying each sheet verso against the copperplate so that the image could be traced through and etched. The cost of the thirty-eight copperplates was significant, amounting to fully half of the production costs of the edition. Because half the copies of the Letters also contained the Apelles Letters, Greuter reproduced the illustrations that Alexander Mair had done for Scheiner's book, allowing Galileo's readers to compare two distinct views of the sunspots. He reduced Mair's drawings further in size, and converted nine of the twelve from etchings or engravings into woodcuts, which lacked the subtlety of Mair's originals. Scheiner was evidently impressed by Greuter's work, as he commissioned him to create the illustrations for his own magnum opus Rosa Ursina in 1626.
The 1619 work Galileo co-wrote with Mario Guiducci, Discourse on Comets, mocked Scheiner for the 'badly coloured and poorly-drawn images' in his work on sunspots.

Making predictions to test a hypothesis
In modern science falsifiability is generally considered important. In De revolutionibus orbium coelestium Copernicus had published both a theoretical description of the universe and a set of tables and calculating methods for working out the future positions of the planets. In Letters on Sunspots Galileo did as Copernicus had done – he elaborated his ideas on the form and substance of sunspots, and accompanied this with tables of predictions for the position of the moons of Jupiter. In part this was to demonstrate that Scheiner was wrong in comparing sunspots with the moons. More generally, Galileo was using his predictions to establish the validity of his ideas – if he could be demonstrably right about the complex movements of many small moons, his readers could take that as a token of his wider credibility. This approach was the opposite of the method of Aristotelian astronomers, who did not build theoretical models based on data, but looked for ways of explaining how the available data could be accommodated within existing theory.

Scholarly reception
Some astronomers and philosophers, such as Kepler, did not publish views on the ideas in Galileo's Letters on Sunspots. Most scholars with an interest in the topic divided between those who supported Scheiner's view that sunspots were planets or other bodies above the surface of the Sun, and those who supported Galileo's view that they were on or very near its surface. From the middle of the seventeenth century the debate about whether Scheiner or Galileo was right died down, partly because the number of sunspots was drastically reduced for several decades in the Maunder Minimum, making observation harder.
After the Paris Observatory was built in 1667, Jean-Dominique Cassini instituted a programme of systematic observations, but he and his colleagues could find little pattern in the appearance of sunspots after many years of observation. However Cassini's observations did bear out Galileo's argument that sunspots indicated that the Sun was rotating, and Cassini did discover the rotation of Mars and Jupiter, which supported Galileo's contention that both the Earth and the Sun rotated.

Christoph Scheiner
As Cesi had feared, the hostile tone of the Letters on Sunspots towards Scheiner helped turn the Jesuits against Galileo. In 1619, Mario Guiducci published A Discourse on Comets, which was actually mostly written by Galileo, and which included an attack on Scheiner, although its focus was the work of another Jesuit, Orazio Grassi. In 1623, Galileo wrote Il Saggiatore (The Assayer), which accused Scheiner of trying to steal Galileo's ideas. In 1624, on a visit to Rome, Scheiner discovered that in The Assayer, Galileo had accused him of plagiarism. Furious, he decided to stay in Rome and devote himself to proving his own expertise in sunspots. His major work on the topic was Rosa Ursina (1626–1630). It is widely believed, though there is no direct evidence, that the bitter dispute with Scheiner was a factor in bringing Galileo to trial in 1633, and indeed that Scheiner may have worked behind the scenes to bring the trial about. As a result of pursuing this dispute with Galileo and the years of research it entailed, Scheiner eventually became the world's leading expert on sunspots.

Raffaelo delle Colombe
Together with Niccolò Lorini and Tommaso Caccini, delle Colombe was one of three Florentine Dominicans who opposed Galileo. Along with Raffaelo's brother Lodovico delle Colombe they formed what Galileo called the 'Pigeon League'.
Caccini and delle Colombe both used the pulpit to preach against Galileo and the ideas of Copernicus, but only delle Colombe is known to have preached, on two separate occasions, against Galileo's ideas about sunspots. The first occasion was 26 February 1613, when his sermon concluded with these words: 'That ingenious Florentine mathematician of ours [i.e. Galileo] laughs at the ancients who made the sun the most clear and clean of even the smallest spot, whence they formed the proverb 'to seek a spot on the sun.' But he, with the instrument called by him a telescope makes visible that it has regular spots, as by observation of days and months he had demonstrated. But this more truly God does, because 'the heavens are not of the world in His sight'. If spots are found in the suns of the just, do you think they will be found in the moons of the unjust?' The second sermon against sunspots was on 8 December 1615, when the Letters on Sunspots had already been referred to the Inquisition for review. The sermon was delivered in Florence cathedral on the Feast of the Immaculate Conception. 'an ingenious academic took for his device a mirror in the face of the sun with the motto 'it shows what is received'. That means he had carved in his spirit I do not know what kind of beloved sun. But what would be better for Mary? Who could fixedly look at the infinite light of the Divine Sun, were it not for this virginal mirror, that in itself conceives it [the light] and renders it to the world? 'Born to us, given to us from an intact virgin?' This is 'Let what is received, be shown'. For one who seeks defects where there are none, is it not to be said to him 'he seeks a spot in the sun?' The sun is without spot, and the mother of the sun is without spot, from where Jesus is born.' 
The Roman Inquisition On 25 November 1615, the Inquisition decided to investigate the Letters on Sunspots because it had been mentioned by Tommaso Caccini and Gianozzo Attavanti in their complaint about Galileo. Copies of the text were issued to the Inquisition's theological experts on 19 February 1616. On the morning of 23 February they met and agreed on two propositions to be censured (that the Sun is the centre of the world, and that the Earth is not the centre of the world, but moves). Neither proposition is contained in Letters on Sunspots. Shortly after the decision of the Inquisition, the Congregation of the Index placed Copernicus' De Revolutionibus on the Index. Letters on Sunspots was however not banned or required to undergo corrections. This meant that while Catholic scholars could no longer discuss heliocentrism, they could discuss the nature and origin of sunspots freely. Francesco Sizzi In 1611, before the Letters on Sunspots appeared, Francesco Sizzi had published Dianoia Astronomica, attacking the ideas of Galileo's earlier work, Sidereus Nuncius. In 1612 he went to Paris and devoted himself to the study of sunspots. In 1613 he wrote to Galileo's friend Orazio Morandi, confirming that his circle of colleagues in France agreed with Galileo that sunspots were not freshly generated with each revolution of the Sun, but could be observed passing round it several times. Furthermore, Sizzi drew to Galileo's attention something he had not yet noticed – that the inclination of the path travelled by sunspots varied with the seasons. Thus in one part of the year the sunspots appeared to be travelling upwards across the face of the Sun; in another part of the year they appeared to be travelling downward. Galileo was to adopt this observation and deploy it in his Dialogue Concerning the Two Chief World Systems in 1632 to demonstrate that the Earth tilted on its axis as it orbited the Sun.
Johannes Kepler In his work Phaenomenon singulare (1609) Kepler had described what he took to be the transit of Mercury, observed on 29 May 1607. However, after Michael Maestlin pointed out Galileo's work to him, he corrected himself in 1617 in his Ephemerides, recognising long after the event that what he had seen was sunspots. Welser sent Kepler a copy of Scheiner's first three Apelles letters, and Kepler replied before Galileo, arguing, like him, that sunspots must be on the surface of the Sun and not satellites. Kepler reached this conclusion only by studying the evidence Scheiner had provided, without making any direct observations of his own. Kepler did not however engage with the claims of Galileo in "Letters on Sunspots" or have further involvement in public discussion on the question. Michael Maestlin In his treatise on the comet of 1618, Astronomischer Discurs von dem Cometen, so in Anno 1618, Michael Maestlin made reference to the work of Fabricius and cited sunspots as evidence of the mutability of the heavens. He made no reference to the work of either Scheiner or Galileo, although he was aware of both. He concluded that sunspots are definitely on or near the Sun, and not a phenomenon of the Earth's atmosphere; that it is only thanks to the telescope that they can be studied, but that they are not a new phenomenon; and that whether they are on the surface of the Sun or move around it is a question to which there is no reliable answer. Jean Tarde The French churchman Jean Tarde visited Rome in 1615, and he also met Galileo in Florence and discussed sunspots with him, as well as Galileo's other work. He did not agree with Galileo's view that the sunspots were on or near the surface of the Sun, and held rather that they were small planets. On his return to France in 1615 he built an observatory at La Roque-Gageac where he studied sunspots further.
In 1620 he published Borbonia Sidera, dedicated to Louis XIII, in which he declared the spots to be the 'Bourbon planets'. Charles Malapert The Belgian Jesuit Charles Malapert agreed with Tarde that the apparent sunspots were in fact planets. His book, published in 1633, was dedicated to Philip IV of Spain and christened them 'Austrian stars' in honour of the house of Habsburg. Pierre Gassendi Pierre Gassendi made his own observations of sunspots between 1618 and 1638. He agreed with Galileo that the spots were on the surface of the Sun, not satellites orbiting it. Like Galileo, he used observation of the spots to estimate the speed of the Sun's rotation, which he gave as 25–26 days. Most of his observations were, however, not published, and his notes were not kept systematically. He did however discuss his findings with Descartes. René Descartes René Descartes was interested in sunspots and his correspondence shows that he was actively gathering information about them when he was working on Le Monde. He was aware of Scheiner's Rosa Ursina, published in 1630, which conceded Galileo's point that sunspots are actually on the face of the Sun. Whether he knew of Galileo's ideas primarily through Scheiner or whether he read Letters on Sunspots directly is not known, but in his Principles of Philosophy (1644) he refers to "spots which appear on the sun's surface also revolve around it in planes inclined to that of the ecliptic", which appears to indicate at least a knowledge of Galileo's argument. Descartes used sunspots as an illustration of his Vortex Theory. Giovanni Battista Riccioli In his 1651 work Almagestum Novum, Giovanni Battista Riccioli set out 126 arguments against the Copernican model of the universe.
In his 43rd argument, Riccioli considered the points Galileo had made in his Letters on Sunspots, and asserted that a heliocentric (Copernican) explanation of the phenomenon was more speculative, while a geocentric model allowed for a more parsimonious explanation and was thus more satisfactory (ref: Occam's Razor). As Riccioli explained it, whether the Sun went round the Earth or the Earth round the Sun, three movements were necessary to explain the movement of sunspots. If the Earth moves around the Sun, the necessary movements were the annual motion of the Earth, the diurnal motion of the Earth, and the rotation of the Sun. However, if the Sun moved around the Earth, this accounted for the same movement as both the annual and diurnal motions in the Copernican model. In addition, the annual gyration of the Sun at its poles, and the rotation of the Sun had to be added to completely account for the movement of sunspots. While both models required three movements, the heliocentric model required the Earth to make two movements (annual and diurnal) which could not be demonstrated, while the geocentric model was based on three observable celestial movements, and was accordingly preferable. Athanasius Kircher Athanasius Kircher succeeded Scheiner in the Chair of Mathematics at the Collegio Romano. In Mundus Subterraneus (1664), he rejected the views of both Scheiner and Galileo, reviving an earlier idea of Kepler's and arguing that sunspots were in fact smoke emanating from fires on the surface of the Sun, and that the surface of the Sun was therefore indeed perfect as the Aristotelians believed, although apparently disfigured by blemishes. Sunspots, he argued, just like the planets in astrology, had a profound influence on the Earth. 
Sunspots in Galileo's later writings The Assayer In Il Saggiatore (The Assayer) (1623) Galileo was mostly concerned with faults in Orazio Grassi's arguments about comets, but in the introductory section he wrote: 'How many men attacked my Letters on Sunspots, and under what disguises! The material contained therein ought to have opened to the mind's eye much room for admirable speculation; instead it met with scorn and derision. Many people disbelieved it or failed to appreciate it. Others, not wanting to agree with my ideas, advanced ridiculous and impossible opinions against me; and some, overwhelmed and convinced by my arguments, attempted to rob me of that glory which was mine, pretending not to have seen my writings and trying to represent themselves as the original discoverers of these impressive marvels.' Christoph Scheiner took this to be an attack on him. He therefore used Rosa Ursina to mount a bitter riposte to Galileo, although he also conceded Galileo's main point, that sunspots exist on the Sun's surface or just above it, and thus that the Sun is not flawless. Dialogue Concerning the Two Chief World Systems In 1632 Galileo published Dialogo sopra i due Massimi Sistemi del Mondo (Dialogue Concerning the Two Chief World Systems), a fictional four-day discussion about natural philosophy between the characters Salviati (who argued for Copernican ideas and was effectively a mouthpiece of Galileo), Sagredo, who represented the interested but less well-informed reader, and Simplicio, who argued for Aristotle, and whose arguments were possibly a parody of those made by Pope Urban VIII. The book was reviewed by the Roman Inquisition and in 1633 Galileo was interrogated and found 'vehemently suspect of heresy' because of it. He was forced to renounce his belief in heliocentrism, sentenced to house arrest and banned from publishing anything further. The Dialogue was placed on the Index.
The Dialogue is a broad synthesis of Galileo's thinking about physics, planetary movement, how far we can rely on our senses in making judgements about the world, and how we make intelligent use of evidence. It drew together all his findings and recapitulated arguments made in earlier years on specific topics. For this reason, there is no 'section on sunspots' in the Dialogue. Rather, they are referred to at various points in arguments about other topics. In the Dialogue, that sunspots are on the surface of the Sun and not planets was taken as established fact. The discussion concerned what inferences could be drawn about the universe from their rotation. Galileo did not argue that the existence of sunspots conclusively proved that the Copernican model was correct and the Aristotelian model wrong; he explained how the rotation of sunspots could be explained in both models, but that the Aristotelian explanation was much more complicated and suppositional. Day 1 The discussion opens with Salviati arguing that two key Aristotelian positions are incompatible: that the heavens are perfect and unchanging, and that the evidence of the senses is preferable to argument and reasoning. Either we should rely on the evidence of our senses when they tell us changes (such as sunspots) take place, or we should not; holding both positions is not tenable. Day 2: Salviati argues that sunspots prove the rotation of the Sun on its axis. Aristotelians had previously held that it was impossible for a heavenly body to have more than one natural motion. Aristotelians must therefore choose between their determination that only one natural movement is possible (in which case the Sun is static, as Copernicus argued), or they must explain how a second natural motion occurs if they wish to maintain that the Sun makes a daily orbit of the Earth. This argument is resumed on Day 3 of the Dialogue.
See also Copernican Revolution List of Catholic clergy scientists Solar observation References External links (video) Lecture by Paolo Galluzzi, Director of the Museo Galileo, on the involvement of the Accademia dei Lincei in publishing Letters on Sunspots Galileo's Letters on Sunspots (Rome, 1613) Malapert's Austriaca Sidera Heliocyclia (Douai, 1633) Scheiner's Prodromus pro sole mobili et terra stabili contra ... Galilaeum a Galileis (Prague, 1651) 1613 books 1613 in science Galileo Galilei Astronomical controversies History of astronomy
Letters on Sunspots
Astronomy
12,335
15,305,337
https://en.wikipedia.org/wiki/Atomic%20gardening
Atomic gardening is a form of mutation breeding in which plants are exposed to radiation. Some of the mutations produced thereby have turned out to be useful. Typically this is gamma radiation, in which case it is produced by a cobalt-60 source. The practice of plant irradiation has resulted in the development of more than 2,000 new varieties of plants, most of which are now used in agricultural production. One example is the resistance to verticillium wilt of the 'Todd's Mitcham' cultivar of peppermint, which was produced from a breeding and test program at Brookhaven National Laboratory from the mid-1950s. Additionally, the Rio Red Grapefruit, developed at the Texas A&M Citrus Center in the 1970s and approved in 1984, accounted for more than three quarters of the grapefruit produced in Texas by 2007. History Beginning in the 1950s, atomic gardens were a part of "Atoms for Peace", an American program to develop peaceful uses of fission energy after World War II. Gamma gardens were established in laboratories in the United States, Europe, Soviet Union, India, and Japan. Though these gardens were initially designed with the aim of testing the effects of radiation on plant life, research gradually turned towards using radiation to introduce beneficial mutations that could give plants useful characteristics. Such characteristics include increased resilience to adverse weather, or a faster growth rate. In addition, the Atomic Gardening Society was established in 1959 by Muriel Howorth, an atomic activist from the United Kingdom, in conjunction with a growing movement to bring atomic energy and experimentation into the lives of ordinary citizens. In 1960, Howorth published a book entitled "Atomic Gardening for the Layman" along a similar theme. The Atomic Gardening Society utilized an early form of crowd-sourcing, in which members received irradiated seeds, planted them in their gardens, and sent reports back to Howorth detailing the results.
Howorth herself made national news upon growing a two-foot-tall peanut plant after planting an irradiated nut. The youngest member of the society was Christopher Abbey (15), a student at Eastbourne College and the son of a dentist, who received a certificate of merit for propagating several species of irradiated seeds to maturity. Irradiated seeds were sold to the public by C.J. Speas, a Tennessee dentist who had obtained a license for a cobalt-60 source and sold seeds irradiated in a backyard cinderblock bunker. Speas did so upon seeing an opportunity for amateur gardeners to get involved in testing. Howorth, in an effort to give the members of her society a broader selection, began ordering seeds from Speas in large quantities. By 1960, Speas had reportedly shipped Howorth over three and a half million seeds, which were then distributed to nearly a thousand individual Society members. Despite the initial enthusiasm, the Atomic Gardening Society declined by the mid-1960s. This was due to a combination of public opinion moving away from atomic energy and a failure on the part of the crowd-sourced Society to produce noteworthy results. In spite of this, large-scale gamma gardens remained in use, and a number of commercial plant varieties were developed and released by laboratories and private companies alike. Methodology Gamma gardens were typically five acres (two hectares) in size, and were arranged in a circular pattern with a retractable radiation source in the middle. Plants were usually laid out like slices of a pie, stemming from the central radiation source; this pattern produced a range of radiation doses over the radius from the center. Radioactive bombardment would take place for around twenty hours, after which scientists wearing protective equipment would enter the garden and assess the results. The plants nearest the center usually died, while the ones further out often featured "tumors and other growth abnormalities".
Beyond these were the plants of interest, with a higher than usual range of mutations, though not to the damaging extent of those closer to the radiation source. These gamma gardens have continued to operate on largely the same designs as those conceived in the 1950s. Research into the potential benefits of atomic gardening has continued, most notably through a joint operation between the International Atomic Energy Agency and the U.N.'s Food and Agriculture Organization. Japan's Institute of Radiation Breeding is well-known for its modern-day usage of atomic gardening techniques. Cultural significance The popularity of atomic gardening coincided with a postwar society seeking to put newly discovered atomic energy to use. Many scientists and the public believed that atomic energy could be harnessed to address numerous worldwide issues, including famine and energy shortages, leading them to embrace the new atomic era. Some scientists that had worked on the military application of atomic energy in the past invested in or sponsored programs dedicated to bringing more peaceful applications of atomic energy to the public domain, and this included atomic gardening. As public skepticism of atomic energy grew, and as nuclear arsenals continued to increase in size across the globe, atomic gardening fell out of favor, along with other Atoms for Peace initiatives. See also The Effect of Gamma Rays on Man-in-the-Moon Marigolds Mutation breeding GMO References External links Institute of Radiation Breeding (IRB), NIAS, MAFF, Hitachiohmiya, Japan IRB gamma field on Google maps Atomic Gardening: An Online History, a comprehensive outline of Atomic Gardening by Dr. Paige Johnson. Gardening Plant genetics Radiobiology
Atomic gardening
Chemistry,Biology
1,085
77,416,429
https://en.wikipedia.org/wiki/Glossary%20of%20number%20theory
This is a glossary of concepts and results in number theory, a field of mathematics. Concepts and results in arithmetic geometry and diophantine geometry can be found in Glossary of arithmetic and diophantine geometry. See also List of number theory topics. A B C D E F G H I L M N P Q R S T V W References Number theory Number theory
Glossary of number theory
Mathematics
79
51,527,976
https://en.wikipedia.org/wiki/Pawe%C5%82%20Urban
Paweł Urban (also spelled Pawel L. Urban; Chinese name: 鄂本帕偉) is a chemist and a professor of chemistry at the National Tsing Hua University (Hsinchu, Taiwan). He received his Ph.D. in chemistry from the University of York (United Kingdom). Urban's research interests include mass spectrometry and biochemical analysis. Academic activity Urban is an inventor of the hydrogel micropatch sampling method, fizzy extraction, systems for imaging chemical reactions, and micro-arrays for mass spectrometry (MAMS). He co-authored a book on time-resolved mass spectrometry and over 100 papers. Urban is an editorial board member of Scientific Reports, HardwareX, and PeerJ, and acted as a guest editor for Philosophical Transactions of the Royal Society A. His h-index is 34. He received the Ta-You Wu Memorial Award. References Living people Alumni of the University of York Taiwanese chemists Academic staff of the National Tsing Hua University Mass spectrometrists Year of birth missing (living people)
Paweł Urban
Physics,Chemistry
228
4,548,229
https://en.wikipedia.org/wiki/Interaural%20time%20difference
The interaural time difference (ITD) is, for humans or animals, the difference in the arrival time of a sound between the two ears. It is important in the localization of sounds, as it provides a cue to the direction or angle of the sound source from the head. If a signal arrives at the head from one side, the signal has further to travel to reach the far ear than the near ear. This path-length difference results in a time difference between the sound's arrivals at the ears, which is detected and aids the process of identifying the direction of the sound source. When a signal is produced in the horizontal plane, its angle in relation to the head is referred to as its azimuth, with 0 degrees (0°) azimuth being directly in front of the listener, 90° to the right, and 180° being directly behind. Different methods for measuring ITDs For an abrupt stimulus such as a click, onset ITDs are measured. An onset ITD is the time difference between the onset of the signal reaching the two ears. A transient ITD can be measured when using a random noise stimulus and is calculated as the time difference between a set peak of the noise stimulus reaching the ears. If the stimulus used is not abrupt but periodic, then ongoing ITDs are measured. This is where the waveforms reaching both ears can be shifted in time until they perfectly match up, and the size of this shift is recorded as the ITD. This shift is known as the interaural phase difference (IPD) and can be used for measuring the ITDs of periodic inputs such as pure tones and amplitude modulated stimuli. An amplitude-modulated stimulus IPD can be assessed by looking at either the waveform envelope or the waveform fine structure. Duplex theory The duplex theory proposed by Lord Rayleigh (1907) provides an explanation for the ability of humans to localise sounds by time differences between the sounds reaching each ear (ITDs) and differences in sound level entering the ears (interaural level differences, ILDs).
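The "ongoing ITD" measurement described above — shifting one waveform in time until it best matches the other — is, in effect, a cross-correlation. The following sketch (our own illustration, not a procedure from the literature; the sample rate and delay are arbitrary) recovers an imposed ITD from a broadband noise stimulus:

```python
import numpy as np

# Illustrative sketch: estimate an ongoing ITD by shifting the far-ear
# signal until it best matches the near-ear signal (cross-correlation).
fs = 96_000                       # assumed sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)    # 50 ms of signal
true_itd = 300e-6                 # assumed 300 microsecond interaural delay
delay_samples = int(round(true_itd * fs))

rng = np.random.default_rng(0)
source = rng.standard_normal(t.size)      # broadband noise stimulus
near_ear = source
far_ear = np.roll(source, delay_samples)  # far ear gets a delayed copy

# Try a range of candidate lags; keep the one with the best match.
lags = np.arange(-delay_samples * 4, delay_samples * 4 + 1)
corr = [np.dot(near_ear, np.roll(far_ear, -lag)) for lag in lags]
best_lag = lags[int(np.argmax(corr))]
estimated_itd = best_lag / fs

# Close to the true 300 us, quantised to the sample grid.
print(f"estimated ITD: {estimated_itd * 1e6:.0f} microseconds")
```

The same shift, expressed as a fraction of the stimulus period, is the interaural phase difference (IPD) mentioned above.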
The question remains, however, whether ITDs or ILDs are the dominant cue. The duplex theory states that ITDs are used to localise low-frequency sounds, in particular, while ILDs are used in the localisation of high-frequency sound inputs. However, the frequency ranges for which the auditory system can use ITDs and ILDs significantly overlap, and most natural sounds will have both high- and low-frequency components, so that the auditory system will in most cases have to combine information from both ITDs and ILDs to judge the location of a sound source. A consequence of this duplex system is that it is also possible to generate so-called "cue trading" or "time–intensity trading" stimuli on headphones, where ITDs pointing to the left are offset by ILDs pointing to the right, so the sound is perceived as coming from the midline. A limitation of the duplex theory is that it does not completely explain directional hearing, as no explanation is given for the ability to distinguish between a sound source directly in front and behind. Also the theory only relates to localising sounds in the horizontal plane around the head. The theory also does not take into account the use of the pinna in localisation (Gelfand, 2004). Experiments conducted by Woodworth (1938) tested the duplex theory by using a solid sphere to model the shape of the head and measuring the ITDs as a function of azimuth for different frequencies. The model used had a distance between the two ears of approximately 22–23 cm. Initial measurements found that there was a maximum time delay of approximately 660 μs when the sound source was placed at directly 90° azimuth to one ear. This time delay corresponds to the period of a sound input with a frequency of 1500 Hz. The results concluded that when a sound has a frequency of less than 1500 Hz, its period is greater than this maximum time delay between the ears.
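The figures quoted above (a 660 μs maximum delay, a 1500 Hz crossover, and a 22–23 cm interaural distance) are mutually consistent, as a quick calculation shows. The sketch below uses the simple path-difference approximation ITD ≈ d·sin(azimuth)/c — an assumption on our part for illustration; Woodworth's actual model treated the head as a solid sphere, which gives slightly different values:

```python
import math

# Back-of-envelope check of the figures quoted above, using the simple
# path-difference approximation ITD = d * sin(azimuth) / c.
c = 343.0   # speed of sound in air, m/s (at about 20 degrees C)
d = 0.226   # assumed straight-line distance between the ears, ~22-23 cm

itd_90 = d * math.sin(math.radians(90)) / c
print(f"max ITD at 90 deg azimuth: {itd_90 * 1e6:.0f} us")  # ~660 us

# The period of a 1500 Hz tone is comparable to this maximum delay,
# and its wavelength is comparable to the interaural distance:
f = 1500.0
print(f"period of {f:.0f} Hz tone: {1e6 / f:.0f} us")       # ~667 us
print(f"wavelength at {f:.0f} Hz: {100 * c / f:.1f} cm")    # ~22.9 cm
```

Above 1500 Hz the wavelength drops below the interaural distance, which is why phase cues become ambiguous there and level (ILD) cues take over.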
Therefore, there is a phase difference between the sound waves entering the ears, providing acoustic localisation cues. With a sound input with a frequency closer to 1500 Hz, the period of the sound wave is similar to the natural time delay. Therefore, due to the size of the head and the distance between the ears, there is a reduced usable phase difference, so localisation errors start to be made. When a high-frequency sound input is used with a frequency greater than 1500 Hz, the wavelength is shorter than the distance between the ears, a head shadow is produced, and ILDs provide cues for the localisation of this sound. Feddersen et al. (1957) also conducted experiments taking measurements on how ITDs alter with changing the azimuth of the loudspeaker around the head at different frequencies. But unlike the Woodworth experiments, human subjects were used rather than a model of the head. The experiment results agreed with the conclusion made by Woodworth about ITDs. The experiments also concluded that there is no difference in ITDs when sounds are provided from directly in front or behind at 0° and 180° azimuth. The explanation for this is that the sound is equidistant from both ears. Interaural time differences alter as the loudspeaker is moved around the head. The maximum ITD of 660 μs occurs when a sound source is positioned at 90° azimuth to one ear. Current findings Starting in 1948, the prevailing theory on interaural time differences centered on the idea that the medial superior olive differentially processes inputs from the ipsilateral and contralateral sides relative to the sound. This is accomplished through a discrepancy in arrival time of excitatory inputs into the medial superior olive, based on differential conduction along the axons, which allows both sounds to ultimately converge at the same time through neurons with complementary intrinsic properties. Franken et al. attempted to further elucidate the mechanisms underlying ITD in mammalian brains.
One experiment they performed was to isolate discrete inhibitory post-synaptic potentials and try to determine whether inhibitory inputs to the superior olive were allowing the faster excitatory input to delay firing until the two signals were synced. However, after blocking EPSPs with a glutamate receptor blocker, they determined that the size of inhibitory inputs was too marginal to play a significant role in phase locking. This was verified when the experimenters blocked inhibitory input and still saw clear phase locking of the excitatory inputs in their absence. This led them to the theory that in-phase excitatory inputs are summated such that the brain can process sound localization by counting the number of action potentials that arise from various magnitudes of summated depolarization. Franken et al. also examined anatomical and functional patterns within the superior olive to clarify previous theories about the rostrocaudal axis serving as a source of tonotopy. Their results showed a significant correlation between tuning frequency and relative position along the dorsoventral axis, while they saw no distinguishable pattern of tuning frequency on the rostrocaudal axis. Lastly, they went on to further explore the driving forces behind the interaural time difference, specifically whether the process is simply the alignment of inputs that is processed by a coincidence detector, or whether the process is more complicated. Evidence from Franken et al. shows that the processing is affected by inputs that precede the binaural signal, which would alter the functioning of voltage-gated sodium and potassium channels to shift the membrane potential of the neuron. Furthermore, the shift is dependent on the frequency tuning of each neuron, ultimately creating a more complex confluence and analysis of sound. These findings provide several pieces of evidence that contradict existing theories about binaural audition.
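The coincidence-detection picture discussed above — delayed inputs converging so that in-phase excitation summates most strongly at one unit — can be illustrated with a toy model. This is a classic Jeffress-style sketch of our own (all parameters are arbitrary), not the circuit reported by Franken et al.:

```python
import numpy as np

# Toy Jeffress-style coincidence-detector sketch (illustrative only).
# Each unit adds an internal delay to one input; the unit whose delay
# cancels the acoustic ITD receives the most in-phase summed excitation
# and so crosses a firing threshold most often.
fs = 200_000
t = np.arange(0, 0.02, 1 / fs)
freq = 500.0                  # low-frequency tone, Hz
itd = 250e-6                  # assumed acoustic delay to one ear

right = np.sin(2 * np.pi * freq * (t - itd))   # delayed ear input

internal_delays = np.arange(0, 501e-6, 50e-6)  # candidate delay-line units
responses = []
for delay in internal_delays:
    shifted = np.sin(2 * np.pi * freq * (t - delay))       # delayed left input
    summed = shifted + right                               # converging excitation
    responses.append(np.sum(np.maximum(summed - 1.5, 0)))  # thresholded "firing"

best = internal_delays[int(np.argmax(responses))]
print(f"best-matching internal delay: {best * 1e6:.0f} us")  # 250 us
```

The winning unit's internal delay equals the acoustic ITD, which is how an array of such units could encode source azimuth as a place code.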
The anatomy of the ITD pathway The auditory nerve fibres, known as the afferent nerve fibres, carry information from the organ of Corti to the brainstem and brain. Auditory afferent fibres consist of two types of fibres called type I and type II fibres. Type I fibres innervate the base of one or two inner hair cells and type II fibres innervate the outer hair cells. Both leave the organ of Corti through an opening called the habenula perforata. The type I fibres are thicker than the type II fibres and may also differ in how they innervate the inner hair cells. Neurons with large calyceal endings ensure preservation of timing information throughout the ITD pathway. Next in the pathway is the cochlear nucleus, which receives mainly ipsilateral (that is, from the same side) afferent input. The cochlear nucleus has three distinct anatomical divisions, known as the antero-ventral cochlear nucleus (AVCN), postero-ventral cochlear nucleus (PVCN) and dorsal cochlear nucleus (DCN), each with different neural innervation. The AVCN contains predominantly bushy cells, with one or two profusely branching dendrites; it is thought that bushy cells may process the change in the spectral profile of complex stimuli. The AVCN also contains cells with more complex firing patterns than bushy cells, called multipolar cells; these cells have several profusely branching dendrites and irregularly shaped cell bodies. Multipolar cells are sensitive to changes in acoustic stimuli and, in particular, the onset and offset of sounds, as well as changes in intensity and frequency. The axons of both cell types leave the AVCN as a large tract called the ventral acoustic stria, which forms part of the trapezoid body and travels to the superior olivary complex. A group of nuclei in the pons makes up the superior olivary complex (SOC). This is the first stage in the auditory pathway to receive input from both cochleas, which is crucial for our ability to localise the sound source in the horizontal plane.
The SOC receives input from the cochlear nuclei, primarily the ipsilateral and contralateral AVCN. Four nuclei make up the SOC, but only the medial superior olive (MSO) and the lateral superior olive (LSO) receive input from both cochlear nuclei. The MSO is made up of neurons which receive input from the low-frequency fibers of the left and right AVCN. The result of having input from both cochleas is an increase in the firing rate of the MSO units. The neurons in the MSO are sensitive to the difference in the arrival time of sound at each ear, also known as the interaural time difference (ITD). Research shows that if stimulation arrives at one ear before the other, many of the MSO units will have increased discharge rates. The axons from the MSO continue to higher parts of the pathway via the ipsilateral lateral lemniscus tract (Yost, 2000). The lateral lemniscus (LL) is the main auditory tract in the brainstem connecting the SOC to the inferior colliculus. The dorsal nucleus of the lateral lemniscus (DNLL) is a group of neurons separated by lemniscus fibres; these fibres are predominantly destined for the inferior colliculus (IC). In studies using an unanesthetized rabbit, the DNLL was shown to alter the sensitivity of the IC neurons and may alter the coding of interaural timing differences (ITDs) in the IC (Kuwada et al., 2005). The ventral nucleus of the lateral lemniscus (VNLL) is a chief source of input to the inferior colliculus. Research using rabbits shows that the discharge patterns, frequency tuning and dynamic ranges of VNLL neurons supply the inferior colliculus with a variety of inputs, each enabling a different function in the analysis of sound (Batra & Fitzpatrick, 2001). In the inferior colliculus (IC) all the major ascending pathways from the olivary complex and the central nucleus converge. The IC is situated in the midbrain and consists of a group of nuclei, the largest of which is the central nucleus of the inferior colliculus (CNIC).
The greater part of the ascending axons forming the lateral lemniscus will terminate in the ipsilateral CNIC; however, a few follow the commissure of Probst and terminate on the contralateral CNIC. The axons of most of the CNIC cells form the brachium of the IC and leave the brainstem to travel to the ipsilateral thalamus. Cells in different parts of the IC tend to be monaural, responding to input from one ear, or binaural, responding to bilateral stimulation. The spectral processing that occurs in the AVCN and the ability to process binaural stimuli, as seen in the SOC, are replicated in the IC. Lower centres of the IC extract different features of the acoustic signal such as frequencies, frequency bands, onsets, offsets, changes in intensity and localisation. The integration or synthesis of acoustic information is thought to start in the CNIC (Yost, 2000).

Effect of a hearing loss

A number of studies have looked into the effect of hearing loss on interaural time differences. In their review of localisation and lateralisation studies, Durlach, Thompson, and Colburn (1981), cited in Moore (1996), found a "clear trend for poor localization and lateralization in people with unilateral or asymmetrical cochlear damage". This is due to the difference in performance between the two ears. In support of this, they did not find significant localisation problems in individuals with symmetrical cochlear losses. In addition, studies have been conducted into the effect of hearing loss on the threshold for interaural time differences. The normal human threshold for detection of an ITD is a time difference as small as 10 μs. Studies by Gabriel, Koehnke, & Colburn (1992), Häusler, Colburn, & Marr (1983) and Kinkel, Kollmeier, & Holube (1991) (cited by Moore, 1996) have shown that there can be great differences between individuals regarding binaural performance. It was found that unilateral or asymmetric hearing losses can increase the threshold of ITD detection in patients. 
This was also found to apply to individuals with symmetrical hearing losses when detecting ITDs in narrowband signals. However, ITD thresholds seem to be normal for those with symmetrical losses when listening to broadband sounds.

See also
Sound localization

References

Further reading
Feddersen, W. E., Sandel, T. T., Teas, D. C., Jeffress, L. A. (1957) Localization of high frequency tones. Journal of the Acoustical Society of America. 29: 988–991.
Fitzpatrick, D. C., Batra, R., Kuwada, S. (1997) Neurons Sensitive to Interaural Temporal Disparities in the Medial Part of the Ventral Nucleus of the Lateral Lemniscus. Journal of Neurophysiology. 78: 511–515.
Franken, T. P., Roberts, M. T., Wei, L., Golding, N. L., Joris, P. X. (2015) In vivo coincidence detection in mammalian sound localization generates phase delays. Nature Neuroscience. 18(3): 444–452. doi:10.1038/nn.3948.
Gelfand, S. A. (2004) Hearing: An Introduction to Psychological and Physiological Acoustics. 4th Edition. New York: Marcel Dekker.
Kuwada, S., Fitzpatrick, D. C., Batra, R., Ostapoff, E. M. (2005) Sensitivity to Interaural Time Difference in the Dorsal Nucleus of the Lateral Lemniscus of the Unanesthetized Rabbit: Comparison with Other Structures. Journal of Neurophysiology. 95: 1309–1322.
Moore, B. (1996) Perceptual Consequences of Cochlear Hearing Loss and their Implications for the Design of Hearing Aids. Ear and Hearing. 17(2): 133–161.
Moore, B. C. (2004) An Introduction to the Psychology of Hearing. 5th Edition. London: Elsevier Academic Press.
Woodworth, R. S. (1938) Experimental Psychology. New York: Holt, Rinehart, Winston.
Yost, W. A. (2000) Fundamentals of Hearing: An Introduction. 4th Edition. San Diego: Academic Press.

External links
Calculation of phase angle (phase difference) from time delay (time of arrival ITD) and frequency

Audio engineering
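The ITD magnitudes discussed above can be approximated with the spherical-head model attributed to Woodworth (1938, listed in Further reading). A minimal sketch, assuming a typical head radius of 8.75 cm and a speed of sound of 343 m/s (both values are our assumptions, not from the article):

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound_ms=343.0):
    """Approximate interaural time difference (seconds) for a source at a
    given azimuth, using Woodworth's spherical-head model:
    ITD = (a / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound_ms) * (theta + math.sin(theta))

# A source directly to one side (90 degrees) yields an ITD of roughly
# 650 microseconds, far above the ~10 microsecond detection threshold
# mentioned above; a source straight ahead (0 degrees) yields zero ITD.
print(woodworth_itd(90))
```

This illustrates why localisation in the horizontal plane degrades with asymmetric hearing loss: the usable cue spans only a few hundred microseconds between the two ears.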
Interaural time difference
Engineering
3,470
2,998,794
https://en.wikipedia.org/wiki/Direct%20numerical%20control
Direct numerical control (DNC), also known as distributed numerical control (also DNC), is a common manufacturing term for networking CNC machine tools. On some CNC machine controllers, the available memory is too small to contain the machining program (for example when machining complex surfaces), so in this case the program is stored in a separate computer and sent directly to the machine, one block at a time. If the computer is connected to a number of machines it can distribute programs to different machines as required. Usually, the manufacturer of the control provides suitable DNC software. However, if this provision is not possible, some software companies provide DNC applications that fulfill the purpose. DNC networking or DNC communication is always required when CAM programs are to run on a CNC machine control. Wireless DNC is also used in place of hard-wired versions. Controls of this type are very widely used in industries with significant sheet metal fabrication, such as the automotive, appliance, and aerospace industries.

History

1950s-1970s
Programs had to be walked to NC controls, generally on paper tape. NC controls had paper tape readers precisely for this purpose. Many companies were still punching programs on paper tape well into the 1980s, more than twenty-five years after its elimination in the computer industry.

1980s
The focus in the 1980s was mainly on reliably transferring NC programs between a host computer and the control. The host computers would frequently be Sun Microsystems, HP, Prime, DEC or IBM type computers running a variety of CAD/CAM software. DNC companies offered machine tool links using rugged proprietary terminals and networks. For example, DLog offered an x86-based terminal, and NCPC had one based on the 6809. 
The host software would be responsible for tracking and authorising NC program modifications. Depending on program size, for the first time operators had the opportunity to modify programs at the DNC terminal. No time was lost due to broken tapes, and if the software was correctly used, an operator running incorrect or out of date programs became a thing of the past. Older controls frequently had no port capable of receiving programs such as an RS-232 or RS-422 connector. In these cases, a device known as a Behind The Reader or BTR card was used. The connection between the control's tape reader and the internal processor was interrupted by a microprocessor based device which emulated the paper tape reader's signals, but which had a serial port connected to the DNC system. As far as the control was concerned, it was receiving from the paper tape unit as it always had; in fact it was the BTR or Reader Emulation card which was transmitting. A switch was frequently added to permit the paper tape reader to be used as a backup. 1990s to present The PC explosion in the late 1980s and early 1990s signalled the end of the road for proprietary DNC terminals. With some exceptions, CNC manufacturers began migrating to PC-based controls running DOS, Windows or OS/2 which could be linked in to existing networks using standard protocols. Customers began migrating away from expensive minicomputer and workstation based CAD/CAM toward more cost-effective PC-based solutions. Users began to demand more from their DNC systems than secure upload/download and editing. PC-based systems which could accomplish these tasks based on standard networks began to be available at minimal or no cost. In some cases, users no longer needed a DNC "expert" to implement shop floor networking, and could do it themselves. However, the task can still be a challenge based on the CNC Control wiring requirements, parameters and NC program format. 
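The block-at-a-time "drip feed" described above can be sketched in a few lines. This is an illustrative outline only: the XON/XOFF bytes are the standard software flow-control characters used on serial links, but the `port` object and the sample G-code program are stand-ins of our own, not any real control's interface:

```python
XON, XOFF = b"\x11", b"\x13"  # standard software flow-control bytes (DC1/DC3)

def drip_feed(program_text, port):
    """Send an NC program one block (line) at a time over a serial-like
    port, pausing whenever the control signals XOFF.
    Returns the number of blocks sent."""
    sent = 0
    for block in program_text.splitlines():
        block = block.strip()
        if not block:
            continue  # skip empty lines
        # Honour flow control: wait while the control's buffer is full.
        while port.read(1) == XOFF:
            pass
        port.write(block.encode("ascii") + b"\r\n")
        sent += 1
    return sent

class FakePort:
    """Stand-in for a serial port, for demonstration only."""
    def __init__(self):
        self.received = b""
    def read(self, n):
        return b""  # the simulated control never asserts XOFF here
    def write(self, data):
        self.received += data

program = "N10 G0 X0 Y0\nN20 G1 X25.0 F200\nN30 M30\n"
port = FakePort()
print(drip_feed(program, port))  # 3 blocks sent
```

A BTR card performed essentially the receiving half of this exchange in hardware, presenting the incoming serial blocks to the control as if they came from the paper tape reader.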
To remain competitive, therefore, DNC companies moved their offerings upmarket into DNC networking, Shop Floor Control (SFC) and Manufacturing Execution Systems (MES). These terms encompass concepts such as real-time machine monitoring, graphics, tool management, traveler management and scheduling. Instead of merely acting as a repository for programs, DNC systems aim to give operators at the machine an integrated view of all the information (both textual and graphical) they require in order to carry out a manufacturing operation, and give management timely information as to the progress of each step. DNC systems are frequently directly integrated with corporate CAD/CAM, ERP and computer-aided process planning (CAPP) systems.

Special protocols

A challenge when interfacing into machine tools is that in some cases special protocols are used. Two well-known examples are Mazak's Mazatrol and Heidenhain's LSV2 protocol. Many DNC systems offer support for these protocols. Another protocol is DNC2, which is found on Fanuc controls. DNC2 allows advanced interchange of data with the control, such as tooling offsets, tool life information and machine status, as well as automated transfer without operator intervention.

Machine monitoring

One of the issues involved in machine monitoring is whether or not it can be accomplished automatically in a practical way. In the 1980s monitoring was typically done by having a menu on the DNC terminal where the operator had to manually indicate what was being done by selecting from a menu, which has obvious drawbacks. There have been advances in passive monitoring systems where the machine condition can be determined by hardware attached in such a way as not to interfere with machine operations (and potentially void warranties). Many modern controls allow external applications to query their status using a special protocol. 
MTConnect is one prominent attempt to augment the existing world of proprietary systems with an open-source, industry-standard protocol using XML schemas. The end goal is to achieve higher levels of manufacturing business intelligence and workflow automation.

Alternatives

Smaller facilities will typically use a portable PC or laptop to avoid the expense of a fully networked DNC system. In the past, the Facit Walk Disk and a similar device from Mazak were very popular.

Footnotes

Computer-aided engineering Industrial automation Embedded systems Machine tools
Direct numerical control
Technology,Engineering
1,257
42,276,954
https://en.wikipedia.org/wiki/Exotic%20affine%20space
In algebraic geometry, an exotic affine space is a complex algebraic variety that is diffeomorphic to $\mathbb{C}^n$ for some n, but is not isomorphic as an algebraic variety to $\mathbb{C}^n$. An example of an exotic $\mathbb{C}^3$ is the Koras–Russell cubic threefold, which is the subset of $\mathbb{C}^4$ defined by the polynomial equation $x + x^2 y + z^2 + t^3 = 0$.

References

Algebraic varieties Diffeomorphisms
Exotic affine space
Mathematics
71
366,136
https://en.wikipedia.org/wiki/Pontryagin%20duality
In mathematics, Pontryagin duality is a duality between locally compact abelian groups that allows generalizing the Fourier transform to all such groups, which include the circle group (the multiplicative group of complex numbers of modulus one), the finite abelian groups (with the discrete topology), the additive group of the integers (also with the discrete topology), the real numbers, and every finite-dimensional vector space over the reals or a p-adic field. The Pontryagin dual of a locally compact abelian group is the locally compact abelian topological group formed by the continuous group homomorphisms from the group to the circle group, with the operation of pointwise multiplication and the topology of uniform convergence on compact sets. The Pontryagin duality theorem establishes Pontryagin duality by stating that any locally compact abelian group is naturally isomorphic with its bidual (the dual of its dual). The Fourier inversion theorem is a special case of this theorem. The subject is named after Lev Pontryagin, who laid down the foundations for the theory of locally compact abelian groups and their duality during his early mathematical works in 1934. Pontryagin's treatment relied on the groups being second-countable and either compact or discrete. This was improved to cover the general locally compact abelian groups by Egbert van Kampen in 1935 and André Weil in 1940. 
Introduction

Pontryagin duality places in a unified context a number of observations about functions on the real line or on finite abelian groups:

Suitably regular complex-valued periodic functions on the real line have Fourier series, and these functions can be recovered from their Fourier series;
Suitably regular complex-valued functions on the real line have Fourier transforms that are also functions on the real line and, just as for periodic functions, these functions can be recovered from their Fourier transforms; and
Complex-valued functions on a finite abelian group have discrete Fourier transforms, which are functions on the dual group, which is a (non-canonically) isomorphic group. Moreover, any function on a finite abelian group can be recovered from its discrete Fourier transform.

The theory, introduced by Lev Pontryagin and combined with the Haar measure introduced by John von Neumann, André Weil and others, depends on the theory of the dual group of a locally compact abelian group. It is analogous to the dual vector space of a vector space: a finite-dimensional vector space and its dual vector space are not naturally isomorphic, but the endomorphism algebra (matrix algebra) of one is isomorphic to the opposite of the endomorphism algebra of the other, via the transpose. Similarly, a group and its dual group are not in general isomorphic, but their endomorphism rings are opposite to each other. More categorically, this is not just an isomorphism of endomorphism algebras, but a contravariant equivalence of categories – see the categorical considerations below.

Definition

A topological group is a locally compact group if the underlying topological space is locally compact and Hausdorff; a topological group is abelian if the underlying group is abelian. 
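The third observation above (discrete Fourier transforms on a finite abelian group) can be made concrete for the cyclic group Z/n, whose characters are the maps x ↦ exp(2πikx/n) into the circle group. A minimal sketch of the transform and its inversion; the function names and the 1/n normalization of the dual measure are our choices for illustration, not notation from the article:

```python
import cmath

def character(n, k):
    """The k-th character of Z/n: a homomorphism into the circle group."""
    return lambda x: cmath.exp(2j * cmath.pi * k * x / n)

def fourier(f, n):
    """Fourier transform of f: a function on the dual group (again Z/n)."""
    return [sum(f[x] * character(n, k)(x).conjugate() for x in range(n))
            for k in range(n)]

def inverse_fourier(fhat, n):
    """Fourier inversion, with dual measure 1/n: recovers f from fhat."""
    return [sum(fhat[k] * character(n, k)(x) for k in range(n)) / n
            for x in range(n)]

f = [1.0, 2.0, 0.0, -1.0]            # an arbitrary function on Z/4
g = inverse_fourier(fourier(f, 4), 4)
print(all(abs(a - b) < 1e-12 for a, b in zip(f, g)))  # True: f is recovered
```

Here the dual group of Z/n is (non-canonically) isomorphic to Z/n itself, exactly as the introduction states for finite abelian groups.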
Examples of locally compact abelian groups include finite abelian groups, the integers (both for the discrete topology, which is also induced by the usual metric), the real numbers, the circle group $T$ (both with their usual metric topology), and also the $p$-adic numbers (with their usual $p$-adic topology).

For a locally compact abelian group $G$, the Pontryagin dual is the group $\widehat{G}$ of continuous group homomorphisms from $G$ to the circle group $T$. That is,
$$\widehat{G} := \operatorname{Hom}(G, T).$$
The Pontryagin dual $\widehat{G}$ is usually endowed with the topology given by uniform convergence on compact sets (that is, the topology induced by the compact-open topology on the space of all continuous functions from $G$ to $T$). For example,
$$\widehat{\mathbb{Z}/n\mathbb{Z}} \cong \mathbb{Z}/n\mathbb{Z}, \quad \widehat{\mathbb{Z}} \cong T, \quad \widehat{\mathbb{R}} \cong \mathbb{R}, \quad \widehat{T} \cong \mathbb{Z}.$$

Pontryagin duality theorem

Theorem: there is a canonical isomorphism $G \cong \widehat{\widehat{G}}$ between any locally compact abelian group $G$ and its double dual. Canonical means that there is a naturally defined map $\operatorname{ev}_G \colon G \to \widehat{\widehat{G}}$; more importantly, the map should be functorial in $G$. For a multiplicative character $\chi$ of the group $G$, the canonical isomorphism is defined on $x \in G$ as follows:
$$\operatorname{ev}_G(x)(\chi) = \chi(x) \in T.$$
That is,
$$x \mapsto (\chi \mapsto \chi(x)).$$
In other words, each group element $x$ is identified to the evaluation character on the dual. This is strongly analogous to the canonical isomorphism between a finite-dimensional vector space and its double dual, $V \cong V''$, and it is worth mentioning that any vector space $V$ is an abelian group. If $G$ is a finite abelian group, then $G \cong \widehat{G}$, but this isomorphism is not canonical. Making this statement precise (in general) requires thinking about dualizing not only on groups, but also on maps between the groups, in order to treat dualization as a functor and prove the identity functor and the dualization functor are not naturally equivalent. Also the duality theorem implies that for any group (not necessarily finite) the dualization functor is an exact functor.

Pontryagin duality and the Fourier transform

Haar measure

One of the most remarkable facts about a locally compact group $G$ is that it carries an essentially unique natural measure, the Haar measure, which allows one to consistently measure the "size" of sufficiently regular subsets of $G$. 
"Sufficiently regular subset" here means a Borel set; that is, an element of the σ-algebra generated by the compact sets. More precisely, a right Haar measure on a locally compact group is a countably additive measure μ defined on the Borel sets of which is right invariant in the sense that for an element of and a Borel subset of and also satisfies some regularity conditions (spelled out in detail in the article on Haar measure). Except for positive scaling factors, a Haar measure on is unique. The Haar measure on allows us to define the notion of integral for (complex-valued) Borel functions defined on the group. In particular, one may consider various Lp spaces associated to the Haar measure . Specifically, Note that, since any two Haar measures on are equal up to a scaling factor, this -space is independent of the choice of Haar measure and thus perhaps could be written as . However, the -norm on this space depends on the choice of Haar measure, so if one wants to talk about isometries it is important to keep track of the Haar measure being used. Fourier transform and Fourier inversion formula for L1-functions The dual group of a locally compact abelian group is used as the underlying space for an abstract version of the Fourier transform. If , then the Fourier transform is the function on defined by where the integral is relative to Haar measure on . This is also denoted . Note the Fourier transform depends on the choice of Haar measure. It is not too difficult to show that the Fourier transform of an function on is a bounded continuous function on which vanishes at infinity. The inverse Fourier transform of an integrable function on is given by where the integral is relative to the Haar measure on the dual group . The measure on that appears in the Fourier inversion formula is called the dual measure to and may be denoted . 
The various Fourier transforms can be classified in terms of their domain and transform domain (the group and dual group) as follows (note that is Circle group): As an example, suppose , so we can think about as by the pairing If is the Lebesgue measure on Euclidean space, we obtain the ordinary Fourier transform on and the dual measure needed for the Fourier inversion formula is . If we want to get a Fourier inversion formula with the same measure on both sides (that is, since we can think about as its own dual space we can ask for to equal ) then we need to use However, if we change the way we identify with its dual group, by using the pairing then Lebesgue measure on is equal to its own dual measure. This convention minimizes the number of factors of that show up in various places when computing Fourier transforms or inverse Fourier transforms on Euclidean space. (In effect it limits the only to the exponent rather than as a pre-factor outside the integral sign.) Note that the choice of how to identify with its dual group affects the meaning of the term "self-dual function", which is a function on equal to its own Fourier transform: using the classical pairing the function is self-dual. But using the pairing, which keeps the pre-factor as unity, makes self-dual instead. This second definition for the Fourier transform has the advantage that it maps the multiplicative identity to the convolution identity, which is useful as is a convolution algebra. See the next section on the group algebra. In addition, this form is also necessarily isometric on spaces. See below at Plancherel and L2 Fourier inversion theorems. Group algebra The space of integrable functions on a locally compact abelian group is an algebra, where multiplication is convolution: the convolution of two integrable functions and is defined as This algebra is referred to as the Group Algebra of . 
By the Fubini–Tonelli theorem, the convolution is submultiplicative with respect to the norm, making a Banach algebra. The Banach algebra has a multiplicative identity element if and only if is a discrete group, namely the function that is 1 at the identity and zero elsewhere. In general, however, it has an approximate identity which is a net (or generalized sequence) indexed on a directed set such that The Fourier transform takes convolution to multiplication, i.e. it is a homomorphism of abelian Banach algebras (of norm ≤ 1): In particular, to every group character on corresponds a unique multiplicative linear functional on the group algebra defined by It is an important property of the group algebra that these exhaust the set of non-trivial (that is, not identically zero) multiplicative linear functionals on the group algebra; see section 34 of . This means the Fourier transform is a special case of the Gelfand transform. Plancherel and L2 Fourier inversion theorems As we have stated, the dual group of a locally compact abelian group is a locally compact abelian group in its own right and thus has a Haar measure, or more precisely a whole family of scale-related Haar measures. Since the complex-valued continuous functions of compact support on are -dense, there is a unique extension of the Fourier transform from that space to a unitary operator and we have the formula Note that for non-compact locally compact groups the space does not contain , so the Fourier transform of general -functions on is "not" given by any kind of integration formula (or really any explicit formula). To define the Fourier transform one has to resort to some technical trick such as starting on a dense subspace like the continuous functions with compact support and then extending the isometry by continuity to the whole space. This unitary extension of the Fourier transform is what we mean by the Fourier transform on the space of square integrable functions. 
The dual group also has an inverse Fourier transform in its own right; it can be characterized as the inverse (or adjoint, since it is unitary) of the Fourier transform. This is the content of the Fourier inversion formula which follows. In the case the dual group is naturally isomorphic to the group of integers and the Fourier transform specializes to the computation of coefficients of Fourier series of periodic functions. If is a finite group, we recover the discrete Fourier transform. Note that this case is very easy to prove directly. Bohr compactification and almost-periodicity One important application of Pontryagin duality is the following characterization of compact abelian topological groups: That being compact implies is discrete or that being discrete implies that is compact is an elementary consequence of the definition of the compact-open topology on and does not need Pontryagin duality. One uses Pontryagin duality to prove the converses. The Bohr compactification is defined for any topological group , regardless of whether is locally compact or abelian. One use made of Pontryagin duality between compact abelian groups and discrete abelian groups is to characterize the Bohr compactification of an arbitrary abelian locally compact topological group. The Bohr compactification of is , where H has the group structure , but given the discrete topology. Since the inclusion map is continuous and a homomorphism, the dual morphism is a morphism into a compact group which is easily shown to satisfy the requisite universal property. Categorical considerations Pontryagin duality can also profitably be considered functorially. In what follows, LCA is the category of locally compact abelian groups and continuous group homomorphisms. The dual group construction of is a contravariant functor LCA → LCA, represented (in the sense of representable functors) by the circle group as In particular, the double dual functor is covariant. 
A categorical formulation of Pontryagin duality then states that the natural transformation between the identity functor on LCA and the double dual functor is an isomorphism. Unwinding the notion of a natural transformation, this means that the maps are isomorphisms for any locally compact abelian group , and these isomorphisms are functorial in . This isomorphism is analogous to the double dual of finite-dimensional vector spaces (a special case, for real and complex vector spaces). An immediate consequence of this formulation is another common categorical formulation of Pontryagin duality: the dual group functor is an equivalence of categories from LCA to LCAop. The duality interchanges the subcategories of discrete groups and compact groups. If is a ring and is a left –module, the dual group will become a right –module; in this way we can also see that discrete left –modules will be Pontryagin dual to compact right –modules. The ring of endomorphisms in LCA is changed by duality into its opposite ring (change the multiplication to the other order). For example, if is an infinite cyclic discrete group, is a circle group: the former has so this is true also of the latter. Generalizations Generalizations of Pontryagin duality are constructed in two main directions: for commutative topological groups that are not locally compact, and for noncommutative topological groups. The theories in these two cases are very different. Dualities for commutative topological groups When is a Hausdorff abelian topological group, the group with the compact-open topology is a Hausdorff abelian topological group and the natural mapping from to its double-dual makes sense. If this mapping is an isomorphism, it is said that satisfies Pontryagin duality (or that is a reflexive group, or a reflective group). This has been extended in a number of directions beyond the case that is locally compact. 
In particular, Samuel Kaplan showed in 1948 and 1950 that arbitrary products and countable inverse limits of locally compact (Hausdorff) abelian groups satisfy Pontryagin duality. Note that an infinite product of locally compact non-compact spaces is not locally compact. Later, in 1975, Rangachari Venkataraman showed, among other facts, that every open subgroup of an abelian topological group which satisfies Pontryagin duality itself satisfies Pontryagin duality. More recently, Sergio Ardanza-Trevijano and María Jesús Chasco have extended the results of Kaplan mentioned above. They showed that direct and inverse limits of sequences of abelian groups satisfying Pontryagin duality also satisfy Pontryagin duality if the groups are metrizable or -spaces but not necessarily locally compact, provided some extra conditions are satisfied by the sequences. However, there is a fundamental aspect that changes if we want to consider Pontryagin duality beyond the locally compact case. Elena Martín-Peinador proved in 1995 that if is a Hausdorff abelian topological group that satisfies Pontryagin duality, and the natural evaluation pairing is (jointly) continuous, then is locally compact. As a corollary, all non-locally compact examples of Pontryagin duality are groups where the pairing is not (jointly) continuous. Another way to generalize Pontryagin duality to wider classes of commutative topological groups is to endow the dual group with a bit different topology, namely the topology of uniform convergence on totally bounded sets. The groups satisfying the identity under this assumption are called stereotype groups. This class is also very wide (and it contains locally compact abelian groups), but it is narrower than the class of reflective groups. Pontryagin duality for topological vector spaces In 1952 Marianne F. Smith noticed that Banach spaces and reflexive spaces, being considered as topological groups (with the additive group operation), satisfy Pontryagin duality. 
Later B. S. Brudovskiĭ, William C. Waterhouse and K. Brauner showed that this result can be extended to the class of all quasi-complete barreled spaces (in particular, to all Fréchet spaces). In the 1990s Sergei Akbarov gave a description of the class of the topological vector spaces that satisfy a stronger property than the classical Pontryagin reflexivity, namely, the identity where means the space of all linear continuous functionals endowed with the topology of uniform convergence on totally bounded sets in (and means the dual to in the same sense). The spaces of this class are called stereotype spaces, and the corresponding theory found a series of applications in Functional analysis and Geometry, including the generalization of Pontryagin duality for non-commutative topological groups. Dualities for non-commutative topological groups For non-commutative locally compact groups the classical Pontryagin construction stops working for various reasons, in particular, because the characters don't always separate the points of , and the irreducible representations of are not always one-dimensional. At the same time it is not clear how to introduce multiplication on the set of irreducible unitary representations of , and it is even not clear whether this set is a good choice for the role of the dual object for . So the problem of constructing duality in this situation requires complete rethinking. Theories built to date are divided into two main groups: the theories where the dual object has the same nature as the source one (like in the Pontryagin duality itself), and the theories where the source object and its dual differ from each other so radically that it is impossible to count them as objects of one class. The second type theories were historically the first: soon after Pontryagin's work Tadao Tannaka (1938) and Mark Krein (1949) constructed a duality theory for arbitrary compact groups known now as the Tannaka–Krein duality. 
In this theory the dual object for a group is not a group but a category of its representations . The theories of first type appeared later and the key example for them was the duality theory for finite groups. In this theory the category of finite groups is embedded by the operation of taking group algebra (over ) into the category of finite dimensional Hopf algebras, so that the Pontryagin duality functor turns into the operation of taking the dual vector space (which is a duality functor in the category of finite dimensional Hopf algebras). In 1973 Leonid I. Vainerman, George I. Kac, Michel Enock, and Jean-Marie Schwartz built a general theory of this type for all locally compact groups. From the 1980s the research in this area was resumed after the discovery of quantum groups, to which the constructed theories began to be actively transferred. These theories are formulated in the language of C*-algebras, or Von Neumann algebras, and one of its variants is the recent theory of locally compact quantum groups. One of the drawbacks of these general theories, however, is that in them the objects generalizing the concept of a group are not Hopf algebras in the usual algebraic sense. This deficiency can be corrected (for some classes of groups) within the framework of duality theories constructed on the basis of the notion of envelope of topological algebra. See also Peter–Weyl theorem Cartier duality Stereotype space Notes Citations References Harmonic analysis Duality theories Theorems in analysis Fourier analysis Lp spaces
Pontryagin duality
Mathematics
4,167
20,021,742
https://en.wikipedia.org/wiki/Pearloid
Pearloid is a plastic that is intended to resemble mother of pearl. It is commonly used in making musical instruments, especially for pickguards, electric guitar inlays, and accordions. Production Pearloid is produced by swirling together chunks of celluloid in a solvent, then curing, which gives it a mother of pearl effect. It is sliced and bonded to or inlaid in other materials, such as the wood of guitar necks. Use Pearloid is used in any context where genuine mother of pearl or abalone might be used, as it is much cheaper and doesn't deplete the supply of the natural material. Gibson uses it as a substitute for the mother of pearl inlays in the fretboards on most of its guitars. Various colored versions are often used on items intended to have a retro appearance. See also Cultured pearl Imitation pearl References Cellulose Pearls Plastics Thermoplastics
Pearloid
Physics
189
78,312,540
https://en.wikipedia.org/wiki/2MASS%20J05581644%E2%80%934501559
2MASS J0558 (also known as 2MASS J05581644–4501559) is a young red dwarf. It has one planetary-mass object orbiting it at a separation of 1043 astronomical units. The host star The primary was observed at the Southern African Large Telescope (SALT) and the spectrum agrees with an M4V spectral type. The star shows several features of youth, such as H-alpha emission and likely shallower absorption due to TiO, CaH and FeH. TESS shows a variability with a period of 1.56 days and several flares. A short rotation period and the frequency of flares also agree with a young age. The researchers use the rotation period to determine an age limit of 120–650 million years (Myr). The higher brightness in ultraviolet and possibly X-rays from GALEX and ROSAT is also in agreement with this age. The researchers find that the star could be a member of the 30–100 Myr old Octans-Near moving group, but the nature of this moving group is disputed. Planetary system CWISEP J055816.68-450233.6 was first identified as a possible proper motion object by the CatWISE team in 2020 in WISE data and with Spitzer follow-up, but this team was not able to confirm the motion of this object. Follow-up observations with Magellan/FIRE showed a near-infrared spectral type of T8.5 and it was mentioned for the first time that it could be a companion to 2MASS J0558 (see table 2). In 2024 it was discovered that this T-dwarf co-moves with 2MASS J0558 from WISE/NEOWISE data and it was given the name 0558B. The pair is separated by 38.67 arcseconds. The paper also credits the discoverers of this object: the CatWISE team and Backyard Worlds citizen scientists Arttu Sainio, Dan Caselden and Jim Walla. The spectrum does not show strong peculiarities and the companion was classified as a T8. It does show an enhanced K-band spectrum, but it is not clear if this is a sign of youth. From the age of the primary, the secondary has a mass of 6–12 , meaning it is below the deuterium-burning limit.
This makes this object a planetary-mass companion. See also List of directly imaged exoplanets List of exoplanets discovered in 2024 References Exoplanets discovered in 2024 M-type main-sequence stars T-type brown dwarfs Pictor 2MASS objects
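As a consistency check on the quoted numbers, the small-angle relation ties the projected separation in AU to the angular separation in arcseconds; a sketch (the implied distance is derived here for illustration and is not stated in the text above):

```python
sep_au = 1043.0      # projected separation quoted above, in astronomical units
sep_arcsec = 38.67   # angular separation quoted above, in arcseconds

# Small-angle rule: separation [AU] = separation [arcsec] * distance [pc],
# so the two quoted figures together imply the distance to the system.
distance_pc = sep_au / sep_arcsec
print(round(distance_pc, 1))  # roughly 27 pc (illustrative inference)
```

This is the standard conversion used for wide binaries: an angular separation of 1 arcsecond corresponds to a projected separation of 1 AU at a distance of 1 parsec.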
2MASS J05581644–4501559
Astronomy
548
14,861,779
https://en.wikipedia.org/wiki/View%20factor
In radiative heat transfer, a view factor, F_{1→2}, is the proportion of the radiation which leaves surface 1 that strikes surface 2. In a complex 'scene' there can be any number of different objects, which can be divided in turn into even more surfaces and surface segments. View factors are also sometimes known as configuration factors, form factors, angle factors or shape factors. Relations Summation Radiation leaving a surface within an enclosure is conserved. Because of this, the sum of all view factors from a given surface, S_i, within the enclosure is unity as defined by the summation rule ∑_{j=1}^{N} F_{i→j} = 1, where N is the number of surfaces in the enclosure. Any enclosure with N surfaces has a total of N² view factors. For example, consider a case where two blobs with surfaces A and B are floating around in a cavity with surface C. All of the radiation that leaves A must either hit B or C, or if A is concave, it could hit A. 100% of the radiation leaving A is divided up among A, B, and C. Confusion often arises when considering the radiation that arrives at a target surface. In that case, it generally does not make sense to sum view factors, as the view factor from A and the view factor from B (above) are essentially different units. C may see 10% of A's radiation and 50% of B's radiation and 20% of C's radiation, but without knowing how much each radiates, it does not even make sense to say that C receives 80% of the total radiation. Reciprocity The reciprocity relation for view factors allows one to calculate F_{2→1} if one already knows F_{1→2} and is given as A_1 F_{1→2} = A_2 F_{2→1}, where A_1 and A_2 are the areas of the two surfaces. Self-viewing For a convex surface, no radiation can leave the surface and then hit it later, because radiation travels in straight lines. Hence, for convex surfaces, F_{i→i} = 0. For concave surfaces, this does not apply, and so for concave surfaces F_{i→i} > 0. Superposition The superposition rule (or summation rule) is useful when a certain geometry is not available with given charts or graphs.
The superposition rule allows us to express the geometry that is being sought using the sum or difference of geometries that are known. View factors of differential areas Taking the limit of small flat surfaces gives differential areas; the view factor between two differential areas dA_1 and dA_2 at a distance s is given by dF_{1→2} = (cos θ_1 cos θ_2 / (π s²)) dA_2, where θ_1 and θ_2 are the angles between the surface normals and the ray connecting the two differential areas. The view factor from a general surface A_1 to another general surface A_2 is given by F_{1→2} = (1/A_1) ∫_{A_1} ∫_{A_2} (cos θ_1 cos θ_2)/(π s²) dA_2 dA_1. Similarly the view factor F_{2→1} is defined as the fraction of radiation that leaves A_2 and is intercepted by A_1, yielding the reciprocity equation A_1 F_{1→2} = A_2 F_{2→1}. The view factor is related to the etendue. Example solutions For complex geometries, the view factor integral equation defined above can be cumbersome to solve. Solutions are often referenced from a table of theoretical geometries. Common solutions are included in the following table: Nusselt analog A geometrical picture that can aid intuition about the view factor was developed by Wilhelm Nusselt, and is called the Nusselt analog. The view factor between a differential element dAi and the element Aj can be obtained by projecting the element Aj onto the surface of a unit hemisphere, and then projecting that in turn onto a unit circle around the point of interest in the plane of Ai. The view factor is then equal to the differential area dAi times the proportion of the unit circle covered by this projection. The projection onto the hemisphere, giving the solid angle subtended by Aj, takes care of the factors cos(θ2) and 1/r²; the projection onto the circle and the division by its area then takes care of the local factor cos(θ1) and the normalisation by π. The Nusselt analog has on occasion been used to actually measure form factors for complicated surfaces, by photographing them through a suitable fish-eye lens. (see also Hemispherical photography). But its main value now is essentially in building intuition.
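The double-area integral and the reciprocity relation can be checked numerically; a sketch using midpoint-rule quadrature for two parallel, facing rectangles (the geometry, separation, and grid resolution are arbitrary illustrative choices):

```python
import math

def parallel_rect_view_factor(w1, h1, w2, h2, d, n=16):
    """Midpoint quadrature of F_{1->2} = (1/A1) * Int_A1 Int_A2
    cos(t1) cos(t2) / (pi s^2) dA2 dA1 for two parallel, directly
    facing rectangles a distance d apart. For this geometry the
    normals point at each other, so cos(t1) = cos(t2) = d / s."""
    dA1 = (w1 / n) * (h1 / n)
    dA2 = (w2 / n) * (h2 / n)
    total = 0.0
    for i in range(n):
        x1 = (i + 0.5) * w1 / n
        for j in range(n):
            y1 = (j + 0.5) * h1 / n
            for k in range(n):
                x2 = (k + 0.5) * w2 / n
                for m in range(n):
                    y2 = (m + 0.5) * h2 / n
                    s_sq = (x2 - x1) ** 2 + (y2 - y1) ** 2 + d * d
                    # kernel cos(t1) cos(t2) / (pi s^2) = d^2 / (pi s^4)
                    total += d * d / (math.pi * s_sq * s_sq) * dA1 * dA2
    return total / (w1 * h1)  # divide by A1

# Two different-sized rectangles, so F12 != F21 but A1*F12 == A2*F21.
A1, A2 = 1.0 * 1.0, 2.0 * 1.5
F12 = parallel_rect_view_factor(1.0, 1.0, 2.0, 1.5, d=1.0)
F21 = parallel_rect_view_factor(2.0, 1.5, 1.0, 1.0, d=1.0)
print(F12, F21, A1 * F12 - A2 * F21)  # reciprocity residual near zero
```

The summation rule can be verified the same way once all surfaces of a closed enclosure are included; for an open pair of plates, F12 and F21 are simply each less than 1, with the remainder escaping the "enclosure".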
See also Radiosity, a matrix calculation method for solving radiation transfer between a number of bodies. Gebhart factor, an expression to solve radiation transfer problems between any number of surfaces. References External links A large number of 'standard' view factors can be calculated with the use of tables that are commonly provided in heat transfer textbooks. List of view factors for specific geometry cases View3D, a computer program (FOSS) for calculating view factors in 2D and 3D. Heat transfer
View factor
Physics,Chemistry
905
75,038,618
https://en.wikipedia.org/wiki/Oppo%20Find%20X6
The Oppo Find X6 is a series of two Android-based smartphones manufactured by Oppo as part of its flagship Find X series. Unveiled on 21 March 2023, both phones are successors to the Oppo Find X5 series. Currently, the Find X6 series is available for sale only in mainland China. Lineup The Find X6 series consists of two devices: the regular Find X6 and the top-of-the-line Find X6 Pro. The Find X6 features a curved display with a variable refresh rate from 40 Hz to 120 Hz, either 12 GB or 16 GB of RAM, and storage options from 256 GB to 512 GB. The Find X6 Pro flagship comes with a curved LTPO3 display that offers a variable refresh rate starting at 1 Hz and a higher 1440p resolution, either 12 GB or 16 GB of RAM, and storage options from 256 GB to 512 GB. Both phones feature 10-bit HDR10+ capable displays, but the Find X6 Pro has the largest battery capacity in the lineup and comes with upgraded cameras compared to the Find X6. Design Both the Find X6 and the Find X6 Pro feature curved displays and aluminium frames. However, only the Find X6 Pro's screen is protected by Corning Gorilla Glass Victus 2. The Find X6 comes in either Black, Green or Gold colourways. The Green and Gold variants were manufactured with Oppo's patented Oppo Glow process, while the Black variant features a mirrored glass rear. The Find X6 is also IP64 protected. The more advanced Find X6 Pro features IP68 water and dust resistance. Its colour options are Black, Green and Brown, with the Brown variant being the only one that is uniquely crafted with a dual-tone vegan leather and glass rear. The Black and Green variants are fitted with matte glass backs. Specifications Hardware The Find X6 is powered by the octa-core MediaTek Dimensity 9200 (1x3.05 GHz Cortex-X3 & 3x2.85 GHz Cortex-A715 & 4x1.80 GHz Cortex-A510), an upgrade from its predecessor, the Find X5. The flagship Find X6 Pro uses the Snapdragon 8 Gen 2, the highest-specced Snapdragon chip of 2023.
It operates on a more advanced octa-core system (1x3.2 GHz Cortex-X3 & 2x2.8 GHz Cortex-A715 & 2x2.8 GHz Cortex-A710 & 3x2.0 GHz Cortex-A510). Both the Find X6 and the Find X6 Pro offer UFS 4.0 without expandable storage, as well as 256 GB or 512 GB of ROM paired with either 12 or 16 GB of RAM. Both phones include Dolby Atmos stereo speakers with active noise cancellation, and have no audio jack. Biometric options include an optical fingerprint scanner and facial recognition. Camera While both the Find X6 and the Find X6 Pro are equipped with identical 32 MP front-facing Sony IMX709 cameras, subsequent software updates have enabled the latter to shoot videos in 4K resolution. The Find X6 has a slightly inferior rear camera setup, utilising the 50 MP Sony IMX890 as the main sensor and the 50 MP Isocell JN1 as the ultrawide sensor. The Sony IMX890 is also used as the periscope telephoto lens with 2.8x optical zoom and 6x hybrid zoom. The Find X6 Pro, on the other hand, features the 1-inch type Sony IMX989 main sensor, while both the ultrawide and the 2.8x periscope telephoto lens use the Sony IMX890, giving rise to the claim of having 'Three Main Cameras' that offer parity in image quality across focal lengths. Both phones also feature software-based tuning co-developed with Hasselblad and the custom-made MariSilicon X image processing NPU. At the end of 2023, it was the 8th best smartphone camera in the world according to DxOMark. Battery The Find X6 and Find X6 Pro's battery capacities are 4800 mAh and 5000 mAh respectively. The Find X6 supports up to 80 W wired charging, while the Find X6 Pro is capable of up to 100 W wired charging. In addition, the Find X6 Pro supports 50 W wireless charging, whereas the Find X6 misses out on wireless charging support. Oppo claims that its proprietary battery technology allows the Find X6 series to retain 80% of their battery capacity after 1,600 charging cycles.
Software The Find X6 and Find X6 Pro run on ColorOS 13.1, which is based on Android 13. See also List of large sensor camera phones References External links Find X5 Android (operating system) devices Mobile phones with multiple rear cameras Mobile phones with 4K video recording Flagship smartphones Mobile phones introduced in 2023
Oppo Find X6
Technology
1,041
2,230,989
https://en.wikipedia.org/wiki/Space%20Shuttle%20thermal%20protection%20system
The Space Shuttle thermal protection system (TPS) is the barrier that protected the Space Shuttle Orbiter during the extreme heat of atmospheric reentry. A secondary goal was to protect from the heat and cold of space while in orbit. Materials The TPS covered essentially the entire orbiter surface, and consisted of seven different materials in varying locations based on amount of required heat protection: Reinforced carbon–carbon (RCC), used in the nose cap, the chin area between the nose cap and nose landing gear doors, the arrowhead aft of the nose landing gear door, and the wing leading edges. Used where reentry temperature exceeded 1,260 °C. High-temperature reusable surface insulation (HRSI) tiles, used on the orbiter underside. Made of coated LI-900 silica ceramics. Used where reentry temperature was below 1,260 °C. Fibrous refractory composite insulation (FRCI) tiles, used to provide improved strength, durability, resistance to coating cracking and weight reduction. Some HRSI tiles were replaced by this type. Flexible Insulation Blankets (FIB), a quilted, flexible blanket-like surface insulation. Used where reentry temperature was below . Low-temperature Reusable Surface Insulation (LRSI) tiles, formerly used on the upper fuselage, but were mostly replaced by FIB. Used in temperature ranges roughly similar to FIB. Toughened unipiece fibrous insulation (TUFI) tiles, a stronger, tougher tile which came into use in 1996. Used in high and low temperature areas. Felt reusable surface insulation (FRSI). White Nomex felt blankets on the upper payload bay doors, portions of the mid fuselage and aft fuselage sides, portions of the upper wing surface and a portion of the OMS/RCS pods. Used where temperatures stayed below . Each type of TPS had specific heat protection, impact resistance, and weight characteristics, which determined the locations where it was used and the amount used.
The shuttle TPS had three key characteristics that distinguished it from the TPS used on previous spacecraft: Reusable Previous spacecraft generally used ablative heat shields which burned off during reentry and so could not be reused. This insulation was robust and reliable, and the single-use nature was appropriate for a single-use vehicle. By contrast, the reusable shuttle required a reusable thermal protection system. Lightweight Previous ablative heat shields were very heavy. For example, the ablative heat shield on the Apollo Command Module comprised about 15% of the vehicle weight. The winged shuttle had much more surface area than previous spacecraft, so a lightweight TPS was crucial. Fragile The only known technology in the early 1970s with the required thermal and weight characteristics was also so fragile, due to the very low density, that one could easily crush a TPS tile by hand. Purpose (Image caption: Discovery's under-wing surfaces are protected by thousands of High-Temperature Reusable Insulation tiles.) The orbiter's aluminum structure could not withstand temperatures over without structural failure. Aerodynamic heating during reentry would push the temperature well above this level in areas, so an effective insulator was needed. Reentry heating Reentry heating differs from the normal atmospheric heating associated with jet aircraft, and this governed TPS design and characteristics. The skin of high-speed jet aircraft can also become hot, but this is frictional heating from the air rubbing past the skin, similar to warming one's hands by rubbing them together. The orbiter reentered the atmosphere as a blunt body by having a very high (40°) angle of attack, with its broad lower surface facing the direction of flight. Over 80% of the heating the orbiter experienced during reentry was caused by compression of the air ahead of the hypersonic vehicle, in accordance with the basic thermodynamic relation between pressure and temperature.
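The compression-heating point can be made quantitative with the ideal-gas stagnation-temperature relation T0 = T∞ (1 + (γ − 1)/2 · M²); the numbers below are illustrative assumptions, and the result deliberately ignores the real-gas chemistry (dissociation, ionization) that keeps actual shock-layer temperatures far lower than the ideal-gas value:

```python
# Illustrative ideal-gas estimate only; not the shuttle's actual gas temperature.
gamma = 1.4    # ratio of specific heats for cold air (assumption)
T_inf = 220.0  # ambient temperature at high altitude, in kelvin (assumption)
M = 25.0       # approximate peak reentry Mach number (assumption)

# Stagnation temperature: the temperature the air would reach if brought
# to rest adiabatically by compression ahead of the vehicle.
T0 = T_inf * (1.0 + (gamma - 1.0) / 2.0 * M ** 2)
print(round(T0))  # 27720 kelvin under these ideal-gas assumptions
```

The absurdly high ideal-gas number is the point: at these speeds, essentially all of the vehicle's kinetic energy reappears as heat in the compressed air ahead of it, which is why compression, not skin friction, dominates reentry heating.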
A hot shock wave was created in front of the vehicle, which deflected most of the heat and prevented the orbiter's surface from directly contacting the peak heat. Therefore, reentry heating was largely convective heat transfer between the shock wave and the orbiter's skin through superheated plasma. The key to a reusable shield against this type of heating is very low-density material, similar to how a thermos bottle inhibits convective heat transfer. Some high-temperature metal alloys can withstand reentry heat; they simply get hot and re-radiate the absorbed heat. This technique, called heat sink thermal protection, was planned for the X-20 Dyna-Soar winged space vehicle. However, the amount of high-temperature metal required to protect a large vehicle like the Space Shuttle Orbiter would have been very heavy and entailed a severe penalty to the vehicle's performance. Similarly, ablative TPS would be heavy, possibly disturb vehicle aerodynamics as it burned off during reentry, and require significant maintenance to reapply after each mission. Unfortunately, TPS tile, which was originally specified never to take debris strikes during launch, in practice also needed to be closely inspected and repaired after each landing, due to damage potentially incurred during ascent, even before new on-orbit inspection policies were established following the loss of Space Shuttle Columbia. However, the average replacement rate was still low, with Discovery, for example, still retaining about 18,000 of its 24,000 original tiles at the end of its career. Detailed description The TPS was a system of different protection types, not just silica tiles. They are in two basic categories: tile TPS and non-tile TPS. The main selection criterion was to use the lightest weight protection capable of handling the heat in a given area. However, in some cases a heavier type was used if additional impact resistance was needed.
The FIB blankets were primarily adopted for reduced maintenance, not for thermal or weight reasons. Much of the shuttle was covered with LI-900 silica tiles, made from essentially very pure quartz sand. The insulation prevented heat transfer to the underlying orbiter aluminium skin and structure. These tiles were such poor heat conductors that one could hold one by the edges while it was still red hot. There were about 24,300 unique tiles individually fitted on the vehicle, for which the orbiter has been called "the flying brickyard". Researchers at the University of Minnesota and Pennsylvania State University are performing atomistic simulations to obtain an accurate description of the interactions of atomic and molecular oxygen with silica surfaces, in order to develop better high-temperature oxidation-protection systems for leading edges on hypersonic vehicles. The tiles were not mechanically fastened to the vehicle, but glued. Since the brittle tiles could not flex with the underlying vehicle skin, they were glued to Nomex felt Strain Isolation Pads (SIPs) with room temperature vulcanizing (RTV) silicone adhesive, which were in turn glued to the orbiter skin. These isolated the tiles from the orbiter's structural deflections and expansions. Gluing on the 24,300 tiles required nearly two man-years of work for every flight, partly because the glue dried quickly and new batches needed to be produced after every couple of tiles. An ad-hoc remedy that involved technicians spitting in the glue to slow down the drying process was common practice until 1988, when a tile-hazard study revealed that spit weakened the adhesive's bonding strength. Tile types High-temperature reusable surface insulation (HRSI) The black HRSI tiles provided protection against temperatures up to . There were 20,548 HRSI tiles which covered the landing gear doors, external tank umbilical connection doors, and the rest of the orbiter's under surfaces.
They were also used in areas on the upper forward fuselage, parts of the orbital maneuvering system pods, vertical stabilizer leading edge, elevon trailing edges, and upper body flap surface. They varied in thickness from , depending upon the heat load encountered during reentry. Except for closeout areas, these tiles were normally square. The HRSI tile was composed of high purity silica fibers. Ninety percent of the volume of the tile was empty space, giving it a very low density () making it light enough for spaceflight. The uncoated tiles were bright white in appearance and looked more like a solid ceramic than the foam-like material that they were. The black coating on the tiles was Reaction Cured Glass (RCG) of which tetraboron silicide and borosilicate glass were some of several ingredients. RCG was applied to all but one side of the tile to protect the porous silica and to increase the heat sink properties. The coating was absent from a small margin of the sides adjacent to the uncoated (bottom) side. To waterproof the tile, dimethylethoxysilane was injected into the tiles by syringe. Densifying the tile with tetraethyl orthosilicate (TEOS) also helped to protect the silica and added additional waterproofing. An uncoated HRSI tile held in the hand feels like a very light foam, less dense than styrofoam, and the delicate, friable material must be handled with extreme care to prevent damage. The coating feels like a thin, hard shell and encapsulates the white insulating ceramic to resolve its friability, except on the uncoated side. Even a coated tile feels very light, lighter than a same-sized block of styrofoam. As expected for silica, they are odorless and inert. 
HRSI was primarily designed to withstand transition from areas of extremely low temperature (the void of space, about ) to the high temperatures of re-entry (caused by interaction, mostly compression at the hypersonic shock, between the gases of the upper atmosphere and the hull of the Space Shuttle, typically around ). Fibrous Refractory Composite Insulation Tiles (FRCI) The black FRCI tiles provided improved durability, resistance to coating cracking and weight reduction. Some HRSI tiles were replaced by this type. Toughened unipiece fibrous insulation (TUFI) A stronger, tougher tile which came into use in 1996. TUFI tiles came in high temperature black versions for use in the orbiter's underside, and lower temperature white versions for use on the upper body. While more impact resistant than other tiles, white versions conducted more heat which limited their use to the orbiter's upper body flap and main engine area. Black versions had sufficient heat insulation for the orbiter underside but had greater weight. These factors restricted their use to specific areas. Low-temperature reusable surface insulation (LRSI) White in color, these covered the upper wing near the leading edge. They were also used in selected areas of the forward, mid, and aft fuselage, vertical tail, and the OMS/RCS pods. These tiles protected areas where reentry temperatures are below . The LRSI tiles were manufactured in the same manner as the HRSI tiles, except that the tiles were square and had a white RCG coating made of silica compounds with shiny aluminium oxide. The white color was by design and helped to manage heat on orbit when the orbiter was exposed to direct sunlight. These tiles were reusable for up to 100 missions with refurbishment (100 missions was also the design lifetime of each orbiter). They were carefully inspected in the Orbiter Processing Facility after each mission, and damaged or worn tiles were immediately replaced before the next mission.
Fabric sheets known as gap fillers were also inserted between tiles where necessary. These allowed for a snug fit between tiles, preventing excess plasma from penetrating between them, yet allowing for thermal expansion and flexing of the underlying vehicle skin. Prior to the introduction of FIB blankets, LRSI tiles occupied all of the areas now covered by the blankets, including the upper fuselage and the whole surface of the OMS pods. This TPS configuration was only used on Columbia and Challenger. Non-tile TPS Flexible Insulation Blankets/Advanced Flexible Reusable Insulation (FIB/AFRSI) Developed after the initial delivery of Columbia and first used on the OMS pods of Challenger. This white low-density fibrous silica batting material had a quilt-like appearance, and replaced the vast majority of the LRSI tiles. They required much less maintenance than LRSI tiles yet had about the same thermal properties. After their limited use on Challenger, they were used much more extensively beginning with Discovery and replaced many of the LRSI tiles on Columbia after the loss of Challenger. Reinforced carbon-carbon (RCC) The light gray material which withstood reentry temperatures up to protected the wing leading edges and nose cap. Each of the orbiters' wings had 22 RCC panels about thick. T-seals between each panel allowed for thermal expansion and lateral movement between these panels and the wing. RCC was a laminated composite material made from carbon fibres impregnated with a phenolic resin. After curing at high temperature in an autoclave, the laminate was pyrolyzed to convert the resin to pure carbon. This was then impregnated with furfuryl alcohol in a vacuum chamber, then cured and pyrolyzed again to convert the furfuryl alcohol to carbon. This process was repeated three times until the desired carbon-carbon properties were achieved. To provide oxidation resistance for reuse capability, the outer layers of the RCC were coated with silicon carbide.
The silicon-carbide coating protected the carbon-carbon from oxidation. The RCC was highly resistant to fatigue loading that was experienced during ascent and entry. It was stronger than the tiles and was also used around the socket of the forward attach point of the orbiter to the External Tank to accommodate the shock loads of the explosive bolt detonation. RCC was the only TPS material that also served as structural support for part of the orbiter's aerodynamic shape: the wing leading edges and the nose cap. All other TPS components (tiles and blankets) were mounted onto structural materials that supported them, mainly the aluminium frame and skin of the orbiter. Nomex Felt Reusable Surface Insulation (FRSI) This white, flexible fabric offered protection at up to . FRSI covered the orbiter's upper wing surfaces, upper payload bay doors, portions of the OMS/RCS pods, and aft fuselage. Gap fillers Gap fillers were placed at doors and moving surfaces to minimize heating by preventing the formation of vortices. Doors and moving surfaces created open gaps in the heat protection system that had to be protected from heat. Some of these gaps were safe, but there were some areas on the heat shield where surface pressure gradients caused a crossflow of boundary layer air in those gaps. The filler materials were made of either white AB312 fibers or black AB312 cloth covers (which contain alumina fibers). These materials were used around the leading edge of the nose cap, windshields, side hatch, wing, trailing edge of elevons, vertical stabilizer, the rudder/speed brake, body flap, and heat shield of the shuttle's main engines. On STS-114, some of this material was dislodged and determined to pose a potential safety risk. It was possible that the gap filler could cause turbulent airflow further down the fuselage, which would result in much higher heating, potentially damaging the orbiter. The cloth was removed during a spacewalk during the mission. 
Weight considerations While reinforced carbon–carbon had the best heat protection characteristics, it was also much heavier than the silica tiles and FIBs, so it was limited to relatively small areas. In general the goal was to use the lightest weight insulation consistent with the required thermal protection. Density of each TPS type: Total area and weight of each TPS type (used on Orbiter 102, pre-1996): Early TPS problems Slow tile application Tiles often fell off and caused much of the delay in the launch of STS-1, the first shuttle mission, which was originally scheduled for 1979 but did not occur until April 1981. NASA was unused to lengthy delays in its programs, and was under great pressure from the government and military to launch soon. In March 1979 it moved the incomplete Columbia, with 7,800 of the 31,000 tiles missing, from the Rockwell International plant in Palmdale, California to Kennedy Space Center in Florida. Beyond creating the appearance of progress in the program, NASA hoped that the tiling could be finished while the rest of the orbiter was prepared. This was a mistake; some of the Rockwell tilers disliked Florida and soon returned to California, and the Orbiter Processing Facility was not designed for manufacturing and was too small for its 400 workers. Each tile used cement that required 16 hours to cure. After the tile was affixed to the cement, a jack held it in place for another 16 hours. In March 1979 it took each worker 40 hours to install one tile; by using young, efficient college students during the summer the pace sped up to 1.8 tiles per worker per week. Thousands of tiles failed stress tests and had to be replaced. By fall NASA realized that the speed of tiling would determine the launch date. The tiles were so problematic that officials would have switched to any other thermal protection method, but none other existed. 
Because it had to be ferried without all tiles, the gaps were filled with material to maintain the Shuttle's aerodynamics while in transit. Concern over "zipper effect" The tile TPS was an area of concern during shuttle development, mainly concerning adhesion reliability. Some engineers thought a failure mode could exist whereby one tile could detach, and resulting aerodynamic pressure would create a "zipper effect" stripping off other tiles. Whether during ascent or reentry, the result would be disastrous. Concern over debris strikes Another problem was ice or other debris impacting the tiles during ascent. This was never fully solved, as the debris was never eliminated, and the tiles remained susceptible to damage from it. NASA's final strategy for mitigating this problem was to aggressively inspect for, assess, and address any damage that might occur, while on orbit and before reentry, in addition to on the ground between flights. Early tile repair plans These concerns were sufficiently great that NASA did significant work developing an emergency-use tile repair kit which the STS-1 crew could use before deorbiting. By December 1979, prototypes and early procedures were completed, most of which involved equipping the astronauts with a special in-space repair kit and a jet pack called the Manned Maneuvering Unit, or MMU, developed by Martin Marietta. Another element was a maneuverable work platform which would secure an MMU-propelled spacewalking astronaut to the fragile tiles beneath the orbiter. The concept used electrically controlled adhesive cups which would lock the work platform into position on the featureless tile surface. About one year before the 1981 STS-1 launch, NASA decided the repair capability was not worth the additional risk and training, so discontinued development. There were unresolved problems with the repair tools and techniques; also further tests indicated the tiles were unlikely to come off.
The first shuttle mission did suffer several tile losses, but they were in non-critical areas, and no "zipper effect" occurred. Columbia accident and aftermath On February 1, 2003, the Space Shuttle Columbia was destroyed on reentry due to a failure of the TPS. The investigation team found and reported that the probable cause of the accident was that during launch, a piece of foam debris punctured an RCC panel on the left wing's leading edge and allowed hot gases from the reentry to enter the wing and disintegrate the wing from within, leading to eventual loss of control and breakup of the shuttle. The Space Shuttle's thermal protection system received a number of controls and modifications after the disaster. They were applied to the three remaining shuttles, Discovery, Atlantis and Endeavour in preparation for subsequent mission launches into space. On 2005's STS-114 mission, in which Discovery made the first flight to follow the Columbia accident, NASA took a number of steps to verify that the TPS was undamaged. The Orbiter Boom Sensor System, a new extension to the Remote Manipulator System, was used to perform laser imaging of the TPS to inspect for damage. Prior to docking with the International Space Station, Discovery performed a Rendezvous Pitch Maneuver, simply a 360° backflip rotation, allowing all areas of the vehicle to be photographed from ISS. Two gap fillers were protruding from the orbiter's underside more than the nominally allowed distance, and the agency cautiously decided it would be best to attempt to remove the fillers or cut them flush rather than risk the increased heating they would cause. Even though each one protruded less than , it was believed that leaving them could cause heating increases of 25% upon reentry. Because the orbiter did not have any handholds on its underside (as they would cause much more trouble with reentry heating than the protruding gap fillers of concern), astronaut Stephen K. 
Robinson worked from the ISS's robotic arm, Canadarm2. Because the TPS tiles were quite fragile, there had been concern that anyone working under the vehicle could cause more damage to the vehicle than was already there, but NASA officials felt that leaving the gap fillers alone was a greater risk. In the event, Robinson was able to pull the gap fillers free by hand, and caused no damage to the TPS on Discovery. Tile donations With the impending Space Shuttle retirement, NASA was donating TPS tiles to schools, universities, and museums for the cost of shipping (US$23.40 each). About 7000 tiles were available on a first-come, first-served basis, but limited to one each per institution. See also Space Shuttle program Space Shuttle Columbia disaster Columbia Accident Investigation Board References "When the Space Shuttle finally flies", article written by Rick Gore. National Geographic (pp. 316–347, Vol. 159, No. 3, March 1981). http://www.datamanos2.com/columbia/natgeomar81.html Space Shuttle Operator's Manual, by Kerry Mark Joels and Greg Kennedy (Ballantine Books, 1982). The Voyages of Columbia: The First True Spaceship, by Richard S. Lewis (Columbia University Press, 1984). A Space Shuttle Chronology, by John F. Guilmartin and John Mauer (NASA Johnson Space Center, 1988). Space Shuttle: The Quest Continues, by George Forres (Ian Allan, 1989). Information Summaries: Countdown! NASA Launch Vehicles and Facilities (NASA PMS 018-B (KSC), October 1991). Space Shuttle: The History of Developing the National Space Transportation System, by Dennis Jenkins (Walsworth Publishing Company, 1996). U.S. Human Spaceflight: A Record of Achievement, 1961–1998 (NASA, Monographs in Aerospace History No. 9, July 1998). Space Shuttle Thermal Protection System, by Gary Milgrom (February 2013; free iTunes ebook download). 
https://itunes.apple.com/us/book/space-shuttle-thermal-protection/id591095660?mt=11 Notes External links https://web.archive.org/web/20060909094330/http://www-pao.ksc.nasa.gov/kscpao/nasafact/tps.htm https://web.archive.org/web/20110707103505/http://ww3.albint.com/about/research/Pages/protectionSystems.aspx http://science.ksc.nasa.gov/shuttle/technology/sts-newsref/sts_sys.html https://web.archive.org/web/20160307090308/http://science.ksc.nasa.gov/shuttle/nexgen/Nexgen_Downloads/Shuttle_Gordon_TPS-PUBLIC_Appendix.pdf Space Shuttle program Thermal protection Atmospheric entry
Space Shuttle thermal protection system
Engineering
5,026
59,625,475
https://en.wikipedia.org/wiki/John%20A.%20Osborn
John A. Osborn (1939–2000) was an inorganic chemist who made many contributions to organometallic chemistry. Osborn received his PhD under the mentorship of Geoffrey Wilkinson. During his doctoral studies Osborn contributed to the development of Wilkinson's catalyst. His thesis studies ranged widely. In 1967, he took a faculty position at Harvard University. At Harvard, he supervised the PhD theses of Richard Schrock, John Shapley, and Jay Labinger. During this time, the chemistry of [M(diene)(PR3)2]+ was advanced (M = Rh, Ir), laying the foundation for many subsequent developments. In 1975, Osborn took a faculty position at the Université Louis-Pasteur in Strasbourg, France, where he further broadened his research. References 1939 births 2000 deaths Alumni of Imperial College London 20th-century English chemists Inorganic chemists
John A. Osborn
Chemistry
183
32,892,599
https://en.wikipedia.org/wiki/Yakov%20Geronimus
Yakov Lazarevich Geronimus, sometimes spelled J. Geronimus (February 6, 1898, Rostov – July 17, 1984, Kharkov), was a Russian mathematician known for contributions to theoretical mechanics and the study of orthogonal polynomials. The Geronimus polynomials are named after him. References Geronimus, Yakov Lazarevich (1898–1984) National University of Kharkiv alumni Russian mathematicians Mathematical analysts
Yakov Geronimus
Mathematics
91
10,995,067
https://en.wikipedia.org/wiki/Rhenium%20pentachloride
Rhenium pentachloride is an inorganic compound with the formula . This red-brown solid is paramagnetic. Structure and preparation Rhenium pentachloride has a bioctahedral structure and can be described as Cl4Re(μ-Cl)2ReCl4. The (μ-Cl)2 part of this formula indicates that two chloride ligands are bridging ligands, i.e. they connect to two Re atoms. The Re-Re distance is 3.74 Å. The motif is similar to that seen for tantalum pentachloride. This compound was first prepared in 1933, a few years after the discovery of rhenium. The preparation involves chlorination of rhenium at temperatures up to 900 °C. The material can be purified by sublimation. ReCl5 is one of the most oxidized binary chlorides of Re. It does not undergo further chlorination. ReCl6 has been prepared from rhenium hexafluoride. Rhenium heptafluoride is known but not the heptachloride. Uses and reactions It degrades in air to a brown liquid. Although rhenium pentachloride has no commercial applications, it is of historic significance as one of the early catalysts for olefin metathesis. Reduction gives trirhenium nonachloride. Oxygenation affords the Re(VII) oxychloride: ReCl5 + 3 Cl2O → ReO3Cl + 5 Cl2 Comproportionation of the penta- and trichloride gives rhenium tetrachloride. References Rhenium compounds Chlorides Metal halides Substances discovered in the 1930s
Rhenium pentachloride
Chemistry
363
426,188
https://en.wikipedia.org/wiki/Radioallergosorbent%20test
A radioallergosorbent test (RAST) is a blood test that uses a radioimmunoassay to detect specific IgE antibodies in order to determine the substances a subject is allergic to. This is different from a skin allergy test, which determines allergy by the reaction of a person's skin to different substances. Medical uses The two most commonly used methods of confirming allergen sensitization are skin testing and allergy blood testing. Both methods are recommended by the NIH guidelines and have similar diagnostic value in terms of sensitivity and specificity. Advantages of the allergy blood test include excellent reproducibility across the full measuring range of the calibration curve, very high specificity (it binds allergen-specific IgE), and high sensitivity compared with skin prick testing. In general, this method of blood testing (in vitro, out of body), compared with skin-prick testing (in vivo, in body), has two major advantages: it is not always necessary to remove the patient from an antihistamine medication regimen, and it can be used when skin conditions (such as eczema) are so widespread that allergy skin testing cannot be done. Allergy blood tests, such as ImmunoCAP, are performed without procedure variations, and the results are of excellent standardization. Adults and children of any age can take an allergy blood test. For babies and very young children, a single needle stick for allergy blood testing is often more gentle than several skin tests. However, skin testing techniques have improved. Most skin testing does not involve needles and typically results in minimal patient discomfort. Drawbacks to RAST and ImmunoCAP techniques do exist. Compared to skin testing, ImmunoCAP and other RAST techniques take longer to perform and are less cost effective. Several studies have also found these tests to be less sensitive than skin testing for the detection of clinically relevant allergies. 
False positive results may be obtained due to cross-reactivity of homologous proteins or by cross-reactive carbohydrate determinants (CCDs). In the NIH food guidelines issued in December 2010 it was stated that "The predictive values associated with clinical evidence of allergy for ImmunoCAP cannot be applied to other test methods." With over 4000 scientific articles using ImmunoCAP and showing its clinical value, ImmunoCAP is perceived as the "gold standard" for in vitro IgE testing. Method The RAST is a radioimmunoassay to detect specific IgE antibodies to suspected or known allergens for the purpose of guiding a diagnosis of allergy. IgE is the antibody associated with Type I allergic response: for example, if a person exhibits a high level of IgE directed against pollen, the test may indicate the person is allergic to pollen (or pollen-like) proteins. A person who has outgrown an allergy may still have a positive IgE years after exposure. The suspected allergen is bound to an insoluble material and the patient's serum is added. If the serum contains antibodies to the allergen, those antibodies will bind to the allergen. Radiolabeled anti-human IgE antibody is then added; it binds to those IgE antibodies already bound to the insoluble material. The unbound anti-human IgE antibodies are washed away. The amount of radioactivity is proportional to the serum IgE for the allergen. RASTs are often used to test for allergies when: a physician advises against the discontinuation of medications that can interfere with test results or cause medical complications; a patient has severe skin conditions such as widespread eczema; or a patient has such a high sensitivity level to suspected allergens that any administration of those allergens might result in potentially serious side effects. 
Scale The RAST is scored on a scale from 0 to 6. History The market-leading RAST methodology was invented and marketed in 1974 by Pharmacia Diagnostics AB, Uppsala, Sweden, and the acronym RAST is actually a brand name. In 1989, Pharmacia Diagnostics AB replaced it with a superior test named the ImmunoCAP Specific IgE blood test, which literature may also describe as: CAP RAST, CAP FEIA (fluorescence enzyme immunoassay), and Pharmacia CAP. A review of applicable quality assessment programs shows that this new test has replaced the original RAST in approximately 80% of the world's commercial clinical laboratories, where specific IgE testing is performed. The newest version, the ImmunoCAP Specific IgE 0–100, is the only specific IgE assay to receive FDA approval to quantitatively report to its detection limit of 0.1 kU/L. This clearance is based on the CLSI/NCCLS-17A Limits of Detection and Limits of Quantitation, October 2004 guideline. In the guidelines for the diagnosis and management of food allergy issued in 2010, the United States National Institute of Allergy and Infectious Diseases recommended that RAST measurement of specific immunoglobulin E for the diagnosis of allergy be abandoned in favor of testing with more sensitive fluorescence enzyme-labeled assays. See also Prausnitz–Küstner test Skin allergy test References External links Allergies - American Academy of Allergy Asthma and Immunology Allergy Blood Testing - Lab Tests Online Blood tests Immunologic tests
Radioallergosorbent test
Chemistry,Biology
1,145
67,803,756
https://en.wikipedia.org/wiki/Guluronic%20acid
Guluronic acid is a uronic acid monosaccharide that may be derived from gulose. L-Guluronic acid is a C-3 epimer of D-galacturonic acid and a C-5 epimer of D-mannuronic acid. Along with D-mannuronic acid, L-guluronic acid is a component of alginic acid, a polysaccharide found in brown algae. α-L-Guluronic acid has been found to bind divalent metal ions (such as calcium and strontium) through the carboxylate moiety and through the axial-equatorial-axial arrangement of hydroxyl groups found around the ring. References Uronic acids
Guluronic acid
Chemistry
154
50,985,694
https://en.wikipedia.org/wiki/Cuphophyllus%20canescens
Cuphophyllus canescens is a species of agaric (gilled mushroom) in the family Hygrophoraceae, known from North America. In its wide sense (including the recently separated C. atlanticus) it has been assessed as globally "vulnerable" on the IUCN Red List of Threatened Species. Taxonomy The species was first described from North Carolina in 1942 by American mycologists Alexander H. Smith and Lexemuel Ray Hesler as Hygrophorus canescens. It was transferred to the genus Cuphophyllus by French mycologist Marcel Bon in 1990, at which time it was thought also to occur in northern Europe. As a result of molecular research, based on cladistic analysis of DNA sequences, Cuphophyllus canescens has, however, been found to be restricted to North America. Similar species Cuphophyllus atlanticus is very similar, but is said to have a pure gray to bluish gray cap and (microscopically) larger, subglobose spores. See also List of fungi by conservation status References Hygrophoraceae Fungi of North America Taxa named by Alexander H. Smith Taxa named by Lexemuel Ray Hesler Fungus species
Cuphophyllus canescens
Biology
251
3,117,447
https://en.wikipedia.org/wiki/Eta%20Leonis
Eta Leonis (η Leo, η Leonis) is a third-magnitude blue supergiant star in the constellation Leo, about 1,270 light-years away. Properties Eta Leonis is a blue supergiant with the stellar classification A0Ib. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. Though its apparent magnitude is 3.5, making it a relatively dim star to the naked eye, it is nearly 20,000 times more luminous than the Sun, with an absolute magnitude of -5.60. The Hipparcos astrometric data has estimated the distance of Eta Leonis to be roughly 390 parsecs from Earth, or 1,270 light years away. It is believed to be in a blue loop phase. Eta Leonis is apparently a multiple star system, but the number of components and their separation is uncertain. References External links Jim Kaler's Stars: Eta Leonis Leo (constellation) Leonis, Eta A-type supergiants BD+17 2171 Leonis, 30 3975 87737 049583
Eta Leonis
Astronomy
227
53,912,017
https://en.wikipedia.org/wiki/Matrilin
Matrilins are proteoglycan-associated proteins that are major components of extracellular matrix of various tissues. They include: Matrilin-1 Matrilin-2 Matrilin-3 Matrilin-4 Matrilin-1 and -3 are expressed near exclusively in skeletal tissues. Matrilin-2 and -4 have a much broader distribution and are also found in loose connective tissue. References Extracellular matrix proteins
Matrilin
Chemistry
93
48,287,017
https://en.wikipedia.org/wiki/NGC%206356
NGC 6356 is a globular cluster located in the constellation Ophiuchus. It is designated as a II in the Shapley–Sawyer Concentration Class and was discovered by the German-born British astronomer William Herschel on 18 June 1784. The star cluster is more dense and bright towards the middle. NGC 6356 is located 80' north east of the brighter NGC 6333. It is at a distance of 49,600 light years away from Earth. The cluster is relatively metal-rich and therefore has a large amount of interstellar dust in its core. See also List of NGC objects (6001–7000) List of NGC objects References External links Globular clusters 6356 Ophiuchus
NGC 6356
Astronomy
147
24,701,425
https://en.wikipedia.org/wiki/Kanade%E2%80%93Lucas%E2%80%93Tomasi%20feature%20tracker
In computer vision, the Kanade–Lucas–Tomasi (KLT) feature tracker is an approach to feature extraction. It was proposed mainly to deal with the problem that traditional image registration techniques are generally costly. KLT makes use of spatial intensity information to direct the search for the position that yields the best match. It is faster than traditional techniques because it examines far fewer potential matches between the images. The registration problem The traditional image registration problem can be characterized as follows: given two functions F(x) and G(x), representing pixel values at each location x in two images, respectively, where x is a vector, we wish to find the disparity vector h that minimizes some measure of the difference between F(x + h) and G(x), for x in some region of interest R. Some measures of the difference between F(x + h) and G(x): L1 norm: Σ_{x∈R} |F(x + h) − G(x)|; L2 norm: (Σ_{x∈R} [F(x + h) − G(x)]²)^{1/2}; negative of normalized correlation: −Σ_{x∈R} F(x + h)G(x) / (Σ_{x∈R} F(x + h)² · Σ_{x∈R} G(x)²)^{1/2}. Basic description of the registration algorithm The KLT feature tracker is based on two papers: In the first paper, Lucas and Kanade developed the idea of a local search using gradients weighted by an approximation to the second derivative of the image. One-dimensional case If h is the displacement between two images F(x) and G(x) = F(x + h), then for small h the approximation F′(x) ≈ [G(x) − F(x)]/h is made, so that h ≈ [G(x) − F(x)]/F′(x). This approximation to the gradient of the image is only accurate if the displacement of the local area between the two images to be registered is not too large. The approximation to h depends on x. For combining the various estimates of h at various values of x, it is natural to average them. The average can be further improved by weighting the contribution of each term to it inversely proportionally to an estimate of |G′(x) − F′(x)|, which is itself an estimate of |h F″(x)|, since F″(x) ≈ [G′(x) − F′(x)]/h. For the purpose of facilitating the expression, a weighting function is defined: w(x) = 1/|G′(x) − F′(x)|. The average with weighting is thereby: h ≈ Σ_x w(x)[G(x) − F(x)]/F′(x) / Σ_x w(x). Upon obtaining the estimate, F can be moved by the estimate of h. The procedure is applied repeatedly, yielding a type of Newton–Raphson iteration. 
The sequence of estimates will ideally converge to the best h. The iteration can be expressed by h₀ = 0, h_{k+1} = h_k + Σ_x w(x)[G(x) − F(x + h_k)]/F′(x + h_k) / Σ_x w(x). An alternative derivation The derivation above cannot be generalized well to two dimensions because the 2-D linear approximation occurs differently. This can be corrected by applying the linear approximation in the form F(x + h) ≈ F(x) + h F′(x) to find the h which minimizes the L2 norm measure of the difference (or error) between the curves, where the error can be expressed as E = Σ_x [F(x) + h F′(x) − G(x)]². To minimize the error with respect to h, partially differentiate and set it to zero: 0 = ∂E/∂h = Σ_x 2F′(x)[F(x) + h F′(x) − G(x)], which gives h = Σ_x F′(x)[G(x) − F(x)] / Σ_x F′(x)². This is basically the same as the 1-D case, except for the fact that the per-point estimates are now weighted by F′(x)². And the iteration form with weighting can be expressed as h₀ = 0, h_{k+1} = h_k + Σ_x w(x)F′(x + h_k)[G(x) − F(x + h_k)] / Σ_x w(x)F′(x + h_k)². Performance To evaluate the performance of the algorithm, we are naturally curious about the conditions under which, and how fast, the sequence of estimates h_k converges to the real h. Consider the case F(x) = sin x, G(x) = F(x + h): both versions of the registration algorithm will converge to the correct h for |h| < π, i.e. for initial misregistrations as large as one-half wavelength. The range of convergence can be improved by suppressing high spatial frequencies in the image, which could be achieved by smoothing the image, though that will also undesirably suppress small details of it. If the window of smoothing is much larger than the size of the object being matched, the object may be suppressed entirely, so that a match would be no longer possible. Since lowpass-filtered images can be sampled at lower resolution with no loss of information, a coarse-to-fine strategy is adopted. A low-resolution smoothed version of the image can be used to obtain an approximate match. Applying the algorithm to higher resolution images will refine the match obtained at lower resolution. As smoothing extends the range of convergence, the weighting function improves the accuracy of approximation, speeding up the convergence. Without weighting, the calculated displacement of the first iteration with F(x) = sin x falls off to zero as the displacement approaches one-half wavelength. 
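As an illustration of the least-squares update from the alternative derivation above, the following sketch recovers a sub-wavelength shift between two 1-D curves. This is not code from either paper; the function name, sampling grid, and iteration count are arbitrary illustrative choices.

```python
import numpy as np

def register_1d(F, G, h0=0.0, iters=15):
    """Estimate the displacement h for which G(x) ~ F(x + h).

    Each step applies the least-squares increment
        sum_x F'(x + h) * (G(x) - F(x + h)) / sum_x F'(x + h)^2,
    i.e. the Newton-Raphson-style iteration on the linearized error.
    F and G are callables on a continuous coordinate.
    """
    x = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
    dx = x[1] - x[0]
    h = h0
    for _ in range(iters):
        Fs = F(x + h)                        # F shifted by the current estimate
        dFs = (F(x + h + dx) - Fs) / dx      # finite-difference gradient of F
        resid = G(x) - Fs                    # remaining image difference
        h += np.sum(dFs * resid) / np.sum(dFs * dFs)
    return h

# Recover a known shift smaller than half a wavelength.
h_true = 0.4
h_est = register_1d(np.sin, lambda x: np.sin(x + h_true))
```

For the sinusoid example the iteration converges only while the initial misregistration stays below half a wavelength, mirroring the convergence condition discussed above.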
Implementation The implementation requires the calculation of the weighted sums of the quantities F′(x)[G(x) − F(x)] and F′(x)² over the region of interest R. Although F′(x) cannot be calculated exactly, it can be estimated by F′(x) ≈ [F(x + Δx) − F(x)]/Δx, where Δx is chosen appropriately small. More sophisticated techniques can be used for estimating the first derivatives, but in general such techniques are equivalent to first smoothing the function, and then taking the difference. Generalization to multiple dimensions The registration algorithm for 1-D and 2-D can be generalized to more dimensions. To do so, we try to minimize the L2 norm measure of error E = Σ_{x∈R} [F(x + h) − G(x)]², where x and h are n-dimensional row vectors. A linear approximation analogous to the 1-D case is F(x + h) ≈ F(x) + h (∂F/∂x)ᵀ, where ∂F/∂x is the gradient of F. Partially differentiating E with respect to h and setting the result to zero yields h = [Σ_x (G(x) − F(x)) ∂F/∂x][Σ_x (∂F/∂x)ᵀ(∂F/∂x)]⁻¹, which has much the same form as the 1-D version. Further generalizations The method can also be extended to take into account registration based on more complex transformations, such as rotation, scaling, and shearing, by considering x′ = xA + h, where A is a linear spatial transform. The error to be minimized is then E = Σ_x [F(xA + h) − G(x)]². To determine the amount ΔA to adjust A and Δh to adjust h, again, use the linear approximation F(x(A + ΔA) + (h + Δh)) ≈ F(xA + h) + (xΔA + Δh)(∂F/∂x)ᵀ. The approximation can be used similarly to find the error expression, which becomes quadratic in the quantities to be minimized. After figuring out the error expression, differentiate it with respect to the quantities to be minimized, set the results to zero, yielding a set of linear equations, then solve them. A further generalization is designed to account for the fact that the brightness may be different in the two views, due to the difference of the viewpoints of the cameras or to differences in the processing of the two images. Assume the difference as a linear transformation F(x) ≈ αG(x) + β, where α represents a contrast adjustment and β represents a brightness adjustment. 
Combining this expression with the general linear transformation registration problem gives E = Σ_x [F(xA + h) − (αG(x) + β)]² as the quantity to minimize with respect to A, h, α and β. Detection and tracking of point features In the second paper Tomasi and Kanade used the same basic method for finding the registration due to translation, but improved the technique by tracking features that are suitable for the tracking algorithm. The proposed features would be selected if both the eigenvalues of the gradient matrix were larger than some threshold. By a very similar derivation, the problem is formulated as Zd = e, where Z = Σ_x g(x)g(x)ᵀ is the 2×2 gradient matrix built from the image gradient g, d is the displacement of the feature window, and e is an error vector computed from the gradient-weighted difference of the two images. This is the same as the last formula of Lucas–Kanade above. A local patch is considered a good feature to track if both of the two eigenvalues (λ₁ and λ₂) of Z are larger than a threshold. A tracking method based on these two papers is generally considered a KLT tracker. Improvements and variations In a third paper, Shi and Tomasi proposed an additional stage of verifying that features were tracked correctly. An affine transformation is fit between the image of the currently tracked feature and its image from a non-consecutive previous frame. If the affine compensated image is too dissimilar the feature is dropped. The reasoning is that between consecutive frames a translation is a sufficient model for tracking, but due to more complex motion, perspective effects, etc., a more complex model is required when frames are further apart. Using a similar derivation as for the KLT, Shi and Tomasi showed that the search can be performed using the formula Tz = a, where T is a matrix of gradients, z is a vector of affine coefficients and a is an error vector. Compare this to Zd = e. References See also Kanade–Tomasi features in the context of feature detection Lucas–Kanade method, an optical flow algorithm derived from reference 1. Motion in computer vision
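The "good feature to track" eigenvalue criterion described above can be demonstrated numerically on synthetic patches; everything here (patch size, the step-edge test images, the helper name) is illustrative rather than taken from the papers. A flat patch and a pure edge both fail the test, while a corner passes.

```python
import numpy as np

def min_eigenvalue(patch):
    """Smaller eigenvalue of the 2x2 gradient matrix Z = sum g g^T
    of an image patch - the feature-selection score described above."""
    Iy, Ix = np.gradient(patch.astype(float))   # gradients along rows, columns
    Z = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    return np.linalg.eigvalsh(Z)[0]             # eigenvalues in ascending order

n = 16
flat = np.ones((n, n))                                         # no texture
edge = np.tile((np.arange(n) > n // 2).astype(float), (n, 1))  # vertical step edge
corner = np.outer(np.arange(n) > n // 2,
                  np.arange(n) > n // 2).astype(float)         # step corner
```

A tracker would keep only windows whose smaller eigenvalue exceeds a chosen threshold: of the three patches, only the corner yields a clearly positive score, which reflects why edges alone are not reliably trackable.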
Kanade–Lucas–Tomasi feature tracker
Physics
1,435
59,006,692
https://en.wikipedia.org/wiki/Trajectory%20inference
Trajectory inference or pseudotemporal ordering is a computational technique used in single-cell transcriptomics to determine the pattern of a dynamic process experienced by cells and then arrange cells based on their progression through the process. Single-cell protocols have much higher levels of noise than bulk RNA-seq, so a common step in a single-cell transcriptomics workflow is the clustering of cells into subgroups. Clustering can contend with this inherent variation by combining the signal from many cells, while allowing for the identification of cell types. However, some differences in gene expression between cells are the result of dynamic processes such as the cell cycle, cell differentiation, or response to external stimuli. Trajectory inference seeks to characterize such differences by placing cells along a continuous path that represents the evolution of the process rather than dividing cells into discrete clusters. In some methods this is done by projecting cells onto an axis called pseudotime which represents the progression through the process. Methods Since 2015, more than 50 algorithms for trajectory inference have been created. Although the approaches taken are diverse there are some commonalities to the methods. Typically, the steps in the algorithm consist of dimensionality reduction to reduce the complexity of the data, trajectory building to determine the structure of the dynamic process, and projection of the data onto the trajectory so that cells are positioned by their development through the process and cells with similar expression profiles are situated near each other. Trajectory inference algorithms differ in the specific procedure used for dimensionality reduction, the kinds of structures that can be used to represent the dynamic process, and the prior information that is required or can be provided. 
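The generic pipeline just described (dimensionality reduction, trajectory building, projection onto pseudotime) can be sketched on toy data with a linear topology. The PCA-plus-ranking scheme below is a minimal illustrative case, not the procedure of any specific tool, and all variable names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "expression matrix": 300 cells x 5 genes, each gene a linear
# function of a hidden progression variable t plus measurement noise.
t = rng.uniform(0.0, 1.0, 300)
loadings = np.array([1.0, -0.5, 2.0, 0.3, -1.2])
X = np.outer(t, loadings) + 0.05 * rng.normal(size=(300, 5))

# Step 1: dimensionality reduction (PCA via SVD of the centered data).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]               # projection onto the first principal component

# Steps 2-3: with a linear topology the trajectory is the PC1 axis itself;
# pseudotime is each cell's rank-normalized position along it.
order = np.argsort(pc1)
pseudotime = np.empty_like(pc1)
pseudotime[order] = np.linspace(0.0, 1.0, len(pc1))

# The inferred ordering should recover the hidden progression (up to sign).
rho = np.corrcoef(pseudotime, t)[0, 1]
```

Real methods replace each step with something more robust, for example cluster centers joined by a minimum spanning tree to support branching topologies, but the inferred pseudotime plays the same role.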
Dimensionality reduction The data produced by single-cell RNA-seq can consist of thousands of cells each with expression levels recorded across thousands of genes. In order to efficiently process data with such high dimensionality many trajectory inference algorithms employ a dimensionality reduction procedure such as principal component analysis (PCA), independent component analysis (ICA), or t-SNE as their first step. The purpose of this step is to combine many features of the data into a more informative measure of the data. For example, a coordinate resulting from dimensionality reduction could combine expression levels from many genes that are associated with the cell cycle into one value that represents a cell's position in the cell cycle. Such a transformation corresponds to dimensionality reduction in the feature space, but dimensionality reduction can also be applied to the sample space by clustering together groups of similar cells. Trajectory building Many methods represent the structure of the dynamic process via a graph-based approach. In such an approach the vertices of the graph correspond to states in the dynamic process, such as cell types in cell differentiation, and the edges between the nodes correspond to transitions between the states. The creation of the trajectory graph can be accomplished using k-nearest neighbors or minimum spanning tree algorithms. The topology of the trajectory refers to the structure of the graph and different algorithms are limited to creation of graph topologies of a particular type such as linear, branching, or cyclic. Use of prior information Some methods require or allow for the input of prior information which is used to guide the creation of the trajectory. The use of prior information can lead to more accurate trajectory determination, but poor priors can lead the algorithm astray or bias results towards expectations. 
Examples of prior information that can be used in trajectory inference are the selection of start cells that are at the beginning of the trajectory, the number of branches in the trajectory, and the number of end states for the trajectory. Software MARGARET MARGARET employs a deep unsupervised metric learning approach for inferring the cellular latent space and cell clusters. The trajectory is modeled using a cluster-connectivity graph to capture complex trajectory topologies. MARGARET utilizes the inferred trajectory for determining terminal states and inferring cell-fate plasticity using a scalable Absorbing Markov chain model. Monocle Monocle first employs a differential expression test to reduce the number of genes then applies independent component analysis for additional dimensionality reduction. To build the trajectory Monocle computes a minimum spanning tree, then finds the longest connected path in that tree. Cells are projected onto the nearest point to them along that path. p-Creode p-Creode finds the most likely path through a density-adjusted k-nearest neighbor graph. Graphs from an ensemble are scored with a graph similarity metric to select the most representative topology.  p-Creode has been tested on a range of single-cell platforms, including mass cytometry, multiplex immunofluorescence, and single-cell RNA-seq. No prior information is required. Slingshot Slingshot takes cluster labels as input and then orders these clusters into lineages by the construction of a minimum spanning tree. Paths through the tree are smoothed by fitting simultaneous principal curves and a cell's pseudotime value is determined by its projection onto one or more of these curves. Prior information, such as initial and terminal clusters, is optional. TSCAN TSCAN performs dimensionality reduction using principal component analysis and clusters cells using a mixture model. 
A minimum spanning tree is calculated using the centers of the clusters and the trajectory is determined as the longest connected path of that tree. TSCAN is an unsupervised algorithm that requires no prior information. Wanderlust/Wishbone Wanderlust was developed for analysis of mass cytometry data, but has been adapted for single-cell transcriptomics applications. A k-nearest neighbors algorithm is used to construct a graph which connects every cell to the cell closest to it with respect to a metric such as Euclidean distance or cosine distance. Wanderlust requires the input of a starting cell as prior information. Wishbone is built on Wanderlust and allows for a bifurcation in the graph topology, whereas Wanderlust creates a linear graph. Wishbone combines principal component analysis and diffusion maps to achieve dimensionality reduction then also creates a KNN graph. Waterfall Waterfall performs dimensionality reduction via principal component analysis and uses a k-means algorithm to find cell clusters. A minimal spanning tree is built between the centers of the clusters. Waterfall is entirely unsupervised, requiring no prior information, and produces linear trajectories. References External links A collection of 50+ trajectory inference methods within a common interface A table of tools for the analysis of single-cell RNA-seq data Single-cell RNA-seq pseudotime estimation algorithms DNA sequencing Molecular biology techniques Biotechnology
Trajectory inference
Chemistry,Biology
1,313
21,161,332
https://en.wikipedia.org/wiki/Niridazole
Niridazole is a schistosomicide. It is used to treat schistosomiasis, the helmintic disease caused by certain flatworms (trematodes) from the genus Schistosoma (formerly Bilharzia). It is also known by its trade name Ambilhar. It is usually given as tablets. Niridazole has central nervous system toxicity and can cause dangerous side effects, such as hallucinations. Also, it may cause allergic reactions in sensitive people. However, it is one of the most effective schistosomicide drugs. It has recently also been investigated for use in the treatment of periodontitis. Mechanism of action Niridazole is rapidly concentrated in the parasite and inhibits oogenesis and spermatogenesis. The compound also inhibits the phosphofructokinase enzyme, leading to glycogen depletion and hepatic shift. References Antiparasitic agents IARC Group 2B carcinogens Imidazolidinones Nitrothiazoles
Niridazole
Biology
222
37,905,049
https://en.wikipedia.org/wiki/C19H20FN3
The molecular formula C19H20FN3 (molar mass: 309.38 g/mol, exact mass: 309.1641 u) may refer to: Fluperlapine, or fluoroperlapine Gevotroline (WY-47,384) Molecular formulas
C19H20FN3
Physics,Chemistry
78
46,307,473
https://en.wikipedia.org/wiki/Code%20Louisville
Code Louisville is a public–private partnership program in Louisville, Kentucky, with the aim of fostering software developers to bolster technological innovation in the region. It received national attention in April 2015 when President Barack Obama visited the region to announce TechHire and promote the value of the federal government working with local governments. Purpose Code Louisville is a public–private partnership program in Louisville that began in November 2013. Metro Louisville Department of Economic Growth and Innovation, Greater Louisville Inc., EnterpriseCorp, the Louisville Free Public Library, and KentuckianaWorks are partnered with local employers who hire graduates of the program. It is a free 12-week online coding course open to those with a library card. It aims to foster software developers in the area, so as to bolster technological innovation in the region. The program works in collaboration with Treehouse, including a prerequisite course. Effectiveness The program has been lauded for its success compared to similar programs in other cities. In April 2015, President Obama visited Louisville to praise the program and to use it as an example of the federal TechHire initiative that provides grants to similar programs. In 2015, it was announced that Code Louisville would attempt to create a program to teach other cities how to run similar programs. See also Public-private partnerships in the United States References Economy of Louisville, Kentucky Public–private partnership projects in the United States 2013 establishments in Kentucky Computer science education Education in Louisville, Kentucky Projects established in 2013
Code Louisville
Technology
294
9,385,162
https://en.wikipedia.org/wiki/BSI%20Group
The British Standards Institution (BSI) is the national standards body of the United Kingdom. BSI produces technical standards on a wide range of products and services and also supplies standards certification services for business and personnel. History BSI was founded as the Engineering Standards Committee in London in 1901. It subsequently extended its standardization work and became the British Engineering Standards Association in 1918, adopting the name British Standards Institution in 1931 after receiving a Royal Charter in 1929. In 1998 a revision of the Charter enabled the organization to diversify and acquire other businesses, and the trading name was changed to BSI Group. The Group now operates in 195 countries. The core business remains standards and standards related services, although the majority of the Group's revenue comes from management systems assessment and certification work. In 2021, BSI appointed its first female chief executive officer, Susan Taylor Martin. Activities BSI produces British Standards, and, as the UK's National Standards Body, is also responsible for the UK publication, in English, of international and European standards. BSI is obliged to adopt and publish all European Standards as identical British Standards (prefixed BS EN) and to withdraw pre-existing British Standards that are in conflict. However, it has the option to adopt and publish international standards (prefixed BS ISO or BS IEC). In response to commercial demands, BSI also produces commissioned standards products such as Publicly Available Specifications, (PASs), Private Standards and Business Information Publications. These products are commissioned by individual organizations and trade associations to meet their needs for standardized specifications, guidelines, codes of practice etc. Because they are not subject to the same consultation and consensus requirements as formal standards, the lead time is shorter. 
BSI also publishes standards-related books, CD-ROMs, subscription and web-based products as well as providing training on standards-related issues. Management systems assessment and certification With 80,000 clients, BSI is one of the world's largest certification bodies. It audits and provides certification to companies worldwide who implement management systems standards. BSI also runs training courses that cover the implementation and auditing requirements of national and international management systems standards. It is independently accredited and assesses a wide range of standards and other specifications including: Testing Services and Healthcare Within Testing Services, BSI's best known product in the UK is the Kitemark, a registered certification mark first used in 1903. The Kitemark – which is recognized by 82% of UK adults – signifies products or services which have been assessed and tested as meeting the requirements of the related specification or standard within a Kitemark scheme. BSI also conducts testing of products for a range of certifications, including for CE marking. CE marking must be applied to a wide range of products intended for sale in the European Economic Area. Frequently manufacturers or importers need a third-party certification of their product from an accredited or 'Notified' body. BSI holds Notified Body status for 15 EU Directives, including construction products, marine equipment, pressurised equipment and personal protective equipment. BSI also conducts testing for manufacturers developing new products and has facilities to test across a wide range of sectors, including construction, fire safety, electrical and electronic and engineering products. Within Healthcare, BSI provides regulatory and quality management reviews and product certification for medical device manufacturers in Europe, the United States, Australia, Japan, Taiwan, Canada and China. It is the market leader in the US, the world's biggest healthcare market. 
Acquisitions Starting in 1998, BSI Group has adopted a policy of international growth through acquisition as follows: 1998: CEEM, USA and International Standards Certification Pte Ltd, Singapore 2002: KPMG's certification business in North America 2003: BSI Pacific Ltd, Hong Kong 2004: KPMG's certification business in the Netherlands 2006: Nis Zert, Germany; Entropy International Ltd, Canada & UK; Benchmark Certification Pty Ltd, Australia; ASI-QS, UK 2009: Supply Chain Security Division of First Advantage Corp. USA; Certification International S.r.l, Italy; EUROCAT, Germany 2010: GLCS, the leading certifier of gas related consumer equipment in the UK and one of the top three in Europe, the certification business of BS Services Italia S.r.l. (BSS); Systems Management Indonesia (SMI). 2013: 9 May 2013 – NCS International and its daughter company NCSI Americas, Inc. 2015: 24 January – EORM, a US consultancy specialising in environmental, health, safety (EHS) and sustainability services 2015: 30 January – the management systems certification business of PwC in South Africa 2015: 3 June – Hill County Environmental Inc, a US environmental and engineering services consultancy 2016: 4 April – Espion Ltd and Espion UK, experts at managing and securing corporate information 2016: 15 August – Atrium Environmental Health and Safety Services LLC, experts in occupational safety, industrial safety and environmental compliance 2016: 22 September – Creative Environment Solutions (CES) Corp., an Environmental and Safety consulting firm 2016: 4 October – Info-Assure Ltd, a leading provider of cyber security and information assurance 2016: 15 December – Quantum Management Group Inc, a US environmental, health and safety (EHS) consultancy 2017: 5 December – Neville Clarke, the Business Process Improvement Expert 2018: 8 November – AirCert GmbH, a specialist aerospace certification company located in Munich, Germany 2019: 3 April – AppSec Consulting, a US cybersecurity and information 
resilience company 2021: 1 February – Q-Audit, a JAS-ANZ accredited healthcare auditing body based in Sydney, Australia and Auckland, New Zealand. BSI Identify In 2021, BSI Group, supported by the Construction Products Association, led the development of a system known as BSI Identify, which has been established in response to Dame Judith Hackitt's recommendation that BSI Identify uses new Digital Object Identifier (DOI) technology "to deliver a unique, constant, and interoperable identifier", known as a BSI UPIN, "which can be assigned to products to help UK manufacturers to directly manage information about their products in the supply chain". The aim of the BSI Identify programme is that "wherever you are with [a] product, you can take a snapshot of the QR code with your mobile device and it will immediately take you to the product technical data sheet. You can see exactly what product it is, you can answer any questions about it, you can see installation advice etc." Arms See also Notes and references External links BSI Group United Kingdom BSI Group Certification marks Companies based in the London Borough of Hounslow Electrical safety standards organizations Trade associations based in the United Kingdom International Electrotechnical Commission United Kingdom Ig Nobel laureates Organizations established in 1901 Standards organisations in the United Kingdom 1901 establishments in the United Kingdom
BSI Group
Mathematics,Engineering
1,399
21,313,650
https://en.wikipedia.org/wiki/Exponential%20field
In mathematics, an exponential field is a field with a further unary operation that is a homomorphism from the field's additive group to its multiplicative group. This generalizes the usual idea of exponentiation on the real numbers, where the base is a chosen positive real number. Definition A field is an algebraic structure composed of a set of elements, F, two binary operations, addition (+) such that F forms an abelian group with identity 0F and multiplication (·), such that F excluding 0F forms an abelian group under multiplication with identity 1F, and such that multiplication is distributive over addition, that is for any elements a, b, c in F, one has a · (b + c) = (a · b) + (a · c). If there is also a function E that maps F into F, and such that for every a and b in F one has E(a + b) = E(a) · E(b), then F is called an exponential field, and the function E is called an exponential function on F. Thus an exponential function on a field is a homomorphism between the additive group of F and its multiplicative group. Trivial exponential function There is a trivial exponential function on any field, namely the map that sends every element to the identity element of the field under multiplication. Thus every field is trivially also an exponential field, so the cases of interest to mathematicians occur when the exponential function is non-trivial. Exponential fields are sometimes required to have characteristic zero as the only exponential function on a field with nonzero characteristic is the trivial one. To see this first note that for any element x in a field with characteristic p > 0, E(x)^p = E(px) = E(0) = 1. Hence, taking into account the Frobenius endomorphism, (E(x) − 1)^p = E(x)^p − 1 = 0. And so E(x) = 1 for every x. Examples The field of real numbers R, or (R, +, ·, 0, 1) as it may be written to highlight that we are considering it purely as a field with addition, multiplication, and special constants zero and one, has infinitely many exponential functions. One such function is the usual exponential function, that is E(x) = e^x, since we have e^(a + b) = e^a · e^b and e^0 = 1, as required. 
Considering the ordered field R equipped with this function gives the ordered real exponential field, denoted Rexp. Any positive real number a gives an exponential function on R, where the map E(x) = a^x satisfies the required properties. Analogously to the real exponential field, there is the complex exponential field, Cexp. Boris Zilber constructed an exponential field Kexp that, crucially, satisfies the equivalent formulation of Schanuel's conjecture with the field's exponential function. It is conjectured that this exponential field is actually Cexp, and a proof of this fact would thus prove Schanuel's conjecture. Exponential rings The underlying set F may not be required to be a field but instead allowed to simply be a ring, R, and concurrently the exponential function is relaxed to be a homomorphism from the additive group in R to the multiplicative group of units in R. The resulting object is called an exponential ring. An example of an exponential ring with a nontrivial exponential function is the ring of integers Z equipped with the function E which takes the value +1 at even integers and −1 at odd integers, i.e., the function E(x) = (−1)^x. This exponential function, and the trivial one, are the only two functions on Z that satisfy the conditions. Open problems Exponential fields are much-studied objects in model theory, occasionally providing a link between it and number theory as in the case of Zilber's work on Schanuel's conjecture. It was proved in the 1990s that Rexp is model complete, a result known as Wilkie's theorem. This result, when combined with Khovanskiĭ's theorem on pfaffian functions, proves that Rexp is also o-minimal. On the other hand, it is known that Cexp is not model complete. The question of decidability is still unresolved. Alfred Tarski posed the question of the decidability of Rexp and hence it is now known as Tarski's exponential function problem. It is known that if the real version of Schanuel's conjecture is true then Rexp is decidable. 
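The exponential ring on the integers described above, E(x) = (−1)^x, can be checked directly; a minimal sketch in Python (illustrative only, not from the source):

```python
def E(n: int) -> int:
    """Exponential function on the ring of integers Z:
    +1 at even integers, -1 at odd integers, i.e. E(n) = (-1)^n."""
    return 1 if n % 2 == 0 else -1

# E is a homomorphism from the additive group (Z, +) to the
# multiplicative group of units {+1, -1} of Z:
#   E(a + b) = E(a) * E(b) for all integers a, b.
for a in range(-10, 11):
    for b in range(-10, 11):
        assert E(a + b) == E(a) * E(b)

# E(0) must be the multiplicative identity.
assert E(0) == 1
```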
See also Ordered exponential field Notes Model theory Field (mathematics) Algebraic structures
Exponential field
Mathematics
857
14,909,851
https://en.wikipedia.org/wiki/Philips%20%3AYES
The Philips :YES was a home computer/personal computer released by Philips Austria in 1985. It was not fully IBM PC compatible, a reason given for its commercial failure. The system was only sold in limited quantities. Technical specifications Microprocessor: Intel 80186 @ 8 MHz ROM: 192 KiB RAM: 128 to 640 KiB Keyboard: mechanical, with 93 keys Operating system: DOS Plus (in 64 KiB ROM), MS-DOS, Concurrent DOS Storage: two 3½-inch drives, 720 KiB each. One or two optional external 3½-inch or 5¼-inch drives. Video modes: A0: Text, 40 columns × 25 rows, 8 colours A1: Text, 80 columns × 25 rows, 8 colours A2: Text, 80 columns × 25 rows, 2 colours + intensity G0: 160 × 252 pixels, 16 colours G1: 640 × 252 pixels, 2 colours + intensity G2: 320 × 252 pixels, 16 colours G3: 640 × 252 pixels, 4 colours The built-in graphics hardware (based on the Hitachi HD46505SP video controller) supported composite video output. An additional video module allowed output to TTL monochrome monitors, colour monitors or SCART televisions. Video RAM was shared with system RAM. Before using graphics modes, memory had to be allocated for them with the GRAPHICS or GRCHAR commands. An expansion card (the Professional Expansion Board) provided: Extra RAM. A SASI/SCSI hard-drive interface. A mouse interface. A battery-backed real-time clock. An additional expansion card was available in limited quantity (probably only sold in the Netherlands directly to Philips employees) to make the machine 100% IBM PC compatible. It consisted of two separate boards: one provided the actual compatibility and ended in an 8-bit ISA slot, into which a Hercules Graphics Card monochrome video card was plugged. This meant that using the card required plugging the monitor into the new video card, bypassing the onboard graphics hardware. This expansion card made it possible to run all DOS programs (including popular games of the time). 
Operating system versions Known operating systems adapted for the Philips :YES include: DOS Plus 1.? in ROM (with BDOS 4.1). The BDOS in ROM does not implement the S_OSVER call, which would have returned the version number to display. DOS Plus 1.1 on disk (with BDOS 4.1) DOS Plus 1.2 on disk (with BDOS 4.1) DOS Plus 2.1 on disk (with BDOS 5.0) Concurrent DOS MS-DOS 2.?? MS-DOS 3.10 See also Rodime (manufacturer of internal hard disk) MSX-DOS References Personal computers YES Computer-related introductions in 1985
Philips :YES
Technology
580
1,145,513
https://en.wikipedia.org/wiki/List%20of%20craters%20on%20Venus
This is a list of craters on Venus, named by the International Astronomical Union's (IAU) Working Group for Planetary System Nomenclature. All craters on Venus are named after famous women or female first names. (For features on Venus other than craters, see the list of montes on Venus and the list of coronae on Venus.) As of 2017, there are 900 named craters on Venus, fewer than the lunar and Martian craters but more than on Mercury. Other, non-planetary bodies with numerous named craters include Callisto (141), Ganymede (131), Rhea (128), Vesta (90), Ceres (90), Dione (73), Iapetus (58), Enceladus (53), Tethys (50) and Europa (41). For a full list, see List of craters in the Solar System. Dropped and not approved names See also List of montes on Venus List of coronae on Venus List of geological features on Venus Notes References External links USGS: Venus nomenclature USGS: Venus Nomenclature: Craters Venus Craters
List of craters on Venus
Astronomy
251
1,920,803
https://en.wikipedia.org/wiki/Contraflexure
In solid mechanics, a point along a beam under a lateral load is known as a point of contraflexure if the bending moment about the point equals zero. In a bending moment diagram, it is the point at which the bending moment curve intersects with the zero line (i.e. where the bending moment reverses direction along the beam). Knowing the place of the contraflexure is especially useful when designing reinforced concrete or structural steel beams and also for designing bridges. Flexural reinforcement may be reduced at this point. However, to omit reinforcement at the point of contraflexure entirely is inadvisable as the actual location is unlikely to realistically be defined with confidence. Additionally, an adequate quantity of reinforcement should extend beyond the point of contraflexure to develop bond strength and to facilitate shear force transfer. See also Deformation Engineering mechanics Flexural rigidity Flexural stress Fluid mechanics Inflection point Strength of materials References Solid mechanics
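As an illustration (not from the source), consider a beam of span L fixed at both ends under a uniformly distributed load w. Its bending moment is M(x) = (wL/2)x − (w/2)x² − wL²/12, and solving M(x) = 0 locates the two points of contraflexure at x = L/2 ± L/(2√3), i.e. at roughly 0.211L and 0.789L:

```python
import math

def contraflexure_points(w: float, L: float):
    """Points where the bending moment of a fixed-fixed beam under a
    uniformly distributed load w changes sign.

    M(x) = (w*L/2)*x - (w/2)*x**2 - w*L**2/12
    Solving M(x) = 0 gives x = L/2 +/- L/(2*sqrt(3)).
    """
    offset = L / (2 * math.sqrt(3))
    return (L / 2 - offset, L / 2 + offset)

x1, x2 = contraflexure_points(w=10.0, L=6.0)

# The bending moment vanishes (to rounding) at both points:
M = lambda x, w=10.0, L=6.0: (w * L / 2) * x - (w / 2) * x**2 - w * L**2 / 12
assert abs(M(x1)) < 1e-9 and abs(M(x2)) < 1e-9
```

Flexural reinforcement could, in principle, be reduced outside the central region between these two points, subject to the detailing caveats noted above.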
Contraflexure
Physics
195
45,061,245
https://en.wikipedia.org/wiki/Flow%20Cytometry%20Standard
Flow Cytometry Standard (FCS) is a data file standard for the reading and writing of data from flow cytometry experiments. The FCS specification has traditionally been developed and maintained by the International Society for Advancement of Cytometry (ISAC). FCS used to be the only widely adopted file format in flow cytometry. Recently, additional standard file formats have been developed by ISAC. File Format The FCS file format describes a file that is a combination of textual data followed by binary data. The order of the file layout is as follows: HEADER segment TEXT segment DATA segment Optional ANALYSIS segment CRC value Optional OTHER segments The HEADER segment is an ASCII text string that begins by identifying the version of the FCS standard used, followed by three pairs of byte offsets that designate the positions of the TEXT, DATA, and ANALYSIS segments. An example header segment is given below FCS3.0 58 4380 4381 5586 0 0 Because the field width of the header segment byte positions is constrained by 8 characters, the maximum position it is capable of storing is 99,999,999. Anything beyond that is encoded as a 0 for both the start and end position, and the corresponding TEXT segment keyword is used instead. The text segment is an ASCII text string that is divided into a series of key-value pairs that are delimited by some chosen character, e.g. '|'. The first character immediately following the header segment is the delimiter. An example of a header and text segment is given below FCS3.0 58 4380 4381 5586 0 0|$BEGINANALYSIS|0|$BEGINDATA|4381|$BEGINSTEXT|0|$BTIM|08:24:37.64|$BYTEORD|1,2,3,4|$CELLS|RBC|...| To be a valid FCS file, the text segment must contain all required keywords, which describe the DATA segment format and encoding. 
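A minimal sketch of parsing the HEADER segment shown above (simplified: real FCS files store the offsets in fixed-width, space-padded 8-character fields, so splitting on whitespace is only an approximation):

```python
def parse_fcs_header(header: str):
    """Parse a simplified FCS HEADER segment string.

    Returns the version string and the three (start, end) byte-offset
    pairs for the TEXT, DATA and ANALYSIS segments. A (0, 0) pair means
    the segment is absent, or its offsets exceed the 8-character field
    width and must be taken from TEXT-segment keywords instead.
    """
    fields = header.split()
    version = fields[0]
    offsets = [int(f) for f in fields[1:7]]
    return version, {
        "TEXT": (offsets[0], offsets[1]),
        "DATA": (offsets[2], offsets[3]),
        "ANALYSIS": (offsets[4], offsets[5]),
    }

version, segments = parse_fcs_header("FCS3.0    58 4380 4381 5586 0 0")
assert version == "FCS3.0"
assert segments["DATA"] == (4381, 5586)
assert segments["ANALYSIS"] == (0, 0)   # absent in this example
```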
For FCS version 3.1, the required FCS primary TEXT segment keywords are as follows: The DATA segment of the FCS file follows after the TEXT segment and is laid out event-wise (row-wise) according to the order described in the parameters (a.k.a. channels) $P1N $P2N$...PnN. An event is either an actual biological cell or some other mass that was large enough to trigger the data acquisition capturing device of the flow cytometer instrument. Data segments hold the following layout: Data Segment [Event1][Event2][Event3]...[Event$TOT] Each event is laid out according to the number of bytes described by $PnB for each parameter. These bytes are to be interpreted according to the combination specified by $BYTEORD and $DATATYPE. Event [$P1B][$P2B][$P3B]...[$PnB] Data structure Flow cytometry data is typically saved for analysis in the form of an array, with fluorescence and scatter channels represented in columns, and individual "events" (most of which are cells) forming the rows. The number of events acquired from each sample usually ranges between the low thousands and the low millions. History The first version of a Flow Cytometry Standard (FCS) was developed in 1984. Since then, FCS became the standard file format supported by all flow cytometry software and hardware vendors. FCS is a binary file format with three main segments: a text segment containing meta data in keyword/value pairs structures, a data segment usually containing a matrix of detected expression values (so called list mode format), and a rarely used analysis segment. Over the years, updates were incorporated to adapt to technological advancements in both flow cytometry and computing technologies. In 1990, FCS 2.0 was introduced. Features introduced in FCS 2.0 included the option of multiple data sets within a data file, the use of different byte orders accommodating hardware variations on different computing platforms, and basic compensation and scaling information. 
FCS 2.0 was followed by FCS 3.0 in 1997, which introduced the possibility of storing data sets larger than 100MB. The latest version, FCS 3.1, was introduced in 2010. It retains the basic FCS file structure and most features of previous versions of the standard. Changes included in FCS 3.1 address potential ambiguities in the previous versions and provide a more robust standard. They include simplified support for international characters and improved support for storing compensation. The major additions are support for preferred display scale, a standardized way of capturing the sample volume, information about the origins of the data file, and support for plate and well identification in high throughput, plate based experiments. See also Flow cytometry Flow cytometry bioinformatics References Flow cytometry Bioinformatics
Flow Cytometry Standard
Chemistry,Engineering,Biology
1,041
34,536,095
https://en.wikipedia.org/wiki/Thermotoga%20maritima
Thermotoga maritima is a hyperthermophilic, anaerobic organism that is a member of the order Thermotogales. T. maritima is well known for its ability to produce hydrogen (clean energy) and it is the only fermentative bacterium that has been shown to produce hydrogen beyond the Thauer limit (>4 mol H2/mol glucose). It employs [FeFe]-hydrogenases to produce hydrogen gas (H2) by fermenting many different types of carbohydrates. History First discovered in the sediment of a marine geothermal area near Vulcano, Italy, Thermotoga maritima resides in hot springs as well as hydrothermal vents. The ideal environment for the organism is a water temperature of , though it is capable of growing in waters of . Thermotoga maritima is the only bacterium known to grow at this high a temperature; the only other organisms known to live in environments this extreme are members of the domain Archaea. The hyperthermophilic abilities of T. maritima, along with its deep lineage, suggest that it is potentially a very ancient organism. Physical attributes Thermotoga maritima is a non-sporulating, rod-shaped, gram-negative bacterium. When viewed under a microscope, it can be seen to be encased in a sheath-like envelope which resembles a toga, hence the "toga" in its name. Metabolism As an anaerobic fermentative chemoorganotrophic organism, T. maritima catabolizes sugars and polymers and produces carbon dioxide (CO2) and hydrogen (H2) gas as by-products of fermentation. T. maritima is also capable of metabolizing cellulose as well as xylan, yielding H2 that could potentially be utilized as an alternative energy source to fossil fuels. Additionally, this species of bacteria is able to reduce Fe(III) to produce energy using anaerobic respiration. Various flavoproteins and iron-sulphur proteins have been identified as potential electron carriers for use during cellular respiration. However, when growing with sulfur as the final electron acceptor, no ATP is produced. 
Instead, this process eliminates inhibitory H2 produced during fermentative growth. Collectively, these attributes indicate that T. maritima is resourceful and capable of metabolizing a host of substances in order to carry out its life processes. Clean energy (biohydrogen) from T. maritima Global energy demand is expected to continue growing over the next 20 years. Among various energy sources, hydrogen serves as an excellent energy carrier due to its high energy content per unit weight. T. maritima is one of the fermentative bacteria that produce hydrogen at levels approaching the thermodynamic limit (4 mol H2/mol glucose). However, like other fermentative bacteria, its biohydrogen yield does not normally exceed 4 mol H2/mol glucose (the Thauer limit) because of its inherent tendency to use more energy for rapid cell division than for producing H2. For these reasons, fermentative bacteria have not been thought capable of producing hydrogen at a commercial scale. Overcoming this limit by improving the conversion of sugar to H2 could lead to a superior H2-producing biological system that may supersede fossil-fuel-based H2 production. Metabolic engineering in this bacterium led to the development of strains of T. maritima that surpassed the Thauer limit of hydrogen production. One of these strains, known as Tma200, produced 5.77 mol H2/mol glucose, the highest yield so far reported in a fermentative bacterium. In this strain, energy redistribution and metabolic rerouting through the pentose phosphate pathway (PPP) generated excess reductants while uncoupling growth from hydrogen synthesis. Uncoupling growth from product formation is viewed as a viable strategy to maximize product yield, and it has been achieved in this high-hydrogen-producing bacterium. Similar strategies can be adopted for other hydrogen-producing bacteria to maximize product yields. 
Hydrogenase activity Hydrogenases are metalloenzymes that catalyze the reversible hydrogen conversion reaction: H2 ⇌ 2 H+ + 2 e−. A Group C [FeFe]-hydrogenase from Thermotoga maritima (TmHydS) has shown modest hydrogen conversion activity and reduced sensitivity to the enzyme's inhibitor, CO, in comparison to Group A prototypical and bifurcating [FeFe]-hydrogenases. The TmHydS has a hydrogenase domain with distinct amino acid modifications in the active site pocket, including the presence of a Per-Arnt-Sim (PAS) domain. Genomic composition The genome of T. maritima consists of a single circular 1.8 megabase chromosome encoding 1,877 proteins. Within its genome it has several heat and cold shock proteins that are most likely involved in metabolic regulation and response to environmental temperature changes. It shares 24% of its genome with members of the Archaea; the highest percentage overlap of any bacterium. This similarity suggests horizontal gene transfer between Archaea and ancestors of T. maritima and could help to explain why T. maritima is capable of surviving in such extreme temperatures and conditions. The genome of T. maritima has been sequenced multiple times. Genome resequencing of T. maritima MSB8 genomovar DSM3109 determined that the earlier sequenced genome was an evolved laboratory variant of T. maritima with an approximately 8-kb deletion. Moreover, a variety of duplicated genes and direct repeats in its genome suggest their role in intra-molecular homologous recombination leading to gene deletion. A strain with a 10-kb gene deletion has been developed using experimental microbial evolution in T. maritima. Genetic system of Thermotoga maritima Thermotoga maritima has great potential in hydrogen synthesis because it can ferment a wide variety of sugars and has been reported to produce the highest amount of H2 (4 mol H2/mol glucose). 
Because a gene knockout mutant of T. maritima remained unavailable for the past 30 years, the majority of studies have focused either on heterologous gene expression in E. coli or on predictive models. Developing a genetic system for T. maritima has been challenging primarily because of the lack of a suitable heat-stable selectable marker. Recently, a reliable genetic system based on pyrimidine biosynthesis has been established in T. maritima. It relies upon a pyrE− mutant that was isolated after cultivating T. maritima on a pyrimidine-biosynthesis-inhibiting drug, 5-fluoroorotic acid (5-FOA). The pyrE− mutant is auxotrophic for uracil. The pyrE gene from a distantly related genus rescued the uracil auxotrophy of the pyrE− mutant of T. maritima and has proven to be a suitable marker. For the first time, the use of this marker allowed the development of an arabinose (araA) mutant of T. maritima, which was used to explore the role of the pentose phosphate pathway in hydrogen synthesis. The genome of T. maritima possesses direct repeats that have developed into paralogs; without a genetic system, the true function of these paralogs had remained unknown. The newly developed genetic system has been used to determine the function of the ATPase protein (MalK) of the maltose transporter, which is present in three copies. Disruption of each of the three putative ATPase-encoding subunit genes (malK), together with phenotype analysis, showed that only one of the three copies provides the ATPase function of the maltose transporter. Notably, T. maritima has paralogs of many genes, and determining their true functions now depends on the recently developed system. The genetic system also gives T. maritima great potential as a host for hyperthermophilic bacterial gene-expression studies; protein expression in this model organism promises fully functional protein without further treatment. Evolution Thermotoga maritima contains homologues of several competence genes, suggesting that it has an inherent system of internalizing exogenous genetic material, possibly facilitating genetic exchange between this bacterium and free DNA. Based on phylogenetic analysis of the small sub-unit of its ribosomal RNA, it has been recognized as having one of the deepest lineages of Bacteria. Furthermore, its lipids have a unique structure that differs from that of all other bacteria. References External links Thermotoga maritima genome Sequenced genome of Thermotoga maritima Type strain of Thermotoga maritima at BacDive - the Bacterial Diversity Metadatabase Thermotogota Organisms living on hydrothermal vents Bacteria described in 1986
Thermotoga maritima
Biology
1,961
15,065,614
https://en.wikipedia.org/wiki/MED28
Mediator of RNA polymerase II transcription subunit 28 is an enzyme that in humans is encoded by the MED28 gene. It forms part of the Mediator complex. Function Subunit Med28 of the Mediator may function as a scaffolding protein within Mediator by maintaining the stability of a submodule within the head module, and components of this submodule act together in a gene-regulatory programme to suppress smooth muscle cell differentiation. Thus, mammalian Mediator subunit Med28 functions as a repressor of smooth muscle-cell differentiation, which could have implications for disorders associated with abnormalities in smooth muscle cell growth and differentiation, including atherosclerosis, asthma, hypertension, and smooth muscle tumours. Interactions MED28 has been shown to interact with Merlin, Grb2 and MED26. See also Mediator complex References Further reading Protein families
MED28
Biology
177
37,994,198
https://en.wikipedia.org/wiki/Mu%20Muscae
Mu Muscae, Latinized from μ Muscae, is a solitary star in the southern constellation of Musca. It is visible to the naked eye as a faint, orange-hued star with an apparent visual magnitude of around 4.75. Based upon an annual parallax shift of 7.21 mas as seen from Earth, it is located about 450 light years from the Sun. The star is drifting further away with a radial velocity of +37 km/s. This is an evolved K-type giant star with a stellar classification of K4 III, having exhausted the supply of hydrogen at its core, then cooled and expanded to 53 times the Sun's radius. It is most likely on the red giant branch, rather than the asymptotic giant branch, and shows no signs of mass loss. Mu Muscae is a type Lb, oxygen-rich irregular variable with a small amplitude that ranges in visual magnitude between 4.71 and 4.76. It is radiating 602 times the luminosity of the Sun from its photosphere at an effective temperature of 3,930 K. References K-type giants Slow irregular variables Musca Muscae, Mu Durchmusterung objects 102584 057581 4530
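The distance figure follows directly from the quoted parallax; a quick check using d[pc] = 1000 / p[mas] and 1 pc ≈ 3.2616 ly:

```python
PARSEC_IN_LY = 3.2616  # light years per parsec

def parallax_to_lightyears(parallax_mas: float) -> float:
    """Distance from annual parallax: d[pc] = 1000 / p[mas],
    converted to light years."""
    return (1000.0 / parallax_mas) * PARSEC_IN_LY

d = parallax_to_lightyears(7.21)
assert 440 < d < 460   # consistent with "about 450 light years"
```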
Mu Muscae
Astronomy
257
40,032,774
https://en.wikipedia.org/wiki/Rafael%20Moure-Eraso
Rafael Moure-Eraso (born May 2, 1946) is a former chairman and chief executive of the U.S. Chemical Safety and Hazard Investigation Board (CSB). Early life Moure-Eraso was born in Cali, Colombia, in 1946. He grew up in Bogotá where he was educated by Augustinian friars and at the University of Los Andes. Education He received his B.Sc. in chemical engineering from the University of Pittsburgh in 1967 and M.Sc. in chemical engineering from Bucknell University in 1970. He received his M.Sc. and Ph.D. from the University of Cincinnati in Environmental Health-Industrial Hygiene in 1974 and 1982. Career For over 30 years, Moure-Eraso has been involved in workplace safety issues. Prior to joining the CSB Moure-Eraso served as a member of the National Advisory Committee on Occupational Health (NACOSH) for the Occupational Safety and Health Administration (OSHA) and as a member of the Scientific Advisory Committee of the National Institute for Occupational Safety and Health (NIOSH). Moure-Eraso has also worked as a chemical engineer for Rohm and Haas and the Dow Chemical Company. He was a faculty member at the University of Massachusetts Lowell for 22 years and chairman of the university's Department of Work Environment for 5 years. He has also served as an industrial hygienist engineer with the national offices of the United Automobile Workers union and the Oil, Chemical and Atomic Workers International Union. Chemical Safety Board Moure-Eraso was nominated by President Barack Obama to the U.S. Chemical Safety Board in March 2010 and confirmed by the Senate in June 2010. In March 2015, he was called to testify in front of the House Oversight Committee regarding the management of the Chemical Safety Board. Following that testimony fourteen members of the committee issued a letter to the White House calling on the president to use his statutory authority to remove Moure-Eraso from his position as chairman of the CSB. 
The letter cited a pattern of retaliation against whistleblowers, disenfranchisement of fellow board members, low morale in the organization, and possible violations of the Federal Records Act by using personal email accounts for official business. Moure-Eraso told the Los Angeles Times: "A lot of it is political. The mission of the organization is to produce good reports that make a difference for safety. We are doing that. I can show that we are producing the best reports ever produced in the agency. I stand by that. All of this other talk is peripheral... There have been a lot of accusations, but none of those have ever ended in any findings. The Office of Special Counsel has made no recommendations. Anybody can claim actions against whistleblowers, but there’s no evidence of this. To just say it is not enough. What I would like to be judged for is the quality of the product and the fulfillment of our mission. There will always be people complaining. But they are all rumors." He resigned his post on March 26, 2015. Further reading References 1946 births Swanson School of Engineering alumni Living people University of Los Andes (Colombia) alumni Bucknell University alumni University of Cincinnati alumni University of Massachusetts Lowell faculty United States Chemical Safety and Hazard Investigation Board
Rafael Moure-Eraso
Chemistry
668
2,228,872
https://en.wikipedia.org/wiki/Fileset
In computing, a fileset is a set of computer files linked by a defining property or common characteristic. There are different types of fileset, though the context will usually make the defining characteristic clear. Sometimes it is necessary to state the fileset type explicitly to avoid ambiguity; for example, the emacs editor explicitly mentions its Version Control (VC) fileset type to distinguish it from its "named files" fileset type. Fileset types While there is no formal classification of fileset types, some common usage cases do emerge: A fileset whose member files are simply enumerated or selected, as in the way named filesets are constructed in emacs. The set of files included in a software installation package, as used in both the AIX operating system installation packaging system and the HP-UX packaging system. Fileset types relating to filesystems, which may have a relationship to directories, as in the Namespace Database (NSDB) Protocol for Federated File Systems. In code, some libraries define a fileset object type, typically under a case-specific name such as Fileset or FileSet, which is used to hold an object that references a set of files. Specific examples Fileset has several meanings and usages, depending on the context. Some examples are: In the AIX operating system installation packaging system it is the smallest individually installable unit (a collection of files that provides a specific function). DCE/DFS uses the term fileset to define a tree containing directories, files, and mount points (links to other DFS filesets). A DFS fileset is also a unit of administrative control. Properties such as data location, storage quota, and replication are controlled at this level of granularity. The concept of a fileset in DFS is essentially identical to the concept of a volume in AFS. The glamor filesystem uses the same concept of filesets. 
Filesets are lightweight components compared to file systems, so managing a fileset is easier. In IBM GPFS, a fileset represents a set of files within a file system that have an independent inode space. References Computing terminology
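The fileset-as-object usage can be illustrated with a minimal sketch. The FileSet class below is hypothetical (it is not the API of any of the systems named above): it holds a named set of files that can be enumerated explicitly or selected by a shared characteristic such as a glob pattern.

```python
from pathlib import Path


class FileSet:
    """A minimal, hypothetical fileset object: a named set of files
    selected either by explicit enumeration or by a glob pattern."""

    def __init__(self, name, files=()):
        self.name = name
        self._files = set(Path(f) for f in files)

    @classmethod
    def from_glob(cls, name, root, pattern):
        # Select members by a common characteristic (here: a glob pattern).
        return cls(name, (p for p in Path(root).glob(pattern) if p.is_file()))

    def add(self, path):
        self._files.add(Path(path))

    def __contains__(self, path):
        return Path(path) in self._files

    def __len__(self):
        return len(self._files)
```

A fileset built this way supports membership tests and simple set-style management, mirroring the "named files" style of selection described above.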
Fileset
Technology
444
14,320,541
https://en.wikipedia.org/wiki/Preben%20Maegaard
Preben Maegaard (25 September 1935 – 25 March 2021) was a Danish renewable energy pioneer, author and expert. From the oil crisis of 1974 onwards he worked for the transition from fossil fuels to renewable energy. Preben Maegaard was co-founder of the Nordic Folkecenter for Renewable Energy, established in 1983, and its director from 1984 until 2013 (www.folkecenter.net). Life and work Preben Maegaard worked locally, nationally and internationally at the organisational, political and technological levels within a broad spectrum of renewable energy technologies. From 1979 to 1984 Preben Maegaard was chairman of the Danish Renewable Energy Association (OVE); from 1991 he was vice-president of Eurosolar (the European Renewable Energy Association), and in 2006 he was appointed its senior vice-president. From 1992 he was co-ordinator of the European Solar Prize in Denmark and a member of the European Solar Prize Jury. In 1995 he became a Member of the Senate, UTER, Technical University, Havana. In 1996 and the following years, Preben Maegaard was a member of the board of EUROSUN, an intergroup set up by the European Parliament. From 1999, he was a board member of the European Renewable Energies Federation (EREF) and Renewable Energy Adviser to the President of Mali, Alpha Konaré, leading to the establishment of the Mali Folkecenter. In 2001, he became chairperson of the Committee of the World Council for Renewable Energy (WCRE). When the World Wind Energy Association (WWEA) was founded in 2001, he became its first president, a position he held until 2005. In May 2006, the World Wind Energy Institute (WWEI) was initiated in Kingston, Canada, involving seven institutes from China, Brazil, Cuba, Canada, Russia, Egypt and Denmark. Preben Maegaard was appointed the first president of the WWEI. He was co-founder of the SolarSuperState Association when it was founded in 2012 and became its first president. 
As the director of the Folkecenter he was responsible for technological innovation in windmills, including the design, construction and implementation of machines from 20 to 525 kW and farm biogas digesters from 50 to 1000 m3, as well as integrated energy systems including hydrogen and biofuels for transport. The technological development and implementation activities took place in cooperation with DS Trade and Industry (www.ds-net.dk). Under his leadership the Folkecenter provided transfer of renewable energy technology to many countries and set up numerous pilot projects worldwide. In 2002, the liberal government suspended the national renewable energy development and implementation activities and programs in which Preben Maegaard held positions. The Danish state support that the Folkecenter had received since 1983 was suspended as well. Since 2005, the Folkecenter has received funding from the Energifonden and, since 2012, support from the Danish state. Preben Maegaard served on several Danish national governmental committees and councils for the development and implementation of renewable energy as a member of: the Renewable Energy Steering Group (1981-1991); the National Board of Technology; the National Renewable Energy Council in Denmark (1991-1996); the National Committee Biomass for Energy (1985-1989); the National Committee for Solar Energy (1995-2002); the National Committee for Wave Power (1997-2002); and the National Committee for Hydrogen from Renewables for Transport Purposes (1998-2002). For over three decades, Maegaard was conference director, organiser, speaker and/or participant of numerous national and international seminars, workshops and conferences, and chairman of the World Wind Energy Conferences: WWEC2003 in Cape Town, WWEC2004 in Beijing and WWEC2005 in Melbourne (www.wwindea.org). 
Preben Maegaard was author and/or co-author of numerous reports, books, articles and periodicals in Danish, English, German and Japanese within the field of renewable energy and sustainable development and has received a number of awards. In March 2010 Preben Maegaard was featured in documentary film The Fourth Revolution: Energy. Maegaard died on 25 March 2021, at the age of 85. Professional career Education 1957 - 1962: Economist, graduated in Microeconomics, Human Resources and International Trade. Copenhagen School of Business and Economics. Studies, Law and Ethnography, Copenhagen University. 1962 - 1965: Economist, Statistical Planning Bureau, Ministry of Housing Early Career 1965 - 1970: Lecturer, Adult Education, Social Sciences and Human Ecology, Open University Courses and Danish Folk High Schools 1970 - 1975: Manager, NORDENFJORD College of Innovative Productions and Human Development 1975 - 1983: Consultant and Project Implementation Manager for Small and Medium Size Enterprises, Renewable Energy. Project management | Organisation: Dansk Smedemesterforening (DS Trade & Industry) 1976 - 1982: Cooperation about development of construction manuals. 11, 22, 55 kW wind turbines for the Danish market. Marketing strategies. Association of 2000 members, 20 companies directly involved. Coordination, testing and approval. 1980 - 1982: Coordination of project group design engineering and product implementation. 100 kW wind turbine (Rudbjerg). Supported by Danish Agency of Industry. 1981 - 1985: Farm biogas, 50, 100, 200 cubic meter. Cooperation about development of construction manuals. Marketing strategies. Four companies directly involved. Coordination, testing and approval. Supported by Danish Agency of Industry Founder and director | Organisation: Nordic Folkecenter for Renewable Energy 1984 - 1992: Development of construction manual. Wind turbine, 150, 200, 275 and 525 kW. Corporation development of manufacturing incl. subcontractors. 
1987 - 1988: Large-scale solar collector for direct heating. Supported by Danish Agency of Industry. 1987 - 1989: Local island production and usage. Prototype development. Wind turbine project Bornholm, Denmark, 9 x 99 kW. Local production within the Baltic Power organization. 1988 - 1995: Concept development, technology research, project development, coordination and advising of Danish local cogeneration scheme. The target group was local consumer-owned district heating associations. Approximately 200 installations with a grand total of 2,700 MW by 1995. 1990 - 1992: Organization of local production of wind power in Poland. Local production organization, building and testing. 95 kW. Supported by Danish Agency of Industry 1991 - 1994: Technology transfer of hydrogen electrolyser system from Ukraine to Denmark. 1992: Solar technology, Hungary. Technology transfer. Partly local production. Training of local staff. 1992 - 1993: University of Pernambuco, Recife, Brazil. Transfer and installation of FC type 75 kW wind turbine for island wind-diesel operation. Supported by Danish Agency of Industry 1996 - 1997: 600 kW wind turbine project in Kaliningrad. Supported by Danish Environmental Protection Agency. 1997 - 1999: 10 kW wind turbine technology to Bayamo, Cuba, Technical University in Havana. Technology transfer, erection and testing. Local production. 1998: Home power, Transylvania, Romania, 3 kW wind turbine and solar cell system. Remote area, stand-alone system. Supported by European Commission. 1998: Biogas plant, Kaunas, Lithuania. Development, production, installation and commissioning using local producers. Supported by Danish Environmental Protection Agency. 1998 - 2002: Advisor to the President of Mali, Alpha Konaré. Implementation of village electrification in the Sikasso region. Building and operation of RE training center in Tabaccoro, Mali. Supported by Danida 1990 - 1995: Information and marketing of renewable energy and technology transfer of Danish know-how. 
Mobile exhibition campaigns and information to eight East European countries. Member of the Danish Board of Technology. 2001: Biogas plant, Yubetsu, Japan. Development and production in Denmark. Installation and commissioning using local assistance. Coordination for Kawasaki Engineering. 2009: Main speaker at an event on solar energy held in Uganda, organised by Joint Energy Environment Projects (JEEP). (As at July 2024, the Nordic Folkecenter is a partner and funder of JEEP) Awards 1978, Solar Prize, Danish Organisation for Renewable Energy (OVE) 1985, Plum Environmental Prize, Plum Fonden 1987, Environmental Prize, Association of Danish Engineers 1992, Wind Energy Prize, Denmark's Windmill Owners Association 1997, European Solar Prize, EUROSOLAR 2002, The GAIA Prize, Gaia Trust 2002, MERKUR Pioneer Prize, MERKUR cooperative Bank 2005, Nuclear-Free Future Award 2008, World Wind Energy Award, World Wind Energy Association 2010, CommunityPower Award, Toronto, Canada 
Hermann Scheer, The WORLD WIND ENERGY AWARD, WWEC2004; Beijing E Thybo Andersen, Finn; Maegaard, Preben: (2006) Byvandringer 1968 ¤ Bykulturen ved vejs ende, YNKB TEMA 12; København, ISSN 1602-2815 Maegaard, P., (2010) "Wind Energy Development and Application Prospects, of Non-Grid-Connected Wind Power," in: Proceedings of 2009 World Non-Grid-Connected Wind, Power and Energy Conference. IEEE Press. Maegaard, P.: (2012) "Integrated Systems to Reduce Global Warming" in "Handbook of Climate Mitigation", Vol IV, Springer Science, New York, El Bassam, N., Maegaard, P., Schlichting, M.L., (2013) "Distributed Renewable Energies for Off-Grid Communities," Elsevier Science, New York Maegaard, P., Palz, W., Krenz, A.: (2013) "Wind Power for the World, The Rise of Modern Wind Energy", Vol 2, Part I, Pan Stanford, Singapore Maegaard, P., Palz, W., Krenz, A.: (2013) "Wind Power for the World, International Reviews and Developments", Vol 2, Part II, Pan Stanford, Singapore References 1935 births 2021 deaths Danish environmentalists Energy engineers
Preben Maegaard
Engineering
2,271
11,921,616
https://en.wikipedia.org/wiki/Virtual%20Link%20Aggregation%20Control%20Protocol
Virtual LACP (VLACP) is an Avaya extension of the Link Aggregation Control Protocol to provide a Layer 2 handshaking protocol which can detect end-to-end failure between two physical Ethernet interfaces. It allows the switch to detect unidirectional or bi-directional link failures irrespective of intermediary devices and enables link recovery in less than one second. With VLACP, far-end failures can be detected, which allows a Link aggregation trunk to fail over properly when end-to-end connectivity is not guaranteed for certain links through the internet in an aggregation group. When a remote link failure is detected, the change is propagated to the partner port. See also MLT SMLT RSMLT External links Virtual Link Aggregation Control Protocol (VLACP) Retrieved 29 July 2011 ERS-8600 All Documentation -Retrieved 29 July 2011 VSP 7000 Command Line Interface Commands Reference, Command: vlacp Retrieved 1 May 2020 Avaya Ethernet Link protocols Network protocols Network topology Nortel protocols
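The failure-detection idea can be illustrated with a toy model. The sketch below is a generic heartbeat-timeout simulation, not Avaya's actual implementation; the class and parameter names are invented for illustration. Each endpoint expects periodic hello messages from its far-end partner and declares the link down when they stop arriving, even if the local physical port still shows carrier.

```python
class VlacpEndpoint:
    """Toy model of VLACP-style end-to-end failure detection:
    an endpoint expects a hello PDU from its partner every `period`
    ticks; if none is received within `timeout_scale` periods, the
    logical link is declared down."""

    def __init__(self, period=1, timeout_scale=3):
        self.period = period
        self.timeout = period * timeout_scale
        self.last_rx = 0
        self.link_up = True

    def receive_hello(self, now):
        # A hello from the far end proves end-to-end connectivity.
        self.last_rx = now
        self.link_up = True

    def tick(self, now):
        # Declare the link down if hellos stopped arriving, even though
        # an intermediate device may keep the local port physically up.
        if now - self.last_rx > self.timeout:
            self.link_up = False
        return self.link_up
```

In a real deployment the link-down event would then be propagated so the aggregation group fails over to a healthy member link.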
Virtual Link Aggregation Control Protocol
Mathematics,Technology
212
12,528,854
https://en.wikipedia.org/wiki/Hermite%20constant
In mathematics, the Hermite constant, named after Charles Hermite, determines how long a shortest element of a lattice in Euclidean space can be. The constant γn for integers n > 0 is defined as follows. For a lattice L in Euclidean space Rn with unit covolume, i.e. vol(Rn/L) = 1, let λ1(L) denote the least length of a nonzero element of L. Then √γn is the maximum of λ1(L) over all such lattices L. The square root in the definition of the Hermite constant is a matter of historical convention. Alternatively, the Hermite constant γn can be defined as the square of the maximal systole of a flat n-dimensional torus of unit volume. Example The Hermite constant is known in dimensions 1–8 and 24. For n = 2, one has γ2 = 2/√3. This value is attained by the hexagonal lattice of the Eisenstein integers. The constants for the missing values are conjectured. Estimates It is known that γn ≤ (4/3)^((n−1)/2). A stronger estimate due to Hans Frederick Blichfeldt is γn ≤ (2/π)·Γ(2 + n/2)^(2/n), where Γ is the gamma function. See also Loewner's torus inequality References Systolic geometry Geometry of numbers Mathematical constants
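The two-dimensional case can be checked numerically. The brute-force sketch below (illustrative only, using nothing beyond the definition) rescales the hexagonal lattice to unit covolume and confirms that the squared length of its shortest nonzero vector equals 2/√3, the value of γ2.

```python
import math


def shortest_vector_sq(b1, b2, bound=10):
    """Brute-force the squared length of a shortest nonzero vector
    of the planar lattice spanned by basis vectors b1 and b2."""
    best = float("inf")
    for a in range(-bound, bound + 1):
        for b in range(-bound, bound + 1):
            if a == 0 and b == 0:
                continue
            x = a * b1[0] + b * b2[0]
            y = a * b1[1] + b * b2[1]
            best = min(best, x * x + y * y)
    return best


# Hexagonal (Eisenstein) lattice, rescaled so the covolume is 1.
s = (2 / math.sqrt(3)) ** 0.5
b1 = (s, 0.0)
b2 = (s / 2, s * math.sqrt(3) / 2)

gamma2 = shortest_vector_sq(b1, b2)  # lambda_1(L)^2 for this unit-covolume L
```

The determinant of the basis is s²·(√3/2) = 1, so the lattice has unit covolume, and the shortest vectors have squared length s² = 2/√3 ≈ 1.1547.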
Hermite constant
Mathematics
257
70,590,809
https://en.wikipedia.org/wiki/VCX%20score
VCX score is a smartphone camera benchmarking score described as "designed to reflect the user experience regarding the image quality and the performance of a camera in a mobile device", developed by a non-profit organization, the VCX-Forum. VCX scores are used by specialist media and by VCX-Forum members to showcase the benchmarking of smartphones, as well as to market photography technology. VCX scoring methodology has been cited in published books and by independent imaging organizations: Book: Camera image quality benchmarking Article in Journal of Electronic Imaging - VCX: An industry initiative to create an objective camera module evaluation for mobile devices. Article in Journal of Electronic Imaging - VCX Version 2020: Further development of a transparent and objective evaluation scheme for mobile phone cameras. Service Provider VCX-Forum (where VCX is an acronym for Valued Camera eXperience) is an independent, non-governmental, standard-setting organization for image quality measurement and benchmarking (VCX score). Its members are drawn from mobile phone manufacturers, mobile operators, imaging labs, mobile and computer chipset manufacturers, sensor manufacturers, device manufacturers, software companies, equipment providers, and camera & accessory manufacturers, among others. 
VCX-Score methodology Tenets VCX score methodologies are based on the 5 Tenets: VCX-Forum test measurements shall ensure the out-of-the-box experience VCX-Forum shall remain 100% objective VCX-Forum shall remain open and transparent VCX-Forum shall employ an independent imaging lab for testing VCX-Forum shall seek continuous improvement Parameters To ensure the test results accurately reflect the user experience, the image quality is evaluated for five parameters: Spatial Resolution Texture loss – the ability of the device to reproduce low contrast, fine details Sharpening – the ability of the device to sharpen with minimum distracting artifacts Noise – the ability of the device to suppress noise while minimizing obfuscation of details Dynamic range – the ability of the device to capture maximum contrast in a scene Color Reproduction – the ability of the device to capture colors closely matching the original scene Setup The device under test is mounted on a tripod on rails to keep the reproduction scale constant between devices under test The entire lab is temperature-controlled to standard room temperature (23 °C ± 2 °C) The device under test is expected to: reproduce reflective test targets like the "TE42-LL" (TE42-LL target in A1066 and A 460 (Selfie) in 4:3 and 16:9); reproduce transmissive TE269B test target (for dynamic range measurements); and reproduce test charts while mounted on a hand simulation device (a device which simulates the movement of the human hand to measure the motion stabilization apparatus of the device, based on ISO 20954-2). The device under test is then used to capture a series of images and video in various controlled lighting conditions. A detailed description of the setup and procedure is available as a whitepaper on the VCX-Forum website, as well as in the book Camera Image Quality Benchmarking, page 318, section 9.4.3. Labs and testing Tests and benchmarks are conducted by independent labs. 
The test procedure, metrics, and weighting are dictated by the standard developed by VCX-Forum. Benchmark publication VCX scores are published on the VCX-Forum website. Parts of this publication are often reproduced in specialist media and smartphone vendor social media channels as part of their marketing campaign. Criticism Metrics and weighting VCX-Forum claims that all test measurements must ensure the out-of-the-box experience (Tenet 1 of VCX-Forum) but does not specify what happens when the devices are updated later on. VCX-Forum claims to be objective (Tenet 2 of VCX-Forum) but uses subjective components for the formation of the weighting itself. This subjective base is claimed to have come from blind tests for which no evidence has been provided on the website. Despite the claim that VCX is an open and transparent standard (Tenet 3 of VCX-forum), the details of weighting and scoring are only visible to members of the VCX-Forum. Most measurements are done with the device on a tripod and aimed at test charts. This does not reflect the common user scenario that VCX-Forum claims to reflect. References External links Knowledge management Free Internet forum software Technology assessment organisations Standards organizations based in Europe International Organization for Standardization Non-profit organisations based in North Rhine-Westphalia Digital photography Photographic lenses Smartphones Product testing Metrics Benchmarks (computing)
VCX score
Mathematics,Technology
931
3,922,818
https://en.wikipedia.org/wiki/Gonadotropin-releasing%20hormone%20receptor
The gonadotropin-releasing hormone receptor (GnRHR), also known as the luteinizing hormone releasing hormone receptor (LHRHR), is a member of the seven-transmembrane, G-protein coupled receptor (GPCR) family. It is the receptor of gonadotropin-releasing hormone (GnRH). Agonist binding to the GnRH receptor activates the Gq/11 family of heterotrimeric G proteins. The GnRHR is expressed on the surface of pituitary gonadotrope cells as well as lymphocytes, breast, ovary, and prostate. This receptor is a 60 kDa G protein-coupled receptor and resides primarily in the pituitary and is responsible for eliciting the actions of GnRH after its release from the hypothalamus. Upon activation, the LHRHr stimulates tyrosine phosphatase and elicits the release of LH from the pituitary. Evidence exists showing the presence of GnRH and its receptor in extrapituitary tissues as well as a role in progression of some cancers. Function Following binding of GnRH, the GnRHR associates with G-proteins that activate a phosphatidylinositol (PtdIns)-calcium second messenger system. Activation of the GnRHR ultimately causes the release of follicle stimulating hormone (FSH) and luteinizing hormone (LH). Genes There are two major forms of the GNRHR, each encoded by a separate gene (GNRHR and GNRHR2). Alternative splicing of the GNRHR gene, GNRHR, results in multiple transcript variants encoding different isoforms. More than 18 transcription initiation sites in the 5' region and multiple polyA signals in the 3' region have been identified for GNRHR. Regulation The GnRHR responds to GnRH as well as to synthetic GnRH agonists. Agonists stimulate the receptor, however prolonged exposure leads to a downregulation effect resulting in hypogonadism, an effect that is often medically utilized. GnRH antagonists block the receptor and inhibit gonadotropin release. GnRHRs are further regulated by the presence of sex hormones as well as activin and inhibin. 
Ligands Agonists Peptides Azagly-nafarelin Buserelin Deslorelin Fertirelin GnRH Gonadorelin Goserelin Histrelin Lecirelin Leuprorelin Nafarelin Peforelin Triptorelin Antagonists Peptides Abarelix Cetrorelix Degarelix Ganirelix Ozarelix Non-peptides Elagolix Linzagolix Opigolix Relugolix Sufugolix Pharmacoperones Current research is looking into pharmacoperones, or chemical chaperones, that promote the shuttling of the mature gonadotropin-releasing hormone receptor (GNRHR) protein to the cell surface, leading to a functional protein. Gonadotropin-releasing hormone receptor function has been shown to be deleteriously affected by point mutations in its gene. Some of these mutations, when expressed, cause the receptor to remain in the cytosol. An approach to rescue receptor function utilizes pharmacoperones or molecular chaperones, which are typically small molecules that rescue misfolded proteins to the cell surface. These interact with the receptor to restore cognate receptor function devoid of antagonist or agonist activity. This approach, when effective, should increase therapeutic reach. Pharmacoperones have been identified that restore function of the gonadotropin-releasing hormone receptor. Clinical implications Defects in the GnRHR are a cause of hypogonadotropic hypogonadism (HH). Normal puberty begins between ages 8 and 14 in girls and between 9 and 14 in boys. Puberty, however, for some children can come much sooner (precocious puberty) or much later (delayed puberty). In some cases puberty never occurs, thereby contributing to the estimated 35-70 million infertile couples worldwide. Among children, the abnormally early or late onset of puberty exerts intense emotional and social stress that too often goes untreated. The timely onset of puberty is regulated by many factors, and one factor that is often referred to as the master regulator of puberty and reproduction is GnRH. 
This peptide hormone is produced in the hypothalamus but is secreted and acts upon GnRHRs in the anterior pituitary to exert its effects on reproductive maturation. Understanding how the GnRHR functions has been key to developing clinical strategies to treat reproductive-related disorders. See also GnRH modulator References External links G protein-coupled receptors Gonadotropin-releasing hormone and gonadotropins
Gonadotropin-releasing hormone receptor
Chemistry
1,028
23,755,432
https://en.wikipedia.org/wiki/Oxirene
Oxirene is a heterocyclic chemical compound containing an unsaturated three-membered ring with two carbon atoms and one oxygen atom. The molecule was synthesized in low-temperature ices and detected upon sublimation by isomer-selective photoionization reflectron time-of-flight mass spectrometry. Quantum chemical computational techniques found the configuration to be extremely strained and proposed an antiaromatic 4π electron system; as such, oxirene is expected to be a very high-energy species. Experimental indications exist that substituted oxirenes (as intermediates or transition states) may be involved in the carbonylcarbene rearrangements observed in the Wolff rearrangement. Computational evidence also points to the intermediacy of oxirenes in the ozonolysis of alkynes. References Oxygen heterocycles Ethers Three-membered rings Antiaromatic compounds
Oxirene
Chemistry
182
2,522,588
https://en.wikipedia.org/wiki/Unusual%20number
In number theory, an unusual number is a natural number n whose largest prime factor is strictly greater than √n. A k-smooth number has all its prime factors less than or equal to k; therefore, an unusual number is non-√n-smooth. Relation to prime numbers All prime numbers are unusual. For any prime p, its multiples less than p² are unusual, that is p, 2p, ..., (p−1)p, which have a density 1/p in the interval (p, p²). Examples The first few unusual numbers are 2, 3, 5, 6, 7, 10, 11, 13, 14, 15, 17, 19, 20, 21, 22, 23, 26, 28, 29, 31, 33, 34, 35, 37, 38, 39, 41, 42, 43, 44, 46, 47, 51, 52, 53, 55, 57, 58, 59, 61, 62, 65, 66, 67, ... The first few non-prime (composite) unusual numbers are 6, 10, 14, 15, 20, 21, 22, 26, 28, 33, 34, 35, 38, 39, 42, 44, 46, 51, 52, 55, 57, 58, 62, 65, 66, 68, 69, 74, 76, 77, 78, 82, 85, 86, 87, 88, 91, 92, 93, 94, 95, 99, 102, ... Distribution If we denote the number of unusual numbers less than or equal to n by u(n), then u(n) behaves as follows: u(n) ~ n·ln(2). Richard Schroeppel stated in HAKMEM (1972), Item #29, that the asymptotic probability that a randomly chosen number is unusual is ln(2). In other words: u(n)/n → ln(2) ≈ 0.693 as n → ∞. References External links Integer sequences
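The definition is easy to test directly. The sketch below (helper names are my own, for illustration) computes the largest prime factor by trial division and reproduces the opening terms of the sequence of unusual numbers.

```python
def largest_prime_factor(n):
    """Largest prime factor of n >= 2, by trial division."""
    p, largest = 2, 1
    while p * p <= n:
        while n % p == 0:
            largest, n = p, n // p
        p += 1
    return max(largest, n) if n > 1 else largest


def is_unusual(n):
    # n is unusual iff its largest prime factor exceeds sqrt(n),
    # i.e. (largest prime factor)^2 > n (strict inequality).
    return n >= 2 and largest_prime_factor(n) ** 2 > n


unusual = [n for n in range(2, 68) if is_unusual(n)]
```

Note the strictness of the inequality: 25 = 5² is not unusual, since its largest prime factor 5 equals √25 rather than exceeding it.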
Unusual number
Mathematics
384
509,661
https://en.wikipedia.org/wiki/Thermogenin
Thermogenin (called uncoupling protein by its discoverers and now known as uncoupling protein 1, or UCP1) is a mitochondrial carrier protein found in brown adipose tissue (BAT). It is used to generate heat by non-shivering thermogenesis, and makes a quantitatively important contribution to countering heat loss in babies which would otherwise occur due to their high surface area-volume ratio. Mechanism UCP1 belongs to the UCP family which are transmembrane proteins that decrease the proton gradient generated in oxidative phosphorylation. They do this by increasing the permeability of the inner mitochondrial membrane, allowing protons that have been pumped into the intermembrane space to return to the mitochondrial matrix and hence dissipating the proton gradient. UCP1-mediated heat generation in brown fat uncouples the respiratory chain, allowing for fast substrate oxidation with a low rate of ATP production. UCP1 is related to other mitochondrial metabolite transporters such as the adenine nucleotide translocator, a proton channel in the mitochondrial inner membrane that permits the translocation of protons from the mitochondrial intermembrane space to the mitochondrial matrix. UCP1 is restricted to brown adipose tissue, where it provides a mechanism for the enormous heat-generating capacity of the tissue. UCP1 is activated in the brown fat cell by fatty acids and inhibited by nucleotides. Fatty acids are released by the following signaling cascade: Sympathetic nervous system terminals release Norepinephrine onto a Beta-3 adrenergic receptor on the plasma membrane. This activates adenylyl cyclase, which catalyses the conversion of ATP to cyclic AMP (cAMP). cAMP activates protein kinase A, causing its active C subunits to be freed from its regulatory R subunits. Active protein kinase A, in turn, phosphorylates triacylglycerol lipase, thereby activating it. 
The lipase converts triacylglycerols into free fatty acids, which activate UCP1, overriding the inhibition caused by purine nucleotides (GDP and ADP). During the termination of thermogenesis, thermogenin is inactivated and residual fatty acids are disposed of through oxidation, allowing the cell to resume its normal energy-conserving state. UCP1 is very similar to the ATP/ADP Carrier protein, or Adenine Nucleotide Translocator (ANT). The proposed alternating access model for UCP1 is based on the similar ANT mechanism. The substrate comes in to the half open UCP1 protein from the cytoplasmic side of the membrane, the protein closes the cytoplasmic side so the substrate is enclosed in the protein, and then the matrix side of the protein opens, allowing the substrate to be released into the mitochondrial matrix. The opening and closing of the protein is accomplished by the tightening and loosening of salt bridges at the membrane surface of the protein. Substantiation for this modelling of UCP1 on ANT is found in the many conserved residues between the two proteins that are actively involved in the transportation of substrate across the membrane. Both proteins are integral membrane proteins, localized to the inner mitochondrial membrane, and they have a similar pattern of salt bridges, proline residues, and hydrophobic or aromatic amino acids that can close or open when in the cytoplasmic or matrix state. Structure The atomic structure of human uncoupling protein 1 UCP1 has been solved by cryogenic-electron microscopy. The structure has the typical fold of a member of the SLC25 family. UCP1 is locked in a cytoplasmic-open state by guanosine triphosphate in a pH-dependent manner, preventing proton leak. Evolution UCP1 is expressed in brown adipose tissue, which is functionally found only in eutherians. 
The UCP1, or thermogenin, gene likely arose in an ancestor of modern vertebrates, but did not initially allow our vertebrate ancestor to use non-shivering thermogenesis for warmth. It was not until heat generation was adaptively selected for in the placental mammal descendants of this common ancestor that UCP1 evolved its current function in brown adipose tissue to provide additional warmth. While UCP1 plays a key thermogenic role in a wide range of placental mammals, particularly those with small body size and those that hibernate, the UCP1 gene has lost functionality in several large-bodied lineages (e.g. horses, elephants, sea cows, whales and hyraxes) and lineages with low metabolic rates (e.g. pangolins, armadillos, sloths and anteaters). Recent discoveries of non-heat-generating orthologues of UCP1 in fish and marsupials, other descendants of the ancestor of modern vertebrates, show that this gene was passed on to all modern vertebrates, but aside from placental mammals, none have heat-producing capability. This further suggests that UCP1 had a different original purpose, and in fact phylogenetic and sequence analyses indicate that UCP1 is likely a mutated form of a dicarboxylate carrier protein that adapted for thermogenesis in placental mammals. History Researchers in the 1960s investigating brown adipose tissue found that, in addition to producing more heat than typical of other tissues, brown adipose tissue seemed to short circuit, or uncouple, respiration coupling. Uncoupling protein 1 was discovered in 1976 by David G. Nicholls, Vibeke Bernson, and Gillian Heaton, and the discovery was published in 1978 and shown to be the protein responsible for this uncoupling effect. UCP1 was later purified for the first time in 1980 and was first cloned in 1988. Uncoupling protein two (UCP2), a homolog of UCP1, was identified in 1997. UCP2 localizes to a wide variety of tissues, and is thought to be involved in regulating reactive oxygen species (ROS). 
In the past decade, three additional homologs of UCP1 have been identified, namely UCP3, UCP4, and UCP5 (also known as BMCP1 or SLC25A14). Clinical relevance Methods of delivering UCP1 to cells by gene transfer therapy, or of upregulating it, have been an important line of enquiry in research into the treatment of obesity, due to the protein's ability to dissipate excess metabolic stores. See also 2,4-Dinitrophenol (A synthetic small-molecule proton shuttle with similar effects) References Further reading External links Seaweed anti-obesity tablet hope (BBC - Thermogenin mentioned as part of process) Cellular respiration Mitochondria
Thermogenin
Chemistry,Biology
1,422
18,486,537
https://en.wikipedia.org/wiki/Guanazodine
Guanazodine is an anti-hypertensive sympatholytic drug. References Adrenergic release inhibitors Azocanes Guanidines
Guanazodine
Chemistry
34
33,386,360
https://en.wikipedia.org/wiki/Envihab
Envihab is a medical research facility of the Institute of Aerospace Medicine of the German Aerospace Center (DLR) where the effects of diverse environmental conditions on humans are analyzed and explored, and possible countermeasures are developed. The name Envihab is a combination of the words "environment" and "habitat" (Latin: home/living space). The concept of Envihab is to address the complex problems of a life support system and the interaction between humans and the environment from a medical, biological and psychological point of view. The major focus is on research topics concerned with maintaining human health and performance. The modular house-in-house concept makes it possible to use the different units and the technical equipment without leaving the building. Twelve test subjects can be permanently exposed to identical, controlled environmental conditions. The total floor space of Envihab is approximately 3,500 square meters. Modules The five modules of Envihab are all linked with each other via the medical core area. They can be sealed off individually or in combination to control parameters such as acoustics, climate (temperature, humidity, light), oxygen content and pressure. Test subjects can be isolated, immobilized and deliberately exposed to stress. In addition, psychological and physiological methods for rehabilitation and as countermeasures, e.g. against the effects of immobilization or zero gravity, will be analyzed. Envihab is designed to house large training and simulation devices. Outstanding features include the human centrifuge in the center of Envihab, areas where the oxygen content can be reduced, and an area where pressure can be used to simulate altitudes of up to 5,500 meters. EnviBio The module EnviBio ("Envihab – Biology") provides a laboratory environment for microbiological research. 
The basic configuration and the spatial features of the module support research up to biological safety class S2 (GenTSV) and under class 8 cleanroom conditions. The three main topics investigated in EnviBio are: Microbiological environment monitoring: analysis of the diversity of the microbial burden and its evolution, including the influence of humans Investigation of components of bioregenerative life support systems Development and preparation of biological space experiments EnviFit The sojourn of astronauts in space elicits substantial deconditioning effects within the cardiovascular system and equally great losses of bone and muscle mass. Astronauts share these problems with the older population, and especially with the bed-ridden and the immobilised. This aspect becomes all the more important in light of the demographic development in Germany; even the young generation is affected. The costs incurred by akinesia and related disorders, such as osteoporosis, cardiovascular events, stroke and cancer, are immense. Hence, both astronauts and immobilised and elderly people warrant the development of efficient countermeasures. Accordingly, preventive interventions need to be identified in order to mitigate or even prevent the negative side-effects of immobilisation. EnviFit ("Envihab – Fitness") is an experimental station organized in different modules, in which physiological alterations caused by immobilisation and microgravity can be simulated. One way to do this is by 6° head-down-tilt bed-rest studies. Another method is dry water immersion, in which the test subjects float on a water-impermeable canvas cover and thus become 'weightless'. EnviMeet EnviMeet is the area which will be accessible to the public. Visitors will be informed about the goals, projects and studies of Envihab as well as its functional elements, in order to get an authentic impression of vibrant science. 
EnviSim EnviSim is an experimental station, unique worldwide, in which environmental conditions can be simulated precisely. Thus, typical characteristics of diverse surroundings (cabin, space station, field hospital, etc.) can be generated realistically. On the one hand, analyses under uncommon and extreme conditions become feasible; on the other hand, analyses of human behaviour in defined and familiar environments become far more economical, e.g. because flight tests can be dispensed with. In addition, it will also be possible to modify single parameters of the surroundings explicitly, so that the effect of a single factor as well as the interactions of multiple factors can be analysed with regard to their impact on the human being. EnviSim is planned as a multi-purpose area for exploring diverse problems in the aerospace field. EnviSim is also planned to serve as a habitat, making it possible for crews to live there for a longer period. For the exploration of long-term space missions, a control center will be installed which simulates authentic flights and space stations. The following research topics will be part of EnviSim: Decision behaviour and performance in a team under work load Modification of social structures under isolation Team formation and efficiency Impact of altered communication opportunities on team work Test of digital expert systems for performance and social conflicts EnviRec Long-term habitation in artificial or isolated work environments entails the risk of sensory deprivation, with serious dangers for human psychology and performance. The main focus of EnviRec ("Envihab – Recreation") is the exploration of diverse approaches to compensating for this deprivation. To this end, authentic environmental impressions will be presented to crew members as a means of recovery. 
In conjunction with long-term studies in the EnviSim module and interdisciplinary research, the effects on crew members, on team work and on the supervision of the system as a whole will be quantified. The following questions will be analysed in EnviRec: Test of virtual methods that support relieving stress Virtual reality and immersion Impact of virtual methods on performance and social behaviour Research Envihab is designed for long-term medical habitat research, which in general will strengthen Cologne and North Rhine-Westphalia as a location for aerospace business and science and will sharpen the profile of the DLR as a national and international research center of excellence in the field of aerospace medicine. Supported by the Regionale 2010, a research facility will be built that will deliver scientific progress and economic benefit based on results that can be used by industry. At the same time, Envihab will increase the public's awareness of central questions of the future concerning life on Earth. The new research facility of the DLR will create the basis for conducting integrative research in the field of life sciences at the highest international level. Prior to conducting an experiment in orbit, the parameters of the project are investigated on Earth. This applies to technical and materials science experiments, and is even more important when humans in space are involved or medical experiments are conducted. In the future, the scientists of the Institute of Aerospace Medicine of the German Aerospace Center (DLR) will have at their disposal a modern research facility customized to their needs in which to conduct their studies: the Envihab. Architecture Within the scope of the Regionale 2010, the architect's office Glass Kramer Löbbert won the pan-European architectural competition for the Envihab project at the German Aerospace Center (DLR) in Cologne, Germany. 
The building consists of two levels: the user level, located on the ground floor, and the technical level, located on the second floor, which houses the entire building-services units. The user level, grouped around the modules, the dividable auditorium and the exhibition space, will be accessible to the public. References http://www.dlr.de/envihab/desktopdefault.aspx/ http://www.dlr.de/me/ Spaceflight Medical research institutes in Germany Medical and health organisations based in North Rhine-Westphalia Organisations based in Cologne
Envihab
Astronomy
1,607
73,252,205
https://en.wikipedia.org/wiki/Bizen%20pottery%20kiln%20ruins
The Bizen pottery kiln ruins are an archaeological site consisting of the remains of kilns for firing Bizen ware pottery from the end of the Muromachi period to the Edo period, located in the Imbe neighborhood of the city of Bizen, Okayama Prefecture, in the San'yo region of Japan. The site is divided into three locations: the Inbe Minami-Ogama site, the Inbe Nishi-Ogama site and the Inbe Kita-Ogama site. The Inbe Minami-Ogama site has been protected by the central government as a National Historic Site since 1959, and the other two sites were added in 2009. Overview Bizen ware has ties to Sue pottery from the Heian period in the 6th century, and made its appearance during the Kamakura period of the 14th century, when changes in lifestyles had led to a demand for hard, durable and practical pottery for everyday use. Bizen was considered one of the Six Ancient Kilns by the scholar Koyama Fujio. Until the construction of the large kilns, small kilns were scattered throughout the mountains of the Uraibe and Kumayama areas of former Bizen Province. However, as the popularity of Bizen ware increased, large kilns were built to meet the need for mass production from the late Muromachi period. The rustic taste of Bizen ware came to be prized in the Japanese tea ceremony that flourished in the Momoyama period, and the large kilns reached their peak during the Momoyama period and the early Edo period. During the Edo period, the Ikeda clan daimyo of Okayama Domain continued to support the kilns and gave special privileges to the families who operated them. However, after the middle of the Edo period, ceramics began to be fired all over the country, and sales of Bizen ware gradually declined, eliminating the need for mass production. From the mid-Edo period onwards, maintenance of the kilns became an increasing burden, and by the Tenpō era (1830-1843) in the latter part of the Edo period the large kilns were abandoned in favor of smaller and more economical kilns. 
Inbe Minami-Ogama site The Inbe Minami-Ogama site is located at the northern foot of Mount Kayabara, approximately 200 meters south of Inbe Station on the JR West Ako Line. It consists of three kilns: the east kiln, the central kiln, and the west kiln, along with a midden where damaged items and kiln tools were discarded. The east kiln is the largest kiln in Japan, with a total length of 54 meters and a width of 5 meters. The central kiln is about 30 meters long and 2.3 meters wide. The west kiln is about 31 meters long and 2.8 meters wide. The kilns have a noborigama tunnel structure with an arched ceiling extending up the slope of the mountain. These kilns were used to mass-produce pots, jars, sake bottles, mortars, and other daily items. The amount of firewood required for one firing was up to 60 tons, and the kilns could fire 34,000 to 35,000 items at one time. Further west of the west kiln is the site of the Tenpō kiln, which was built around 1840 and was in operation until around 1885–1886. It is approximately 18.5 meters long and 4.5 meters wide. In addition, a small Heian period kiln with a total length of about 4 to 5 meters and a width of less than 1 meter was found at an elevation of 70 meters halfway up the mountain. The site was designated as a national historic site on May 13, 1959. Inbe Nishi-Ogama site The Inbe Nishi-Ogama site is located at the eastern foot of Mount Iou, about 600 meters northwest of Inbe Station. Currently, the traces of three kilns have been confirmed, the largest of which is about 40 meters long and about 4 meters wide. Similar to the Minami-Ogama site, the kilns at this site mainly produced miscellaneous daily wares such as vases, jars, sake bottles, and mortars. On March 21, 1976, it was designated as a Bizen City Historic Site, and on February 12, 2009, it became part of the National Historic Site. Inbe Kita-Ogama site The Inbe Kita-Ogama site is located about 300 meters north of Inbe Station, at the southern foot of Mount Furo, around Inbe Shrine. 
Currently, traces of four kilns have been identified. One (approximately 45 meters long and 4.7 meters wide) is separated from the others by the road leading from Amatsu Shrine to Inbe Shrine. This kiln is presumed to have been built in the Momoyama period. The other three are built parallel to the northwest slope of Inbe Shrine. The north-westernmost kiln is confirmed to be about 47 meters long and 5.4 meters wide, and it is known from historical documents that this kiln existed during the Ōei era (1394-1427) in the early Muromachi period. However, among the excavated items, no items from before the latter half of the Muromachi period have been found. As with the Inbe Minami-Ogama and Inbe Nishi-Ogama kilns, the Inbe Kita-Ogama kilns mainly produced miscellaneous daily wares. The kiln remained in operation into the mid-Edo period; however, when it needed repairs in 1720, it was not possible to raise the necessary funding, and the kiln was reduced in size to a third of its original length. On October 6, 1971, it was designated as a Bizen City Historic Site, and on February 12, 2009, it became part of the National Historic Site. See also List of Historic Sites of Japan (Okayama) References External links Okayama Prefecture Official home page Okayama Tourist Information official site Bizen, Okayama Muromachi period Japanese pottery kiln sites History of Okayama Prefecture Historic Sites of Japan Bizen Province
Bizen pottery kiln ruins
Chemistry,Engineering
1,240
4,006,161
https://en.wikipedia.org/wiki/Sodium%20diuranate
Sodium diuranate, also known as the yellow oxide of uranium, is an inorganic chemical compound with the chemical formula . It is a sodium salt of a diuranate anion. It forms a hexahydrate . Sodium diuranate is commonly referred to by the initials SDU. Along with ammonium diuranate it was a component in early yellowcakes. The ratio of the two compounds is determined by process conditions; however, yellowcake is now largely a mix of uranium oxides. Preparation In the classical procedure for extracting uranium, pitchblende is broken up and mixed with sulfuric and nitric acids. The uranium dissolves to form uranyl sulfate, and sodium carbonate is added to precipitate impurities. If the uranium in the ore is in the tetravalent oxidation state, an oxidiser is added to oxidise it to the hexavalent oxidation state, and sodium hydroxide is then added to precipitate the uranium as sodium diuranate. The alkaline process of milling uranium ores involves precipitating sodium uranate from the pregnant leaching solution to produce the semi-refined product referred to as yellowcake. These older methods of extracting uranium from its uraninite ores have been replaced in current practice by such procedures as solvent extraction, ion exchange, and volatility methods. Sodium uranate may be obtained in the amorphous form by heating together urano-uranic oxide and sodium chlorate, or by heating sodium uranyl acetate or carbonate. The crystalline form is produced by adding the green oxide in small quantities to fused sodium chloride, or by dissolving the amorphous form in fused sodium chloride, and allowing crystallization to take place. It yields reddish-yellow to greenish-yellow prisms or leaflets. Uses In the past it was widely used to produce uranium glass or vaseline glass, the sodium salt dissolving easily into the silica matrix during the firing of the initial melt. 
It was also used in porcelain dentures to give them a fluorescence similar to that of natural teeth and once used in pottery to produce ivory to yellow shades in glazes. It was added to these products as a mix with cerium oxide. The final uranium composition was from 0.008 to 0.1% by weight uranium with an average of about 0.02%. The practice appears to have stopped in the late 1980s. References External links NRC Glossary Sodium Uranate Heats of Formation MSDS Sodium compounds Uranates Nuclear materials
Sodium diuranate
Physics
529
11,816,901
https://en.wikipedia.org/wiki/Drechslera%20andersenii
Drechslera andersenii is a fungus that is a plant pathogen. It was originally found on the leaves of Lolium perenne (perennial ryegrass) in Great Britain. It was also found on Italian ryegrass. It was found in China in 2018. References External links USDA ARS Fungal Database Fungal plant pathogens and diseases Pleosporaceae Fungi described in 1985 Fungus species
Drechslera andersenii
Biology
80
72,320,235
https://en.wikipedia.org/wiki/PW%20Telescopii
PW Telescopii, also known as HD 183806 or simply PW Tel, is a solitary variable star located in the southern constellation Telescopium. It has an average apparent magnitude of 5.58, making it faintly visible to the naked eye. Based on parallax measurements from the Gaia satellite, the star is estimated to be 395 light years distant. It appears to be approaching the Solar System with a heliocentric radial velocity of . The value is only loosely constrained, having an uncertainty of 26%. At its current distance, PW Tel's brightness is diminished by 0.05 magnitudes due to interstellar dust. PW Tel was first noticed to vary in brightness in observations taken in 1978 by Pierre Renson. The star was confirmed to be variable and was given the variable star designation PW Telescopii in 1981. Further observations by Jean Manfroid in 1985 improved upon earlier data, including the star's period. PW Tel is an α2 CVn variable that has an amplitude of 0.011 magnitudes within the visual passband and a period of 2.92 days. With a stellar classification of A0 Vp (SrCr), PW Tel is a chemically peculiar A-type main-sequence star. It has been recognised as an Ap star, as indicated by the "p" suffix, since the early 20th century, and it shows an overabundance of strontium and chromium in its spectrum. The abundance of some metals in the spectrum is several hundred times higher than in the Sun, and it has an overall metallicity of [Fe/H] = 1.09, but this only reflects the levels of those elements in the photosphere, not the whole star. Like most such stars it spins relatively slowly, with a projected rotational velocity of . With 2.8 times the mass of the Sun and 3.4 times its radius, PW Tel radiates 100 times the luminosity of the Sun from its photosphere at an effective temperature of , giving it a bluish-white hue. PW Tel is metal enriched, having an iron abundance over 10 times that of the Sun. 
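The quoted figures can be cross-checked with a short calculation (an illustrative sketch, not from the source): a metallicity of [Fe/H] = 1.09 is a base-10 logarithmic ratio, so it corresponds to about 12.3 times the solar iron abundance, matching the statement that the iron abundance is "over 10 times" solar; and the Stefan-Boltzmann law ties the quoted luminosity and radius to an implied effective temperature.

```python
# Cross-check the quoted parameters of PW Telescopii.
# [Fe/H] is log10 of the iron-to-hydrogen ratio relative to the Sun.
fe_h = 1.09
iron_ratio = 10 ** fe_h
print(f"Iron abundance: {iron_ratio:.1f} x solar")   # about 12.3x, i.e. "over 10 times"

# Stefan-Boltzmann law in solar units: L/Lsun = (R/Rsun)^2 * (T/Tsun)^4,
# hence T = Tsun * (L / R^2)^(1/4). L and R are the article's values;
# T_sun = 5772 K is the IAU nominal solar effective temperature.
L, R, T_sun = 100.0, 3.4, 5772.0
T_eff = T_sun * (L / R ** 2) ** 0.25
print(f"Implied effective temperature: {T_eff:.0f} K")
```

The implied temperature comes out near 9,900 K, in the range expected for an A0-type main-sequence star.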
References Ap stars A-type main-sequence stars Telescopium Telescopii, 63 Telescopii, PW CD-13296 183806 96178 7416
PW Telescopii
Astronomy
493
48,994,586
https://en.wikipedia.org/wiki/Design42Day
Design42Day is a company based in Milan, Italy, specialized in the research, selection and promotion of design on an international scale. History and personnel Design42Day began as a recreational blog in 2007, founded by Riccardo Capuzzo, the Art Director and Editor-in-Chief. Patrick Abbattista, who joined the project in 2009 as Co-Founder and Head of Sales and Marketing, left the company in 2015 to launch DesignWanted, currently one of the most influential voices on Instagram about architecture and design. Camilla Rettura held the position of Executive Editor from 2012 until 2014. In 2012, Design42Day contributed to the foundation of the "Bocconi Students 4 Design" association at Bocconi University and participated in the launch of its first event with Italian designer Fabio Novembre. Design42Day's online magazine focuses on five different areas of design: fashion, industrial design, visual design, transportation design and architecture. Special projects Design42Day is involved in several special projects. Since 2011, the online magazine has had a dedicated section reserved for the Istituto Europeo di Design, showcasing interviews and news about talented students. Furthermore, Design42Day played an active role in bringing both the Istituto Europeo di Design and the Adobe Design Achievement Awards to the Moscow Design Week in 2012. That same year, Design42Day collaborated closely with the Electrolux Design Lab in the search for a Social Media Correspondent to cover Electrolux's event in the fall. Design42Day has also launched a project dedicated to presenting international talents, who sell their work in a dedicated corner of the White Gallery lifestyle store in Rome, Italy. International events In 2011, Design42Day finalized ten media partnerships with various design and fashion weeks around the world. In 2012, the number of media partnerships grew to thirteen, including the Red Dot Design Award. 
Partners Zooppa PCHouse.com.cn Istituto Europeo di Design 2011 Riga Fashion Week Modalisboa Moscow Design Week Belgrade Fashion Week BCN Design Week Beijing Design Week Adobe Design Achievement Awards Colombiamoda Sofia Design Week New Designers Electrolux Design Lab 2012 Riga Fashion Week Modalisboa AGideas International Talent Support Mercedes-Benz Kiev Fashion Days Red Dot Adobe Design Achievement Awards Sofia Design Week New Designers Electrolux Design Lab World Brand Congress Belarus Fashion Week 2013 IDE Design Mission Middle East 2013 See also List of companies of Italy References Further reading "Intervista a Patrick Abbattista, il fondatore di DesignWanted". VILLEGIARDINI, 17 January 2018 "Intervista a Patrick Abbattista, di DesignWanted". Mashable Social Media Day, September 2018 External links 2007 establishments in Italy Design companies established in 2007 Design companies of Italy Design magazines Italian companies established in 2007 Italian-language magazines Italian websites Magazines established in 2007 Magazines published in Milan
Design42Day
Engineering
583
7,514,621
https://en.wikipedia.org/wiki/Skype%20protocol
The Skype protocol is a proprietary network used for Internet telephony. Its specifications are not publicly available, and all official applications based on the protocol are closed-source. It lacks interoperability with most Voice over IP (VoIP) networks, so it requires licensing from Skype for any integration. Many attempts to reverse-engineer the protocol have been made to study its security features or to enable unofficial clients. On June 20, 2014, Microsoft announced that the old Skype protocol would be deprecated. Users had to upgrade to the 2014 version of Skype to continue accessing services, and older clients could no longer log in. As of the second week of August 2014, the new protocol, Microsoft Notification Protocol 24, was implemented to improve offline messaging and message synchronization across devices. Peer-to-peer architecture Skype pioneered peer-to-peer (P2P) technology for IP telephony. Its architecture includes supernodes, ordinary nodes, and a login server. Each client maintains a cache of reachable supernodes, while user directory data is distributed across these supernodes, organized into slots and blocks. Initially, any client with sufficient bandwidth and processing power could become a supernode. This setup posed challenges for users behind firewalls or Network Address Translation (NAT) because their connections could be used to facilitate calls between other clients. In 2012, Microsoft transitioned control of supernodes to its data centers to enhance performance and scalability, raising privacy concerns that were later highlighted by the PRISM surveillance revelations in 2013. Skype does not support IPv6, which could simplify its communication infrastructure. Communication challenges Supernodes relay communications for clients that are behind firewalls or NAT, enabling calls that would otherwise be impossible. 
However, issues may arise, such as: Non-derivable external port numbers or IP addresses due to NAT Firewalls blocking incoming sessions UDP issues like timeouts Port restrictions Protocol details Signaling in Skype is encrypted using RC4, but this method is considered weak because the encryption key can be recovered from the traffic. Voice data is protected with AES encryption. The Skype API allows developers to access the network for user information and call management. The code remains closed-source, and parts of the client utilize an open-source socket communication library called Internet Direct (Indy). In July 2012, a researcher revealed insights gained from reverse-engineering the Skype client. Protocol detection Various networking and security firms claim to have methods for detecting Skype's protocol. While their specific methods are proprietary, some published techniques include Pearson's chi-squared test and stochastic characterization using Naive Bayes classifiers. Obfuscation layer Skype employs RC4 to obfuscate the payload of data packets. The initialization vector (IV) is derived from a combination of the public source and destination IPs and a packet ID, transformed into an RC4 key. Notably, RC4 is misused on TCP streams, where the first 14 bytes of a stream are XOR-ed with the RC4 keystream, weakening data security. Packet structure and compression Most Skype traffic is encrypted, with commands and their parameters organized in an object list that can be compressed using a variant of arithmetic compression. Legal considerations The terms of Skype's license agreement prohibit reverse engineering. However, EU law allows for reverse engineering for interoperability purposes, and the U.S. Digital Millennium Copyright Act provides similar protections. Certain countries also permit copying for reverse engineering. 
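The RC4-based obfuscation described in the obfuscation-layer section above can be sketched with a textbook RC4 implementation. This is a simplified illustration only: the key below is arbitrary, whereas Skype derives its RC4 key from the source and destination IPs and a packet ID, and the exact framing is proprietary.

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    """Generate n bytes of RC4 keystream (standard KSA followed by PRGA)."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    out, i, j = bytearray(), 0, 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def obfuscate(payload: bytes, key: bytes) -> bytes:
    """XOR the payload with the RC4 keystream. This is obfuscation, not
    secrecy: anyone who can reconstruct the key material (here, derived
    from public packet fields) can strip the layer."""
    return bytes(p ^ k for p, k in zip(payload, rc4_keystream(key, len(payload))))

# XOR is its own inverse, so applying the same keystream twice round-trips.
data = obfuscate(b"Skype packet payload", b"example-key")
assert obfuscate(data, b"example-key") == b"Skype packet payload"
```

Because the keystream is derivable from publicly visible packet fields, this layer hinders casual protocol inspection but provides no cryptographic protection, which is why the section above describes it as a misuse of RC4.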
Notes References External links Website containing articles and tools related to Skype protocol and behaviour analysis Repository of articles on Skype analysis Skype Architecture Inside VoIP protocols Instant messaging protocols Skype it:Protocollo Skype sk:Skype Protocol
Skype protocol
Technology
777
15,282,871
https://en.wikipedia.org/wiki/Predicate%20functor%20logic
In mathematical logic, predicate functor logic (PFL) is one of several ways to express first-order logic (also known as predicate logic) by purely algebraic means, i.e., without quantified variables. PFL employs a small number of algebraic devices called predicate functors (or predicate modifiers) that operate on terms to yield terms. PFL is mostly the invention of the logician and philosopher Willard Quine. Motivation The source for this section, as well as for much of this entry, is Quine (1976). Quine proposed PFL as a way of algebraizing first-order logic in a manner analogous to how Boolean algebra algebraizes propositional logic. He designed PFL to have exactly the expressive power of first-order logic with identity. Hence the metamathematics of PFL are exactly those of first-order logic with no interpreted predicate letters: both logics are sound, complete, and undecidable. Most work Quine published on logic and mathematics in the last 30 years of his life touched on PFL in some way. Quine took "functor" from the writings of his friend Rudolf Carnap, the first to employ it in philosophy and mathematical logic, and defined it as follows: "The word functor, grammatical in import but logical in habitat... is a sign that attaches to one or more expressions of given grammatical kind(s) to produce an expression of a given grammatical kind." (Quine 1982: 129) Ways other than PFL to algebraize first-order logic include: Cylindric algebra by Alfred Tarski and his American students. The simplified cylindric algebra proposed in Bernays (1959) led Quine to write the paper containing the first use of the phrase "predicate functor"; The polyadic algebra of Paul Halmos. By virtue of its economical primitives and axioms, this algebra most resembles PFL; Relation algebra algebraizes the fragment of first-order logic consisting of formulas having no atomic formula lying in the scope of more than three quantifiers. 
That fragment suffices, however, for Peano arithmetic and the axiomatic set theory ZFC; hence relation algebra, unlike PFL, is incompletable. Most work on relation algebra since about 1920 has been by Tarski and his American students. The power of relation algebra did not become manifest until the monograph Tarski and Givant (1987), published after the three important papers bearing on PFL, namely Bacon (1985), Kuhn (1983), and Quine (1976); Combinatory logic builds on combinators, higher-order functions whose domain is another combinator or function, and whose range is yet another combinator. Hence combinatory logic goes beyond first-order logic by having the expressive power of set theory, which makes combinatory logic vulnerable to paradoxes. A predicate functor, on the other hand, simply maps predicates (also called terms) into predicates. PFL is arguably the simplest of these formalisms, yet also the one about which the least has been written. Quine had a lifelong fascination with combinatory logic, attested to by his introduction to the translation in Van Heijenoort (1967) of the paper by the Russian logician Moses Schönfinkel founding combinatory logic. When Quine began working on PFL in earnest, in 1959, combinatory logic was commonly deemed a failure for the following reasons: Until Dana Scott began writing on the model theory of combinatory logic in the late 1960s, virtually the only people working on that logic were Haskell Curry, his students, and Robert Feys in Belgium; Satisfactory axiomatic formulations of combinatory logic were slow in coming. In the 1930s, some formulations of combinatory logic were found to be inconsistent. Curry also discovered the Curry paradox, peculiar to combinatory logic; The lambda calculus, with the same expressive power as combinatory logic, was seen as a superior formalism. Kuhn's formalization The PFL syntax, primitives, and axioms described in this section are largely Steven Kuhn's (1983). The semantics of the functors are Quine's (1982). 
The rest of this entry incorporates some terminology from Bacon (1985). Syntax An atomic term is an upper case Latin letter, I and S excepted, followed by a numerical superscript called its degree, or by concatenated lower case variables, collectively known as an argument list. The degree of a term conveys the same information as the number of variables following a predicate letter. An atomic term of degree 0 denotes a Boolean variable or a truth value. The degree of I is invariably 2 and so is not indicated. The "combinatory" (the word is Quine's) predicate functors, all monadic and peculiar to PFL, are Inv, inv, ∃, +, and p. A term is either an atomic term, or constructed by the following recursive rule. If τ is a term, then Invτ, invτ, ∃τ, +τ, and pτ are terms. A functor with a superscript n, n a natural number > 1, denotes n consecutive applications (iterations) of that functor. A formula is either a term or defined by the recursive rule: if α and β are formulas, then αβ and ~(α) are likewise formulas. Hence "~" is another monadic functor, and concatenation is the sole dyadic predicate functor. Quine called these functors "alethic." The natural interpretation of "~" is negation; that of concatenation is any connective that, when combined with negation, forms a functionally complete set of connectives. Quine's preferred functionally complete set was conjunction and negation. Thus concatenated terms are taken as conjoined. The notation + is Bacon's (1985); all other notation is Quine's (1976; 1982). The alethic part of PFL is identical to the Boolean term schemata of Quine (1982). As is well known, the two alethic functors could be replaced by a single dyadic functor with the following syntax and semantics: if α and β are formulas, then (αβ) is a formula whose semantics are "not (α and/or β)" (see NAND and NOR). Axioms and semantics Quine set out neither axiomatization nor proof procedure for PFL. 
The following axiomatization of PFL, one of two proposed in Kuhn (1983), is concise and easy to describe, but makes extensive use of free variables and so does not do full justice to the spirit of PFL. Kuhn gives another axiomatization dispensing with free variables, but it is harder to describe and makes extensive use of defined functors. Kuhn proved both of his PFL axiomatizations sound and complete. This section is built around the primitive predicate functors and a few defined ones. The alethic functors can be axiomatized by any set of axioms for sentential logic whose primitives are negation and one of ∧ or ∨. Equivalently, all tautologies of sentential logic can be taken as axioms. Quine's (1982) semantics for each predicate functor are stated below in terms of abstraction (set builder notation), followed by either the relevant axiom from Kuhn (1983), or a definition from Quine (1976). The notation {x1⋯xn : Fnx1⋯xn} denotes the set of n-tuples satisfying the atomic formula Fnx1⋯xn. Identity, I, is defined as: I = {xy : x = y}. Identity is reflexive (Ixx), symmetric (Ixy → Iyx), transitive ((Ixy ∧ Iyz) → Ixz), and obeys the substitution property: (Fnx1⋯xn ∧ Ix1y) → Fnyx2⋯xn. Padding, +, adds a variable to the left of any argument list: +Fn = {yx1⋯xn : Fnx1⋯xn}. Cropping, ∃, erases the leftmost variable in any argument list: ∃Fn = {x2⋯xn : ∃x1 Fnx1x2⋯xn}. Cropping enables two useful defined functors: Reflection, S: S generalizes the notion of reflexivity to all terms of any finite degree greater than 2. N.B.: S should not be confused with the primitive combinator S of combinatory logic. Cartesian product, ×; Here only, Quine adopted an infix notation, because this infix notation for Cartesian product is very well established in mathematics. Cartesian product allows restating conjunction as follows: Reorder the concatenated argument list so as to shift a pair of duplicate variables to the far left, then invoke S to eliminate the duplication. Repeating this as many times as required results in an argument list of length max(m,n). The next three functors enable reordering argument lists at will.
Major inversion, Inv, rotates the variables in an argument list to the right, so that the last variable becomes the first. Minor inversion, inv, swaps the first two variables in an argument list. Permutation, p, rotates the second through last variables in an argument list to the left, so that the second variable becomes the last. Given an argument list consisting of n variables, p implicitly treats the last n−1 variables like a bicycle chain, with each variable constituting a link in the chain. One application of p advances the chain by one link. k consecutive applications of p to Fn move the (k+1)th variable to the second argument position in F. When n=2, Inv and inv merely interchange x1 and x2. When n=1, they have no effect. Hence p has no effect when n < 3. Kuhn (1983) takes Major inversion and Minor inversion as primitive. The notation p in Kuhn corresponds to inv; he has no analog to Permutation and hence has no axioms for it. If, following Quine (1976), p is taken as primitive, Inv and inv can be defined as nontrivial combinations of +, ∃, and iterated p. In summary, if τ has degree n, then +τ has degree n+1, ∃τ has degree n−1, and Invτ, invτ, and pτ all have degree n. Rules All instances of a predicate letter may be replaced by another predicate letter of the same degree, without affecting validity. The rules are: Modus ponens; Let α and β be PFL formulas in which does not appear. Then if is a PFL theorem, then is likewise a PFL theorem. Some useful results Instead of axiomatizing PFL, Quine (1976) proposed the following conjectures as candidate axioms. n−1 consecutive iterations of p restore the status quo ante: p^(n−1)τ = τ, for τ of degree n. + and ∃ annihilate each other: ∃+τ = τ. Negation distributes over +, ∃, and p: + and p distribute over conjunction: Identity has the interesting implication: Quine also conjectured the rule: If is a PFL theorem, then so are , and . Bacon's work Bacon (1985) takes the conditional, negation, Identity, Padding, and Major and Minor inversion as primitive, and Cropping as defined.
Employing terminology and notation differing somewhat from the above, Bacon (1985) sets out two formulations of PFL: A natural deduction formulation in the style of Frederick Fitch. Bacon proves this formulation sound and complete in full detail. An axiomatic formulation, which Bacon asserts, but does not prove, to be equivalent to the preceding one. Some of these axioms are simply Quine conjectures restated in Bacon's notation. Bacon also: Discusses the relation of PFL to the term logic of Sommers (1982), and argues that recasting PFL using a syntax proposed in Lockwood's appendix to Sommers should make PFL easier to "read, use, and teach"; Touches on the group theoretic structure of Inv and inv; Mentions that sentential logic, monadic predicate logic, the modal logic S5, and the Boolean logic of (un)permuted relations are all fragments of PFL. From first-order logic to PFL The following algorithm is adapted from Quine (1976: 300–2). Given a closed formula of first-order logic, first do the following: Attach a numerical superscript to every predicate letter, stating its degree; Translate all universal quantifiers into existential quantifiers and negation; Restate all atomic formulas of the form x = y as Ixy. Now apply the following algorithm to the preceding result: The reverse translation, from PFL to first-order logic, is discussed in Quine (1976: 302–4). The canonical foundation of mathematics is axiomatic set theory, with a background logic consisting of first-order logic with identity, with a universe of discourse consisting entirely of sets. There is a single predicate letter of degree 2, interpreted as set membership. The PFL translation of the canonical axiomatic set theory ZFC is not difficult, as no ZFC axiom requires more than 6 quantified variables. See also Algebraic logic Footnotes References Bacon, John, 1985, "The completeness of a predicate-functor logic," Journal of Symbolic Logic 50: 903–26.
Paul Bernays, 1959, "Über eine natürliche Erweiterung des Relationenkalküls" in Heyting, A., ed., Constructivity in Mathematics. North Holland: 1–14. Kuhn, Steven T., 1983, "An Axiomatization of Predicate Functor Logic," Notre Dame Journal of Formal Logic 24: 233–41. Willard Quine, 1976, "Algebraic Logic and Predicate Functors" in Ways of Paradox and Other Essays, revised and enlarged ed. Harvard Univ. Press: 283–307. Willard Quine, 1982. Methods of Logic, 4th ed. Harvard Univ. Press. Chpt. 45. Sommers, Fred, 1982. The Logic of Natural Language. Oxford Univ. Press. Alfred Tarski and Givant, Steven, 1987. A Formalization of Set Theory Without Variables. AMS. Jean Van Heijenoort, 1967. From Frege to Gödel: A Source Book on Mathematical Logic. Harvard Univ. Press. External links An introduction to predicate-functor logic (one-click download, PS file) by Mats Dahllöf (Department of Linguistics, Uppsala University) Algebraic logic Mathematical axioms Predicate logic
Predicate functor logic
Mathematics
2,991
13,676,622
https://en.wikipedia.org/wiki/Entity%20concept
In accounting, a business or an organization and its owners are treated as two separate parties. This is called the entity concept. The business stands apart from other organizations as a separate economic unit. It is necessary to record the business's transactions separately, to distinguish them from the owners' personal transactions. This helps to give a correct determination of the true financial condition of the business. This concept can be extended to accounting separately for the various divisions of a business in order to ascertain the financial results for each division. Under the business entity concept, a business is a separate entity, distinct from its owners. "The entity view holds the business enterprise to be an institution in its own right separate and distinct from the parties who furnish the funds." An example is a sole trader or proprietorship. The sole trader takes money from the business by way of 'drawings', money for their own personal use. Despite it being the sole trader's business and technically their money, there are still two aspects to the transaction: the business is 'giving' money and the individual is 'receiving' money. Even though there is no other legal distinction between the sole trader and the business, and the sole trader is liable for all of the debts of the business, business transactions may be taxed separately from personal transactions, and the proprietor of the business may also find it useful to see the financial results of the business. For these reasons, the affairs of the individuals behind a business are kept separate from the affairs of the business itself.
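The two aspects of a drawing described above can be sketched as a toy record kept in two separate sets of books; the figures and the dictionary structure are hypothetical, for illustration only.

```python
# Illustrative sketch (hypothetical figures): under the entity concept,
# the business and the owner keep separate books, so a single cash
# transfer has two aspects — the business 'giving' money and the
# individual 'receiving' it.

business = {"cash": 10_000, "owner_equity": 10_000}
owner_personal = {"cash": 500}

def record_drawing(amount):
    # In the business's books: cash goes out, owner's equity is reduced.
    business["cash"] -= amount
    business["owner_equity"] -= amount
    # In the owner's personal records: cash comes in.
    owner_personal["cash"] += amount

record_drawing(1_000)
print(business)        # {'cash': 9000, 'owner_equity': 9000}
print(owner_personal)  # {'cash': 1500}
```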
In Anthropology The term has been coined by British anthropologist Mark Lindley-Highfield of Ballumbie Castle to describe ideas, such as ‘the West’, which are given agentive status as though they are homogeneous real things, where this entity-concept can have different symbolic values attributed to it to those of the individuals making up the group, who on an individual basis can be perceived differently. Lindley-Highfield explains it thus: ‘the discourse flows at two levels: One at which ideological disembodied concepts are seen to compete and contest, that have an agency of their own and can have agency acted out against them; and another at which people are individuals and may be distinct from the concepts held about their broader society.’ References Accounting systems
Entity concept
Technology
461
5,674,351
https://en.wikipedia.org/wiki/Stream%20capacity
The capacity of a stream or river is the total amount of sediment a stream is able to transport. This measurement usually corresponds to the stream power and to the width-integrated bed shear stress at a cross-section along the stream profile. Note that capacity is greater than the load, which is the amount of sediment actually carried by the stream. Load is generally limited by the sediment available upstream. Stream capacity is often mistaken for stream competency, which is a measure of the maximum size of the particles that the stream can transport, or for the total load, which is the load that a stream carries. The sediment transported by the stream depends upon the intensity of rainfall and land characteristics. See also Bed load Sediment transport Suspended load Wash load Hydrology Sedimentology
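As a numerical illustration of the stream-power quantity mentioned above: the conventional definition of cross-sectional stream power is Ω = ρgQS (water density × gravity × discharge × slope). The formula and the sample values below are standard hydraulics, not taken from this article.

```python
# Cross-sectional stream power, Omega = rho * g * Q * S — the standard
# quantity that stream capacity is said to correspond to. The discharge
# and slope below are hypothetical sample values.

RHO_WATER = 1000.0   # density of water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

def stream_power(discharge_m3_s, slope):
    """Stream power per unit channel length, in W/m."""
    return RHO_WATER * G * discharge_m3_s * slope

# A modest river: 50 m^3/s discharge on a 0.001 (0.1%) energy slope.
print(round(stream_power(50.0, 0.001), 1))   # 490.5
```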
Stream capacity
Chemistry,Engineering,Environmental_science
148
35,459,322
https://en.wikipedia.org/wiki/Aquamelt
An aquamelt is a naturally hydrated polymeric material that is able to solidify at environmental temperatures through a controlled stress input (be it mechanical or chemical). Aquamelts are unique in being able to “lock in” work applied to them through an alteration in hydrogen bonding, which enables them to be processed with approximately 1000 times less energy than standard polymers. This has recently been shown for an archetypal biopolymer, silk; however, the mechanism for solidification is thought to be inherent to many other biological materials. Discovery and mechanism Aquamelts were defined as a new class of polymeric material as a result of a comparison between the spinning feedstock of the Chinese silkworm (Bombyx mori) and molten high-density polyethylene (HDPE) using shear-induced polarised light imaging (SIPLI). The current understanding of shear-induced fibrillation requires polymer chains to undergo the following series of steps: i) long-chain molecules are stretched, ii) form persistent point nuclei, which iii) align under flow into rows and then iv) grow to create crystalline fibrils. For these fibrils to remain, the temperature of the sample must be lowered to below the polymer's melting point. This process is analogous to the fibrillogenesis of natural silk polymers in which proteins align (refold), nucleate (denature), and crystallize (aggregate). However, for silks, fibrils persist without the need for a drop in temperature. From a macromolecular perspective the two processes are thought to be similar due to a native protein's unique interaction with its closely bound water. Much like an individual polymer chain in a melt, a native protein and its closely bound water molecules may be considered not as a solution but as a single processable entity, a nanocomposite termed an "aquamelt". The differences between a typical polymer and an aquamelt are highlighted by an aquamelt's ability to solidify in response to stress at environmental temperatures.
This occurs when the stress applied is sufficient to separate the closely bound water from the protein, splitting the nanocomposite. This results in conformational changes to the protein, an increased probability of hydrogen bonding between protein chains, and subsequent solidification. Multiscale structures, i.e., fibrils or foams, are the result of a combination of directional stress fields and the self-assembly properties of the aquamelt. Potential uses Aquamelts offer several advantages over current solutions to synthetic polymer production. Firstly, they are naturally sourced, with no reliance on oil for production, and are recyclable and biodegradable. Secondly, they can be processed at room temperatures and pressures, resulting in only water as a by-product of the solidification process. Thirdly, work calculations performed on silk and high-density polyethylene feedstocks revealed a tenfold difference in the amount of shear energy required in order to initiate solidification. When processing temperature is taken into account, the energy required to undergo solidification is a thousandfold less for aquamelts than for synthetic polymers. References Polymers
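The two energy figures quoted above (a tenfold shear-energy saving, and a thousandfold saving once processing temperature is included) imply a further factor of roughly one hundred attributable to heating; that back-calculation is an illustration, not a number given in the article.

```python
# Decomposing the article's two quoted energy ratios. The implied
# thermal contribution is a back-calculation for illustration only.

shear_ratio = 10     # HDPE vs. silk feedstock shear energy (from article)
total_ratio = 1000   # including processing temperature (from article)

thermal_ratio = total_ratio / shear_ratio   # implied factor from heating
print(thermal_ratio)   # 100.0
```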
Aquamelt
Chemistry,Materials_science
645
10,504,258
https://en.wikipedia.org/wiki/Free%20union
A free union is a romantic union between two or more persons without legal or religious recognition or regulation. The term has been used since the late 19th century to describe a relationship into which all parties enter, remain, and depart freely. The free union is an alternative to marriage, or a rejection or criticism of it by those who view marriage as a form of slavery and human ownership, particularly of women. According to this concept, the free union of adults is a legitimate relationship that should be respected. A free union is made between two individuals, but each individual may have several unions of their own. History Much of the contemporary tradition of free union under natural law or common law comes from the anarchist rejection of marriage, seeking non-interference of either church or state in human relations. Leaving behind what was seen as law imposed by man in favor of natural law began during the late Enlightenment, when many sought to rethink the laws of property, family, and the status of women. Utopian socialist Robert Owen (1771–1858), who decried marriage as principally linked to the principle of ownership, offers a foretaste of the free union by use of the term "marriage contract in front of nature." Philosopher and feminist Mary Wollstonecraft (1759–1797) stated, "Marriage is an affirmation of the supremacy of man over woman [...] if I love a man, I want to love him while keeping my freedom." In 1882, Élisée Reclus initiated the Anti-Marriage Movement, in accordance with which he and his partner allowed their two daughters to marry without any civil or religious ceremony, despite public and legal condemnation. Reclus had four partners throughout his lifetime, each with a different social contract. In more modern times, free unions were common among members of the Spanish anarchist trade union CNT during the popular revolution that ran alongside the Spanish Civil War.
A couple desiring contractual validation of their relationship would simply go to the organization's headquarters and request the forms, which would be destroyed if the relationship did not work out. The couple, however, were strongly encouraged to make it work, as separation created administrative work for the organization. Additionally, many leading 20th-century intellectuals, including James Joyce, Pablo Picasso, Jean-Paul Sartre, and Simone de Beauvoir, never chose to marry, or delayed marriage until the end of life for legal reasons. De Beauvoir said of the institution, "When we abolish the slavery of half of humanity, together with the whole system of hypocrisy that it implies, then the 'division' of humanity will reveal its genuine significance and the human couple will find its true form." Contemporary law In French law, the union libre is an agreement between adults which grants rights between parents and potential children, but holds no obligation of sexual fidelity, nor does it grant reciprocal duties and rights between partners. A free union can be between individuals of any gender, and an individual may have several concurrently, making the free union an option for LGBTQ or polyamorous relationships, as well as for heterosexual and/or monogamous couples who do not wish to enter the contract of marriage for historical, social, or financial reasons. United States law has no exact legal equivalent of a free union, although comparisons are often made to common-law marriage. In the United States, partners wishing to have legal rights without entering into a marriage contract may choose to complete documents such as a healthcare proxy, domestic partnership agreement, will, and power of attorney. Members of a free union may refer to each other as partners, spouses, or any other title, but may find themselves subject to the laws of common-law marriage if they consistently refer to themselves as husband and wife according to their local jurisdiction.
Roman Catholic criticism According to Catholicism, the expression "free union" includes situations such as concubinage, rejection of marriage as such, or inability to make long-term commitments. According to the Catechism of the Catholic Church, being in a "free union" is a grave offense against the dignity of marriage, which it sees as a Sacrament. However, proponents maintain that the free union acts as a public recognition of a relationship without the obligations of church or state. See also Self-uniting marriage Domestic partnership Civil union Criticism of marriage Anarchism and issues related to love and sex Gandharva marriage Cohabitation Common-law marriage Free love Intimate relationship Open relationship Polyamory References External links A Handbook on Open Relationships Unmarried Equality Free love Intimate relationships Interpersonal relationships Criticism of marriage
Free union
Biology
915
60,654,187
https://en.wikipedia.org/wiki/Twistronics
Twistronics (from twist and electronics) is the study of how the angle (the twist) between layers of two-dimensional materials can change their electrical properties. Materials such as bilayer graphene have been shown to have vastly different electronic behavior, ranging from non-conductive to superconductive, that depends sensitively on the angle between the layers. The term was first introduced by the research group of Efthimios Kaxiras at Harvard University in their theoretical treatment of graphene superlattices. Pablo Jarillo-Herrero, Allan H. MacDonald and Rafi Bistritzer were awarded the 2020 Wolf Prize in Physics for their theoretical and experimental work on twisted bilayer graphene. History In 2007, National University of Singapore physicist Antonio H. Castro Neto hypothesized that pressing two misaligned graphene sheets together might yield new electrical properties, and separately proposed that graphene might offer a route to superconductivity, but he did not combine the two ideas. In 2010, researchers in Eva Andrei's laboratory at Rutgers University in Piscataway, New Jersey discovered twisted bilayer graphene through its defining moiré pattern and demonstrated that the twist angle has a strong effect on the band structure by measuring greatly renormalized van Hove singularities. Also in 2010, researchers from Federico Santa María Technical University in Chile found that for a certain angle close to 1 degree the bands of the electronic structure of twisted bilayer graphene become completely flat, and because of that theoretical property they suggested that collective behavior might be possible. In 2011, Allan H. MacDonald (of the University of Texas at Austin) and Rafi Bistritzer found, using a simple theoretical model, that at the previously identified "magic angle" the amount of energy a free electron would require to tunnel between the two graphene sheets changes radically.
In 2017, the research group of Efthimios Kaxiras at Harvard University used detailed quantum mechanical calculations to reduce uncertainty in the twist angle between two graphene layers that can induce extraordinary behavior of electrons in this two-dimensional system. In 2018, Pablo Jarillo-Herrero, an experimentalist at the Massachusetts Institute of Technology, found that the magic angle resulted in the unusual electrical properties that MacDonald and Bistritzer had predicted. At 1.1 degrees of rotation and sufficiently low temperatures, electrons move from one layer to the other, creating a lattice and the phenomenon of superconductivity. Publication of these discoveries has generated a host of theoretical papers seeking to understand and explain the phenomena, as well as numerous experiments using varying numbers of layers, twist angles and other materials. Subsequent works showed that the electronic properties of the stack can also depend strongly on heterostrain, especially near the magic angle, allowing potential applications in straintronics. Characteristics Superconduction and insulation The theoretical predictions of superconductivity were confirmed by Pablo Jarillo-Herrero and his student Yuan Cao of MIT and colleagues from Harvard University and the National Institute for Materials Science in Tsukuba, Japan. In 2018 they verified that superconductivity existed in bilayer graphene in which one layer was rotated by an angle of 1.1° relative to the other, forming a moiré pattern, at cryogenic temperatures. They created two bilayer devices that acted as insulators rather than conductors in the absence of a magnetic field. Increasing the field strength turned the second device into a superconductor. A further advance in twistronics is the discovery of a method of turning the superconductive paths on and off by application of a small voltage differential.
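The ~1.1° magic angle can be put in geometric context with the standard moiré-period formula L = a / (2 sin(θ/2)); the formula and the graphene lattice constant (≈0.246 nm) are well-known values, not taken from this article.

```python
import math

# Geometry behind the "magic angle": two identical lattices twisted by a
# small angle theta produce a moiré pattern with period
# L = a / (2 * sin(theta / 2)). Formula and lattice constant are
# standard values, not quoted in the article.

def moire_period_nm(twist_deg, a_nm=0.246):
    """Moiré superlattice period (nm) for a twist angle in degrees."""
    return a_nm / (2.0 * math.sin(math.radians(twist_deg) / 2.0))

# At the 1.1° magic angle the moiré cell is roughly 13 nm across —
# about 50 times the graphene lattice constant.
print(round(moire_period_nm(1.1), 1))   # 12.8
```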
Heterostructures Experiments have also been done using combinations of graphene layers with other materials that form heterostructures, in the form of atomically thin sheets held together by the weak Van der Waals force. For example, a study published in Science in July 2019 found that with the addition of a boron nitride lattice between two graphene sheets, unique orbital ferromagnetic effects were produced at a 1.17° angle, which could be used to implement memory in quantum computers. Further spectroscopic studies of twisted bilayer graphene revealed strong electron-electron correlations at the magic angle. Electron puddling Between 2-D layers of bismuth selenide and a dichalcogenide, researchers at Northeastern University in Boston discovered that at specific degrees of twist a new lattice layer, consisting only of pure electrons, would develop between the two 2-D elemental layers. The quantum and physical effects of the alignment between the two layers appear to create "puddle" regions which trap electrons into a stable lattice. Because this stable lattice consists only of electrons, it is the first non-atomic lattice observed, and it suggests new opportunities to confine, control, measure, and transport electrons. Ferromagnetism A three-layer construction, consisting of two layers of graphene with a 2-D layer of boron nitride, has been shown to exhibit superconductivity, insulation and ferromagnetism. In 2021, this was achieved on a single graphene flake. See also Straintronics – a method for altering the properties of two-dimensional materials by introducing controlled stress Spintronics – the study of the intrinsic spin of the electron and its associated magnetic moment in solid-state devices Valleytronics – the study of local extrema, valleys, in the electronic band structure of semiconductors References Graphene Superconductivity
Twistronics
Physics,Materials_science,Engineering
1,105
68,488,480
https://en.wikipedia.org/wiki/Plutonium%20silicide
Plutonium silicide is a binary inorganic compound of plutonium and silicon with the chemical formula PuSi. The compound forms gray crystals. Synthesis Reaction of plutonium dioxide and silicon carbide: Reaction of plutonium trifluoride with silicon: Physical properties Plutonium silicide forms gray crystals of the orthorhombic crystal system, space group Pnma, cell parameters: a = 0.7933 nm, b = 0.3847 nm, c = 0.5727 nm, Z = 4, with a TiSi-type structure. At a temperature of 72 K, plutonium silicide undergoes a ferromagnetic transition. References Plutonium compounds Silicon compounds Inorganic compounds Silicides
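A rough consistency check on the cell parameters above: for an orthorhombic cell the volume is simply a·b·c, from which an X-ray density can be estimated. The molar mass below assumes the Pu-244 isotope (an assumption — plutonium has no standard atomic weight), so the figure is illustrative only.

```python
# X-ray density implied by the quoted cell parameters. Orthorhombic
# cell volume = a * b * c. Molar mass assumes Pu-244 (illustrative).

AVOGADRO = 6.02214e23                        # 1/mol
a, b, c = 0.7933e-7, 0.3847e-7, 0.5727e-7    # cell edges, cm
Z = 4                                        # formula units per cell
M_PU_SI = 244.064 + 28.086                   # g/mol, Pu-244 + Si

volume = a * b * c                           # cm^3
density = Z * M_PU_SI / (AVOGADRO * volume)  # g/cm^3
print(round(density, 1))                     # 10.3
```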
Plutonium silicide
Chemistry
145
3,721,904
https://en.wikipedia.org/wiki/Copper%28II%29%20arsenate
Copper arsenate (Cu3(AsO4)2·4H2O, or Cu5H2(AsO4)4·2H2O), also called copper orthoarsenate, tricopper arsenate, cupric arsenate, or tricopper orthoarsenate, is a blue or bluish-green powder that is insoluble in water and alcohol and soluble in aqueous ammonia and dilute acids. Its CAS number is or . Uses Copper arsenate is an insecticide used in agriculture. It is also used as a herbicide, fungicide, and rodenticide, and as a poison in slug baits. "Copper arsenate" can also be a misnomer for copper arsenite, especially when meant as a pigment. Natural occurrences Anhydrous copper arsenate, Cu3(AsO4)2, is found in nature as the mineral lammerite. Copper arsenate tetrahydrate, Cu3(AsO4)2·4H2O, occurs naturally as the mineral rollandite. Related compounds Copper arsenate hydroxide or basic copper arsenate (Cu(OH)AsO4) is a basic variant with CAS number . It is found naturally as the mineral olivenite. It is used as an insecticide, fungicide, and miticide. Its use has been banned in Thailand since 2001. See also Calcium arsenate Chromated copper arsenate Lead arsenate Paris Green (copper acetoarsenite) Scheele's Green (copper arsenite) References External links National Pollutant Inventory - Copper and compounds fact sheet Arsenates Copper(II) compounds Inorganic insecticides Arsenical herbicides Rodenticides Fungicides
Copper(II) arsenate
Chemistry,Biology
379
43,337,163
https://en.wikipedia.org/wiki/Primitive%20element%20%28co-algebra%29
In algebra, a primitive element of a co-algebra C (over an element g) is an element x that satisfies Δx = x ⊗ g + g ⊗ x, where Δ is the co-multiplication and g is an element of C that maps to the multiplicative identity 1 of the base field under the co-unit (g is called group-like). If C is a bi-algebra, i.e., a co-algebra that is also an algebra (with certain compatibility conditions satisfied), then one usually takes g to be 1, the multiplicative identity of C. The bi-algebra C is said to be primitively generated if it is generated by primitive elements (as an algebra). If C is a bi-algebra, then the set of primitive elements forms a Lie algebra with the usual commutator bracket (graded commutator if C is graded). If A is a connected graded cocommutative Hopf algebra over a field of characteristic zero, then the Milnor–Moore theorem states that the universal enveloping algebra of the graded Lie algebra of primitive elements of A is isomorphic to A. (This also holds under slightly weaker requirements.) References http://www.encyclopediaofmath.org/index.php/Primitive_element_in_a_co-algebra Coalgebras
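A short verification, for the case g = 1, that the primitive elements are closed under the commutator (using only that the co-multiplication Δ of a bi-algebra is an algebra homomorphism):

```latex
% Suppose x and y are primitive over g = 1:
\Delta x = x \otimes 1 + 1 \otimes x, \qquad
\Delta y = y \otimes 1 + 1 \otimes y .
% Since \Delta is an algebra homomorphism,
\Delta(xy) = (\Delta x)(\Delta y)
           = xy \otimes 1 + x \otimes y + y \otimes x + 1 \otimes xy ,
\Delta(yx) = yx \otimes 1 + y \otimes x + x \otimes y + 1 \otimes yx .
% Subtracting, the mixed terms cancel:
\Delta([x,y]) = [x,y] \otimes 1 + 1 \otimes [x,y] ,
% so the commutator [x,y] = xy - yx is again primitive.
```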
Primitive element (co-algebra)
Mathematics
269
1,267,081
https://en.wikipedia.org/wiki/Ramipril
Ramipril, sold under the brand name Altace among others, is an ACE inhibitor type medication used to treat high blood pressure, heart failure, and diabetic kidney disease. It can also be used as a preventative medication in patients over 55 years old to reduce the risk of heart attack, stroke or cardiovascular death in those shown to be at high risk, such as some diabetics and patients with vascular disease. It is a reasonable initial treatment for high blood pressure. It is taken by mouth. Common side effects include headaches, dizziness, fatigue, and cough. Serious side effects may include liver problems, angioedema, kidney problems, and high blood potassium. Use in pregnancy and breastfeeding is not recommended. It is an ACE inhibitor and works by decreasing renin-angiotensin-aldosterone system activity. Ramipril was patented in 1981 and approved for medical use in 1989. It is available as a generic medication. In 2022, it was the 187th most commonly prescribed medication in the United States, with more than 2 million prescriptions. Activation and binding Ramipril is a prodrug. The molecule must be hydrolyzed by an esterase to form a carboxylate. This carboxylate then interacts with the positive Zn2+ ion located at the active site of the ACE enzyme. Ramipril is similar in structure to another ACE inhibitor, trandolapril, but it has a second cyclopentane ring instead of a cyclohexane ring. Medical uses Medical uses include: High blood pressure (Hypertension) Congestive heart failure Following heart attack in people with evidence of heart failure People over 55 years at high risk: prevention of heart attack, stroke, cardiovascular death, or need for revascularization procedures Prevention of the onset and/or delay of the progression of diabetic kidney disease, with or without proteinuria. Randomized trial evidence suggests that a maximum tolerable dose prevents cardiovascular events and death in patients with diabetic kidney disease.
Contraindications Contraindications to its use include volume-depleted patients, a history of angioedema while on an ACE inhibitor, pregnancy and hypotension. People should not take ramipril (or any ACE inhibitor) if they have hyperkalemia. It is also recommended to avoid salt substitutes, as these can further increase potassium levels in the blood. Ramipril can be considered in patients with bilateral or unilateral significant renal artery stenosis (RAS). An early rise in serum creatinine above baseline is expected after initiation of therapy with ramipril; however, monitoring serum biochemistry and renal function after initiation is crucial. Treatment with ramipril in some patients with significant narrowing of both renal arteries can increase the serum creatinine concentration (measured in a blood test), which returns to baseline upon cessation of therapy. Adverse effects Shakiness Dry cough Dizziness and lightheadedness due to low blood pressure Fatigue, especially in the early stages Mouth dryness in the early stages Nausea Fainting Signs of infection (e.g., fever, chills, persistent sore throat) Chest pain Neutropenia (low white blood cells) Impotence (erectile dysfunction) Hyperkalemia Serious allergic reactions to this drug are unlikely, but immediate medical attention must be sought if they occur. Symptoms of a serious allergic reaction include, but are not limited to, a rash or swelling of the face, mouth, tongue, or throat. In extreme cases, ramipril may lead to potentially fatal liver problems. Mechanism of action ACE inhibitors inhibit the actions of angiotensin converting enzyme (ACE), thereby lowering the production of angiotensin II and decreasing the breakdown of bradykinin. The decrease in angiotensin II results in relaxation of arteriolar smooth muscle, leading to a decrease in total peripheral resistance and reducing blood pressure as the blood is pumped through widened vessels. The drug's effect on bradykinin is responsible for the dry cough side effect.
Ramipril, a prodrug or precursor drug, is converted to the active metabolite ramiprilat by carboxylesterase 1. Ramiprilat is mostly excreted by the kidneys. Its half-life is variable (3–16 hours), and is prolonged by heart and liver failure, as well as kidney failure. Peak effect occurs between 3 and 6 hours after dosing, with approximately 50% of this effect retained after 24 hours. Synthesis The penultimate step in the synthesis of ramipril combines an alanine derivative with a (S,S,S)-2-azabicyclo-[3.3.0]-octane-3-carboxylic acid protected as its benzyl ester. In the original patented route, these components were obtained by a multi-step process. The acid chloride forms an amide bond with the amino group of the pyrrolidine ring in the presence of triethylamine and ramipril is the product after the benzyl ester has been removed by hydrogenation. Society and culture US patent The compound was protected by a patent which was assigned to the German pharmaceutical company Hoechst AG (since merged into Aventis) on 29 October 1991. The patent was scheduled to expire on 29 October 2008. On 11 September 2007, in an appeal by the Indian company Lupin Ltd., the United States Court of Appeals for the Federal Circuit reversed a district court trial verdict and found that Aventis's patent on ramipril was invalid for "obviousness", opening this drug to generic manufacturers. Brand names Ramipril is marketed as Prilace by Arrow Pharmaceuticals in Australia, Ramipro by Westfield Pharma in the Philippines, Triatec by Sanofi-Aventis in Italy and United States and Altace by King Pharmaceuticals in the United States, Novapril by Pharmanova in Ghana, Ramitens by PharmaSwiss, Ampril by Krka in Slovenia, Corpril by Cemelog-BRS in Hungary, Piramil and Prilinda by Hemofarm in Serbia, by Lek in Poland and by Novartis and Opsonin Pharma Limited as Ramace in Bangladesh, and in Canada as Altace (Sanofi-Aventis) and Ramipril (Pharmascience). 
Ramipril is marketed in India under the brand names Cardace, Zigpril, Ramistar, Odipril and Zorem, and in Myanmar under the brand name Endpril.

Research

The 2001 Heart Outcomes and Prevention Evaluation (HOPE) trial appeared to show that ramipril possesses cardioprotective qualities extending beyond its antihypertensive effect. However, the trial and the interpretation of its results have been criticised. The Acute Infarction Ramipril Efficacy (AIRE) trial showed a 27% reduction in mortality for patients receiving ramipril for chronic heart failure following a myocardial infarction. Ramipril was found to produce outcomes similar to those of telmisartan, an angiotensin II receptor blocker.

References

ACE inhibitors
Carboxamides
Carboxylic acids
Enantiopure drugs
Ethyl esters
Prodrugs
Sanofi
Drugs developed by Pfizer
Drugs developed by AstraZeneca
https://en.wikipedia.org/wiki/Toxic%20shock%20syndrome%20toxin-1
Toxic shock syndrome toxin-1 (TSST-1) is a superantigen of about 22 kDa produced by 5 to 25% of Staphylococcus aureus isolates. It causes toxic shock syndrome (TSS) by stimulating the release of large amounts of interleukin-1, interleukin-2 and tumor necrosis factor. In general, the toxin is not produced by bacteria growing in the blood; rather, it is produced at the local site of infection and then enters the bloodstream.

Characteristics

TSST-1, a prototype superantigen secreted by certain Staphylococcus aureus strains in susceptible hosts, acts on the vascular system, causing inflammation, fever, and shock. The producing strains can colonize any area of the body but are found mostly in the vagina of infected women. TSST-1 is a bacterial exotoxin found in patients who have developed TSS, which occurs in menstruating women as well as in men and children; about one-third of all TSS cases occur in men, often in association with surgical or other skin wounds. TSST-1 is the cause of half of non-menstrual TSS cases, and the sole cause of menstrual TSS cases.

Structure

The nucleotide sequence of TSST-1 contains a 708 base-pair open reading frame and a Shine-Dalgarno sequence seven base pairs upstream of the start site. Forty amino acids of the encoded protein make up the signal peptide. A typical signal peptide consists of one to three basic amino acids at the amino terminus, a hydrophobic region of about 15 residues, a proline (Pro) or glycine (Gly) in the hydrophobic core, a serine (Ser) or threonine (Thr) near the carboxyl-terminal end of the hydrophobic core, and an alanine (Ala) or glycine (Gly) at the cleavage site. The mature TSST-1 protein has a coding sequence of 585 base pairs. The complete nucleotide sequence was determined by Blomster-Hautamaa et al. and independently confirmed by other groups.
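The sequence figures quoted above are internally consistent, which can be checked with simple codon arithmetic (three bases per codon, one stop codon closing the open reading frame):

```python
# Check the TSST-1 sequence arithmetic quoted in the article.
ORF_BP = 708            # open reading frame length in base pairs
SIGNAL_PEPTIDE_AA = 40  # residues cleaved from the precursor

codons = ORF_BP // 3               # 236 codons, including the stop codon
precursor_aa = codons - 1          # 235 residues in the translated precursor
mature_aa = precursor_aa - SIGNAL_PEPTIDE_AA  # 195 residues in mature TSST-1
mature_coding_bp = mature_aa * 3   # 585 bp, matching the article's figure

print(codons, precursor_aa, mature_aa, mature_coding_bp)  # 236 235 195 585
```

So the 585 bp coding sequence of the mature protein follows directly from the 708 bp ORF minus the stop codon and the 40-residue signal peptide.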
The holotoxin TSST-1 is a single polypeptide chain whose three-dimensional structure consists of an alpha (α) and a beta (β) domain; the structure was determined by X-ray crystallography of the purified protein. The two domains are adjacent to each other and have distinct features. Domain A, the larger of the two, contains residues 1–17 and 90–194 of TSST-1 and consists of a long α-helix (residues 125–140) surrounded by a five-strand β-sheet. Domain B contains residues 18–89 and consists of a β-barrel made up of five β-strands. Crystallography shows that the internal β-barrel of domain B contains several hydrophobic amino acids, with hydrophilic residues on the domain surface, which allows TSST-1 to cross the mucosal surfaces of epithelial cells. Although TSST-1 contains several hydrophobic amino acids, the protein is highly water-soluble. TSST-1 is resistant to heat and proteolysis; it can be boiled for more than an hour without denaturation or loss of function.

Production

TSST-1 is encoded by the tst gene, which is part of the mobile genetic element staphylococcal pathogenicity island 1. The toxin is produced in the greatest amounts during the post-exponential phase of growth, as is typical of pyrogenic toxin superantigens (PTSAgs). Production of TSST-1 requires oxygen, the presence of animal protein, low glucose levels, and temperatures within a permissive range. Production is optimal at near-neutral pH and low magnesium levels, and is further amplified by high concentrations of S. aureus, which indicates its importance in establishing infection. TSST-1 differs from other PTSAgs in that its genetic sequence shows no homology with other superantigen sequences.
TSST-1 lacks the cysteine loop, an important structure in other PTSAgs, and differs further from other PTSAgs in its ability to cross mucous membranes, which is why it is an important factor in menstrual TSS. The protein is translated as a pro-protein and can only leave the cell once the signal sequence has been cleaved off. The agr (accessory gene regulator) locus is one of the key sites of positive regulation for many S. aureus genes, including tst. Alterations in the expression of the genes ssrB and srrAB also affect transcription of TSST-1, and high glucose levels inhibit transcription, since glucose acts as a catabolite repressor.

Mutations

Studies of various mutants of the protein suggest that the superantigenic and lethal functions reside in separable regions. One variant in particular, TSST-ovine (TSST-O), was important in mapping the biologically significant regions of TSST-1. TSST-O does not cause TSS and is non-mitogenic; it differs from TSST-1 at 14 nucleotides, corresponding to 9 amino acids. Two of these residues are cleaved off with the signal sequence and therefore do not account for the functional differences. Comparison of the two proteins showed that residue 135 is critical for both lethality and mitogenicity, while mutations at residues 132 and 136 abolished the ability to cause TSS even though some superantigenicity remained. If the lysine at residue 132 of TSST-O is changed to glutamate, the mutant regains little superantigenicity but becomes lethal, indicating that the ability to cause TSS depends on the glutamate at residue 132. The loss of activity in these mutants is not due to conformational changes; instead, these residues appear to be critical for interactions with T-cell receptors.
Isolation

Samples of TSST-1 can be purified from bacterial cultures for use in in vitro testing, although this is not ideal because many factors contribute to pathogenesis only in an in vivo environment. In addition, in vitro culture provides a nutrient-rich environment, in contrast to the in vivo environment, where nutrients tend to be scarce. TSST-1 can be purified by preparative isoelectric focusing for use in vitro, or delivered in animal models using a mini-osmotic pump.

Mechanism

A superantigen such as TSST-1 stimulates human T cells expressing Vβ2, which may represent 5–30% of all host T cells. PTSAgs induce Vβ-specific expansion of both the CD4 and CD8 subsets of T-lymphocytes. TSST-1 forms homodimers in most of its known crystal forms. The SAGs show remarkably conserved architecture and are divided into N- and C-terminal domains. Mutational analysis has mapped the putative TCR-binding region of TSST-1 to a site on the back-side groove. If the TCR occupies this site, the amino-terminal α-helix forms a large wedge between the TCR and MHC class II molecules, physically separating the TCR from MHC class II. A novel domain may exist in the SAGs, separate from the TCR- and MHC class II-binding domains; it consists of residues 150 to 161 in SEB, and similar regions exist in all the other SAGs. In one study, a synthetic peptide containing this sequence prevented lethality induced by staphylococcal TSST-1, as well as by some other SAGs, in D-galactosamine-sensitized mice. Significant differences exist in the sequences of MHC class II alleles and TCR Vβ elements expressed by different species, and these differences have important effects on the interaction of PTSAgs with MHC class II and TCR molecules.
Binding site

TSST-1 binds primarily to the α-chain of MHC class II, exclusively through a low-affinity (generic) binding site on the SAG N-terminal domain. This contrasts with other superantigens (SAGs), such as SEA and SEE, which bind MHC class II both through the low-affinity site and through a high-affinity, zinc-dependent site on the SAG C-terminal domain that contacts the β-chain. When this high-affinity site is bound, it extends over part of the binding groove, contacts the bound peptide, and binds regions of both the α- and β-chains. MHC binding by TSST-1 is partially peptide-dependent. Mutagenesis studies with SEA indicate that both binding sites are required for optimal T-cell activation. Studies of TSST-1 indicate that the TCR-binding domain lies at the top of the back side of the toxin, though the complete interaction remains to be determined. There are also indications that the TCR-binding site of TSST-1 maps to the major groove of the central α-helix or to the short amino-terminal α-helix. Residues in the β-claw motif of TSST-1 interact primarily with the invariant region of the α-chain of the MHC class II molecule. Residues forming minor contacts with TSST-1 have also been identified in the HLA-DR1 β-chain and in the antigenic peptide located in the interchain groove. The arrangement of TSST-1 with respect to the MHC class II molecule imposes steric restrictions on the ternary complex of TSST-1, MHC class II, and the TCR.

Mutational analysis

Initial studies of mutants revealed that residues on the back side of the central α-helix are required for superantigenic activity. Changing the histidine at position 135 to alanine (H135A) rendered TSST-1 neither lethal nor superantigenic. Changes in residues close to H135A likewise diminished the lethality and superantigenicity of the mutants.
Most of these mutants, however, retained the antigenicity of TSST-1. Tests with mutant TSST-1 toxins indicate that the lethal and superantigenic properties are separable: when Lys-132 in TSST-O was changed to Glu, the resulting mutant became fully lethal but non-superantigenic, and the same was found for TSST-1 Gly16Val. Because residues Gly16, Glu132, and Gln136 are located on the back-side groove of the putative TCR-binding region of TSST-1, it has been proposed that they also form part of a second, functionally lethal site in TSST-1.

Notes

References

Bacterial toxins
Superantigens
Proteins
https://en.wikipedia.org/wiki/Suaeda%20calceoliformis
Suaeda calceoliformis is a species of flowering plant in the family Amaranthaceae known by several common names, including Pursh seepweed and horned seablite.

Distribution

The plant is native to North America, where it can be found across most of the continent except for parts of the Southeastern United States. It is a halophyte, growing in areas of high soil salinity and alkalinity, such as playas, salt flats, beaches, marshes and other wetlands, and the edges of roads that are salted in the winter.

Description

Suaeda calceoliformis is an annual herb with waxy, green to red or striped, bicolored stems growing up to 80 centimeters long. Its habit ranges from erect to prostrate, the prostrate forms being more common on higher-salinity substrates because they retain more water. The fleshy, waxy leaves are up to 4 centimeters long and linear in shape, lying nearly against the stem rather than spreading away from it. The inflorescence is an elongated cyme of flowers shaped like a branching spike, dense with many tight clusters of flowers and leaflike bracts growing between them. There are three to five flowers per cluster, each with a calyx of horned sepals and no petals. The fruit is a utricle that grows within the calyx.

References

External links

Jepson Manual Treatment
Flora of North America
Washington Burke Museum
Photo gallery

calceoliformis
Halophytes
Flora of Subarctic America
Flora of Canada
Flora of the Western United States
Flora of the Great Lakes region (North America)
Flora of the Great Plains (North America)
Flora of the Great Basin
Flora of the Northeastern United States
Natural history of the Santa Monica Mountains
Taxa named by William Jackson Hooker
Flora without expected TNC conservation status