Dataset schema: id (int64, 39 – 79M), url (string, 31–227 chars), text (string, 6–334k chars), source (string, 1–150 chars), categories (list, 1–6 items), token_count (int64, 3–71.8k), subcategories (list, 0–30 items)
53,083,768
https://en.wikipedia.org/wiki/RCA%20CDP1861
The RCA CDP1861 was an integrated circuit Video Display Controller, released by the Radio Corporation of America (RCA) in the mid-1970s as a support chip for the RCA 1802 microprocessor. In 1977 the chip cost less than US$20. History The CDP1861 was manufactured in a low-power CMOS technology, came in a 24-pin DIP (dual in-line package), and required a minimum of external components to work. In 1802-based microcomputers, the CDP1861 (for the NTSC video format; the CDP1864 variant for PAL) used the 1802's built-in DMA controller to display black-and-white (monochrome) bitmapped graphics on standard TV screens. The CDP1861 was also known as the Pixie graphics system, display, chip, and video generator, especially when used with the COSMAC ELF microcomputer. Other known chip markings for the 1861 are TA10171, TA10171V1 and TA10171X, which were early designations for "pre-qualification engineering samples" and "preliminary part numbers", although they have been found in production RCA Studio II game consoles and Netronics Elf microcomputers. The CDP1861 was also used in the Telmac 1800 and Oscom Nano microcomputers. Specifications The 1861 chip could display 64 pixels horizontally and 128 pixels vertically. By reloading the 1802's R0 DMA (direct memory access) register via the required 1802 software controller program and interrupt service routine, the resolution could be reduced to 64×64 or 64×32, either to use less memory than the 1024 bytes needed for the highest resolution (with each monochrome pixel occupying one bit) or to display square pixels. A resolution of 64×32 created square pixels and used 256 bytes of memory (2,048 bits). This was the usual resolution for the Chip-8 game programming system. Since the video frame buffer was often similar or equal in size to the system memory, it was not unusual to display the program/data on the screen, allowing the user to watch the computer "think" (i.e. process its data). 
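The memory figures above follow directly from the one-bit-per-pixel layout; a minimal sketch (the helper name is illustrative, not from any CDP1861 software):

```python
# Frame-buffer sizes for the CDP1861's monochrome bitmap modes,
# at one bit per pixel (hypothetical helper, for illustration only).
def framebuffer_bytes(width_px: int, height_px: int) -> int:
    """Bytes needed to hold a 1-bit-per-pixel bitmap of the given size."""
    return (width_px * height_px) // 8

for width, height in [(64, 128), (64, 64), (64, 32)]:
    print(f"{width}x{height}: {framebuffer_bytes(width, height)} bytes")
```

This reproduces the figures in the text: 1024 bytes for the full 64×128 resolution and 256 bytes (2,048 bits) for 64×32.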
Programs which ran amok and accidentally overwrote themselves could be spectacular. The CDP1862 Color Generator Circuit IC, an 1861 companion chip, could be used to generate limited color graphics. Due to the discontinuation of the 1861 and its rarity, in 2004 a fully functional analogue called the Spare Time Gizmos STG1861 was made for use with the newly designed and produced ELF 2000 (Elf2K) computer. It is made in the form of a small printed circuit board (PCB) with two small programmable logic devices, two simple TTL chips, and a 24-pin DIP connector for mounting in place of the original chip. Gallery References External links CDP1861 Datasheet Graphics hardware Integrated circuits RCA brands
RCA CDP1861
[ "Technology", "Engineering" ]
611
[ "Computer engineering", "Integrated circuits" ]
53,084,684
https://en.wikipedia.org/wiki/Canary%20Diamond
The Canary Diamond is an uncut canary-yellow 17.86 carat diamond found in 1917 at what is now the Crater of Diamonds State Park in Arkansas. It is in the collection of the Smithsonian National Museum of Natural History. The diamond was in the collection of civil engineer and mineral collector Washington Roebling; his son donated it, along with the rest of Roebling's collection, to the museum after Roebling's death in 1926. See also List of diamonds References Diamonds originating in the United States Gemstones Individual diamonds
Canary Diamond
[ "Physics" ]
107
[ "Materials", "Gemstones", "Matter" ]
53,084,982
https://en.wikipedia.org/wiki/NGC%20397
NGC 397 is a lenticular galaxy located in the constellation Pisces. It was discovered on December 6, 1866, by Robert Ball. It was described by Dreyer as "extremely faint, small, round, very faint star to west." References External links 0397 18661206 Pisces (constellation) Lenticular galaxies 004051
NGC 397
[ "Astronomy" ]
75
[ "Pisces (constellation)", "Constellations" ]
53,085,145
https://en.wikipedia.org/wiki/Axonometry
Axonometry is a graphical procedure belonging to descriptive geometry that generates a planar image of a three-dimensional object. The term "axonometry" means "to measure along axes", and indicates that the dimensions and scaling of the coordinate axes play a crucial role. The result of an axonometric procedure is a uniformly-scaled parallel projection of the object. In general, the resulting parallel projection is oblique (the rays are not perpendicular to the image plane); but in special cases the result is orthographic (the rays are perpendicular to the image plane), which in this context is called an orthogonal axonometry. In technical drawing and in architecture, axonometric perspective is a form of two-dimensional representation of three-dimensional objects whose goal is to preserve the impression of volume or relief. Sometimes also called rapid perspective or artificial perspective, it differs from conical perspective and does not represent what the eye actually sees: in particular, parallel lines remain parallel and distant objects are not reduced in size. It can be considered a conical perspective whose center has been pushed out to infinity, i.e. very far from the object observed. The term axonometry is used both for the graphical procedure described below and for the image produced by it. Axonometry should not be confused with axonometric projection, which in English literature usually refers to orthogonal axonometry. Principle of axonometry Pohlke's theorem is the basis for the following procedure to construct a scaled parallel projection of a three-dimensional object: Select images of the coordinate axes such that they do not all collapse into a single point or line. Usually the z-axis is vertical. 
Select for these axis images the foreshortenings v_x, v_y and v_z, where v_x, v_y, v_z > 0. The image P' of a point P = (x, y, z) is determined in three sub-steps (the result is independent of the order of these sub-steps): starting at the image of the origin, move by the amount v_x·x in the direction of the image of the x-axis, then move by the amount v_y·y in the direction of the image of the y-axis, then move by the amount v_z·z in the direction of the image of the z-axis, and finally mark the final position as point P'. In order to obtain undistorted results, select the projections of the axes and foreshortenings carefully (see below). In order to produce an orthographic projection, only the projections of the coordinate axes are freely selected; the foreshortenings are fixed (see :de:orthogonale Axonometrie). The choice of the images of the axes and the foreshortenings Notation: α is the angle between the images of the x- and z-axes, β the angle between the images of the y- and z-axes, and γ the angle between the images of the x- and y-axes. The angles can be chosen freely, subject to α + β + γ = 360°. The foreshortenings v_x, v_y, v_z are positive numbers. Only for suitable choices of angles and foreshortenings does one get undistorted images. The next diagram shows the images of the unit cube for various angles and foreshortenings and gives some hints for how to make these personal choices. In order to keep the drawing simple, one should choose simple foreshortenings, for example 1 or 1/2. If two foreshortenings are equal, the projection is called dimetric. If the three foreshortenings are equal, the projection is called isometric. If all foreshortenings are different, the projection is called trimetric. In the example in the diagram (a house drawn on graph paper), two of the three foreshortenings are equal; hence it is a dimetric axonometry. The image plane is parallel to the y-z-plane and any planar figure parallel to the y-z-plane appears in its true shape. Special axonometries Engineer projection In this case the foreshortenings are v_x = 1/2, v_y = v_z = 1 (a dimetric axonometry) and the angles between the axis images are α = 132°, β = 97° and γ = 131°. These angles are marked on many German set squares. 
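The three sub-steps above can be sketched in a few lines of Python. The function name and the particular axis angles and foreshortenings below are illustrative assumptions, not values prescribed by the text:

```python
import math

def project(point, axis_angles_deg, foreshortenings):
    """Axonometric image of a 3-D point: starting at the origin's image,
    step along each axis image by the foreshortened coordinate and sum."""
    px, py = 0.0, 0.0
    for coord, v, ang in zip(point, foreshortenings, axis_angles_deg):
        px += v * coord * math.cos(math.radians(ang))
        py += v * coord * math.sin(math.radians(ang))
    return px, py

# Example dimetric set-up (an assumption): z-axis image vertical (90 deg),
# y-axis image horizontal (0 deg), x-axis image receding to the lower left.
image = project((1.0, 2.0, 3.0),
                axis_angles_deg=(225.0, 0.0, 90.0),
                foreshortenings=(0.5, 1.0, 1.0))
```

Because the three steps are simply added as vectors, the result is independent of their order, as stated above.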
Advantages of an engineer projection: simple foreshortenings; a uniformly scaled orthographic projection with scaling factor 1.06; the contour of a sphere is a circle (in a general axonometry it is an ellipse). For more details: see :de:Axonometrie. Cavalier perspective, cabinet perspective Image plane parallel to the y-z-plane. In the literature the terms "cavalier perspective" and "cabinet perspective" are not uniformly defined. The above definition is the most general one. Often, further restrictions are applied. For example: cabinet perspective: additionally choose the receding-axis angle 135° (oblique) and v_x = 1/2 (dimetric); cavalier perspective: additionally choose the receding-axis angle 135° (oblique) and v_x = 1 (isometric). Bird's-eye view, military projection Image plane parallel to the x-y-plane. Military projection: additionally choose v_z = 1 (isometric). Such axonometries are often used for city maps, in order to keep horizontal figures undistorted. Isometric axonometry (Not to be confused with an isometry between metric spaces.) For an isometric axonometry all foreshortenings are equal. The angles can be chosen arbitrarily, but a common choice is α = β = γ = 120°. For the standard isometry, or just isometry, one chooses v_x = v_y = v_z = 1 (all axes undistorted). The advantages of a standard isometry: the coordinates can be taken unchanged, and the image is a scaled orthographic projection with scale factor √(3/2) ≈ 1.22. Hence the image gives a good visual impression, and the contour of a sphere is a circle. Some computer graphic systems (for example, xfig) provide a suitable raster (see diagram) as support. In order to prevent scaling, one can choose the unhandy foreshortenings v_x = v_y = v_z = √(2/3) ≈ 0.816 (instead of 1), and the image is an (unscaled) orthographic projection. Circles in axonometry A parallel projection of a circle is in general an ellipse. An important special case occurs if the circle's plane is parallel to the image plane: the image of the circle is then a congruent circle. In the diagram, the circle contained in the front face is undistorted. 
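The claim that the standard isometry is a uniformly scaled orthographic projection can be checked numerically. The sketch below assumes undistorted axes (all foreshortenings 1) whose images are 120° apart; for a scaled orthographic projection the 2×3 projection matrix M must satisfy M·Mᵀ = s²·I:

```python
import math

# Standard isometry (assumed set-up): foreshortenings all 1,
# axis images at 90, 210 and 330 degrees (120 degrees apart).
angles = [math.radians(a) for a in (90.0, 210.0, 330.0)]

# 2x3 projection matrix: column i is the image of the i-th unit axis vector.
M = [[math.cos(a) for a in angles],
     [math.sin(a) for a in angles]]

# M times its transpose; a scaled orthographic projection gives s^2 * I.
MMt = [[sum(M[r][k] * M[c][k] for k in range(3)) for c in range(2)]
       for r in range(2)]
scale = math.sqrt(MMt[0][0])  # s = sqrt(3/2), about 1.2247
```

The off-diagonal entries vanish and both diagonal entries equal 3/2, consistent with a scale factor of √(3/2) ≈ 1.22.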
If the image of a circle is an ellipse, one can map the endpoints of two orthogonal diameters of the circle together with its surrounding square of tangents, and then sketch the ellipse by hand into the resulting image parallelogram. A better, but more time-consuming, method consists of drawing the images of two perpendicular diameters of the circle, which are conjugate diameters of the image ellipse, determining the axes of the ellipse with Rytz's construction, and drawing the ellipse. Spheres in axonometry In a general axonometry of a sphere the image contour is an ellipse. The contour of a sphere is a circle only in an orthogonal axonometry. But, as the engineer projection and the standard isometry are scaled orthographic projections, the contour of a sphere is a circle in these cases as well. As the diagram shows, an ellipse as the contour of a sphere might be confusing, so, if a sphere is part of an object to be mapped, one should choose an orthogonal axonometry, an engineer projection, or a standard isometry. References Notes External links Orthogonal axonometry Graphical projections
Axonometry
[ "Mathematics" ]
1,456
[ "Mathematical objects", "Functions and mappings", "Graphical projections", "Mathematical relations" ]
65,773,324
https://en.wikipedia.org/wiki/Emmett%20Leahy%20Award
The Emmett Leahy Award is given annually to individuals who have had major impact on the field of information management. The award has been given since 1967, and honors Emmett Leahy, a pioneer in records management. Introduction The Emmett Leahy Award is an annual award that recognizes an individual whose outstanding contributions have had a major impact on the field of information and records management. First presented in 1967 and given in honor of Emmett J. Leahy (1910-1964), an American pioneer in records management, it is widely regarded as the highest award for individual accomplishment in the field. It is the significance of the individual’s overall impact that sets it apart from other awards, which recognize professional service or are presented to project teams, groups, companies or public organizations. Each year the independent Award Committee, comprising the last ten awardees, selects the recipient based on their creation of original records and information management principles and concepts, their demonstrable and substantive impact on records and information management practices, and their promotion of excellence in the profession through innovative thought leadership and direct engagement in records and information management education. Recipients have included records managers, archivists, educators, users and consultants from the USA and internationally. History of the award The Emmett Leahy Award was created by Rodd Exelbert, founder of the Information and Records Management (IRM) magazine. In 1966 he was looking for a way to spotlight outstanding contributions to the IRM profession through an award when he heard a presentation by Christopher Cameron, Managing Partner of Leahy & Company, at the ARMA Convention in Houston about the work of Emmett J. Leahy. It inspired him to want to name the award after Leahy. 
Following discussions with Leahy & Company, and gaining the permission of Leahy’s widow Betty, the Emmett Leahy Award, for "a man or woman whose unique contributions to records control, filing, and information retrieval have advanced the information and records management profession", was born. Exelbert convened a committee of IRM experts who worked to develop a set of selection criteria and a list of candidates to evaluate against the criteria. The first award, in the form of a personalised plaque for the recipient to retain, was presented to Ed Rosse, US Social Security Administration, at the 1967 ARMA Annual Meeting. Exelbert administered the award until 1980, using the IRM magazine to publicize it and solicit nominations, which widened its scope and increased the number of potential candidates. He introduced changes to the process by including at least one Emmett Leahy Award recipient on the selection committee. Around the same time ARMA decided that the award should no longer be presented at its meetings, since it was not involved in the process and was concerned the award was competing against its own awards. ARMA asked its official certifying body, the Institute of Certified Records Managers (ICRM), to assume administration of the award. At a meeting in late 1981 the Board of Regents of the ICRM considered this request and established a committee to examine the issue. Over the next three years discussions took place with Leahy Business Archives Inc. and Mrs Betty Leahy White, Emmett Leahy’s widow, about the award’s future administration. Formal delegation of authority was finally secured in 1983 and the award process resumed in 1984, with the next award being made in 1985. The award has continued to be sponsored, but it is the exclusive responsibility of the Emmett Leahy Award Committee to select the annual recipient. 
Following the initial sponsorship by the publisher of the Information and Records Management Magazine and then the ICRM, Pierce Leahy Archives became the sponsors in the late 1980s followed by Iron Mountain in 2000. Other sponsors have included Huron Consulting Group and Drinker Biddle & Reath LLP. In 2019 Preservica, a digital preservation company, became the award’s sponsor. Nomination and selection process The selection of the annual Emmett Leahy Award recipient is the exclusive responsibility of the Emmett Leahy Award Committee. Composed of the previous ten award winners, the Committee is independent and not associated with any other organization. Any living individual who has had a major impact on the field of records and information management is eligible to be nominated. The Committee identifies potential nominees and nominations may also be submitted to the Committee Chair by any individual, organization, or institution.  The Committee reviews all nominations and then invites an agreed final set of nominees who meet the award criteria to submit formal applications. Individuals invited to apply are asked to submit their application in accordance with the content and format guidance provided on the award website. Since this award honors individuals who have had a significant impact on the records and information management field, applications need to make clear the individual's specific role in bringing about the impacts identified.  They must demonstrate the individual's impact in four areas: (i) Program Development and Management, (ii) Innovation, (iii) Education and (iv) Professional and Organizational Leadership. Each Committee member independently reviews and assesses the submissions of invited applicants against the award criteria. The Committee then meets to discuss each applicant in detail and ranks them to reach their decision about the successful applicant for that year. 
The Committee’s deliberations are conducted on the basis of objectivity, respect and consensus. List of Recipients Recipients run in reverse chronological order from 1967 to the present day; see the ‘Award Winners’ section of the Emmett Leahy Award website for further details of recipients. See also Records management Archival science Information management Digital curation Digital preservation References Awards established in 1967 Information management Lists of award winners
Emmett Leahy Award
[ "Technology" ]
1,133
[ "Information systems", "Information management" ]
65,773,644
https://en.wikipedia.org/wiki/Vivanti%E2%80%93Pringsheim%20theorem
The Vivanti–Pringsheim theorem is a mathematical statement in complex analysis that locates a specific singularity of a function defined by a certain type of power series. The theorem was originally formulated by Giulio Vivanti in 1893 and proved in the following year by Alfred Pringsheim. More precisely, the theorem states the following: a complex function defined by a power series f(z) = Σ a_n z^n with non-negative real coefficients a_n and radius of convergence R, with 0 < R < ∞, has a singularity at z = R. A simple example is the (complex) geometric series Σ z^n = 1/(1 − z), with radius of convergence 1 and a singularity at z = 1. References Reinhold Remmert: The Theory of Complex Functions. Springer Science & Business Media, 1991, p. 235 I-hsiung Lin: Classical Complex Analysis: A Geometric Approach (Volume 2). World Scientific Publishing Company, 2010, p. 45 Theorems in complex analysis
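In standard notation, the statement and the example read (a restatement of the theorem above, not additional content):

```latex
f(z)=\sum_{n=0}^{\infty} a_n z^n,\quad a_n \ge 0,\quad 0<R<\infty
\;\Longrightarrow\; z=R \text{ is a singular point of } f.
\qquad\text{Example: }\sum_{n=0}^{\infty} z^n=\frac{1}{1-z},\ R=1,\ \text{singular at } z=1.
```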
Vivanti–Pringsheim theorem
[ "Mathematics" ]
170
[ "Theorems in mathematical analysis", "Mathematical analysis", "Theorems in complex analysis", "Mathematical analysis stubs" ]
65,775,799
https://en.wikipedia.org/wiki/MyDevice
MyDevice was a smartphone manufactured by the Finnish company MyOrigo. The first prototype was developed and designed in early 2002 by Johannes Väänänen, but the phone was never released to the public market. MyDevice was remarkable for its time: it had an auto-rotate function, a web browser, and a simple smartphone-style UI (including navigation by swipes), and it was intended to be used mostly without a stylus. The device was presented to big manufacturers such as Nokia, but Nokia's leadership did not see the potential of a touchscreen phone. In 2002 the MyDevice prototype was presented to Apple's Steve Jobs. It has been suggested that MyDevice was the inspiration for the Apple iPhone, which was released in 2007. References Smartphones
MyDevice
[ "Technology" ]
162
[ "Mobile technology stubs", "Mobile phone stubs" ]
65,776,222
https://en.wikipedia.org/wiki/Barsali
Barsali is a village located in Betul tehsil, in Betul district of Madhya Pradesh, India. It is the location of the geographical centre of India. References Villages in Betul district Geographical centres
Barsali
[ "Physics", "Mathematics" ]
42
[ "Point (geometry)", "Geometric centers", "Geographical centres", "Symmetry" ]
65,778,287
https://en.wikipedia.org/wiki/Snezhana%20Abarzhi
Snezhana I. Abarzhi (also known as Snejana I. Abarji) is an applied mathematician and theoretical physicist specializing in the dynamics of fluids and plasmas and their applications in nature and technology. Her research has shown that instabilities govern the dynamics of supernova blasts, and that supernovae explode more slowly and less turbulently than previously thought, changing the understanding of the mechanisms by which heavy atomic nuclei are formed in these explosions. Her work has identified the mechanism of interface stabilization, a special self-similar class of interfacial mixing, and fundamentals of Rayleigh-Taylor instabilities. Education and career Abarzhi earned bachelor's degrees in physics and applied mathematics and in molecular biology in 1987 from the Moscow Institute of Physics and Technology, and earned a master's degree in physics and applied mathematics there, summa cum laude, in 1990. She completed her doctorate in 1994 through the Landau Institute for Theoretical Physics and the Kapitza Institute for Physical Problems of the Russian Academy of Sciences, supervised by Sergei I. Anisimov. Abarzhi held a position as a researcher for the Russian Academy of Sciences from 1994 to 1997 (on leave in 1997-2004). She came to the US in 1997 as a visiting professor at the University of North Carolina at Chapel Hill, and then in 1998 became an Alexander von Humboldt Fellow at the University of Bayreuth in Germany. In 1999 she took a research position at Stony Brook University. In 2002 she briefly moved to a research professorship at Osaka University before returning to the US as a senior fellow in the Center for Turbulence Research at Stanford University. In 2005 she became a research faculty member at the University of Chicago, and in 2006 she added a regular-rank faculty position as an associate professor at the Illinois Institute of Technology. 
She also worked at Carnegie Mellon University from 2013 to 2016 before moving to the University of Western Australia as professor and chair of applied mathematics. Abarzhi is a member of the Committee on Scientific Publications of the American Physical Society, and an organizer of conferences and programs on non-equilibrium dynamics of interfaces and turbulent mixing and beyond. In 2020 Abarzhi was named a Fellow of the American Physical Society (APS), after a nomination from the APS Division of Fluid Dynamics, "for deep and abiding work on the Rayleigh-Taylor and related instabilities, and for sustained leadership in that community". Selected publications Abarzhi SI, Hill DL, Williams KC, Li JT, Remington BA, Arnett WD 2023 Fluid dynamics mathematical aspects of supernova remnants. Phys. Fluids 35, 034106. https://doi.org/10.1063/5.0123930 Abarzhi SI, Sreenivasan KR 2022 Self-similar Rayleigh-Taylor mixing with accelerations varying in time and space. Proc. Natl. Acad. Sci. USA 119, e2118589119. https://doi.org/10.1073/pnas.2118589119 Ilyin DV, Abarzhi SI 2022 Interface dynamics under thermal heat flux, inertial stabilization and destabilizing acceleration. Springer Nat. Appl. Sci. 4, 197. https://doi.org/10.1007/s42452-022-05000-4 Meshkov EE, Abarzhi SI 2019 On Rayleigh-Taylor interfacial mixing. Fluid Dyn. Res. 51, 065502. https://dx.doi.org/10.1088/1873-7005/ab3e83, http://arxiv.org/abs/1901.04578 References Year of birth missing (living people) Living people 20th-century American mathematicians 20th-century American women mathematicians 21st-century American mathematicians 21st-century American women mathematicians Russian mathematicians Russian women mathematicians Fluid dynamicists Moscow Institute of Physics and Technology alumni Illinois Institute of Technology faculty Carnegie Mellon University faculty Academic staff of the University of Western Australia Fellows of the American Physical Society
Snezhana Abarzhi
[ "Chemistry" ]
849
[ "Fluid dynamicists", "Fluid dynamics" ]
65,779,981
https://en.wikipedia.org/wiki/The%20Clean%20Network
The Clean Network is a U.S. government-led, bi-partisan effort announced by then U.S. Secretary of State Mike Pompeo in August 2020 to address what it describes as "the long-term threat to data privacy, security, human rights and principled collaboration posed to the free world from authoritarian malign actors." Its promoters state that it has resulted in an "alliance of democracies and companies," "based on democratic values." According to the Trump administration, the Clean Network is intended to implement internationally accepted digital trust standards across a coalition of trusted partners. In December 2020, the United States announced that more than 60 nations, representing more than two thirds of the world's gross domestic product, and 200 telecom companies, have publicly committed to the principles of The Clean Network. This alliance of democracies includes 27 of the 30 NATO members; 26 of the 27 EU members, 31 of the 37 OECD nations, 11 of the 12 Three Seas nations as well as Japan, Israel, Australia, South Korea, Singapore, Taiwan, Canada, New Zealand, Vietnam and India. The term "Clean Network" was coined by U.S. Undersecretary of State Keith Krach, who initially led the initiative, which includes officials in the Treasury Department, the Office of the U.S. Trade Representative, the National Security Council, and the Commerce Department. According to Bloomberg, Krach is credited with coordinating a variety of national and regional approaches to shape a more unified international project, relying on trust more than compulsion—a notable change in tone after years in which the Trump administration pursued a go-it-alone, "America First" strategy. On April 22, 2021, David Ignatius of the Washington Post stated that Krach's Clean Network provides continuity with the Biden administration's desire to get democracies together on the same playing field on technology. 
Krach described the Huawei effort as a "beachhead" in a wider battle to unite against Chinese economic pressure in everything from investment to strategic materials, an effort that bears the hallmarks of 'good old fashioned' diplomacy, in contrast to a somewhat more confrontational style at the beginning of the administration. The Wall Street Journal wrote that the Clean Network will be perhaps the "most enduring foreign-policy legacy" of the last four years. Chinese foreign ministry spokesman Zhao Lijian referred to the Clean Network as a "US surveillance network" and "consolidation of US digital hegemony". Researchers have noted that the announcement of the Clean Network was met with indifference in many major European countries, amid concerns that the initiative would fragment the internet, with many also skeptical of US claims that Huawei poses an uncontrollable security threat. Several European countries in the Clean Network have since allowed Huawei to build their non-core 5G networks. A December 2021 op-ed by historian Arthur L. Herman and former U.S. national security advisor Robert C. O'Brien noted that only eight countries had joined the US-led ban on Huawei's 5G equipment, compared to the more than 90 countries that signed up with Huawei, including several NATO members and regional allies. Herman and O'Brien argued that the US had not offered a viable alternative to Huawei's network, and had failed to utilize wide-spectrum options. Overview On August 5, 2020, U.S. Secretary of State Mike Pompeo launched the Clean Network, the State Department’s comprehensive approach to address what it sees as the long-term threats to data privacy, security, and trusted collaboration posed by malign state actors. It is rooted in internationally accepted "Digital Trust Standards" and represents the execution of a multi-year, enduring strategy built on a coalition of trusted partners. 
According to Pompeo, the Clean Network emphasizes the importance of securing the entire 5G information technology ecosystem, including all extensions and accessories. The United States government sees these efforts as part of its commitment to an open, interoperable, reliable, and secure global Internet based on shared democratic values and respect for human rights. The State Department looked for a range of commitments from countries and foreign telecom providers to build their 5G networks without Huawei or ZTE equipment, and offered financing from the Exim Bank or USAID for Ericsson and Nokia equipment. The State Department pressed countries and firms to sign MoUs and make official statements supporting the initiative. The EU formed a task force on 5G network security in March 2019, which released standards in January 2020, known as the EU Toolbox on 5G Cybersecurity, that did not explicitly ban Huawei equipment but instead suggested that each country should evaluate high-risk suppliers. Countries that have committed to build networks implementing the EU Toolbox standards are counted as countries participating in the Clean Network. United States Under Secretary of State for Economic Growth, Energy, and the Environment Keith Krach was the initial lead advocate of the U.S. government's push to prevent the use of potentially high-risk Chinese technology in sensitive systems around the world. According to Bloomberg, the Clean Network effort to create a united economic front has similarities with George Kennan's “long telegram” of 1946 to the Soviet Union. David Fidler, adjunct senior fellow for cybersecurity and global health at the Council on Foreign Relations, made this claim in a blog post in 2020. Kennan formulated the Cold War strategy of containment, which the Chinese claim is now being used against them. Under Secretary Krach traveled to Asia, Europe, South America, and the Middle East to secure commitments from more governments to join the U.S. effort. 
"The success of the Clean Network has taken all the momentum away from Huawei," Krach said. "When we looked at this six months ago it looked like Huawei was unstoppable." The Clean Network and the EU 5G Clean Toolbox, the U.S. government claims, have paved the way toward protecting citizens’ privacy, companies’ intellectual property, and countries’ national security from "aggressive intrusions" by malign actors, such as the Chinese Communist Party and its surveillance and data collection tools, such as Huawei. "Countries and companies are more and more asking the question, 'Who do we trust?'" Krach said. "The answer’s coming back, it’s certainly not Huawei because they’re the backbone of the Chinese Communist Party’s surveillance state." The "Clean Network" brand replaced the original name of "Economic Prosperity Network" in which trusted democracies and the private sector form an economic alliance. It was conceived to have three components from initiatives that were already underway: a Clean Network for communications that is free from untrusted vendors; a Blue Dot Network for global infrastructure investment to counter China's "Belt and Road" initiative; and an Energy Resource Governance Initiative to secure supplies of rare earth metals and other strategic minerals. History February 14, 2020 at the 2020 Munich Security Conference, U.S. House Speaker Nancy Pelosi warned European countries they will "choose autocracy over democracy" if they let Huawei take part in rolling out 5G technology, in a sign of the bipartisan US political pressure over the Chinese company. February 18, 2020, at a press conference in London, Huawei's president of carrier business Ryan Ding announced, "We have 91 commercial 5G contracts worldwide, including 47 from Europe." 
March 3, 2020 Senate Minority Leader Chuck Schumer (D-NY) led a bipartisan group of senators in urging Parliament to reconsider the Johnson government's decision to allow Huawei to supply some of the United Kingdom's 5G telecommunications structure. May 13, 2020, the Center for Strategic and International Studies publishes Clean Network's digital trust standard. May 18, 2020, the "5G Trifecta" announced which represents the onshoring of TSMC's semiconductors, locking down on Huawei's advanced semiconductors and the global roll out of the Clean Path stratagem. June 3, 2020, Canadian major telcos effectively lock Huawei out of 5G build. The decision of Bell and TELUS to shift to Ericsson and Nokia has left Huawei with no major carrier customers in Canada. June 10, 2020, Krach's bipartisan semiconductor bill, Chips for America Act, is introduced by Senators Warner and Cornyn. June 24, 2020, Telefónica CEO and Chairman José María declares, "Telefónica is proud to be a 5G Clean Network company. Telefónica Spain and O2 (UK) are fully clean networks, and Telefónica Deutschland (Germany) and Vivo (Brazil) will be soon without equipment from any untrusted vendors." June 25, 2020, Under Secretary of State Krach welcomes the Czech Republic, Norway, Poland, Estonia, Romania, Denmark, Greece, New Zealand, Japan, Australia, Israel, and Latvia as members of Clean Network. June 29, 2020, Nokia and Ericsson chosen as Singapore's 5G network providers. July 4, 2020, Under Secretary Krach ties China's surveillance state with genocide and slave labor in Xinjiang on Cavuto Live. July 14, 2020, the United Kingdom announces plans to ban Huawei from future 5G networks. Specifically, UK mobile providers are being banned from buying new Huawei 5G equipment after December 31. July 22, 2020, French authorities limited Huawei by telling telecoms operators planning to buy Huawei equipment that they would not be able to renew licenses for the gear once they expire in 2028. 
August 5, 2020 – Announcement of the expansion of the Clean Network to include Clean Carrier, Clean Store, Clean Apps, Clean Cloud, and Clean Cable.
August 6, 2020 – President Trump signed two executive orders exercising his authority under the International Emergency Economic Powers Act (IEEPA) to address the alleged threats posed by apps such as TikTok and WeChat.
August 10, 2020 – The Clean Network grew to 30 Clean Countries and Territories along with some of the largest telecommunications companies, including Orange, Jio, Singtel, Telstra, SK, KT and all telcos in Canada, Norway, Vietnam, and Taiwan.
August 11, 2020 – The U.S. State Department called on its allies and partners in government and industry around the world to join the growing tide to secure data from the Chinese Communist Party (CCP)'s surveillance state and China's Great Firewall, saying, "Momentum for The Clean Network is growing."
August 24, 2020 – India began phasing out equipment from Chinese companies in its telecom networks over an escalating border dispute.
September 18, 2020 – Krach became the highest-ranking State Department official since 1979 to visit Taiwan. He was there to represent the United States at the funeral of former president Lee. Krach welcomed Taiwan to the Clean Network with Taiwan's President Tsai. "Taiwan is a great partner, a great friend," Krach said. "They're a role model for capitalism and democracy in that part of the world."
September 22, 2020 – U.S. Under Secretary of State Keith Krach began a European Clean Network tour, including EU and NATO headquarters, to discuss the move toward clean 5G infrastructure and the goal of building a Transatlantic Clean Network.
September 28, 2020 – Krach and Austrian Federal Minister Elisabeth Köstinger met to discuss the U.S.-Austria partnership and multiple areas of economic collaboration through the Clean Network and the EU 5G Clean Toolbox.
September 30, 2020 – U.S. Under Secretary of State Keith Krach and EU Commissioner Thierry Breton issued a joint statement on the synergies between the Clean Network and the EU 5G Clean Toolbox; the Toolbox meets the criteria for being part of the Clean Network.
September 30, 2020 – NATO sought a 5G Clean NATO Network due to the strategic importance of having a non-fractured alliance.
October 1, 2020 – Portugal committed to implementing the EU 5G Clean Toolbox and joined the Clean Network.
October 2, 2020 – Spain committed to implementing the EU 5G Clean Toolbox and joined the Clean Network.
October 3, 2020 – Albania: Prime Minister Edi Rama stated, "Albania sees its role in the region not just as a constructive role in building peace and strengthening dialogue, but as a proactive role in the 5G Clean Path." In addition to Albania's commitment to the 5G Clean Path, Under Secretary Krach and Albania's Finance Minister Anila Denaj signed a Memorandum of Economic Cooperation, laying the foundation for 5G security.
October 4, 2020 – Germany prepared legislation that included two-phase reviews in building its 5G network.
October 8, 2020 – Luxembourg joined the Clean Network.
October 9, 2020 – Belgium announced the replacement of Huawei and joined the Clean Network. The Belgian capital, Brussels, is home to the European Union's executive body and had been 100% reliant on Chinese vendors for its radio networks. Belgium awarded its 5G contracts to Nokia instead of Huawei to complete its transition to a Clean Country.
October 14, 2020 – The Clean Network grew to over 40 Clean Countries and 50 Clean Telcos.
October 17, 2020 – The Clean Network added companies including Oracle, HP, Reliance Jio, NEC, Fujitsu, Cisco, NTT, SoftBank and VMware.
October 20, 2020 – Cyprus joined the Clean Network. U.S. Under Secretary of State Krach and Cypriot Minister for Digital Policy Kyriacos Kokkinos signed a memorandum of understanding regarding "Clean" technologies in Cyprus.
October 21, 2020 – The Three Seas Initiative announced support for the Clean Network at its annual conference in Estonia.
October 23, 2020 – U.S. Under Secretary of State Krach signed three Clean Network memorandums of understanding with the Prime Ministers of Bulgaria, Kosovo and North Macedonia.
October 23, 2020 – Slovakia signed a Joint Declaration on 5G Security and joined the Clean Network.
October 31, 2020 – The Clean Network grew to 49 country members, representing two-thirds of global economic output.
November 7, 2020 – Krach began a Latin American Clean Network tour to Brazil, Chile, Ecuador, Panama, and the Dominican Republic to meet with government officials and business leaders.
November 10, 2020 – Brazil joined the Clean Network as the 50th member.
November 22, 2020 – Ecuador and the Dominican Republic joined the Clean Network.
November 25, 2020 – Huawei sold off its Honor phone business to a state-led consortium.
December 23, 2020 – Ukraine announced its intent to join the Clean Network because "joining the Clean Network will pave the way for more private sector investment in Ukraine, in particular the innovation sector."
January 12, 2021 – Nauru announced it was joining the Clean Network.
January 14, 2021 – Palau joined the Clean Network.
January 15, 2021 – The European nation of Georgia signed an MOU to join the Clean Network.
April 12, 2021 – Harvard Business School published "The Clean Network and the Future of Global Technology Competition."
May 22, 2021 – In Ethiopia, a Vodafone-led group with financial backing from the International Development Finance Corp won a contract, against Huawei, to build a nationwide 5G-capable wireless network.

Impact on Huawei and similar companies

The United States issued warnings about the risks of reliance on Chinese telecommunication equipment, but acceptance of Huawei products increased around the world. According to Under Secretary Krach in late 2020, "when we looked at this six months ago it looked like Huawei was unstoppable.
It looked like they were going to run the table in Europe and everywhere else." At a February 18, 2020 press conference in London, Huawei's president of carrier business Ryan Ding announced, "We have 91 commercial 5G contracts worldwide, including 47 from Europe." Following the U.S. government's campaign to reduce international reliance on Chinese-made telecommunications equipment, Huawei's deals outside of China decreased from 91 to 12. According to the United States, Chinese national intelligence laws can be used to force companies like Huawei, ZTE, and other Chinese telecommunication equipment vendors to turn over any information or data upon the request of the Chinese Communist Party government. The U.S. State Department argues that these laws thus make Huawei and similar vendors "an arm of the People's Republic of China (PRC) surveillance state."

Huawei was founded in 1987 by Ren Zhengfei, a veteran of the People's Liberation Army's engineering corps. Many of the company's crucial first contracts were with the Chinese army. In 1996, the Chinese government banned competition from foreign suppliers, and Huawei may have received a $30 billion line of credit from the China Development Bank, along with other state-backed financing. This gave the company control over the Chinese domestic market and enabled it to fuel rapid international expansion by offering discounts. Forced technology transfers from foreign companies and several cases of technology theft also contributed to the company's growth, including the theft of router software from Cisco and a jury finding that Huawei committed industrial espionage against T-Mobile. According to the United States, the rise of Huawei was the fulfillment of decades of careful planning. Supported by the CCP, Huawei benefited from state protection against foreign competitors, billions in funding from the Chinese government, as well as forced technology transfers and well-documented instances of outright technology theft.
Secretary of State Mike Pompeo said, "Huawei was a trojan horse for Chinese intelligence and the CCP surveillance-state." The Chinese government rejects the accusation of bullying. In July, after U.S. regulators labeled Huawei and ZTE Corp. as threats to national security, a Foreign Ministry spokesman accused the U.S. of "abusing state power" to hurt Chinese companies "without any evidence." Huawei's U.S. website says: "Everything we develop and deliver to our customers is secure, trustworthy, and this has been consistent over a track record of 30 years." ZTE says it "attaches utmost importance to our customers' security values."

The U.S. claims that by building an alliance of democracies grounded in the "democratic values" embodied in the Trust Standards, it has garnered international support and bipartisan backing. On April 12, 2021, Harvard Business School published a case study, "The Clean Network and the Future of Global Technology Competition," noting that "the controversial program to some heralded a new era of multilateral, democratic governance of the internet and to others augured a 'splinternet' where market participants and countries had to choose between the U.S. and China."

Clean Network lines of effort

According to the U.S. Department of State, the following are the current lines of effort of the Clean Network:

Clean 5G Infrastructure

Clean 5G Infrastructure does not use any transmission, control, computing, or storage equipment from untrusted IT vendors, such as Huawei and ZTE, which are required by Chinese law to comply with directives of the CCP.

Clean Path

The Clean Path requires all network traffic from 5G standalone networks entering or exiting U.S. diplomatic facilities to transit only through equipment provided by trusted vendors, guarding against untrusted vendors by blocking their ability to intercept and disseminate sensitive information to malign actors.
As first described by Secretary Pompeo on April 29, 2020, the 5G Clean Path is an attempt to create an end-to-end communication path that does not use any equipment from untrusted IT vendors. This includes transmission, control, computing, or storage equipment. The concern raised by Secretary Pompeo is that those vendors are required to comply with directives of the Chinese Communist Party, including possibly revealing private or confidential information. Similarly, mobile data traffic entering U.S. diplomatic systems will be subject to stringent requirements to protect its security.

Clean Carrier

The United States of America seeks to ensure that untrusted People's Republic of China carriers are not directly connected with U.S. telecommunications networks, because the U.S. believes that such companies pose a danger to U.S. national security.

Clean Store

The U.S. believes that PRC apps in mobile phone app stores threaten its citizens' "privacy, proliferate viruses, censor content, and spread propaganda and disinformation." President Trump previously signed two Executive Orders addressing alleged threats posed by TikTok and WeChat, on the basis that TikTok and WeChat capture vast swathes of data from their users and are subject to Chinese jurisdiction, which may lead to them being compelled to turn over private information to the CCP. The U.S. stated its goal to protect the American people's sensitive personal and business information on their mobile phones from exploitation and theft.

Clean Apps

The United States defined "Clean Apps" as a program to prevent untrusted smartphone manufacturers from pre-installing or marketing untrusted apps in their app stores. The United States Department of State claimed that Huawei, an arm of the PRC surveillance state, is trading on the innovations and reputations of leading U.S. and foreign companies.
The department recommended that these companies remove their apps from Huawei's app store to ensure they are not partnering with a human rights abuser.

Clean Cloud

The United States defined the "Clean Cloud" as an effort "to prevent U.S. citizens' most sensitive personal information and businesses' most valuable intellectual property, including COVID-19 vaccine research, from being stored and processed on cloud-based systems built or operated by untrusted vendors, such as Alibaba, Baidu, China Mobile, China Telecom, and Tencent."

Clean Cable

The United States defined the "Clean Cable" as an effort to "ensure the undersea cables connecting [the United States] to the global internet are not subverted for intelligence gathering by the PRC at hyper scale." The U.S. also announced a goal to work with other nations to ensure that undersea cables in other locations around the world are built by trusted vendors.

Clean Telcos

In December 2020, the United States announced that more than 60 nations, representing two-thirds of the world's gross domestic product, and 180 telecom companies had publicly committed to the principles of the Clean Network. "Clean Telcos" include Reliance Jio in India, Orange in France, Telefónica in Spain, O2 in the United Kingdom, Telstra in Australia, SK Telecom and KT in South Korea, NTT Docomo and SoftBank in Japan, Hrvatski Telekom in Croatia, Tele2 in Estonia, Cosmote in Greece, Three in Ireland, LMT in Latvia, Ziggo in the Netherlands, Plus in Poland, Telefónica Deutschland in Germany, Vivo in Brazil, Chunghwa in Taiwan, TDC in Denmark, Singtel, Starhub, and M1 in Singapore, and all the major telcos in Canada, Japan, Taiwan, Luxembourg, and the United States.
Clean Trust standards principles

The Trust Principle Doctrine

The "Trust Principle" is based on democratic values, which include respect for the rule of law, property, press, human rights, and national sovereignty; protection of labor and the environment; and standards for transparency, integrity, and reciprocity. Under Secretary Krach deployed the "Trust Principle" doctrine in building the Clean Network Alliance of Democracies to protect global 5G infrastructure and to create a usable model for overcoming authoritarian economic threats. The "Trust Principle" doctrine serves as a new basis for 21st-century international relations and as a peaceful alternative to China's "Power Principle" of intimidation, retaliation, coercion, and retribution. Leon Panetta, the Secretary of Defense under President Barack Obama, said, "The Clean Network pioneered a trust-based model for countering authoritarian aggression across all areas of techno-economic competition."

Digital Trust Standard

The Center for Strategic and International Studies (CSIS) assembled a group of 25 experts from Asian, European, and U.S. companies and research centers. Their stated goal was to develop criteria to assess the trustworthiness of telecommunications equipment suppliers. The group produced a set of "Criteria for Security and Trust in Telecommunications Networks and Services" that are believed to provide governments and network operators additional tools to evaluate the trustworthiness and security of equipment and suppliers, in tandem with the European Union's 5G Toolbox and the Prague Proposals.

Prague Proposals

In May 2019, government officials from more than 30 countries met in Prague with representatives from the European Union, the North Atlantic Treaty Organization, and industry. They discussed the national security, economic, and commercial considerations that must be part of each country's evaluation of 5G vendors.
They produced a document called the Prague Proposals, which contains recommendations and principles for nations as they design, construct, and administer their 5G infrastructure.

EU 5G cybersecurity Toolbox

On September 30, 2020, U.S. Under Secretary of State Keith Krach and EU Commissioner Thierry Breton issued a joint statement on the synergies between the Clean Network and the EU 5G Clean Toolbox. The 5G cybersecurity Toolbox was released by the European Commission with EU Member States. At the time of release, the EC noted that European 5G suppliers are likely to comply with its directives. The Toolbox provides definitions and measurements on how to avoid the use of "high-risk" suppliers in the network, which is intended to include the Radio Access Network. Since January 2020, multiple EU Member States have announced steps to fulfill the 5G cybersecurity Toolbox's recommendations.

Implementing the 5G EU Clean Toolbox

In March 2019, a number of EU Heads of State or Government called for a joint approach to the security of 5G networks. Following this, the European Commission adopted the Commission Recommendation on the Cybersecurity of 5G, which set out a number of actions at the national and Union level to strengthen the cybersecurity of 5G networks.

Core principles

The proponents of the Clean Network state that Clean Network partnerships are grounded in democratic values that form the basis of trust: integrity, accountability, transparency, reciprocity, and respect for the rule of law, property, labor, sovereignty, human rights, and the planet. This creates a "high-integrity, level playing field for reliable collaboration with the understanding that there is no prosperity without liberty."

Human rights and clean labor practices

A key tenet of the Clean Network's Trust Principles is human rights.
Huawei is allegedly the backbone of the CCP's surveillance state and is accused of assisting human rights abuses in the mass detention of Uyghurs in the Xinjiang internment camps and of employing forced Uyghur labor in its supply chain.

Clean Network collaboration

European Union

On September 30, 2020, U.S. Under Secretary of State Keith Krach and EU Commissioner for Internal Market Thierry Breton met in Brussels to discuss cooperation in securing telecommunications infrastructure. They also sought ways to further advance U.S.-EU digital cooperation and secure technology supply chains, which they stated is essential to protect peoples' personal data, companies' intellectual property, and national security. According to their discussions, both the Clean Network program and the 5G Toolbox share the same goal of developing, deploying, and commercializing 5G networks based on the principles of free competition, transparency, and the rule of law. Under Secretary Keith Krach and Commissioner Thierry Breton urged stakeholders to carefully weigh the long-term impact of allowing "high-risk suppliers" access, directly or indirectly, to their 5G networks when building their telecommunications infrastructure and services.

NATO

On September 30, 2020, NATO Deputy Secretary General Mircea Geoana noted that 25 NATO countries have committed to being "Clean Countries". He emphasized the strategic importance of having a secure, non-fractured 5G Clean NATO Network because, he said, "the Alliance is only as strong as its weakest link." He also praised the U.S.-EU joint statement on the synergies between the Clean Network and the EU 5G Clean Toolbox.
Three Seas Initiative

On October 21, 2020, at an annual conference in Estonia, the Three Seas Initiative announced its support for the Clean Network. As of late 2020, 27 NATO members on both sides of the Atlantic had committed to being "Clean Countries" by allowing only trusted vendors in their 5G networks. Between September 21 and October 4, 2020, Under Secretary of State Keith Krach visited eight European countries, including EU and NATO headquarters, to discuss the goal of building a Transatlantic Clean Network. Under Secretary of State Krach said, "Countries and companies are terrified of China's retaliation. The CCP cannot retaliate against everyone. That is where the EU comes in, the Transatlantic Alliance comes in, NATO comes in. The bottom line is the tide has turned. Countries and companies now understand that the central issue is not about technology, but trust."

Clean Network developments

The "5G trifecta"

The Washington Times described U.S. Under Secretary of State Keith Krach's initial move to put Huawei on the defensive as the "5G trifecta in competition with China", based on the onshoring of Taiwan Semiconductor Manufacturing Company's (TSMC) semiconductors, the cutoff of Huawei's access to advanced semiconductors, and the roll-out of the Clean Path strategy. The 5G trifecta served as the launchpad for the Clean Network.

First, Krach's team partnered with the Commerce Department to secure an announcement from TSMC that it will build the world's most advanced five-nanometer chip fabrication facility in Arizona. This was the largest onshoring in American history and was seen by the United States as a leap forward in securing the semiconductor supply chain and 5G security for the United States and its partners. It provides a "Made in America" source for chips powering everything from smartphones, to 5G base stations, to advanced artificial intelligence.
The New York Times called the onshoring announcement of the $12 billion semiconductor plant a win for the Trump administration, which has called for building up U.S. manufacturing capabilities and has criticized the fragility of a tech supply chain heavily centered in China. The impact catalyzed a critical piece of legislation that Krach championed, the Chips for America Act, a bipartisan, bicameral bill that will help bring semiconductor production vital to national security back to the United States. Republican Senator John Cornyn and Democratic Senator Mark Warner said, "America's innovation in semiconductors undergirds our entire innovation economy, driving the advances we see in autonomous vehicles, supercomputing, IoT devices and more. Unfortunately, our complacency has allowed our adversaries to catch up." The act reinvests in this national priority by providing targeted tax incentives for advanced manufacturing, funding research in microelectronics, and emphasizing the need for multilateral engagement with allies in bringing greater attention to security threats to the global supply chain. The bill passed unanimously in the House and 96–4 in the Senate.

Second, the State Department successfully launched its 5G Clean Path initiative, which requires all 5G data entering or exiting U.S. diplomatic facilities to transit only through trusted equipment, and never through equipment from untrusted vendors such as Huawei and ZTE. Many companies and countries, including Japan, Albania and Taiwan, have already adopted the Clean Path. Polish Prime Minister Mateusz Morawiecki called on all countries and companies, especially the Europeans, to adopt a Clean Path to secure their 5G networks. U.S. Ambassador to NATO Kay Bailey Hutchison has called for a 5G Clean NATO Network with a Clean Path feeding into its military bases.
The Clean Path stratagem raised the cost for telco operators contemplating Huawei 5G by creating a critical mass of network traffic from their customers that required all sources feeding into it to run on only trusted equipment. Third, Krach's State Department team succeeded in expanding the Foreign Direct Product Rule to prevent Huawei from "dodging" U.S. export controls. Senate Democratic Leader Chuck Schumer (D-NY) and Senator Tom Cotton (R-AR), joined by over a dozen other senators, sent a bipartisan letter to President Trump because they were concerned about the Commerce Department's statement that it would issue licenses allowing U.S. firms to conduct business with Huawei. The expanded rule effectively blocked Huawei's access to the advanced semiconductors required for 5G and sophisticated smartphones, leading to Huawei selling its Honor budget smartphone brand to a Chinese consortium six months later.

Diplomatic initiatives

Latin America

As of November 2020, the United States announced that more than 50 nations were committed to the principles of the Clean Network. On November 10, 2020, Brazil became the 50th member. On November 22, Secretary of State Mike Pompeo announced that Ecuador and the Dominican Republic had also joined the Clean Network. Under Secretary Krach said, "the participation of the Dominican Republic in the Clean Network paves the way for the expansion of investments by the US private sector and strengthens mutual guarantees for like-minded partners in the region and other parts of the world."

Other nations

Spain: Committed to implementing the EU 5G Clean Toolbox.
Portugal: Committed to implementing the EU 5G Clean Toolbox.
Slovakia: Signed a Joint Declaration on 5G Security.
Slovenia: Signed a Joint Declaration on 5G Security.
Albania: Prime Minister Edi Rama stated, "Albania sees its role in the region not just as a constructive role in building peace and strengthening dialogue, but as a proactive role in the 5G Clean Path."
In addition to Albania's commitment to the 5G Clean Path, Under Secretary Krach and Albania's Finance Minister Anila Denaj signed a Memorandum of Economic Cooperation, laying the foundation for a memorandum of understanding on 5G security. Nauru joined in January 2021.

The Clean Network under the Biden administration

The Clean Network has received support from members of both major U.S. political parties. At the Munich Security Conference in February 2020, U.S. House Speaker Nancy Pelosi (D-CA) warned European countries they will "choose autocracy over democracy" if they let Huawei take part in rolling out 5G technology, in a sign of the bipartisan US political pressure over the Chinese company. She added it would be the "most insidious form of aggression" if 5G communications were to come under the control of an "anti-democratic government". According to the Wall Street Journal, U.S. President Joe Biden was expected to continue the Clean Network's ideals by continuing to "advance Washington's tough, new attitude toward China but with an approach that relies more on pressure from U.S. allies, sanctions and other tools to shape Beijing's behavior." Another article predicted that Biden would limit China's influence by continuing to support the Clean Network plan of building alliances with allies, partners, and like-minded countries to promote the values of human rights, democratic principles, and market economies. As of December 2020, Biden had not yet laid out his detailed China strategy, but his aides were expected to adopt the State Department's Clean Network and expand it to a broader set of commercial and strategic pacts to put pressure on Beijing. According to Bloomberg, "there's a good chance the Biden administration will pick up where Krach leaves off, assuming he isn't asked to stay on." In April 2022, Kurt Campbell, Biden's "Asia Chief" and Head of Indo-Pacific Affairs at the U.S.
National Security Council, stated, "Almost all the work that Keith Krach did at the State Department, including trusted networks, the Blue Dot Network, etc., have been followed on in the Biden Administration and, in many respects, that's the highest tribute." Biden's pick for Secretary of State, Antony Blinken, laid out his strategy for pushing back on China's various behaviors at a Hudson Institute event, where he promised to rally allies and put American values back at the center of foreign policy toward China: "When we're working with allies and partners, it's 50 or 60% of GDP. That's a lot more weight and a lot harder for China to ignore. China sees alliances as a core source of strength for the United States, something they don't share and enjoy." The Clean Network includes 53 countries, representing 66% of the world's GDP, plus 180 telcos.

The future of the Clean Network

According to the United States, the Clean Network is the first step in a vision of constructing a network of networks: like-minded countries, companies and civil society that operate by a set of trust principles for all areas of collaboration. It is an alliance of democracies with the goal of creating an equitable and unifying geo-economic network for multiple areas of collaboration. Additional "clean initiatives" have been announced: Clean Path, Clean Carrier, Clean Store, Clean App, Clean Cable, and Clean Cloud. In his Senate testimony, Under Secretary of State Keith Krach shared his vision for the next wave, which includes Clean Data Centers, Clean Currency, Clean Data, Clean Drones, Clean Security, and Clean Things (i.e., the Internet of Things). The Clean Network is a collection of networks that consists of multiple forms and lines of economic collaboration organized by sectors, regions, and industries.
Forms of collaboration include commerce, investment, supply chains, money flows, research, safeguarding assets, logistics, procurement, trade, policy, and standards. The lines of collaboration include sectors such as energy, healthcare, digital, agriculture, manufacturing, transportation, minerals, infrastructure, finance, space, and security. The Clean Network team's strategy is based on the idea that the fastest way to construct a network is to build a "network of networks", as evidenced by integrating with the Clean EU 5G Cybersecurity Toolbox, collaborating on the Clean 5G NATO Network concept, and the recent endorsement by the Three Seas Initiative, a geo-economic network that comprises 12 Eastern European countries. The Clean Network team's vision is to utilize Metcalfe's law for maximum network effect: as the number of members and areas of collaboration grows and the brand builds, the power of the Clean Network grows with the square of its size. Krach has stated that the power of the Clean Network enables it to accomplish its democratic mission, and that the core principles enable the fast, frictionless, and trusted collaboration critical to the members' shared future. According to the U.S. State Department, the next dimension of adjacent areas outside of tech has already begun: Clean Infrastructure and Clean Financing, which is called the Blue Dot Network; Clean Minerals, which is called the Energy Resource Governance Initiative; and Clean Supply Chains with Clean Labor Practices. The Clean Network has the potential to be used as an umbrella network for existing regional initiatives like the Indo-Pacific Strategy, the Three Seas Initiative, the Transatlantic Partnership, the EU-Asia Connectivity Initiative and América Crece. In April 2022, the White House announced that the United States, with 60 partners from around the globe, launched the Declaration for the Future of the Internet.
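Metcalfe's law, invoked above, values a communications network in proportion to the number of possible pairwise connections among its members, which grows roughly with the square of membership. A minimal sketch of that arithmetic (the function name and sample sizes are illustrative, not from the source):

```python
def pairwise_links(n: int) -> int:
    """Unique member-to-member links in a network of n members:
    n * (n - 1) / 2, the quantity Metcalfe's law takes as a proxy for value."""
    return n * (n - 1) // 2

# Each new member adds a link to every existing member, so the
# link count grows far faster than membership itself.
print(pairwise_links(10))   # 45
print(pairwise_links(30))   # 435
print(pairwise_links(53))   # 1378
```

At the 53-country membership the article cites, the network already contains 1,378 possible bilateral relationships, which is the intuition behind the "network of networks" strategy.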
Reactions

Chinese foreign ministry spokesman Zhao Lijian referred to the Clean Network as a "US surveillance network" and a "consolidation of US digital hegemony". Zhao stated that "In the era of globalization, 5G development should be jointly developed and shared by all countries. The practice of politicizing the 5G issue and creating small circles is not conducive to the development of 5G, goes against the principle of fair competition, and does not conform to the common interests of the international community." Krach was nominated for the 2022 Nobel Peace Prize for his "Trust Principle" doctrine in developing the Clean Network Alliance of Democracies. Conversely, Krach was among 28 former Trump administration officials sanctioned by the Chinese government on January 20, 2021, a move which the incoming Biden administration described as "unproductive and cynical."

A March 2021 GLOBSEC study noted that the United States' effort to contain China in Europe has not been met with universal support in the region. Ukraine and Serbia both signed deals with Huawei despite American pressure, while Georgia, Armenia, and Azerbaijan all attempted to navigate a delicate balance between geopolitical rivalries. Xuewen Gu of the University of Bonn's Center for Global Studies noted that the US did not make great headway in convincing its European allies that Huawei poses an uncontrollable security threat, with several of the countries listed in the Clean Network having already broken their commitments to the initiative. Gu argued that this is due to European leaders' suspicions of closed networks, such as China's own Great Firewall, and that the Clean Network would run contrary to the spirit of an open, worldwide internet. Gu also noted European concerns that the Clean Network could potentially split the world's cyberspace in half, forcing European governments to take sides between China and the USA.
Der Spiegel's Patrick Beuth referred to Mike Pompeo's announcement of the Clean Network as "noise" and "a pathetic document in contemporary history", and criticized Pompeo for characterizing Chinese networks as "unclean" and as networks that "spread viruses". Andy Müller-Maguhn gave a statement in June 2021 at a hearing on "Innovative Technologies and Standardization in a Geopolitical Perspective" of the Committee for Foreign Affairs of the German federal parliament. With regard to the Clean Network initiative, he drew a connection from the NOBUS strategy to a renewed America First policy under Trump, a policy from which the Biden administration had not detached itself. In his view, the compartmentalization of the network would work against the principle of net neutrality and towards the balkanization of the internet, and the limiting of communication between parties would increase the likelihood of conflict and aroused in him associations with war preparations. A number of countries and telecom providers listed in the Clean Network however did not formally ban cooperation with Huawei. The government of Japan for instance noted that it did not join US efforts to ban Huawei, but would undertake its own steps to address national security concerns. Vodafone in Spain utilizes Huawei in its 5G network alongside Ericsson, and in June 2020 the Spanish government granted Huawei a security clearance in working with Telefónica in building its core 5G network. While the three major telecommunications providers in Portugal (Vodafone, NOS, and Altice) excluded Huawei equipment from their core 5G networks without stated reasons, the Portuguese government has not banned Huawei and noted that such decisions have "nothing to do with the options or impositions of the Portuguese government". Telefónica Germany announced that it would continue to cooperate with Huawei in its 4G and non-core 5G networks, alongside Ericsson for its 5G core networks.
Netherlands' KPN announced that it had struck a deal with Huawei in building its non-core 5G network, as well as switching from Ericsson to Huawei in its 4G network, with the company noting that removing Huawei equipment would be costly and disruptive. Following the end of US President Donald Trump's term in January 2021, the then President of Brazil and Trump ally, Jair Bolsonaro, reversed course and allowed Huawei to participate in Brazil's 5G auction due to opposition within the government and industry as well as mounting financial costs. In June 2023, the European Union was contemplating the implementation of a mandatory ban on member countries engaging with companies that were identified as posing security threats within their 5G infrastructure, which included companies such as Huawei.
The Clean Network
https://en.wikipedia.org/wiki/Chinese%20salami%20slicing%20strategy
China's salami slicing is a geopolitical strategy involving a series of small steps allegedly taken by the government of the People's Republic of China that accumulate into a larger gain which would have been difficult or unlawful to achieve all at once. The concept, notably debated in the publications of Australia's Lowy Institute, has been defended by Brahma Chellaney, Jasjit Singh, Bipin Rawat and the Observer Research Foundation in India, and by the United States Institute of Peace, Bonnie S. Glaser (Center for Strategic and International Studies) and Erik Voeten (The Washington Post) in the US, while its detractors include H. S. Panag in India and Linda Jakobson. Advocates of the term have cited examples such as the territorial disputes in the South China Sea and along the Sino-Indian border. Modus operandi According to Indian strategist and writer Brahma Chellaney, "salami slicing" rather than overt aggression is China's favored strategy because none of its series of small actions serves as a casus belli by itself. China slices very thinly, camouflaging offense as defense, and eventually gains a larger strategic advantage. This throws its targets off balance by presenting a Hobson's choice: either silently suffer or risk an expensive and dangerous war with China. This can also place the blame and burden of starting a war on the targets. Dimensions Proponents of the salami slicing strategy allege that China has used it in political, economic, and military realms. India Indian authors accuse China of using piecemeal claims to expand its territory at India's expense. Brahma Chellaney has cited China's incorporation of Aksai Chin in a step-by-step process between 1952 and 1964, its 2020–2021 border skirmishes with India, and its claims in Tajikistan's Pamir Mountains as examples.
The Five Fingers of Tibet, involving Nepal and Bhutan, as well as the String of Pearls in the Indian Ocean have also been described as manifestations of China's salami slicing. South China Sea According to Chellaney, China expands its exclusive economic zone (EEZ) in the South China Sea at the expense of other nations' EEZs through its nine-dash line claims. It took control of the Paracel Islands in 1974, Johnson Reef in 1988, Mischief Reef in 1995, and Scarborough Shoal in 2012. China has installed military infrastructure in these areas and deployed the China Maritime Safety Administration, Fisheries Law Enforcement Command, and the State Oceanic Administration, agencies that Chellaney describes as paramilitary in nature. Extension of the concept Retired Indian Brigadier S. K. Chatterji extended the salami slicing concept to China's Belt and Road Initiative (BRI), Confucius Institutes, allegations of technology theft, involvement in the World Health Organization, activities in Hong Kong and Tibet, and diplomatic support for North Korea and Pakistan. BRI and debt-trap diplomacy Some critics have claimed that the Belt and Road Initiative (BRI) is having the effect of pressuring Papua New Guinea, Sri Lanka, Kenya, Djibouti, Egypt, Ethiopia, and other nations who are unable to pay their debts to hand over their infrastructure and resources to China. According to Chellaney, this is "clearly part of China's geostrategic vision". China's overseas development policy has been called debt-trap diplomacy because once indebted economies fail to service their loans, they are said to be pressured into supporting China's geostrategic interests. However, other analysts such as the Lowy Institute argue that the BRI is not the main cause of failed projects, while the Rhodium Group found that "asset seizures are a very rare occurrence" and that debt write-off is the most common outcome.
Some governments have accused the Belt and Road Initiative of being "neocolonial" due to what they allege is China's practice of debt-trap diplomacy to fund the initiative's infrastructure projects in Pakistan, Sri Lanka and the Maldives. China contends that the initiative has provided markets for commodities, improved prices of resources and thereby reduced inequalities in exchange, improved infrastructure, created employment, stimulated industrialization, and expanded technology transfer, thereby benefiting host countries. Allegations of technology theft China is accused by critics of the theft of "cutting-edge technology from global leaders in diverse fields", including U.S. military technology, classified information, and trade secrets of American companies. It reportedly uses lawful as well as covert methods, leveraging a network of existing scientific, academic and business contacts such as the Thousand Talents Plan. The German Federal Ministry of the Interior estimates that Chinese economic espionage could be costing Germany between 20 and 50 billion euros annually. Spies are reportedly targeting mid- and small-scale companies that do not have security regimes as strong as those of larger corporations. Lobbying and influence operations China is accused of nominating persons to various organizations with a view to influencing the organizational culture and values to the advantage of China's national interests. Examples cited include the promotion of Chinese officials to the UN Food and Agriculture Organization, which critics have claimed advances Chinese national interests. The Confucius Institutes have also been claimed to advance Chinese state interests.
China is alleged to have attempted foreign electoral intervention in the domestic political elections of other nations, including the United States, although these claims have not been supported by evidence. China has been accused of interference in elections on Taiwan, and has been accused of influencing Australian members of Parliament. Relations between China and Australia deteriorated after 2018 due to growing concerns about Chinese political influence in various sectors of Australian society, including in government, universities and media, as well as China's stance on the South China Sea dispute. Consequently, the Australian Coalition Government announced plans to ban foreign donations to Australian political parties and activist groups. Australia has empowered the Australian Security Intelligence Organisation, the Australian Federal Police (AFP) and the Attorney-General's Department to target China-linked entities and people under new legislation to combat Chinese influence operations, including the alleged deployment of the United Front Work Department of the Chinese Communist Party (CCP). The United Front Work Department is accused of lobbying policy makers outside of China to enact pro-CCP policies, targeting people or entities outside the CCP, especially in the overseas Chinese community, who hold social, commercial, or academic influence, or who represent interest groups. Through its efforts, the UFWD seeks to ensure that these individuals and groups are supportive of or useful to CCP interests and that potential critics remain divided. In 2005, a pair of Chinese dissidents claimed that China may have up to 1,000 intelligence agents in Canada. The head of the Canadian Security Intelligence Service, Richard Fadden, implied in a television interview that various Canadian politicians at provincial and municipal levels had ties to Chinese intelligence, a statement which he withdrew a few days later.
Usage of the phrase In 1996, a United States Institute of Peace report on the South China Sea dispute stated: "[...] analysts point to Chinese 'salami tactics,' in which China tests the other claimants through aggressive actions, then backs off when it meets significant resistance." In 2001, Jasjit Singh of the IDSA wrote: "Salami-slicing of the adversary's territory where each slice does not attract a major response, and yet the process over a time would result in gains of territory. China's strategy of salami slicing during the 1950s on our northern frontiers [...]". In 2012, Robert Haddick described "salami-slicing" as "the slow accumulation of small actions, none of which is a casus belli, but which add up over time to a major strategic change [...] The goal of Beijing's salami-slicing would be to gradually accumulate, through small but persistent acts, evidence of China's enduring presence in its claimed territory [...]." In December 2013, Erik Voeten wrote in a Washington Post article concerning China's salami tactics, with reference to the "extension of its air defense zone over the East China Sea": "The key to salami tactics' effectiveness is that the individual transgressions are small enough not to evoke a response", going on to ask, "So how should the United States respond in this case?" In January 2014, Bonnie S. Glaser, a China expert at the Center for Strategic and International Studies, made a statement before the US House Armed Services Subcommittee on Seapower and Projection Forces and the House Foreign Affairs Subcommittee on Asia Pacific: "How the US responds to China's growing propensity to use coercion, bullying and salami-slicing tactics to secure its maritime interests is increasingly viewed as the key measure of success of the US rebalance to Asia. [...]
China thus seeks to employ a charm offensive with the majority of its neighbors while continuing its salami-slicing tactics to advance its territorial and maritime claims and pressing its interpretation of permissible military activities in its EEZ." In March 2014, Darshana M. Baruah, a Junior Fellow at ORF and a nonresident scholar at the Carnegie Endowment for International Peace, wrote: "As Beijing's 'salami slicing' strategy is gathering speed it is more important than ever for ASEAN to show its solidarity and stand up to its bigger neighbour, China." In 2017, the Indian Chief of the Army Staff, General Bipin Rawat, used the phrase in a statement: "As far as northern adversary is concerned, the flexing of muscle has started. The salami slicing, taking over territory in a very gradual manner, testing our limits of threshold is something we have to be wary about and remain prepared for situations emerging which could gradually emerge into conflict." Critique In 2019, retired Indian Lieutenant General H. S. Panag wrote that the phrase "salami slicing" as used "by military scholars as well as Army Chief General Bipin Rawat in relation to the Line of Actual Control — is a misnomer". He argues that whatever territory China needed to annex had been annexed prior to 1962. While there have been territorial claims by China after 1962, they are made more to "embarrass" India than as a form of "permanent salami slicing". Linda Jakobson, a political scientist, has argued that rather than by salami-slicing-based territorial expansion, "China's decision-making can be explained by bureaucratic competition between China's various maritime agencies." Bonnie S. Glaser argues against this viewpoint, saying "bureaucratic competition among numerous maritime actors [...] is probably not the biggest source of instability. Rather, China's determination to advance its sovereignty claims and expand its control over the South China Sea is the primary challenge."
See also Salami slicing tactics (politics) Grey-zone (international relations) East China Sea EEZ disputes Cabbage tactics Foreign policy of China Further reading Ronak Gopaldas (3 October 2018). China's salami slicing takes root in Africa. Institute for Security Studies Africa. Brahma Chellaney (24 December 2013). The Chinese art of creeping warfare. Livemint
Chinese salami slicing strategy
https://en.wikipedia.org/wiki/Neuropeptide%20W
Neuropeptide W or preprotein L8 is a short human neuropeptide. Neuropeptide W acts as a ligand for two neuropeptide B/W receptors, NPBWR1 and NPBWR2, which belong to the GPCR family of alpha-helical transmembrane proteins. Structure There are two forms of neuropeptide W, whose precursor is encoded by the NPW gene. The 23-amino-acid form (neuropeptide W-23) is the one that activates the receptors, whereas the C-terminally extended form (neuropeptide W-30) is less effective. These isoforms have been demonstrated in different species such as rat, human, chicken, mouse and pig. Neuropeptide W owes its name to the tryptophan residues (single-letter code W) located at both the N-terminal and C-terminal ends of its two mature forms. Location Neuropeptide W was first identified in porcine hypothalamus in 2002. In humans, it is highly concentrated in neurons of the substantia nigra and the spinal cord, and less strongly expressed in neurons of the hippocampus, hypothalamus, amygdala, parietal cortex and cerebellum. It can also be found in some peripheral tissues such as the trachea, stomach, liver, kidney, prostate, uterus and ovary, although tissue distribution information is still incomplete. For the moment, differences in neuropeptide W location between the studied species (rat, mouse, chicken, pig) appear slight, even though quantities differ between organs. Function Neuropeptide W in CNS In the central nervous system, neuropeptide W is implicated in feeding activity and energy metabolism, in the adrenal axis stress response, and in the regulation of neuroendocrine functions such as hormone release from the pituitary gland, though it is not considered an inhibitory or regulatory factor in the latter. Neuropeptide W may also be involved in autonomic regulation, pain sensation, emotions, anxiety and fear. It seems that regulation of feeding behaviour and energy metabolism is the primary function of the neuropeptide W signaling system.
On the one hand, neuropeptide W regulates the endocrine signals aimed at the anterior hypophysis. This stimulates both the need for water (thirst) and the need for food (hunger). On the other hand, it plays a compensatory role in energy metabolism. Regarding the adrenal axis response to stress, it plays a relevant role as a messenger in brain networks that help activate the HPA (hypothalamic–pituitary–adrenal) axis, which produces the response to stress. An example of its neuroendocrine functions is the regulation of cortisol secretion through the activation or deactivation of the neuropeptide B/W receptors. Moreover, neuropeptide W is found in an area that is connected with preautonomic centers in the brainstem and spinal cord; because of this location, it may affect some cardiovascular functions. Infusion of neuropeptide W has been shown to suppress food intake and body weight and to increase heat production and body temperature, consistent with its role as an endogenous catabolic signaling molecule. Neuropeptide W in peripheral tissues Nevertheless, the function and physiological role of peripheral neuropeptide W are not clearly known.
Neuropeptide W
https://en.wikipedia.org/wiki/Iodine%20nitrate
Iodine nitrate is a chemical compound with formula INO3. It is a covalent molecule with the structure I–O–NO2. Preparation The compound was first produced by the reaction of mercury(II) nitrate and iodine in ether. Other nitrate salts and solvents can also be used. As a gas it is slightly unstable, decaying with a first-order rate constant of 3.2×10−2 s−1. The possible formation of this chemical in the atmosphere and its ability to destroy ozone have been studied. Potential reactions in this context are: IONO2 → IO + NO2 IONO2 → I + NO3 I + O3 → IO + O2
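Treating the gas-phase decay as a simple first-order process, the magnitude of the rate constant quoted above converts directly to a half-life. A minimal sketch, assuming only first-order kinetics:

```python
import math

k = 3.2e-2                # s^-1, magnitude of the gas-phase decay rate constant
t_half = math.log(2) / k  # half-life of a first-order decay, t_1/2 = ln(2)/k

print(f"t_1/2 = {t_half:.1f} s")  # roughly 22 seconds
```

So at this rate, half of a gaseous INO3 sample is gone in well under a minute, consistent with the description "slightly unstable".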
Iodine nitrate
https://en.wikipedia.org/wiki/SONiC%20%28operating%20system%29
Software for Open Networking in the Cloud, abbreviated and stylized as SONiC, is a free and open-source network operating system based on Linux. It was originally developed by Microsoft and the Open Compute Project. In 2022, Microsoft ceded oversight of the project to the Linux Foundation, which will continue to work with the Open Compute Project on continued ecosystem and developer growth. SONiC includes the networking software components necessary for a fully functional L3 device and was designed to meet the requirements of a cloud data center. It allows cloud operators to share the same software stack across hardware from different switch vendors and works on over 100 different platforms. Multiple companies offer enterprise service and support for SONiC. Overview SONiC was developed and open-sourced by Microsoft in 2016. The software decouples the network software from the underlying hardware and is built on the Switch Abstraction Interface API. It runs on network switches and ASICs from multiple vendors. Notable supported network features include the Border Gateway Protocol (BGP), remote direct memory access (RDMA), QoS, and various other Ethernet/IP technologies. Much of the protocol support is provided through inclusion of the FRRouting suite of routing daemons. The SONiC community includes cloud providers, service providers, and silicon and component suppliers, as well as networking hardware OEMs and ODMs; it has more than 850 members. The source code is licensed under a mix of open-source licenses, including the GNU General Public License and the Apache License, and is available on GitHub.
SONiC (operating system)
https://en.wikipedia.org/wiki/Free%20matroid
In mathematics, the free matroid over a given ground set E is the matroid in which the independent sets are all subsets of E. It is a special case of a uniform matroid. The unique basis of this matroid is the ground set itself, E. Among matroids on E, the free matroid on E has the most independent sets, the highest rank, and the fewest circuits. Free extension of a matroid The free extension of a matroid M by some element e, denoted M + e, is a matroid whose elements are the elements of M plus the new element e, and: Its circuits are the circuits of M plus the sets B ∪ {e} for all bases B of M. Equivalently, its independent sets are the independent sets of M plus the sets I ∪ {e} for all independent sets I of M that are not bases. Equivalently, its bases are the bases of M plus the sets I ∪ {e} for all independent sets I of M of size r − 1, where r is the rank of M.
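The independence conditions above can be checked directly in code. A minimal sketch for the special case where M is itself the free matroid on E (so its rank is |E|); the function names are illustrative:

```python
def free_independent(S, E):
    """In the free matroid on ground set E, every subset of E is independent."""
    return set(S) <= set(E)

def extension_independent(S, E, e):
    """Independence in the free extension M + e of the free matroid on E.

    A set not containing e is independent iff it is independent in M.
    A set containing e is independent iff S - {e} is independent in M
    and is not a basis (here: has fewer than rank(M) = |E| elements)."""
    S = set(S)
    if e not in S:
        return free_independent(S, E)
    rest = S - {e}
    return free_independent(rest, E) and len(rest) < len(E)

E = {1, 2, 3}
assert extension_independent({1, 2, 'e'}, E, 'e')         # {1,2} is not a basis
assert not extension_independent({1, 2, 3, 'e'}, E, 'e')  # {1,2,3} is the basis
```

The second assertion reflects the fact that adding e does not raise the rank: the old basis together with e is dependent, i.e., it is a circuit of M + e.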
Free matroid
https://en.wikipedia.org/wiki/Pakistani%20women%20in%20STEM
While STEM (science, technology, engineering and mathematics) fields all over the world are dominated by men, the number of Pakistani women in STEM is particularly low, as Pakistan has one of the widest gender gaps in these fields. However, over time, some Pakistani women have emerged as scientists in fields such as physics, biology and computer science. Gender gap in Pakistan Pakistan has one of the highest gender gaps in the world, and is the third-lowest performer in gender parity according to a report published by the World Economic Forum in 2020. The low literacy rate of women in Pakistan, despite women making up almost half the population, is one of the factors behind the high gender gap in STEM fields. This literacy rate is even lower in science and technology. Facts According to UNESCO, among students enrolled in bachelor's degrees, 47% are women while 53% are men. The share of women pursuing doctoral studies is only 36%, while that of men is 64%. There is also a significant gender gap in the research sector, with women making up only 34% of researchers. Among university students, the field of natural sciences is reported to have only 40% women students, while medical sciences have 45%, engineering has 21% and agricultural sciences have only 12%. Engineering gender gap According to the World Economic Forum, only 4.9% of engineering jobs are held by women in Pakistan. The numbers are particularly low in the energy sector, with only 3% female engineers in the power transmission sector. The field of artificial intelligence has also seen few women engineers, with women making up only 22% of the workforce. Bridging the gap Efforts have been made by the government of Pakistan, as well as by women who are part of STEM fields, to reduce the wide gender gap in STEM. Since 2018, the government of Pakistan has worked to improve wage equality and its position on the educational attainment index.
Workplace sexual harassment laws have also been enacted to encourage women to join the workforce in both STEM and non-STEM fields. Many private organizations, such as Women in Tech and Women Engineers Pakistan, have been founded to encourage STEM education among women. Notable women Some notable Pakistani women contributing to STEM are: Nergis Mavalvala: a Pakistani-American physicist known for her breakthrough research in gravitational wave detection in 2015. She also received the prestigious MacArthur Foundation Award in 2010. Mavalvala became the first female dean of the School of Science at MIT in 2020. Tasneem Zehra Husain: a theoretical physicist and among the few Pakistani women to obtain a doctorate in physics. She is also the first Pakistani woman working on string theory. Husain has represented Pakistan at the Meeting of Nobel Laureates in Lindau, Germany, and led the Pakistan team to the World Year of Physics (WYP) Launch Conference in Paris. Asma Zaheer: a computer scientist and the first Pakistani to receive "the best of IBM award, 2019". Azra Quraishi: a botanist credited with increasing potato yield in Pakistan by 5%, which improved Pakistan's position in trade and brought her national recognition. She was awarded the Norman Borlaug Award in 1997. Arfa Karim: a computer prodigy who became the youngest Microsoft Certified Professional in 2004. She was personally invited by Bill Gates to the Microsoft headquarters in the USA. Arfa was also named in the Guinness Book of World Records. Mariam Sultana: an astrophysicist, who became the first female astrophysicist in Pakistan after obtaining her PhD in 2012. Talat Shahnaz Rahman: a condensed matter physicist. Her research topics include surface phenomena and excited media, including catalysis, vibrational dynamics, and magnetic excitations. Aban Markar Kabraji: a biologist and scientist of Parsi origin.
She is regional director of the Asia Regional Office of IUCN, the International Union for Conservation of Nature. She was awarded the Tamgha-e-Imtiaz for her outstanding contribution and dedication to the cause of environmental protection, sustainable development and nature conservation. Asifa Akhtar: a biologist who has worked in the area of chromosome regulation. She became the first international female vice president of the biology and medicine section of Germany's prestigious Max Planck Society. Asifa has also been awarded the European Life Science Organization (ELSO) award. Farzana Aslam: a physicist and astronomer. She has worked on polymer composites sensitized with semiconductor nanoparticles, and on photon and laser sciences. For her contributions, Farzana was awarded a commendation award at the Photon 04 conference held by the Institute of Physics in Glasgow.
Pakistani women in STEM
https://en.wikipedia.org/wiki/Women%20in%20Technology%20and%20Science
Women in Technology and Science (WITS) is an Irish organisation representing women working in science and technology. It accepts members from industry and academia, and of all ages, from students to professionals. It was founded in 1990 by Mary Mulvihill. WITS publishes a monthly e-mail newsletter, WITSWORDS. The organisation has undertaken various initiatives, including compiling the WITS Talent Bank, a list of more than 150 women working in science and technology, and the Re-Enter initiative to support women returning to the workforce after a career break. A group from WITS attended a reception hosted by President Michael D. Higgins at Áras an Uachtaráin to recognise women in science in January 2016. In November 2020 the organisation celebrated 30 years of activity. WITS is run by an elected executive committee. Chairs of WITS have included Mary Mulvihill, Dr Ena Prosser, Sadhbh McCarthy, Dr Marion Palmer, Mary Carroll and Julie Hogan. Members include Aoibhinn Ní Shúilleabháin, Jane Grimson and Norah Patten. External links WITS website
Women in Technology and Science
https://en.wikipedia.org/wiki/Taylor%E2%80%93von%20Neumann%E2%80%93Sedov%20blast%20wave
The Taylor–von Neumann–Sedov blast wave (sometimes referred to as the Sedov–von Neumann–Taylor blast wave) is the blast wave induced by a strong explosion. The blast wave was described by a self-similar solution independently by G. I. Taylor, John von Neumann and Leonid Sedov during World War II. History G. I. Taylor was told by the British Ministry of Home Security that it might be possible to produce a bomb in which a very large amount of energy would be released by nuclear fission, and was asked to report on the effect of such weapons. Taylor presented his results on June 27, 1941. At the same time, in the United States, John von Neumann was working on the same problem, and he presented his results on June 30, 1941. It was said that Leonid Sedov was also working on the problem around the same time in the USSR, although Sedov never confirmed any exact dates. The complete solution was published first by Sedov in 1946. Von Neumann's results appeared in August 1947 in a Los Alamos Scientific Laboratory report, although that report was distributed only in 1958. Taylor got clearance to publish his results in 1949 and published his work in two papers in 1950. In the second paper, Taylor calculated the energy of the atomic bomb used in the Trinity nuclear test using the similarity solution, just by looking at the series of blast wave photographs, carrying a length scale and time stamps, published by Julian E. Mack in 1947. This calculation of the energy caused, in Taylor's own words, 'much embarrassment' (according to Grigory Barenblatt) in US government circles, since the number was then still classified although the photographs published by Mack were not. Taylor's biographer George Batchelor writes: "This estimate of the yield of the first atom bomb explosion caused quite a stir... G.I. was mildly admonished by the US Army for publishing his deductions from their (unclassified) photographs."
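Taylor's estimate can be reproduced from the similarity law for the shock radius, R(t) = β(Et²/ρ0)^(1/5), inverted for the energy E. The data point below is illustrative (round numbers of the right order of magnitude for the published photographs), not Mack's exact measurement:

```python
rho0 = 1.25   # kg/m^3, ambient air density (assumed)
beta = 1.033  # similarity constant for gamma = 1.4
R = 130.0     # m, shock radius at time t (illustrative values)
t = 0.025     # s

# Invert R = beta * (E * t**2 / rho0)**(1/5) for the energy E:
E = rho0 * R**5 / (beta**5 * t**2)
kilotons = E / 4.184e12   # 1 kiloton of TNT = 4.184e12 J

print(f"E = {E:.2e} J, about {kilotons:.0f} kt TNT")
```

With these round numbers the estimate lands in the tens of kilotons, the same order as Taylor's published figure of roughly 17 kt, which is the point of the dimensional argument: a single photograph with a length scale and a time stamp fixes the yield.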
Mathematical description Consider a strong explosion (such as a nuclear bomb) that releases a large amount of energy E in a small volume during a short time interval. This will create a strong spherical shock wave propagating outwards from the explosion center. The self-similar solution tries to describe the flow when the shock wave has moved through a distance that is extremely large when compared to the size of the explosive. At these large distances, the information about the size and duration of the explosion will be forgotten; only the energy released will have influence on how the shock wave evolves. To a very high degree of accuracy, then, it can be assumed that the explosion occurred at a point (say the origin r = 0) instantaneously at time t = 0. The shock wave in the self-similar region is assumed to be still very strong, such that the pressure p1 behind the shock wave is very large in comparison with the pressure p0 (atmospheric pressure) in front of the shock wave, which can be neglected from the analysis. Although the pressure of the undisturbed gas is negligible, the density ρ0 of the undisturbed gas cannot be neglected, since the density jump across strong shock waves is finite as a direct consequence of the Rankine–Hugoniot conditions. This approximation is equivalent to setting p0 = 0 and the corresponding sound speed c0 = 0, but keeping the density non-zero, i.e., ρ0 ≠ 0. The only parameters available at our disposal are the energy E and the undisturbed gas density ρ0. The properties behind the shock wave, such as p1, ρ1 and the gas velocity v1, are derivable from those in front of the shock wave. The only non-dimensional combination available from r, t, E and ρ0 is ξ = r(ρ0/Et²)^(1/5). It is reasonable to assume that the evolution in r and t of the shock wave depends only on the above variable.
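The dimensional argument can be verified mechanically: (Et²/ρ0)^(1/5) must carry the dimensions of a length, so that ξ is dimensionless. A small sketch, encoding dimensions as (mass, length, time) exponents:

```python
from fractions import Fraction

# Dimensions as exponent dicts over mass M, length L, time T.
E_dim    = {'M': 1, 'L': 2,  'T': -2}  # energy: kg m^2 s^-2
rho0_dim = {'M': 1, 'L': -3, 'T': 0}   # density: kg m^-3
t2_dim   = {'M': 0, 'L': 0,  'T': 2}   # time squared: s^2

# (E * t^2 / rho0) ** (1/5): add exponents, subtract rho0's, divide by 5.
combo = {d: Fraction(E_dim[d] + t2_dim[d] - rho0_dim[d], 5) for d in 'MLT'}

assert combo == {'M': 0, 'L': 1, 'T': 0}  # a pure length, as claimed
```

The mass exponents of E and ρ0 cancel, and the time-squared factor cancels the s^-2 in the energy, leaving L^5 under the fifth root: there is indeed only one length scale, and hence only one dimensionless similarity variable.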
This means that the shock wave location itself will correspond to a particular value, say β, of this variable, i.e., R(t) = β(Et²/ρ₀)^{1/5}. The detailed analysis that follows will, at the end, reveal that the factor β is quite close to unity, thereby demonstrating (for this problem) the quantitative predictive capability of the dimensional analysis in determining the shock-wave location as a function of time. The propagation velocity of the shock wave is D = dR/dt = 2R/5t. With the approximation described above, the Rankine–Hugoniot conditions determine the gas velocity v₁, density ρ₁ and pressure p₁ immediately behind the shock front for an ideal gas as follows: v₁ = 2D/(γ+1), ρ₁ = ρ₀(γ+1)/(γ−1), p₁ = 2ρ₀D²/(γ+1), where γ is the specific heat ratio. Since ρ₁ is a constant, the density immediately behind the shock wave is not changing with time, whereas v₁ and p₁ decrease as t^{−3/5} and t^{−6/5}, respectively. Self-similar solution The gas motion behind the shock wave is governed by the Euler equations. For an ideal polytropic gas with spherical symmetry, the equations for the fluid variables such as radial velocity v(r,t), density ρ(r,t) and pressure p(r,t) are given by ∂ρ/∂t + (1/r²)∂(ρvr²)/∂r = 0, ∂v/∂t + v ∂v/∂r = −(1/ρ)∂p/∂r, (∂/∂t + v ∂/∂r)ln(p/ρ^γ) = 0. At r = R(t), the solutions should approach the values given by the Rankine–Hugoniot conditions defined in the previous section. The variable pressure can be replaced by the sound speed c, since pressure can be obtained from the formula c² = γp/ρ. The following non-dimensional self-similar variables are introduced: v = (2r/5t)V(ξ), ρ = ρ₀G(ξ), c² = (4r²/25t²)Z(ξ). The conditions at the shock front ξ = β become V = 2/(γ+1), G = (γ+1)/(γ−1), Z = 2γ(γ−1)/(γ+1)². Substituting the self-similar variables into the governing equations will lead to three ordinary differential equations. Solving these differential equations analytically is laborious, as shown by Sedov in 1946 and von Neumann in 1947. G. I. Taylor integrated these equations numerically to obtain desired results. The relation between Z and V can be deduced directly from energy conservation. Since the energy associated with the undisturbed gas is neglected by setting p₀ = 0, the total energy of the gas within the shock sphere must be equal to E. 
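The scaling R(t) = β(Et²/ρ₀)^{1/5} can be inverted, E = ρ₀R⁵/(β⁵t²), which is essentially how Taylor estimated the Trinity yield from the published photographs. A minimal numerical sketch, assuming γ = 1.4 (so β ≈ 1.033) and illustrative radius/time values of the order of those read off Mack's photographs:

```python
# Estimate blast energy from the Sedov-Taylor scaling R(t) = beta * (E t^2 / rho0)**(1/5),
# inverted to E = rho0 * R**5 / (beta**5 * t**2).
# The R and t values below are illustrative, of the order of the Trinity photograph data.

beta = 1.033      # dimensionless constant for gamma = 1.4 (air)
rho0 = 1.25       # ambient air density, kg/m^3
R = 140.0         # shock radius in metres at time t
t = 0.025         # time after detonation, seconds

E = rho0 * R**5 / (beta**5 * t**2)   # energy in joules
kilotons = E / 4.184e12              # 1 kt of TNT = 4.184e12 J

print(f"E = {E:.3g} J ~ {kilotons:.1f} kt of TNT")
```

With these inputs the estimate comes out at roughly twenty kilotons of TNT, the order of magnitude Taylor reported for Trinity.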
Due to self-similarity, it is clear that not only the total energy within a sphere of radius R(t) is constant, but also the total energy within a sphere of any radius r ∝ t^{2/5} (in dimensional form, it says that the total energy within a sphere of radius r that moves outwards with the velocity 2r/5t must be constant). The amount of energy that leaves the sphere of radius r in time dt due to the gas velocity v is 4πr²ρv(v²/2 + w)dt, where w = c²/(γ−1) is the specific enthalpy of the gas. In that time, the radius of the sphere increases with the velocity 2r/5t and the energy of the gas in this extra increased volume is 4πr²(2r/5t)ρ(v²/2 + ε)dt, where ε is the specific internal energy of the gas. Equating these expressions and substituting w = ε + p/ρ and p = (γ−1)ρε, which are valid for an ideal polytropic gas, leads to Z = γ(γ−1)(1−V)V²/[2(γV−1)]. The continuity and energy equations reduce to two ordinary differential equations for G and V. Expressing Z and G as functions of V only, using the relation obtained earlier, and integrating once yields the solution in implicit form, as algebraic relations ξ = ξ(V) and G = G(V) whose exponents depend only on γ. The constant β that determines the shock location can then be determined from the conservation of energy, E = ∫₀^R (ρv²/2 + ρε)4πr²dr. For air, γ = 1.4 and β = 1.033. The solution is shown in the figure by graphing the curves of v/v₁, ρ/ρ₁, p/p₁ and T/T₁, where T is the temperature. Asymptotic behavior near the central region The asymptotic behavior of the central region can be investigated by taking the limit ξ → 0. From the figure, it can be observed that the density falls to zero very rapidly behind the shock wave. The entire mass of the gas, which was initially spread out uniformly within the sphere of radius R(t), is now contained in a thin layer behind the shock wave; that is to say, all the mass is driven outwards by the acceleration imparted by the shock wave. Thus, most of the region is basically empty. The pressure ratio p/p₁ also drops rapidly to attain a constant value as ξ → 0. The temperature ratio follows from the ideal gas law; since the density ratio decays to zero and the pressure ratio is constant, the temperature ratio must become infinite. 
The limiting form for the density as ξ → 0 is ρ/ρ₁ ∝ ξ^{3/(γ−1)}. Remember that ρ₁ is time-independent, whereas p₁ ∝ t^{−6/5}, which means that the actual pressure is in fact time-dependent. It becomes clear if the above forms are rewritten in dimensional units: at a fixed instant, ρ ∝ ρ₀(r/R)^{3/(γ−1)}, while the central pressure behaves as p ∝ ρ₀^{3/5}E^{2/5}t^{−6/5}. The velocity ratio has the linear behavior v/v₁ ∝ ξ in the central region, whereas the behavior of the velocity itself is given by v = 2r/(5γt). Final stage of the blast wave As the shock wave evolves in time, its strength decreases. The self-similar solution described above breaks down when p₁ becomes comparable to p₀ (more precisely, when R(t) ~ (E/p₀)^{1/3}). At this later stage of the evolution, p₀ (and consequently the sound speed c₀ of the undisturbed gas) cannot be neglected. This means that the evolution is not self-similar, because one can form a length scale (E/p₀)^{1/3} and a time scale (E/p₀)^{1/3}/c₀ to describe the problem. The governing equations are then integrated numerically, as was done by H. Goldstine and John von Neumann, Brode, and Okhotsimskii et al. Furthermore, in this stage, the compressing shock wave is necessarily followed by a rarefaction wave behind it; the waveform is empirically fitted by the Friedlander waveform. Cylindrical line explosion The analogous problem in cylindrical geometry, corresponding to an axisymmetric blast wave such as that produced by a lightning strike, can be solved analytically. This problem was solved independently by Leonid Sedov, A. Sakurai and S. C. Lin. In cylindrical geometry, the non-dimensional combination involving the radial coordinate r, the time t, the total energy released per unit axial length E′ (this is different from the E used in the previous section) and the ambient density ρ₀ is found to be ξ = r(ρ₀/E′t²)^{1/4}. See also Guderley–Landau–Stanyukovich problem Zeldovich–Taylor flow Becker–Morduchow–Libby solution References Fluid dynamics Equations of fluid dynamics
Taylor–von Neumann–Sedov blast wave
[ "Physics", "Chemistry", "Engineering" ]
1,845
[ "Equations of fluid dynamics", "Equations of physics", "Chemical engineering", "Piping", "Fluid dynamics" ]
65,784,120
https://en.wikipedia.org/wiki/Louisenthal%20Paper%20Mill
The Louisenthal Paper Mill (German: Papierfabrik Louisenthal, PL) is a German manufacturer of security paper. Founded in 1878, the company has been a subsidiary of Giesecke+Devrient since 1964, a company best known as a German manufacturer of banknotes. History In 1878, a paper mill was established in Gmund am Tegernsee. Since 1964, the company has been a subsidiary of Giesecke+Devrient. The company owns a second factory at Königstein, Saxony, acquired in 1991 after the German reunification. Manufacturing The substrate bears essential security features of banknotes to protect against counterfeiting. In the early days of banknote production, security paper was equipped with real watermarks and security threads. In 1994 the world's first banknote paper with hologram stripes was produced in Louisenthal (the 2000 leva banknote for Bulgaria). After plastic banknotes failed to establish themselves on the market, the mill brought a banknote onto the market in 2008 which combined the advantages of paper and polymer banknotes. In 2019, the company created two new technologies used in a new set of banknotes issued by the Bulgarian National Bank, which won the Best New Banknote award given by the High Security Printing EMEA Conference in Malta. As of 2020, the Louisenthal paper mill claims to be a leading supplier of advanced film elements, security features that produce color shifts and three-dimensional effects depending on the viewing angle. Production Employing a workforce of around 1,100, of whom 320 are located at the Königstein site near Dresden, the paper mill produces about 13,000 tons of paper per year. References Banknotes Paper products Manufacturing
Louisenthal Paper Mill
[ "Engineering" ]
354
[ "Manufacturing", "Mechanical engineering" ]
65,784,907
https://en.wikipedia.org/wiki/NGC%207679
NGC 7679 is a lenticular galaxy with a peculiar morphology in the constellation Pisces. It is located at a distance of about 200 million light years from Earth, which, given its apparent dimensions, means that NGC 7679 is about 60,000 light years across. It was discovered by Heinrich d'Arrest on September 23, 1864. Its total infrared luminosity places it in the category of luminous infrared galaxies. NGC 7679 is both a starburst galaxy and a Seyfert galaxy. Characteristics NGC 7679 is a barred lenticular galaxy seen face on, and is noted for its distorted shape. The galaxy has two plumes in opposite directions, possibly the result of tidal interaction with NGC 7682, and smooth outer arms. The inner region is of high surface brightness with many knots and a high star formation rate. The star formation rate of the galaxy is estimated to be 80 M☉ per year based on the X-ray luminosity observed by XMM-Newton, and 21.2 ± 0.2 M☉ per year based on the H-alpha luminosity, while observations in infrared indicate a star formation rate of 11.35 ± 0.6 M☉ per year. There is evidence of massive starburst activity in the circumnuclear region, with 35% of the stars there being aged less than 10 million years. A ring of ionised gas dominates both the optical and infrared wavelengths and is the locus of the starburst activity. Nucleus The nucleus of NGC 7679 has been found to be active and has been categorised as a Seyfert galaxy. The most accepted theory for the energy source of Seyfert galaxies is the presence of an accretion disk around a supermassive black hole. NGC 7679 is believed to host a supermassive black hole whose mass is estimated to be 10^6.77 M☉ (about 6 million solar masses) based on velocity dispersion. The X-ray spectrum from BeppoSAX shows no significant absorption above 2 keV and the iron Kα line was marginally detected. However, the galaxy shows signs of obstruction in visual light, as it lacks broad emission lines. 
Two possible explanations are the presence of dust, or that the accretion disk producing the X-rays is not obstructed while the broad-line region is. The lack of X-ray absorption, along with the presence of broad H-alpha lines but not broad H-beta lines, means that it cannot be easily categorised as a particular type of Seyfert galaxy. Nearby galaxies NGC 7679 forms a pair with NGC 7682. NGC 7682 lies at an angular separation of 269.7 arcseconds, which corresponds to a projected distance of 97 kpc. The two galaxies are connected by a hydrogen bridge, a sign of a close encounter in the past 500 million years. It is possible that the interaction of the two galaxies caused star formation in NGC 7679. A fainter galaxy has been found superimposed on the eastern arm of the galaxy, but it is actually located in the background. See also NGC 7130 - A similar active and starburst galaxy Gallery References External links Interacting galaxies Barred lenticular galaxies Peculiar galaxies Luminous infrared galaxies Seyfert galaxies Pisces (constellation)
NGC 7679
[ "Astronomy" ]
686
[ "Pisces (constellation)", "Constellations" ]
65,785,668
https://en.wikipedia.org/wiki/Fully%20Automatic%20Installation
Fully Automatic Installation (FAI) is a group of shell and Perl scripts that install and configure a complete Linux distribution quickly on a large number of computers. It is the oldest automated deployment system for Debian. Automation FAI allows for installing the Debian and Ubuntu distributions, but it also supports CentOS, Rocky Linux and SUSE Linux; in the past it supported Scientific Linux CERN. By default a network installation is done, but it is easy to create an installation ISO for booting from CD or USB stick. There is a web service for FAI called FAI.me, which allows creating customized installation images without setting up your own FAI server. This service also creates cloud images and live images, and supports Debian and Ubuntu. Debian's cloud team uses FAI for creating their official cloud images. Similar software exists for Red Hat (Kickstart), SuSE (AutoYaST, YaST and alice), Solaris (Jumpstart) and likely other operating systems. References External links Official website FAI.me web service System administration Network management
Fully Automatic Installation
[ "Technology", "Engineering" ]
229
[ "Information systems", "Computer networks engineering", "Network management", "System administration" ]
65,785,766
https://en.wikipedia.org/wiki/Japanese%20Central%20Railway%20School
was a Japanese National Railways educational institution in Kokubunji, Tokyo. Overview The school was not recognised as a university or college by the Ministry of Education, Science, Sports and Culture, even though Shinji Sogō, the 4th president of JNR, made serious efforts to raise the school to the status of "JNR University" so that its students could be university graduates. Professors from the University of Tokyo, Hitotsubashi University, and others were employed as part-time teaching staff. It was said that the school provided the equivalent of a university education under Sogō's administration. Courses of study could last anywhere from a few days to three years, covering subjects such as train operation, train inspections and track maintenance. At its peak, it had more than 1000 students. It was the place for training new staff members, retraining employees who were changing roles or positions, and educating the work force on new policies or equipment. The school was closed due to the privatization and division of Japan National Railways in 1987. Another building now occupies the site, but some parts of the earlier structure are preserved as "the remains of Tōsandō Musashimichi". References External links Aerial Views of Japanese Central Railway School (then and now) The monument on the site where Japanese Central Railway School existed|2018/03/15 Rail transport organizations based in Japan 1961 establishments in Japan 1987 disestablishments in Japan History of education in Japan Vocational education in Japan Schools in Tokyo Transport education
Japanese Central Railway School
[ "Physics" ]
304
[ "Physical systems", "Transport", "Transport education" ]
65,785,999
https://en.wikipedia.org/wiki/Soft%20privacy%20technologies
Soft privacy technologies fall under the category of PETs, Privacy-enhancing technologies, as methods of protecting data. Soft privacy is a counterpart to another subcategory of PETs, called hard privacy. Soft privacy technology has the goal of keeping information safe, allowing services to process data while having full control of how data is being used. To accomplish this, soft privacy emphasizes the use of third-party programs to protect privacy, emphasizing auditing, certification, consent, access control, encryption, and differential privacy. Since evolving technologies like the internet, machine learning, and big data are being applied to many long-standing fields, we now need to process billions of datapoints every day in areas such as health care, autonomous cars, smart cards, social media, and more. Many of these fields rely on soft privacy technologies when they handle data. Applications Health care Some medical devices like Ambient Assisted Living monitor and report sensitive information remotely into a cloud. Cloud computing offers a solution that meets the healthcare need for processing and storage at an affordable price. Together, this system is used to monitor a patient's biometric conditions remotely, connecting with smart technology when necessary. In addition to monitoring, the devices can also send a mobile notification when certain conditions pass a set point such as a major change in blood pressure. Due to the nature of these devices, which report data constantly and use smart technology, this type of medical technology is subject to a lot of privacy concerns. Soft privacy is thus relevant for the third-party cloud service, as many privacy concerns center there, including risk in unauthorized access, data leakage, sensitive information disclosure, and privacy disclosure. 
One solution proposed for privacy issues around cloud computing in health care is through the use of Access control, by giving partial access to data based on a user's role: such as a doctor, family, etc. Another solution, applicable for wireless technology that moves data to a cloud, is through the usage of Differential privacy. The differential privacy system typically encrypts the data, sends it to a trusted service, then opens up access to it for hospital institutions. A strategy that is often used to prevent data leakage and attacks works by adding 'noise' into the data, which changes its values slightly. The real underlying information can be accessed through security questions. A study in the journal Sensors concluded that differential privacy techniques involving additional noise helped achieve mathematically precise, guaranteed privacy. Adding noise to data by default can prevent devastating privacy breaches. In the mid-1990s the Commonwealth of Massachusetts Group Insurance Commission released anonymized health records while hiding some sensitive information such as addresses and phone numbers. Despite this attempt at hiding personal information while providing a useful database, privacy was still breached: by correlating the health databases with public voting databases, individuals' hidden data could be rediscovered. Without the differential privacy techniques of encrypting data and adding noise, it is possible to link data that may not seem related together. Autonomous cars Autonomous cars raise concerns about location tracking, because they are sensor-based vehicles. To achieve full autonomy, these cars would need access to a massive database with information on the surroundings, paths to take, interaction with others, weather, and many more circumstances that need to be accounted for. This leads to privacy and security questions: how the data will all be stored, who it is shared with, and what type of data is being stored. 
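The noise-adding strategy described above is usually realized with the Laplace mechanism, the standard building block of differential privacy. A minimal sketch (the function names and blood-pressure readings below are hypothetical, for illustration only):

```python
import random
import math

def laplace_noise(scale):
    # Inverse-CDF sampling of a Laplace(0, scale) random variable.
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def private_mean(values, lower, upper, epsilon):
    """Release the mean of a bounded numeric column with epsilon-differential privacy.
    The sensitivity of the mean of n values clipped to [lower, upper] is (upper-lower)/n."""
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / n
    sensitivity = (upper - lower) / n
    return true_mean + laplace_noise(sensitivity / epsilon)

# e.g. systolic blood-pressure readings (hypothetical values)
readings = [118, 125, 131, 142, 110, 137, 121, 129]
print(private_mean(readings, lower=80, upper=200, epsilon=1.0))
```

Smaller epsilon means more noise and stronger privacy; the released mean remains useful in aggregate while any single patient's reading is masked.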
A lot of this data is potentially sensitive and could be used maliciously if it were leaked or hacked. Additionally, there are concerns over this data being sold to companies, as the data can help predict products the consumer would like or need. This could be undesirable, as the data may expose health conditions and allow some companies to target said customers with location-based spam or products. In terms of the legal aspect of this technology, there are rules and regulations governing some parts of these cars, but for other areas, laws are left unwritten or are often outdated. Many of the current laws are vague, leaving the rules open to interpretation. For example, there are federal laws dating back many years governing computer privacy, which were extended to cover phones when they arose, and are now being extended again to apply to the “computer” inside most driverless cars. Another legal concern surrounds government surveillance: can governments gain access to driverless car data, giving them the opportunity for mass surveillance or tracking with no warrants? Finally, companies may try to use this data to improve their technology and target their marketing to fit the needs of their consumers. In response to this controversy, automakers pre-empted government action on driverless cars with the Automotive Information Sharing and Analysis Center (Auto-ISAC) in August 2015, to establish protocols for cybersecurity and decide how to handle vehicular communication with other autonomous cars in a safe manner. Vehicle-to-grid, known as V2G, plays a big role in the energy consumption, cost, and effectiveness of smart grids. This system is what electric vehicles use to charge and discharge, as they communicate with the power grid to fill up the appropriate amount. Although this ability is extremely important for electric vehicles, it does open up privacy issues surrounding the visibility of the car's location. 
For example, the driver's home address, place of work, place of entertainment, and record of frequency may be reflected in the car's charging history. With this information, which could potentially be leaked, a wide variety of security breaches can occur. For example, someone's health could be deduced from the car's number of hospital visits; based on a user's parking patterns, they may receive location-based spam; or a thief could benefit from knowing their target's home address and work schedule. As with handling sensitive health data, a possible solution here is to use differential privacy to add noise to the data, so that leaked information may not be as accurate. Cloud storage Using the cloud is important for consumers and businesses, as it provides a cheap source of virtually unlimited storage space. One challenge cloud storage has faced is search functionality. A typical cloud computing design would encrypt every word before it enters the cloud. If a user wants to search by keyword for a specific file stored in the cloud, the encryption hinders an easy and fast method of search. You can't scan the encrypted data for your search term anymore, because the keywords are all encrypted. So encryption ends up being a double-edged sword: it protects privacy but introduces new, inconvenient problems. One performance solution lies in modifying the search method, so that documents are indexed entirely, rather than just their keywords. The search method can also be changed by searching with a term that gets encrypted, so that it can be matched to encrypted keywords without decrypting any data. In this case, privacy is achieved and it is easy to search for words that match the encrypted files. However, a new issue arises from this, as it takes a longer time to read and match through an encrypted file and decrypt the whole thing for a user. 
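The idea of searching with an encrypted term can be sketched with deterministic keyword tokens, for example HMAC digests: the server compares tokens for equality without ever seeing a plaintext keyword. A simplified sketch (the key and file names are hypothetical; real searchable-encryption schemes add further protections, e.g. against frequency analysis):

```python
import hmac
import hashlib

SECRET_KEY = b"client-side key never shared with the cloud"  # hypothetical key

def keyword_token(word):
    # Deterministic keyword token: the server can match tokens for equality
    # but cannot invert them to recover the plaintext keyword.
    return hmac.new(SECRET_KEY, word.lower().encode(), hashlib.sha256).hexdigest()

# Client side: build a token index per (encrypted) document before upload.
index = {
    "report.docx.enc": {keyword_token(w) for w in ["budget", "forecast", "q3"]},
    "notes.txt.enc":   {keyword_token(w) for w in ["meeting", "budget"]},
}

def search(token):
    # Server side: match the submitted token against the stored index;
    # no document contents or keywords are ever decrypted here.
    return [doc for doc, tokens in index.items() if token in tokens]

print(search(keyword_token("budget")))  # both documents match
```

The client only ever sends tokens, so the cloud learns which stored tokens matched but not what the underlying search term was.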
Mobile data Police and other authorities can also benefit from reading personal information from mobile data, which can be useful during investigations. Privacy infringements and potential surveillance in these cases have often been a concern in the United States, and several cases have reached the Supreme Court. In some instances, authorities used GPS data to track down suspects' locations, and monitored data over long periods of time. This practice is now limited because of the Supreme Court case Riley v. California, where it was unanimously decided to prevent warrantless searches of mobile data. Mobile privacy has also been an issue in the realm of spam calls. In the effort to reduce these calls, many people are being taken advantage of by apps that promise to help block spam. The problem with these apps is that many are known to collect personal phone data, including callers, phone honeypot call detail records (CDRs), and call recordings. Although some of this information is necessary for creating blacklists of spam, not all of it is, and these apps don't always prioritize privacy in their collection of data. As these are typically small-scale apps with varying budgets, differential privacy is not always top of mind, and implementing differential privacy is often more expensive than ignoring privacy concerns. This is because the dataset needed to construct a good algorithm that achieves local differential privacy is much larger than a basic dataset. Danger of using VPNs VPNs are used to create an encrypted tunnel between a remote user and a home network, encrypting the user's traffic. This allows the user to have their own private network while on the internet. However, this encrypted tunnel requires trusting a third party to protect the privacy of the user, since it acts as a virtual leased line over the shared public infrastructure of the internet. 
Additionally, VPNs have a difficult time when it comes to mobile applications, because the cell network may be constantly changing and can even break, thus endangering the privacy that the VPN gives from its encryption. VPNs are susceptible to attackers that fabricate, intercept, modify, and interrupt traffic. They become a target, because sensitive information is often being transmitted over these networks. Quick VPNs generally provide faster tunnel establishment and less overhead, but they downgrade the effectiveness of VPNs as a security protocol. One mitigation centers around the simple practice of changing long usernames (IP addresses) and passwords frequently, which is important to achieving security and protection from malicious attacks. Smart cards Newer smart cards are a developing technology used to authorize users for certain resources. Using biometrics such as fingerprints, iris images, voice patterns, or even DNA sequences as access control for sensitive information such as a passport can help ensure privacy. Biometrics are important because they are basically unchangeable and can be used as the access code to one’s information, in some cases granting access to virtually any data about the particular person. Currently they are being used for telecommunications, e-commerce, banking applications, government applications, healthcare, transportation, and more. Danger of using biometrics Biometrics contain unique characteristic details about a person. If they are leaked, it would be fairly easy to trace the endangered user. This poses a great danger, because biometrics are based on features that rarely change, like a user's fingerprint, and many sensitive applications use biometrics. There are some possible solutions to this: Anonymous Biometric Access Control System (ABAC): This method authenticates valid users into the system without knowing who individuals are. 
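Underneath such an anonymous matching scheme, the server's accept/reject decision typically reduces to a Hamming-distance comparison between bit templates; in the ABAC design this comparison runs on encrypted data under a homomorphic cipher. A plaintext sketch of just the matching step (the template bits and threshold below are hypothetical):

```python
def hamming_distance(a, b):
    # Number of differing bits between two equal-length biometric bit-templates.
    return sum(x != y for x, y in zip(a, b))

def matches(stored, probe, threshold=0.25):
    """Accept when the fraction of differing bits is below the threshold.
    In the encrypted-domain version this comparison is evaluated on ciphertexts."""
    assert len(stored) == len(probe)
    return hamming_distance(stored, probe) / len(stored) < threshold

enrolled  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]  # hypothetical enrolled template
probe_ok  = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # one bit flipped by sensor noise
probe_bad = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0]  # a different person

print(matches(enrolled, probe_ok))   # True
print(matches(enrolled, probe_bad))  # False
```

The threshold absorbs the small sensor-to-sensor variation between captures of the same biometric while still rejecting templates from other people.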
For example, a hotel should be able to admit a VIP guest member into a VIP room without knowing any details about that person, even though the verification process still utilizes biometrics. One way to do this was developed by the Center for Visualization and Virtual Environments. They designed an algorithm that uses techniques like Hamming distance computation, bit extraction, comparison, and result aggregation, all implemented with a homomorphic cipher, to allow a biometric server to confirm a user without knowing their identity. This is done by taking the saved biometric and encrypting it. While saving the biometrics, there are specific processes to de-identify features such as facial recognition when encrypting the data, so that even if it were leaked, there would be no danger of tracing someone's identity. Online videos Online learning through videos has become very popular. One of the biggest challenges for privacy in this field is the practice of video prefetching. When a user selects an online video, rather than making them wait for it to slowly load, prefetching helps the user save time by loading part of the video before the user has even started watching. It seems to be a perfect and necessary solution as we stream more content, but prefetching faces privacy concerns because it heavily relies on prediction. For an accurate prediction to happen, it is necessary to access a user's viewing history and preferences. Otherwise, prefetching will be more of a waste of bandwidth than a benefit. After learning the user's viewing history and opinions on popular content, it is easier to predict the next video to prefetch, but this data is valuable and possibly sensitive. There has been research on applying differential privacy to video prefetching, to improve user privacy. Third party certification E-commerce has been growing rapidly, so there have been initiatives to reduce consumer perceptions of risk while shopping online. 
Firms have found ways to gain trust from new consumers through the use of seals and certifications from third-party platforms. A study in Electronic Commerce Research found that payment providers can reduce perceptions of risk from consumers by placing third-party logos and seals on the checkout page to enhance visitor conversion. These logos and certificates serve as an indicator for the consumer, so they feel safe about entering their payment information and shipping address on an otherwise unknown or untrusted website. Future of soft privacy technology Mobile concerns and possible solutions A mobile intrusion prevention system (mIPS) would be location-aware and help protect users of future technology such as virtualized environments, where the phone acts as a small virtual machine. Some cases to be wary of in the future include stalking attacks, bandwidth stealing, attacks on cloud hosts, and traffic analysis attacks on 5G devices within a confined area. Current privacy protection programs are prone to leakage and do not account for the changing of Bluetooth, location, and LAN connections, which affects how often leakage can occur. Public key-based access control In the context of sensor networks, public key-based access control (PKC) may be a good solution in the future to cover some issues in wireless access control. For sensor networks, the dangers from attackers include: impersonation, which grants access to malicious users; replay attacks, where the adversary captures sensitive information by replaying it; interleaving, which selectively combines messages from previous sessions; reflection, where an adversary sends an identical message back to the originator, similar to impersonation; forced delay, which blocks a communication message so that it is sent at a later time; and chosen-text attacks, where the attacker tries to extract the keys used to access the sensor. 
The solution to this may be public key-based cryptography, as a study by Haodong Wang shows that the PKC-based protocol presented is better than the traditional symmetric-key approach in regards to memory usage, message complexity, and security resilience. Social media Privacy management is a big part of social networks, and several solutions to this issue have been proposed. For example, users of various social networks have the ability to control and specify what information they want to share with certain people based on trust levels. Privacy concerns still arise; for example, in 2007 Facebook received complaints about its advertisements. In this instance, Facebook's partner collected information about users and spread it to each user's friends without any consent. There are some proposed solutions in the prototype stage that use protocols focused on cryptographic and digital signature techniques to ensure the right privacy protections are in place. Massive datasets As the amount of data collected by one source increases, that source becomes prone to privacy violations and a target for malicious attacks due to the abundance of personal information it holds. One proposed solution is to anonymize the data by building a virtual database that anonymizes both the data provider and the subjects of the data, using a new and developing technique called l-site diversity. References Privacy Data protection Information technology
Soft privacy technologies
[ "Technology" ]
3,201
[ "Information and communications technology", "Information technology" ]
41,468,418
https://en.wikipedia.org/wiki/Evolution%20of%20photosynthesis
The evolution of photosynthesis refers to the origin and subsequent evolution of photosynthesis, the process by which light energy is used to assemble sugars from carbon dioxide and a hydrogen and electron source such as water. It is believed that the pigments used for photosynthesis initially served as protection from the harmful effects of light, particularly ultraviolet light. The process of photosynthesis was discovered by Jan Ingenhousz, a Dutch-born British physician and scientist, who first published about it in 1779. The first photosynthetic organisms probably evolved early in the evolutionary history of life and most likely used reducing agents such as hydrogen rather than water. There are three major metabolic pathways by which photosynthesis is carried out: C3 photosynthesis, C4 photosynthesis, and CAM photosynthesis. C3 photosynthesis is the oldest and most common form. A C3 plant uses the Calvin cycle for the initial steps that incorporate CO2 into organic material. A C4 plant prefaces the Calvin cycle with reactions that incorporate CO2 into four-carbon compounds. A CAM plant uses crassulacean acid metabolism, an adaptation for photosynthesis in arid conditions. C4 and CAM plants have special adaptations that save water. Origin Available evidence from geobiological studies of Archean (>2500 Ma) sedimentary rocks indicates that life existed by 3500 Ma. Fossils of what are thought to be filamentous photosynthetic organisms have been dated at 3.4 billion years old, consistent with recent studies of photosynthesis. Early photosynthetic systems, such as those from green and purple sulfur and green and purple nonsulfur bacteria, are thought to have been anoxygenic, using various molecules as electron donors. Green and purple sulfur bacteria are thought to have used hydrogen and hydrogen sulfide as electron and hydrogen donors. Green nonsulfur bacteria used various amino and other organic acids. 
Purple nonsulfur bacteria used a variety of nonspecific organic and inorganic molecules. It has been suggested that photosynthesis likely originated in low-wavelength geothermal light from acidic hydrothermal vents, that Zn-tetrapyrroles were the first photochemically active pigments, and that the photosynthetic organisms were anaerobic and relied on H2 emitted by alkaline hydrothermal vents. The divergence of anoxygenic photosynthetic organisms at the photic zone could have led to the ability to strip electrons from more efficiently under ultraviolet radiation. There is geochemical evidence that suggests that anaerobic photosynthesis emerged 3.3 to 3.5 billion years ago. The organisms later developed chlorophyll f synthase. They could have also stripped electrons from soluble metal ions, although this is uncertain. The first oxygenic photosynthetic organisms are proposed to be -dependent. It is also suggested that photosynthesis originated under sunlight, using emitted by volcanoes and hydrothermal vents, which ended the need for scarce H2 emitted by alkaline hydrothermal vents. Oxygenic photosynthesis uses water as an electron donor, which is oxidized to molecular oxygen (O2) in the photosynthetic reaction center. The biochemical capacity for oxygenic photosynthesis evolved in a common ancestor of extant cyanobacteria. The first appearance of free oxygen in the atmosphere is sometimes referred to as the oxygen catastrophe. The geological record indicates that this transforming event took place during the Paleoproterozoic era at least 2450–2320 million years ago (Ma), and, it is speculated, much earlier. A clear paleontological window on cyanobacterial evolution opened about 2000 Ma, revealing an already-diverse biota of blue-greens. Cyanobacteria remained principal primary producers throughout the Proterozoic Eon (2500–543 Ma), in part because the redox structure of the oceans favored photoautotrophs capable of nitrogen fixation. 
Green algae joined blue-greens as major primary producers on continental shelves near the end of the Proterozoic, but only with the Mesozoic (251–65 Ma) radiations of dinoflagellates, coccolithophorids, and diatoms did primary production in marine shelf waters take modern form. Cyanobacteria remain critical to marine ecosystems as primary producers in oceanic gyres, as agents of biological nitrogen fixation, and, in modified form, as the plastids of marine algae. Modern photosynthesis in plants and most photosynthetic prokaryotes is oxygenic. Timeline of photosynthesis on Earth Symbiosis and the origin of chloroplasts Several groups of animals have formed symbiotic relationships with photosynthetic algae. These are most common in corals, sponges and sea anemones. It is presumed that this is due to the particularly simple body plans and large surface areas of these animals compared to their volumes. In addition, a few marine mollusks, such as Elysia viridis and Elysia chlorotica, also maintain a symbiotic relationship with chloroplasts they capture from the algae in their diet and then store in their bodies. This allows the mollusks to survive solely by photosynthesis for several months at a time. Some of the genes from the plant cell nucleus have even been transferred to the slugs, so that the chloroplasts can be supplied with proteins that they need to survive. An even closer form of symbiosis may explain the origin of chloroplasts. Chloroplasts have many similarities with photosynthetic bacteria, including a circular chromosome, prokaryotic-type ribosomes, and similar proteins in the photosynthetic reaction center. The endosymbiotic theory suggests that photosynthetic bacteria were acquired (by endocytosis) by early eukaryotic cells to form the first plant cells. Therefore, chloroplasts may be photosynthetic bacteria that adapted to life inside plant cells. 
Like mitochondria, chloroplasts still possess their own DNA, separate from the nuclear DNA of their plant host cells, and the genes in this chloroplast DNA resemble those in cyanobacteria. DNA in chloroplasts codes for redox proteins such as photosynthetic reaction centers. The CoRR Hypothesis proposes that this Co-location is required for Redox Regulation. Evolution of photosynthetic pathways In its simplest form, photosynthesis is adding water to CO2 to produce sugars and oxygen, but a complex chemical pathway is involved, facilitated along the way by a range of enzymes and co-enzymes. The enzyme RuBisCO is responsible for "fixing" CO2 – that is, it attaches it to a carbon-based molecule to form a sugar, which can be used by the plant, releasing an oxygen molecule along the way. However, the enzyme is notoriously inefficient, and just as effectively will also fix oxygen instead of CO2 in a process called photorespiration. This is energetically costly as the plant has to use energy to turn the products of photorespiration back into a form that can react with CO2. Concentrating carbon The C4 metabolic pathway is a valuable recent evolutionary innovation in plants, involving a complex set of adaptive changes to physiology and gene expression patterns. About 7600 species of plants use C4 carbon fixation, which represents about 3% of all terrestrial species of plants. All these 7600 species are angiosperms. C4 plants evolved carbon-concentrating mechanisms. These work by increasing the concentration of CO2 around RuBisCO, thereby facilitating photosynthesis and decreasing photorespiration. The process of concentrating CO2 around RuBisCO requires more energy than allowing gases to diffuse, but under certain conditions – i.e. warm temperatures (>25 °C), low CO2 concentrations, or high oxygen concentrations – pays off in terms of the decreased loss of sugars through photorespiration. One type of C4 metabolism employs a so-called Kranz anatomy. 
This transports CO2 through an outer mesophyll layer, via a range of organic molecules, to the central bundle sheath cells, where the CO2 is released. In this way, CO2 is concentrated near the site of RuBisCO operation. Because RuBisCO is operating in an environment with much more CO2 than it otherwise would be, it performs more efficiently. In C4 photosynthesis, carbon is fixed by an enzyme called PEP carboxylase, which, like all enzymes involved in C4 photosynthesis, originated from non-photosynthetic ancestral enzymes. A second mechanism, CAM photosynthesis, is a carbon fixation pathway that evolved in some plants as an adaptation to arid conditions. The most important benefit of CAM to the plant is the ability to leave most leaf stomata closed during the day. This reduces water loss due to evapotranspiration. The stomata open at night to collect CO2, which is stored as the four-carbon acid malate, and then used during photosynthesis during the day. The pre-collected CO2 is concentrated around the enzyme RuBisCO, increasing photosynthetic efficiency. More CO2 is then harvested from the atmosphere when stomata open, during the cool, moist nights, reducing water loss. CAM has evolved convergently many times. It occurs in 16,000 species (about 7% of plants), belonging to over 300 genera and around 40 families, but this is thought to be a considerable underestimate. It is found in quillworts (relatives of club mosses), in ferns, and in gymnosperms, but the great majority of plants using CAM are angiosperms (flowering plants). Evolutionary record These two pathways, with the same effect on RuBisCO, evolved a number of times independently – indeed, C4 alone arose 62 times in 18 different plant families. A number of 'pre-adaptations' seem to have paved the way for C4, leading to its clustering in certain clades: it has most frequently developed in plants that already had features such as extensive vascular bundle sheath tissue. 
Whole-genome and individual gene duplication are also associated with C4 evolution. Many potential evolutionary pathways resulting in the C4 phenotype are possible and have been characterised using Bayesian inference, confirming that non-photosynthetic adaptations often provide evolutionary stepping stones for the further evolution of C4. The C4 construction is most famously used by a subset of grasses, while CAM is employed by many succulents and cacti. The trait appears to have emerged during the Oligocene, around ; however, they did not become ecologically significant until the Miocene, . Remarkably, some charcoalified fossils preserve tissue organised into the Kranz anatomy, with intact bundle sheath cells, allowing the presence of C4 metabolism to be identified without doubt at this time. Isotopic markers are used to deduce their distribution and significance. C3 plants preferentially use the lighter of two isotopes of carbon in the atmosphere, 12C, which is more readily involved in the chemical pathways involved in its fixation. Because C4 metabolism involves a further chemical step, this effect is accentuated. Plant material can be analysed to deduce the ratio of the heavier 13C to 12C. This ratio is denoted δ13C. C3 plants are on average around 14‰ (parts per thousand) lighter than the atmospheric ratio, while C4 plants are about 28‰ lighter. The δ13C of CAM plants depends on the percentage of carbon fixed at night relative to what is fixed in the day, being closer to C3 plants if they fix most carbon in the day and closer to C4 plants if they fix all their carbon at night. It is troublesome procuring original fossil material in sufficient quantity to analyse the grass itself, but fortunately there is a good proxy: horses. Horses were globally widespread in the period of interest, and browsed almost exclusively on grasses. 
There's an old phrase in isotope palæontology, "you are what you eat (plus a little bit)" – this refers to the fact that organisms reflect the isotopic composition of whatever they eat, plus a small adjustment factor. There is a good record of horse teeth throughout the globe, and their δ13C has been measured. The record shows a sharp negative inflection around , during the Messinian, and this is interpreted as the rise of C4 plants on a global scale. When is C4 an advantage? While C4 enhances the efficiency of RuBisCO, the concentration of carbon is highly energy intensive. This means that C4 plants only have an advantage over C3 organisms in certain conditions: namely, high temperatures and low rainfall. C4 plants also need high levels of sunlight to thrive. Models suggest that, without wildfires removing shade-casting trees and shrubs, there would be no space for C4 plants. But wildfires have occurred for 400 million years – why did C4 take so long to arise, and then appear independently so many times? The Carboniferous period (~) had notoriously high oxygen levels – almost enough to allow spontaneous combustion – and very low CO2, but there is no C4 isotopic signature to be found. And there doesn't seem to be a sudden trigger for the Miocene rise. During the Miocene, the atmosphere and climate were relatively stable. If anything, CO2 increased gradually before settling down to concentrations similar to the Holocene. This suggests that it did not have a key role in invoking C4 evolution. Grasses themselves (the group which would give rise to the most occurrences of C4) had probably been around for 60 million years or more, so had had plenty of time to evolve C4, which, in any case, is present in a diverse range of groups and thus evolved independently. There is a strong signal of climate change in South Asia; increasing aridity – hence increasing fire frequency and intensity – may have led to an increase in the importance of grasslands. 
However, this is difficult to reconcile with the North American record. It is possible that the signal is entirely biological, forced by the fire- and grazer-driven acceleration of grass evolution – which, both by increasing weathering and incorporating more carbon into sediments, reduced atmospheric CO2 levels. Finally, there is evidence that the onset of C4 from is a biased signal, which only holds true for North America, from where most samples originate; emerging evidence suggests that grasslands evolved to a dominant state at least 15 Ma earlier in South America. See also Photorespiration Evolution of plants References Evolutionary biology Photosynthesis
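The δ13C notation used in the isotope discussion above can be sketched in a few lines. The per-mil offsets (14‰ for C3, 28‰ for C4) are the figures stated in the text; the reference isotope ratio below is a hypothetical illustrative value, since δ values depend only on the ratio of sample to reference.

```python
# Minimal sketch of the delta-13C calculation, assuming the standard
# definition: delta = ((R_sample / R_reference) - 1) * 1000, in per mil.

def delta_13c(ratio_sample, ratio_reference):
    """delta-13C in per mil (parts per thousand) relative to a reference."""
    return (ratio_sample / ratio_reference - 1.0) * 1000.0

R_ATM = 0.0112  # assumed atmospheric 13C/12C ratio, for illustration only

# A sample 14 per mil "lighter" than the atmosphere (a typical C3 plant,
# per the text) has a 13C/12C ratio 1.4% lower than the atmospheric one:
r_c3 = R_ATM * (1 - 14 / 1000)
r_c4 = R_ATM * (1 - 28 / 1000)

print(round(delta_13c(r_c3, R_ATM), 1))  # -14.0
print(round(delta_13c(r_c4, R_ATM), 1))  # -28.0
```

Because the reference ratio cancels out, the same δ values result whatever reference is chosen, which is why the notation is useful for comparing fossil material across studies.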
Evolution of photosynthesis
[ "Chemistry", "Biology" ]
2,985
[ "Biochemistry", "Evolutionary biology", "Photosynthesis" ]
41,468,520
https://en.wikipedia.org/wiki/Phi3%20Ceti
{{DISPLAYTITLE:Phi3 Ceti}} Phi3 Ceti is a solitary, orange-hued star in the equatorial constellation of Cetus. It is faintly visible to the naked eye with an apparent visual magnitude of 5.31. Based upon an annual parallax shift of as seen from Earth, it is located approximately 530 light years from the Sun, give or take 20 light years. The star is drifting closer with a radial velocity of −25.5 km/s. This is an evolved K-type giant star with a stellar classification of K5 III. It has about 1.4 times the mass and 44 times the radius of the Sun. The star radiates 441 times the solar luminosity from its photosphere at an effective temperature of 3,974 K. References K-type giants Cetus Ceti, Phi3 BD-12 0162 Ceti, 22 004371 005437 0267
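Distances in star articles like this one follow directly from the annual parallax shift. The conversion below is a sketch with a hypothetical round-number parallax, since the article's own parallax figure is not reproduced here: a parallax of p milliarcseconds corresponds to 1000/p parsecs.

```python
# Sketch of the standard parallax-to-distance conversion:
# d[pc] = 1000 / p[mas], and 1 parsec is about 3.2616 light years.

LY_PER_PARSEC = 3.2616

def distance_light_years(parallax_mas):
    """Distance implied by an annual parallax shift given in mas."""
    parsecs = 1000.0 / parallax_mas
    return parsecs * LY_PER_PARSEC

# A hypothetical 10 mas parallax corresponds to 100 pc:
print(round(distance_light_years(10.0)))  # 326
```

Smaller parallaxes mean larger distances, which is also why quoted distances for faraway stars carry wide uncertainties ("give or take 20 light years").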
Phi3 Ceti
[ "Astronomy" ]
200
[ "Cetus", "Constellations" ]
41,468,528
https://en.wikipedia.org/wiki/Phi4%20Ceti
{{DISPLAYTITLE:Phi4 Ceti}} Phi4 Ceti is a solitary, orange-hued star in the equatorial constellation Cetus. It is faintly visible to the naked eye with an apparent visual magnitude of 5.61. Based upon an annual parallax shift of as seen from Earth, it is located approximately 334 light years from the Sun. At that distance, the visual magnitude of the star is diminished by an extinction factor of 0.10 due to interstellar dust, giving it an absolute magnitude of 0.70. It is drifting closer with a radial velocity of −19 km/s. This is an evolved G-type giant star with a stellar classification of G8 III. At the estimated age of 1.5 billion years, it is a red clump giant on the horizontal branch, which indicates it is generating energy through helium fusion at its core. The star has about 1.76 times the mass of the Sun and has expanded to 11 times the Sun's radius. It is radiating 60 times the solar luminosity from its photosphere at an effective temperature of 4,903 K. References G-type giants Horizontal-branch stars Cetus Ceti, Phi4 BD-12 0173 Ceti, 23 005722 004587 0279
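The absolute-magnitude figure above combines apparent magnitude, distance, and interstellar extinction via the distance modulus. The sketch below uses hypothetical round numbers rather than the article's exact values, and assumes the standard relation M = m − 5·log10(d/10 pc) − A.

```python
import math

# Sketch of the distance-modulus relation behind absolute magnitudes:
# M = m - 5 * log10(d / 10 pc) - A, where A is the extinction in magnitudes.

def absolute_magnitude(apparent_mag, distance_pc, extinction=0.0):
    return apparent_mag - 5.0 * math.log10(distance_pc / 10.0) - extinction

# A hypothetical star of apparent magnitude 5.0 at 100 pc, no extinction:
print(absolute_magnitude(5.0, 100.0))  # 0.0
```

Subtracting the extinction term corrects for dust dimming the star, so the result reflects the star's intrinsic brightness at the standard 10 pc distance.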
Phi4 Ceti
[ "Astronomy" ]
269
[ "Cetus", "Constellations" ]
41,469,087
https://en.wikipedia.org/wiki/Aknadinine
Aknadinine (also known as 4-demethylhasubanonine) is an opioid alkaloid isolated from members of the genus Stephania (Stephania cepharantha and Stephania hernandifolia). Structurally it is a member of the hasubanan family of alkaloids and features an isoquinoline substructure. See also Hasubanonine References Opioids Pyrrolidine alkaloids
Aknadinine
[ "Chemistry" ]
98
[ "Alkaloids by chemical classification", "Pyrrolidine alkaloids" ]
41,469,103
https://en.wikipedia.org/wiki/Bisnortilidine
Bisnortilidine is an opioid metabolite. It is formed from tilidine by demethylation in the liver. References Human metabolites Cyclohexenes
Bisnortilidine
[ "Chemistry" ]
42
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
41,469,136
https://en.wikipedia.org/wiki/Chloroxymorphamine
Chloroxymorphamine is an opioid and a derivative of oxymorphone which binds irreversibly as an agonist to the μ-opioid receptor. See also β-Chlornaltrexamine Naloxazone Oxymorphazone References Opioids Alkylating agents Mu-opioid receptor agonists Irreversible agonists Nitrogen mustards Chloroethyl compounds Tertiary alcohols Cyclohexanols
Chloroxymorphamine
[ "Chemistry" ]
102
[ "Alkylating agents", "Reagents for organic chemistry" ]
41,469,142
https://en.wikipedia.org/wiki/Clocinnamox
Clocinnamox (CCAM or C-CAM; developmental code name NIH-10443) is a selective and irreversible antagonist of the μ-opioid receptor. Closely related compounds include methocinnamox (MCAM) and methoclocinnamox (MCCAM). They were derived via structural modification of buprenorphine. Clocinnamox was first described in the scientific literature by 1992. References 4-Chlorophenyl compounds 4,5-Epoxymorphinans Abandoned drugs Acrylamides Amides Cyclopropyl compounds Heterocyclic compounds with 5 rings Mu-opioid receptor antagonists Nitrogen heterocycles Oxygen heterocycles
Clocinnamox
[ "Chemistry" ]
156
[ "Amides", "Drug safety", "Functional groups", "Abandoned drugs" ]
41,469,157
https://en.wikipedia.org/wiki/Esmethadone
Esmethadone (developmental code name REL-1017), also known as dextromethadone, is the (S)-enantiomer of methadone. It acts as an N-methyl-D-aspartate receptor (NMDAR) antagonist, among other actions. Unlike levomethadone, it has low affinity for opioid receptors and lacks significant respiratory depressant action and abuse liability. Esmethadone is under development for the treatment of major depressive disorder. As of August 2022, it is in phase 3 clinical trials for this indication. There is an asymmetric synthesis available to prepare both esmethadone (S-(+)-methadone) and levomethadone (R-(−)-methadone). References Enantiopure drugs Experimental drugs NMDA receptor antagonists Opioids
Esmethadone
[ "Chemistry" ]
220
[ "Stereochemistry", "Enantiopure drugs" ]
41,469,204
https://en.wikipedia.org/wiki/Epistemological%20Letters
Epistemological Letters (French: Lettres Épistémologiques) was a hand-typed, mimeographed "underground" newsletter about quantum physics that was distributed to a private mailing list, described by the physicist and Nobel laureate John Clauser as a "quantum subculture", between 1973 and 1984. Distributed by a Swiss foundation, the newsletter was created because mainstream academic journals were reluctant to publish articles about the philosophy of quantum mechanics, especially anything that implied support for ideas such as action at a distance. Thirty-six or thirty-seven issues of Epistemological Letters appeared, each between four and eighty-nine pages long. Several well-known scientists published their work there, including the physicist John Bell, the originator of Bell's theorem. According to John Clauser, much of the early work on Bell's theorem was published only in Epistemological Letters. Interpretations of quantum physics According to the Irish physicist Andrew Whitaker, a powerful group of physicists centred on Niels Bohr, Wolfgang Pauli and Werner Heisenberg made clear that "there was no place in physics – no jobs in physics! – for anybody who dared to question the Copenhagen interpretation" (Bohr's interpretation) of quantum theory. John Clauser writes that any inquiry into the "wonders and peculiarities" of quantum mechanics and quantum entanglement that went outside the "party line" was prohibited, in what he argues amounted to an "evangelical crusade". Samuel Goudsmit, editor of the prestigious Physical Review and Physical Review Letters until he retired in 1974, imposed a formal ban on the philosophical debate, issuing instructions to referees that they should feel free to reject material that even hinted at it. Alternative publications Articles questioning the mainstream position were therefore distributed in alternative publications, and Epistemological Letters became one of the main conduits. 
The newsletter was sent out by the L'Institut de la Méthode of the Association Ferdinand Gonseth, which had been established in honour of the philosopher Ferdinand Gonseth. The newsletter described itself as "an open and informal journal allowing confrontation and ripening of ideas before publishing in some adequate journal." According to Clauser, it announced that the usual stigma against discussing certain ideas, such as hidden-variable theories, was to be absent. The newsletter's editors included Abner Shimony. Several eminent physicists published their material in Epistemological Letters, including John Bell, the originator of Bell's theorem. Clauser writes that much of the early work on Bell's theorem was published only in Epistemological Letters. Bell's paper, "The Theory of Local Beables" (beable, as opposed to observable, referring to something that exists independently of any observer), appeared there in March 1976. Abner Shimony, John Clauser and Michael Horne published responses to it, also in the Letters. Henry Stapp was another prominent physicist who wrote for the Letters. H. Dieter Zeh published a paper in the Letters on the many-minds interpretation of quantum mechanics in 1981. Digitization of the Epistemological Letters Don Howard, Professor of Philosophy at the University of Notre Dame, was a Ph.D. student of Abner Shimony, one of the editors of the newsletter; as such, he had an almost complete set. In collaboration with Sebastian Murgueitio Ramirez (then his graduate student, now assistant professor of philosophy at Purdue University), the set was completed and digitized in 2018–2019, in order to make this very rare document available to the community of historians and philosophers of physics. The entire set is available to the public at the Epistemological Letters digital archive, and the original newsletter is in Special Collections at the University Library. 
See also Fundamental Fysiks Group Physics Physique Физика References Further reading Friere, Olival (2003). "A Story Without an Ending: The Quantum Physics Controversy 1950–1970", Science & Education, 12, pp. 573–586. Gusterson, Hugh (18 August 2011). "Physics: Quantum outsiders", Nature, 476, pp. 278–279. "Epistemological Letters", digital archive at the University of Notre Dame. External links Index to the Letters at Information Philosopher Contemporary philosophical literature Defunct journals English-language journals Interpretations of quantum mechanics Quantum mind Philosophy of physics Philosophy of science literature Physics journals Academic journals established in 1973 Publications disestablished in 1984 Underground press Irregular journals
Epistemological Letters
[ "Physics" ]
924
[ "Philosophy of physics", "Applied and interdisciplinary physics", "Quantum mechanics", "Quantum mind", "Interpretations of quantum mechanics" ]
41,469,238
https://en.wikipedia.org/wiki/Thevinone
Thevinone is a derivative of thebaine. References Opioids
Thevinone
[ "Chemistry" ]
17
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
41,469,269
https://en.wikipedia.org/wiki/Cytochrophin-4
Cytochrophin-4 is an opioid peptide. References Opioid peptides
Cytochrophin-4
[ "Chemistry" ]
22
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
41,469,325
https://en.wikipedia.org/wiki/Nu%20Leporis
Nu Leporis, Latinized from ν Leporis, is a probable astrometric binary star system in the constellation Lepus. It is visible to the naked eye as a faint star with an apparent visual magnitude of 5.29. Based upon an annual parallax shift of 7.70 mas as seen from the Earth, it is 420 light years from the Sun. The visible component is a B-type star with an estimated 3.3 times the mass of the Sun. Lesh (1968) gave a stellar classification of B7 IVnn, which would indicate this is a somewhat evolved subgiant star. The 'nn' notation indicates especially "nebulous" absorption lines caused by rapid rotation. Houk and Smith-Moore (1978) listed it as B7/8 V, suggesting this is instead a B-type main sequence star that has not yet consumed all the hydrogen at its core. Nu Leporis is spinning rapidly with a projected rotational velocity of 285 km/s. The star has a radius about three times that of the Sun and is radiating 138 times the Sun's luminosity from its photosphere at an effective temperature of 12,417 K. References B-type main-sequence stars Astrometric binaries Lepus (constellation) Leporis, Nu Durchmusterung objects Leporis, 07 034863 024873 1757
Nu Leporis
[ "Astronomy" ]
288
[ "Lepus (constellation)", "Constellations" ]
41,469,403
https://en.wikipedia.org/wiki/Kappa2%20Lupi
{{DISPLAYTITLE:Kappa2 Lupi}} κ2 Lupi, Latinized as Kappa2 Lupi, is a white-hued star in the southern constellation of Lupus, and forms a double star with Kappa1 Lupi. It is visible to the naked eye as a dim point of light with an apparent visual magnitude of 5.64. This star is located around 181 light years distant from the Sun. It is an A-type main-sequence star with a stellar classification of A3/5V. However, Levato (1973) classed the star as A3IV, which would suggest it is already evolving off the main sequence. The star has a high rotation rate, showing a projected rotational velocity of 160 km/s. References A-type main-sequence stars A-type subgiants Lupus (constellation) Lupi, Kappa2 Durchmusterung objects 134482 074380 5647
Kappa2 Lupi
[ "Astronomy" ]
194
[]
41,470,199
https://en.wikipedia.org/wiki/Kappa%20Hydrae
κ Hydrae, Latinised as Kappa Hydrae, is a solitary star in the equatorial constellation of Hydra. Its apparent visual magnitude is 5.06, which is bright enough to be faintly visible to the naked eye. The distance to this star is around , based upon an annual parallax shift of 7.48 mas. It may be a variable star, meaning it undergoes repeated fluctuations in brightness by at least 0.1 magnitude. This is an evolving B-type star with a stellar classification of B4 IV/V, having a luminosity class intermediate between a subgiant and a giant star. It has an estimated five times the mass of the Sun and 3.4 times the Sun's radius. Kappa Hydrae has a high rate of spin with a projected rotational velocity of 115.0 km/s, and is only about 31 million years old. The star radiates 328 times the solar luminosity from its outer atmosphere at an effective temperature of 16,150 K. Name This star was one of the set assigned by the 16th century astronomer Al Tizini to Al Sharāsīf (ألشراسيف), the Ribs (of Hydra), which included the stars from β Crateris westward through κ Hydrae. According to the catalogue of stars in the Technical Memorandum 33-507 - A Reduced Star Catalog Containing 537 Named Stars, Al Sharāsīf was the title for two stars: β Crateris as Al Sharasīf II and κ Hydrae as Al Sharasīf I. In Chinese, (), meaning Extended Net, refers to an asterism consisting of Kappa Hydrae, Upsilon1 Hydrae, Lambda Hydrae, Mu Hydrae, HD 87344, and Phi1 Hydrae. Consequently, Kappa Hydrae itself is known as (), "the Fifth Star of Extended Net". References B-type main-sequence stars B-type subgiants Suspected variables Hydra (constellation) Hydrae, Kappa Durchmusterung objects Hydrae, 38 083754 047452 3849
Kappa Hydrae
[ "Astronomy" ]
428
[ "Hydra (constellation)", "Constellations" ]
41,470,260
https://en.wikipedia.org/wiki/Chi2%20Hydrae
{{DISPLAYTITLE:Chi2 Hydrae}} Chi2 Hydrae, Latinised from χ2 Hydrae, is a binary star system in the equatorial constellation of Hydra. Based upon an annual parallax shift of 4.6 mas as seen from Earth, it is located roughly 685 light years from the Sun. It is visible to the naked eye with a combined apparent visual magnitude of about 5.7. This is a detached eclipsing binary star system with an orbital period of 2.27 days and an essentially circular orbit having a measured eccentricity of 0.00. The eclipse of the primary by the secondary component reduces the visual magnitude of the system by 0.29, while the eclipse of the secondary diminishes the magnitude by 0.27. The primary, component A, is a magnitude 5.85 B-type star with a stellar classification of B8 III-IVe, suggesting it may be part way along the path of evolving into a giant star from a subgiant. It has about 3.6 times the mass of the Sun and 4.4 times the Sun's radius, although it may be tidally deformed since its radius is 86% of the Roche radius. With an estimated age of 158 million years, it has a projected rotational velocity of 112 km/s. Component B is a magnitude 7.57 B-type main sequence star with a class of B8.5 V. It has 2.6 times the Sun's mass and 2.16 times the radius of the Sun. The star is filling 60% of its Roche radius. References B-type giants B-type subgiants B-type main-sequence stars Eclipsing binaries Hydra (constellation) Hydrae, Chi2 096314 054255 4317 Durchmusterung objects
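The "combined apparent visual magnitude of about 5.7" above follows from the two component magnitudes (5.85 and 7.57, taken from the text): fluxes add, magnitudes do not. A minimal sketch of the conversion:

```python
import math

# Sketch of combining magnitudes for an unresolved binary: convert each
# magnitude to a relative flux with 10**(-0.4 * m), sum the fluxes, then
# convert the total back to a magnitude with -2.5 * log10(flux).

def combined_magnitude(*mags):
    total_flux = sum(10.0 ** (-0.4 * m) for m in mags)
    return -2.5 * math.log10(total_flux)

print(round(combined_magnitude(5.85, 7.57), 2))  # 5.65, i.e. "about 5.7"
```

Because the magnitude scale is logarithmic, the fainter secondary adds only a few hundredths of a magnitude to the primary's brightness.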
Chi2 Hydrae
[ "Astronomy" ]
377
[ "Hydra (constellation)", "Constellations" ]
41,470,653
https://en.wikipedia.org/wiki/Society%20of%20Pharmacovigilance%2C%20India
The Society of Pharmacovigilance, India (SoPI), is an Indian national non-profit scientific organisation, which aims at organizing training programmes and providing expertise in pharmacovigilance and enhancing all aspects of the safe and proper use of medicines. The International Society of Pharmacovigilance (ISoP) granted the status of 'associated society' to the Society of Pharmacovigilance, India (SoPI), making it the second such professional society in the world after ISoP itself. The founder of SoPI is KC Singhal. Annual Events 14th Annual Conference of SoPI (SoPICON 2014), Aligarh, India 19th Annual Conference of SoPI (SoPICON 2022), Online/Hybrid Official Publication Journal of Pharmacovigilance and Drug Safety (ISSN: 0972-8899) Office Bearers Prof. KC Singhal (Patron) Dr. Sandeep Agarwal (President) Prof. Syed Ziaur Rahman (General Secretary) See also Uppsala Monitoring Centre (WHO) Council for International Organizations of Medical Sciences EudraVigilance References Further reading External links Official Website of SoPI Society of Pharmacovigilance, India Official Journal of SoPI Journal of Pharmacovigilance and Drug Safety Pharmaceuticals policy Drug safety Pharmacological societies Pharmacology journals Clinical research Clinical trials Pharmacological classification systems
Society of Pharmacovigilance, India
[ "Chemistry" ]
288
[ "Pharmacological classification systems", "Pharmacology", "Pharmacological societies", "Drug safety" ]
41,470,905
https://en.wikipedia.org/wiki/Space%20selfie
A space selfie is a selfie (self-portrait photograph typically posted on social media sites) that is taken in outer space. These include selfies taken by astronauts (also known as astronaut selfies), by machines (also known as space robot selfies and rover selfies) and by indirect methods. Astronauts The first known space selfie during an EVA was taken by Buzz Aldrin during the Gemini 12 mission (an earlier selfie inside the capsule was taken by Michael Collins on Gemini 10). The extra-vehicular activity (EVA) equipment used by astronauts during spacewalks contains a specially designed camera for photography in outer space. The main purpose of the EVA camera is to take pictures of the subjects related to the missions. There have been many space selfies, some of which use the visor of another astronaut's helmet as the mirror. Early space selfies taken after the word "selfie" was first used in 2002, and without assistance from another astronaut, include those of Donald Pettit and Stephen Robinson. Pettit took one during Expedition 6 in January 2003. Robinson took his during the repair of the Space Shuttle Discovery on August 3, 2005, as part of the STS-114 mission. Another notable space selfie was taken by Japanese astronaut Akihiko Hoshide during a six-hour, 28-minute spacewalk on September 5, 2012. Hoshide's photo became a viral phenomenon after Commander Chris Hadfield uploaded the photo to his Twitter account on September 30, 2013. Coincidentally, Oxford University Press, the publisher of the Oxford English Dictionary, announced in November 2013 that "selfie" was the word of the year for 2013. The picture topped many selfie lists of the year. Another space selfie of Hoshide also showed up on Instagram and appeared on a list of top selfies of 2013. 
Machines Space selfies can be dated back to 1976 when the lander of the Viking 2 mission took the photo of its deck after landing on Mars; however they were not considered by Discovery News as true selfies in its list of top 10 space robot selfies. In 1989, the Galileo spacecraft took a selfie using its near-infrared mapping spectrometer (NIMS). The image was taken in order to judge how parts of the spacecraft would block the instrument's view. The resulting image was fuzzy and warped by Galileo's spin. An unusual approach was taken in 2010 by IKAROS, launched by Japan Aerospace Exploration Agency (JAXA). It included two wireless cameras that were ejected out of the spacecraft for the sole purpose of taking "hands-free" space selfies. A blog entry about the photos was posted in 2010 and the link was posted on Twitter in 2013. Orbital Express On June 22, 2007, DARPA's Orbital Express spacecraft captured perhaps the first space selfie by an autonomous robot. Taken near the end of mission on July 22, 2007, the selfie was intended to capture a family portrait of the two spacecraft in a mated configuration. The selfie shows the ASTRO "servicing satellite" at left, and the NEXTSat "client satellite" at right. The robot arm used to capture the selfie can be seen in white at the bottom of the frame. The photo has a dark, high-contrast quality to it due to the use of the arm-mounted camera, not intended for general photography, but used to autonomously track and acquire the NEXTSat. Curiosity rover Curiosity, which landed on Mars in 2012, was equipped with the Mars Hand Lens Imager (MAHLI) camera. It can maneuver its robotic arm and turn the attached camera around to take its head shots. Discovery News described the maneuver as the way to take a truly authentic selfie and gave it the title King of Selfies in 2013. 
The first space selfie on another planet was taken by the Curiosity rover on September 7, 2012, based on the local time at the Jet Propulsion Laboratory, the base of operations in Pasadena, California. It was taken while the clear dust cover of the lens was closed, giving a blurry image. The image was slightly modified and posted on its Facebook account on September 8, 2012, with the message: On November 19, 2013, one day after Oxford announced that "selfie" was the word of the year, the @MarsCuriosity Twitter account posted a space selfie with the message: Opportunity rover In February 2018, the Opportunity rover used its MER Microscopic Imager to take a selfie to mark 5,000 sols on Mars. The Microscopic Imager has a fixed focus and a fairly narrow field of view, so the selfie had to be made up of a series of stitched, out-of-focus shots. To reduce the amount of data that needed to be transmitted, the images were scaled down in size and compressed before being sent from the rover. This was the first time images from the Microscopic Imager had been scaled down by the rover. Indirect methods For a brief period, an alternative method was presented by which a person could take a space selfie indirectly, without being in outer space. This was promoted as part of the crowdfunding efforts for Planetary Resources's ARKYD mission. The ARKYD "space selfie" method would have allowed donors to upload their own photos to a telescope orbiting the Earth; the telescope would have had a robotic arm equipped with a camera and a small screen to display the picture of the donor on one surface of the telescope, and the on-screen image of the donor (with the Earth as the background) would have been visible to the camera, allowing a space selfie to be taken. A similar service was launched in 2014 by the Belgian startup SpaceBooth. 
The SpaceBooth Low Earth Orbit pico-satellite will project uploaded images in front of a transparent window and then take a picture of the projection with space in the background. The space selfie will then be sent back to Earth. In November 2019, Spelfie, the "selfie from space", was launched to allow users to take a selfie at the exact time that a satellite camera captures their location from space. Users of the app click on the event they are attending; then, once they are at the venue, the app provides coordinates so the user knows precisely where to position themselves and at what time. They then take a photo of themselves at the moment the satellite is taking its photo, and later the same day the app sends back the satellite image juxtaposed with the selfie, to be viewed in its gallery. The tool, which uses Airbus satellites, was demonstrated as part of a BBC documentary showing a village of people spelling out the words "Act Now" on a beach in Bali, with the image captured on camera from space. Spelfie is primarily aimed at people attending major sports and cultural events, but for its second phase of development the app will extend beyond specific events, allowing users to give a specific location anywhere in the world and be alerted if the satellite is going to pass overhead. CrunchLabs, founded by YouTuber Mark Rober, plans to offer space selfies as a service via its satellite SatGus. Customers can upload their selfies, which are then displayed on the satellite's external screen and photographed using an onboard camera with Earth in the background. See also The Blue Marble The Day the Earth Smiled First images of Earth from space Pale Blue Dot References External links 2000s neologisms Astrophotography Internet culture Spaceflight Selfies
Space selfie
[ "Astronomy" ]
1,527
[ "Spaceflight", "Outer space" ]
41,471,188
https://en.wikipedia.org/wiki/Oil%20Pollution%20Act%20of%201973
The Oil Pollution Act of 1973, or Oil Pollution Act Amendments of 1973, 33 U.S.C. Chapter 20 §§ 1001-1011, was a United States federal law which amended the United States Statute . The Act of Congress sustained the United States' commitment to control the discharge of fossil fuel pollutants from nautical vessels and to acknowledge the embargo of coastal zones in trans-boundary waters. The H.R. 5451 legislation was passed by the 93rd United States Congress and enacted by the 37th President of the United States, Richard Nixon, on October 4, 1973. History of OILPOL The International Convention for the Prevention of Pollution of the Sea by Oil (OILPOL) was an international convention organized by the United Kingdom in 1954. The convention was held in London, England, from April 26 to May 12, 1954. The international meeting was convened to address the disposal of hazardous waste that could yield toxic contamination of marine ecosystems. The original text of the International Convention for the Prevention of Pollution of the Sea by Oil, 1954, was authored in English and French. The environmental protocol was amended in 1962, 1969, and 1971. The 1971 OILPOL amendments imposed irrevocable oceanic jurisdictions for the Great Barrier Reef, located in the Coral Sea. The international convention amendments introduced design control provisions for sea-going vessels which specified tank formation arrangement and tank size limitations for nautical transport ships. Provisions of the Act The 1973 amendments accentuated the International Convention for the Prevention of Pollution of the Sea by Oil, 1954, by complying with the 1969 and 1971 international convention agreement amendments. Definitions Oily mixture means a mixture with any oil content. Discharge in relation to instantaneous rate of discharge of oil content means the rate of discharge of oil in liters per hour at any instant divided by the speed of the ship in knots at the same instant. 
Discharge of oil or oily mixture from a ship is prohibited unless I.) the ship is proceeding en route II.) the instantaneous rate of discharge of oil content does not exceed per Discharge of oil or oily mixture from a ship other than a tanker is prohibited unless I.) the oil content of the discharge is less than one hundred parts per one million parts of the mixture II.) the discharge is made as far as practicable from the nearest land Discharge of oil or oily mixture from tankers is prohibited unless I.) discharges from machinery space bilges are governed by the above provisions for ships other than tankers II.) the total quantity of oil discharged on a ballast voyage does not exceed 1/15000 of the total cargo carrying capacity III.) the tanker is more than from the nearest land Nearest land means more than from a coastline Secretary means the Secretary of the department which governs the operations of the United States Coast Guard Tank Ship Construction Standards Tankers built in the United States shall be constructed in accordance with the provisions of Annex C of the International Convention for the Prevention of Pollution of the Sea by Oil, as amended in 1971, relating to tank arrangement and limitation of tank size. The construction standard is effective for tankers built in the United States as follows: I.) delivery of the tanker is after January 1, 1977 II.) delivery of the tanker is not later than January 1, 1977, and the building contract is placed after January 1, 1972 III.) in cases where no building contract has previously been placed, the keel is laid, or the tanker is at a similar stage of construction, after June 30, 1972 United States tankers are required to have on board a certificate of compliance attesting to the construction of the nautical vessel in accordance with Annex C of the convention as specified by tank arrangement and limitation of tank size. 
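The two quantitative relationships stated above (the definition of the instantaneous rate of discharge and the 1/15000 ballast-voyage limit for tankers) can be sketched as short helpers. This is an illustrative sketch, not text from the Act: the function names are invented, and the numerical ceiling for the instantaneous rate, which is elided in the text above, is deliberately not encoded.

```python
def instantaneous_discharge_rate(oil_litres_per_hour, speed_knots):
    """Instantaneous rate of discharge of oil content, per the Act's definition:
    litres of oil discharged per hour divided by the ship's speed in knots at
    the same instant (i.e. litres per nautical mile travelled)."""
    if speed_knots <= 0:
        raise ValueError("ship must be proceeding en route (speed > 0)")
    return oil_litres_per_hour / speed_knots


def ballast_voyage_limit_ok(total_discharge_litres, cargo_capacity_litres):
    """Tanker ballast-voyage condition: total oil discharged must not exceed
    1/15000 of the total cargo carrying capacity."""
    return total_discharge_litres <= cargo_capacity_litres / 15000
```

For example, 600 litres per hour at 12 knots gives an instantaneous rate of 50 litres per nautical mile.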
Zone Prohibitions Australian Zone - northeastern coast of Australia or Queensland designated by a line drawn from a point on the coast of Australia in latitude 11 degrees south, longitude 142 degrees 08 minutes east () to a point in latitude 10 degrees 35 minutes south, longitude 141 degrees 55 minutes east (). Great Barrier Reef - Coral reef protection zone of the Earth's largest coral reef system. thence to a point latitude 10 degrees 00 minutes south, longitude 142 degrees 00 minutes east () thence to a point latitude 9 degrees 10 minutes south, longitude 143 degrees 52 minutes east () thence to a point latitude 9 degrees 00 minutes south, longitude 144 degrees 30 minutes east () thence to a point latitude 13 degrees 00 minutes south, longitude 144 degrees 00 minutes east () thence to a point latitude 15 degrees 00 minutes south, longitude 146 degrees 00 minutes east () thence to a point latitude 18 degrees 00 minutes south, longitude 147 degrees 00 minutes east () thence to a point latitude 21 degrees 00 minutes south, longitude 153 degrees 00 minutes east () thence to a point on the coast of Australia in latitude 24 degrees 42 minutes south, longitude 153 degrees 15 minutes east () Oil Record Book The oil record book is to be completed on each occasion, on a tank-to-tank basis, whenever any of the following operations take place on a ship and tanker. Repeal of Oil Pollution Act of 1973 The 1973 United States public law was repealed by the enactment of Act to Prevent Pollution from Ships on October 21, 1980. 
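The zone boundaries above are given in whole degrees and minutes. A minimal sketch of the standard conversion to signed decimal degrees (the function name is illustrative, not part of the Act):

```python
def dms_to_decimal(degrees, minutes, hemisphere):
    """Convert degrees and arc-minutes to signed decimal degrees;
    south latitudes and west longitudes come out negative."""
    value = degrees + minutes / 60.0
    return -value if hemisphere.upper() in ("S", "W") else value


# First boundary point of the Australian Zone: 11 degrees S, 142 degrees 08 minutes E
lat = dms_to_decimal(11, 0, "S")    # -11.0
lon = dms_to_decimal(142, 8, "E")   # about 142.1333
```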
See also Ballast tank Ballast water discharge and the environment Environmental impact of shipping International Maritime Organization Marine Protection, Research, and Sanctuaries Act of 1972 MARPOL 73/78 Oil Pollution Act of 1924 Oil Pollution Act of 1990 United States energy law References External links 93rd United States Congress 1973 in the environment 1973 in American law Ocean pollution United States federal environmental legislation United States energy law
Oil Pollution Act of 1973
[ "Chemistry", "Environmental_science" ]
1,136
[ "Ocean pollution", "Water pollution" ]
41,472,254
https://en.wikipedia.org/wiki/C12H23N
{{DISPLAYTITLE:C12H23N}} The molecular formula C12H23N (molar mass: 181.32 g/mol, exact mass: 181.1830 u) may refer to: Dicyclohexylamine Leptacline
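The molar mass quoted on formula pages like this one is simply a sum of standard atomic weights multiplied by atom counts. A minimal sketch (the rounded atomic weights below are assumed values, not taken from this page):

```python
# Approximate standard atomic weights in g/mol (rounded; assumed values).
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(formula):
    """Sum atomic weight times atom count over a formula given as
    a dict, e.g. {"C": 12, "H": 23, "N": 1} for C12H23N."""
    return sum(ATOMIC_WEIGHTS[element] * count for element, count in formula.items())

# C12H23N works out to roughly 181.3 g/mol, matching the value quoted above.
mass = molar_mass({"C": 12, "H": 23, "N": 1})
```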
C12H23N
[ "Chemistry" ]
59
[ "Isomerism", "Set index articles on molecular formulas" ]
41,472,882
https://en.wikipedia.org/wiki/C19H23NO
{{DISPLAYTITLE:C19H23NO}} The molecular formula C19H23NO (molar mass: 281.39 g/mol, exact mass: 281.1780 u) may refer to: Alimadol Blarcamesine (ANAVEX2-73) Cinnamedrine Diphenylpyraline Naphyrone SCH-5472
C19H23NO
[ "Chemistry" ]
83
[ "Isomerism", "Set index articles on molecular formulas" ]
41,472,956
https://en.wikipedia.org/wiki/C13H23N
{{DISPLAYTITLE:C13H23N}} The molecular formula C13H23N (molar mass: 193.33 g/mol) may refer to: Adapromine Precoccinelline
C13H23N
[ "Chemistry" ]
48
[ "Isomerism", "Set index articles on molecular formulas" ]
41,473,916
https://en.wikipedia.org/wiki/68%20Draconis
68 Draconis is the Flamsteed designation for a star in the northern circumpolar constellation of Draco. It has an apparent visual magnitude of 5.69, so, according to the Bortle scale, it is faintly visible to the naked eye from suburban skies at night. Measurements made with the Gaia spacecraft show an annual parallax shift of , which is equivalent to a distance of around from the Sun. It is moving closer to the Earth with a heliocentric radial velocity of –14.6 km/s. The star has a relatively high proper motion, traversing the celestial sphere at a rate of per year. The stellar classification of 68 Draconis is F5 V, indicating that it is a main sequence star that is fusing hydrogen into helium at its core to generate energy. The star appears to be over-luminous for a member of its class, being 0.73 magnitudes brighter than expected. This may indicate that this is a binary system with an unresolved secondary component. It has 15% more mass than the Sun but is less than half as old, with an estimated age of 1.7 billion years. The star is radiating 11 times the Sun's luminosity from its photosphere at an effective temperature of 6,137 K, giving it the yellow-white hue of an F-type star. References F-type main-sequence stars Draco (constellation) 7727 BD+61 1983 Draconis, 68 192455 099500
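The parallax value itself is elided in the text above, but the conversion it relies on is the standard one: distance in parsecs is the reciprocal of the annual parallax in arcseconds. A minimal sketch (the example parallax is hypothetical, not the measured value for 68 Draconis):

```python
def parallax_to_distance(parallax_mas):
    """Distance from annual parallax: d [pc] = 1000 / p [mas].
    Returns (parsecs, light-years); 1 pc is about 3.2616 ly."""
    parsecs = 1000.0 / parallax_mas
    return parsecs, parsecs * 3.2616


# Hypothetical 10 mas parallax -> 100 pc, about 326 light-years.
pc, ly = parallax_to_distance(10.0)
```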
68 Draconis
[ "Astronomy" ]
312
[ "Constellations", "Draco (constellation)" ]
41,475,151
https://en.wikipedia.org/wiki/Swell%20Radio
Swell Radio was a mobile radio streaming application that learned user listening preferences based on listening behavior, community filtering, and a proprietary algorithm. Originally designed for use while commuting to and from work, the service focused on delivering spoken-word audio content to users. Major streaming partners included ABC News Radio, NPR, PRI, and TED. According to the company website, the app was available on iOS devices worldwide, but content was customized to the United States and Canada. The application was "ad-free" and the company was not monetizing it. In July 2014, Apple acquired the technology behind the Swell app for $30 million, discontinuing the app and folding its backbone and technology into Apple Podcasts. History Concept.io, creator of Swell Radio, raised $5.4 million in Series A funding led by venture capital firm Draper Fisher Jurvetson. The application originally launched on the iOS platform in Canada in early 2013 and officially launched in the United States on June 27, 2013. References Further reading 6 Apps That Turn Your Phone into a Radio Swell's iPhone App Aims to Take the Pain Out of Podcasts A Swell App for Discovering Podcasts Swell App Makes Podcasts Work For Smartphone Generation Streaming media systems IOS software Podcasting software 2013 software Apple Inc. acquisitions 2014 mergers and acquisitions Defunct online companies of the United States
Swell Radio
[ "Technology" ]
271
[ "Mobile technology stubs", "Computer systems", "Streaming media systems", "Telecommunications systems", "Mobile software stubs" ]
41,475,222
https://en.wikipedia.org/wiki/Aceturic%20acid
Aceturic acid (N-acetylglycine) is a derivative of the amino acid glycine. The conjugate base of this carboxylic acid is called aceturate, a term used for its esters and salts. Preparation Aceturic acid can be prepared by warming glycine either with a slight excess of acetic anhydride in benzene, or with an equal molar amount of acetic anhydride in glacial (concentrated) acetic acid. See also Aceglutamide (α-N-Acetylglutamine) N-Acetylaspartic acid N-Acetylcysteine N-Acetylglutamic acid N-Acetylleucine Nε-Acetyllysine N-Acetyltyrosine Aceburic acid References Acetamides
Aceturic acid
[ "Chemistry" ]
179
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
41,476,229
https://en.wikipedia.org/wiki/C4H7NO3
{{DISPLAYTITLE:C4H7NO3}} The molecular formula C4H7NO3 (molar mass: 117.104 g/mol) may refer to: Aceturic acid L-Aspartic-4-semialdehyde Molecular formulas
C4H7NO3
[ "Physics", "Chemistry" ]
60
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
41,476,532
https://en.wikipedia.org/wiki/7%20Persei
7 Persei is a star in the constellation Perseus, located 774 light years away from the Sun. While the star bears the Bayer designation Chi Persei, it is not to be confused with the entire cluster NGC 884, commonly referred to as Chi Persei. It is faintly visible to the naked eye as a dim, yellow-hued star with an apparent visual magnitude of 5.99. This object is moving closer to the Earth with a heliocentric radial velocity of −12.5 km/s. This is an evolved giant star with a stellar classification of G7 III, most likely (93% chance) on the horizontal branch. At the age of 191 million years, it has 3.84 times the mass of the Sun but has expanded to 24 times the Sun's radius. The star is radiating 316 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,974 K. References G-type giants Horizontal-branch stars Perseus (constellation) Persei, Chi Durchmusterung objects Persei, 07 013994 010729 0662
7 Persei
[ "Astronomy" ]
237
[ "Perseus (constellation)", "Constellations" ]
41,477,668
https://en.wikipedia.org/wiki/Clowes%E2%80%93Campusano%20LQG
The Clowes–Campusano LQG (CCLQG; also called LQG 3 and U1.28) is a large quasar group, consisting of 34 quasars and measuring about 2 billion light-years across. It is one of the largest known superstructures in the observable universe. It is located near the larger Huge-LQG. It was discovered by the astronomers Roger Clowes and Luis Campusano in 1991. Characteristics Lying at a distance of 9.5 billion light years, the CCLQG is a grouping of 34 individual quasars (highly luminous active galactic nuclei powered by supermassive black holes) spanning a region roughly 2 billion light-years in length and about 1 billion light years wide, making it one of the largest and most exotic cosmic structures known in the observable universe. It was named U1.28 because of its average redshift of 1.28, and is located in the constellation of Leo. It is also notable because it is located on the ecliptic, the line along which the Sun appears to travel over the course of the year. It is 1.8 billion light-years away from the Huge-LQG, a group of 73 quasars discovered in 2012. Its proximity to the Huge-LQG has attracted the attention of scientists. First, because the two LQGs are so close together, the region where they are located is different, or "lumpy", when compared to other regions of the universe of the same size and redshift. Second, because of their close locations, it has been suggested that the two structures are really a single structure connected by a hidden intergalactic filament; however, no such evidence has been found. See also CfA2 Great Wall Galaxy filament Hercules–Corona Borealis Great Wall Large-scale structure of the cosmos List of largest cosmic structures Pisces–Cetus Supercluster Complex Sloan Great Wall References Leo (constellation) Galaxy filaments Quasars Large quasar groups Large-scale structure of the cosmos Astronomical objects discovered in 1991
Clowes–Campusano LQG
[ "Astronomy" ]
453
[ "Leo (constellation)", "Constellations" ]
41,477,830
https://en.wikipedia.org/wiki/9%20Persei
9 Persei is a single variable star in the northern constellation Perseus, located around 4,300 light years away from the Sun. It has the Bayer designation i Persei; 9 Persei is the Flamsteed designation. This body is visible to the naked eye as a faint, white-hued star with an apparent visual magnitude of about 5.2. It is moving closer to the Sun with a heliocentric radial velocity of −15.2 km/s. The star is a member of the Perseus OB1 association of co-moving stars. This is a blue supergiant with a stellar classification of A2 Ia, a massive star that has used up its core hydrogen and is now fusing heavier elements. It is an Alpha Cygni variable (designated V474 Persei), a type of non-radial pulsating variable. It ranges in magnitude from 5.15 down to 5.25. The star has 10.5 times the mass of the Sun and has expanded to 89 times the Sun's radius. It is radiating over 12,000 times the luminosity of the Sun from its swollen photosphere at an effective temperature of 9,840 K. 9 Persei has one visual companion, designated component B, at an angular separation of and magnitude 12.0. References A-type supergiants Alpha Cygni variables Perseus (constellation) Persei, i Persei, 09 BD+55 598 014489 011060 0685 Persei, V474
9 Persei
[ "Astronomy" ]
318
[ "Perseus (constellation)", "Constellations" ]
41,477,858
https://en.wikipedia.org/wiki/57%20Persei
57 Persei, or m Persei, is a suspected triple star system in the northern constellation of Perseus. It is at the lower limit of visibility to the naked eye, having a combined apparent visual magnitude of 6.08. The annual parallax shift of provides a distance measure of 199 light years. 57 Persei is moving closer to the Sun with a radial velocity of about −23 km/s and will make perihelion in around 2.6 million years at a distance of roughly . The primary member, 57 Persei, is a magnitude 6.18, yellow-white hued F-type main-sequence star with a stellar classification of F0 V, indicating it is generating energy by fusing its core hydrogen. It is an estimated 1.6 billion years old and is spinning with a projected rotational velocity of 90 km/s. The star has 1.3 times the mass of the Sun and is radiating 11 times the Sun's luminosity from its photosphere at an effective temperature of around . An unseen companion has been identified via slight changes to the proper motion of the primary. The third possible member of the system, designated component B, is a magnitude 6.87 F-type star at an angular separation of 120.13 arc seconds. This star has a different parallax and space velocity than the primary, so it may just be a wide visual companion. There are three other nearby visual companions that are not physically associated with the 57 Persei system. References F-type main-sequence stars Triple star systems Perseus (constellation) Persei, m Persei, 57 Durchmusterung objects 028704 021242 1434 A-type main-sequence stars
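The 2.6-million-year figure above can be sanity-checked with a back-of-the-envelope calculation, assuming purely radial motion (t = d / |v_r|). This is an illustrative sketch with rounded constants, not the full astrometric solution, which also accounts for transverse motion:

```python
LY_KM = 9.4607e12   # kilometres per light-year (rounded)
YEAR_S = 3.156e7    # seconds per year (rounded)

def time_to_closest_approach_myr(distance_ly, radial_velocity_kms):
    """Rough time until closest approach to the Sun, ignoring transverse
    motion: distance divided by the approach speed. Returns megayears."""
    seconds = distance_ly * LY_KM / abs(radial_velocity_kms)
    return seconds / YEAR_S / 1e6


# 57 Persei: about 199 ly away, approaching at about 23 km/s.
t = time_to_closest_approach_myr(199, -23)  # roughly 2.6 Myr
```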
57 Persei
[ "Astronomy" ]
355
[ "Perseus (constellation)", "Constellations" ]
41,477,875
https://en.wikipedia.org/wiki/42%20Persei
42 Persei is a binary star system in the northern constellation of Perseus. It has the Bayer designation n Persei, while 42 Persei is the Flamsteed designation. The system is visible to the naked eye as a dim, white-hued point of light with an apparent visual magnitude of 5.11. It is located around distant from the Sun, but is drifting closer with a radial velocity of −12.4 km/s. 42 Persei is a single-lined spectroscopic binary with an orbital period of 1.77 days and an eccentricity of just 0.056. It is a variable star, ranging in brightness from magnitude 5.05 to 5.18, and was assumed at discovery to be a close, but detached, eclipsing variable. Closer studies of the light variations and the orbit have shown that the main brightness changes are due to rotation of the distorted primary star, although it is predicted from the likely inclination of the orbit that shallow eclipses could also occur. The visible component is an A-type main-sequence star with a stellar classification of A3V; a star that is fusing its core hydrogen. It has been reported as a mild Am star, but this is considered questionable. The star has twice the mass of the Sun and 3.5 times the Sun's radius. It has a high rate of spin, showing a projected rotational velocity of 91 km/s. The star is radiating 59 times the luminosity of the Sun from its photosphere at an effective temperature of 8,892 K. The unseen companion star is likely to be a dim red dwarf with 38% of the Sun's mass. In Chinese astronomy, 42 Persei is called 天讒 (Pinyin: Tiānchán), meaning Celestial Slander, because this star marks and stands alone in the Celestial Slander asterism of the Hairy Head mansion (see Chinese constellations). References A-type main-sequence stars M-type main-sequence stars Rotating ellipsoidal variables Perseus (constellation) Persei, n BD+32 0667 Persei, 42 023848 017886 1177 Persei, V467 Am stars
42 Persei
[ "Astronomy" ]
454
[ "Perseus (constellation)", "Constellations" ]
41,477,889
https://en.wikipedia.org/wiki/43%20Persei
43 Persei is a binary star system in the northern constellation Perseus. It is visible to the naked eye as a dim, yellow-white hued star with an apparent visual magnitude of 5.28. The system is located around distant from the Sun, based on parallax. This is a double-lined spectroscopic binary with an orbital period of 30.4 days and an eccentricity of 0.6. The primary component is an F-type main-sequence star with a stellar classification of F5V, a star that is fusing its core hydrogen. It has 1.54 times the mass of the Sun, 2.4 times the Sun's radius, and is spinning with a projected rotational velocity of . The star shines 10.8 times brighter than the Sun at an effective temperature of . There are distant companions B (separation 75.5" and magnitude 10.66), C (separation 85.6" and magnitude 12.18), and D (separation 68" and magnitude 13.43). References F-type main-sequence stars Spectroscopic binaries Perseus (constellation) Persei, A BD+50 860 Persei, 43 024546 018453 1210
43 Persei
[ "Astronomy" ]
257
[ "Perseus (constellation)", "Constellations" ]
41,478,545
https://en.wikipedia.org/wiki/Photoaffinity%20labeling
Photoaffinity labeling is a chemoproteomics technique used to attach "labels" to the active site of a large molecule, especially a protein. The "label" attaches to the molecule loosely and reversibly, and has an inactive site which can be converted using photolysis into a highly reactive form, which causes the label to bind more permanently to the large molecule via a covalent bond. The technique was first described in the 1970s. Molecules that have been used as labels in this process are often analogs of complex molecules, in which certain functional groups are replaced with a photoreactive group, such as an azide, a diazirine or a benzophenone. References Molecular biology techniques
Photoaffinity labeling
[ "Chemistry", "Biology" ]
147
[ "Molecular biology techniques", "Molecular biology" ]
41,479,084
https://en.wikipedia.org/wiki/Altered%20Schaedler%20flora
The altered Schaedler flora (ASF) is a community of eight bacterial species: two lactobacilli, one Bacteroides, one spiral bacterium of the Flexistipes genus, and four extremely oxygen sensitive (EOS) fusiform-shaped species. The bacteria are selected for their dominance and persistence in the normal microflora of mice, and for their ability to be isolated and grown in laboratory settings. Germ-free animals, mainly mice, are colonized with ASF for the purpose of studying the gastrointestinal (GI) tract. Intestinal mutualistic bacteria play an important role in affecting gene expression of the GI tract, immune responses, nutrient absorption, and pathogen resistance. The standardized microbial cocktail enabled the controlled study of microbe and host interactions, the roles of microbes, pathogen effects, and intestinal immunity and disease associations, such as cancer, inflammatory bowel disease, diabetes, and other inflammatory or autoimmune diseases. Also, compared to germfree animals, ASF mice have a fully developed immune system, resistance to opportunistic pathogens, and normal GI function and health, and are a good representation of normal mice. History The GI tract is particularly difficult to study due to its complex host-pathogen interactions. With 10^7–10^11 bacteria, 400-plus species, and variations between individuals, there are many complications in the study of a normal gastrointestinal system. For example, it is problematic to assign biological function to specific microbes and community structure, and to investigate the respective immune responses. Furthermore, the varying mouse microbiomes need to be under controlled conditions for repetitions of the experiments. Germfree mice and specific pathogen free (SPF) mice are helpful in addressing some of the issues, but inadequate in many areas. Germfree mice are not a good representation of normal mice, with issues of an enlarged cecum, low reproductive rates, a poorly developed immune system, and reduced health. 
SPF mice still contain varying microbiota, just without certain known pathogen species. There is a need in the scientific field for a known bacterial mixture that is necessary and sufficient for healthy mice. In the mid-1960s, Russell W. Schaedler, M.D., isolated and grew bacteria from conventional and SPF laboratory mice. Aerobic and less oxygen-sensitive anaerobic bacteria are easy to culture. Fusiform-shaped anaerobes and other EOS bacteria are much more difficult to culture, even though they represent the majority of the normal rodent microbiota. He selected for the bacteria that dominated and could be isolated in culture, and then colonized germfree mice with different bacterial combinations. For example, one combination could include Escherichia coli, Streptococcus fecalis, Lactobacillus acidophilus, L. salivarius, Bacteroides distasonis, and an EOS fusiform-shaped Clostridium spp. Certain defined microflora are able to restore germfree mice to resemble normal mice, with reduced cecal volume, restored reproductive ability, colonization resistance, and a well-developed immune system. Named the Schaedler flora, the defined microflora combinations were widely used in gnotobiotic studies. In 1978, the National Cancer Institute requested Roger Orcutt of Charles River Laboratories, whose Ph.D. mentor was Dr. Schaedler, to revise a new microflora for standardizing all of its isolator-maintained nuclear stocks and strains of mice. In what was named "the altered Schaedler flora", four bacteria were kept from the original "Schaedler cocktail" microflora: the two lactobacilli, the Bacteroides, and the EOS fusiform-shaped bacterium. Four more bacteria from the microbiome isolates were added: a spirochete-shaped bacterium and three new EOS fusiform-shaped bacteria. Due to the limited technology of the time, not much was known of the specific bacterial genera and species. These bacteria are persistent and dominant in the GI tracts of normal and SPF mice. 
Confirmation of the correct microbiota presence was limited to examining cell morphology, biochemical traits, and growth characteristics. Dr. Orcutt lamented that he would have included the segmented filamentous bacterium of the small intestine of mice, which is so intimately involved with the host's immune system, in the altered Schaedler flora if it could have been cultured in vitro. However, to this day, over 40 years later, it can still only be maintained in vivo and eludes isolation in pure culture. Bacteria With the recent advancement in biotechnology, researchers were able to determine the precise genera and species of the ASF bacteria using sequence analysis of 16S rRNA. The strains identified are different from the presumptive identities. The distribution of the bacterial species in the gut depends on their need of and aversion to oxygen, flow rate, and substrate abundance, with variability based on age, gender, and other microorganisms present in the mice. ASF 360 and ASF 361 are lactobacilli. Lactobacilli are rod-shaped, Gram-positive, aerotolerant bacteria, and common colonizers of the squamous epithelia of the stomach of mice. ASF 360 was thought to be L. acidophilus. However, 16S rRNA results showed that it is closely related to but distinct from L. acidophilus. ASF 360 is a novel Lactobacillus species, clustered with L. acidophilus and L. lactis. ASF 361 has nearly identical 16S rRNA sequences to L. murinus and L. animalis. Both species are routinely found in the GI tracts of mice and rats. A thorough examination of the two species and strains is necessary to determine the identity of ASF 361 with more confidence. ASF 361 is completely distinct from the L. salivarius that it was believed to be. ASF 360 and ASF 361 colonize in high numbers in the stomach and then slough off and travel through the small intestine and the cecum. ASF 519 is related to B. distasonis, the species it was mistaken for before 16S rRNA sequencing was available. 
However, like the previous bacteria, it is a distinct species by 16S rRNA evidence. Bacteroides species are often found in the GI tracts of mammals, and include non-motile, Gram-negative, anaerobic, rod-shaped bacteria. Recently, many Bacteroides species have been recognized as actually belonging to other genera, like Porphyromonas and Prevotella. In the case of ASF 519, it belongs to the newly named Parabacteroides genus, along with the bacteria formerly known as [B.] distasonis, [B.] merdae, CDC group DF-3, and [B.] forsythus. The spiral-shaped obligate anaerobe ASF 457 can be found in small amounts in the small intestine, and in high concentration in the large intestine. This bacterium is related to G. ferrireducens, Deferribacter thermophilus, and Flexistipes sinusarabici. ASF 457 was later named Mucispirillum schaedleri. The species is related to the Flexistipes phylum, with iron-reducing environmental isolates. EOS fusiform bacteria make up the great majority of the autochthonous intestinal microbiota, and are mainly found in the large intestine. They vastly outnumber facultative anaerobic and aerobic bacteria. All four fusiform-shaped anaerobes belong to the low G+C content, Gram-positive bacteria group. ASF 356 is of the genus Clostridium, closely related to Clostridium propionicum. ASF 502 is most related to Ruminococcus gnavus. ASF 492 is confirmed by 16S rRNA sequences as Eubacterium plexicaudatum, and is closely related to Roseburia ceciola. ASF 356, ASF 492, and ASF 502 are all part of the low G+C, Gram-positive bacteria of Clostridium cluster XIV. ASF 500 is a deeper branch of the low G+C, Gram-positive bacteria of the Firmicutes, Bacillus-Clostridium group, but not much can be found in the GenBank database on this branch of the Clostridium cluster. Mouse models Only mice have been colonized with ASF in experiments, since the ASF bacteria originate from the mouse intestinal microbiome. Germfree mice are colonized by ASF through one of two methods. 
A pure culture of each living ASF bacterium can be grown under anaerobic conditions in a laboratory setting. Lactobacilli and Bacteroides are given by gavage to germfree mice first to establish a microbial environment in the GI tract, which then supports the colonization of the spiral-shaped and fusiform bacteria that are given later. An alternative way is to inoculate the drinking water of germfree mice with fresh feces from the cecum and colon of gnotobiotic (ASF) mice, over a period of four days. The establishment and concentration of each bacterial species vary slightly depending on the age, gender, and environmental conditions of the mice. Experimental results validate the dominance and persistence of the ASF in the colonized mice even after four generations. The mice can be kept to the same standards as germfree mice, with sterilized water, a germfree environment, and careful handling. Although this ensures definite ASF propagation in the mouse intestine, it is labor-intensive and not a good representation of physiological conditions. ASF mice can also be raised in the same conditions as normal mice, because colonization addresses the immunological, pathological, and physiological weaknesses of germfree mice. ASF mice can maintain the eight bacterial species under normal conditions. However, variations in strains of the bacteria and introduction of minor amounts of other commensal, mutualist or pathogenic microbes could occur over time. Isogenic mice that cohabit showed little variation in ASF profile, while litters split among different cages showed divergence in bacterial strains. Once the ASF community is established, though, it is highly stable over time without environmental or housing perturbation. Uses in research ASF can be used to study a variety of activities involving the intestinal tract. This includes the study of the gut microbiome community, metabolism, immunity, homeostasis, pathogenesis, inflammation, and diseases. 
Experiments comparing germfree, ASF, and pathogen-infected mice can demonstrate the role of commensals in maintaining host health. Intestinal homeostasis is maintained by host-microbe interactions and host immunity. This is critical for digestion of food and protection against pathogens. Bouskra et al. studied the regulation of intestinal flora and the immune system. They found IgA-producing B cells in the Peyer's patches, intestinal lymphoid tissues and follicles, and mesenteric lymph nodes. They used ASF to test the maturation of lymphoid follicles into large B cell clusters by toll-like receptor signaling. In another study, innate detection was shown to shape the adaptive immune system to maintain intestinal homeostasis. Geuking et al. examined the role of regulatory T cells in limiting microbe-triggered intestinal inflammation and the T cell compartment. Using ASF, they found that intestinal colonization resulted in activation and generation of colonic Treg cells. In germfree mice, Th17 and Th1 responses dominate. The bacterial microenvironment is very important in the pathogenesis of clinical and experimental chronic intestinal inflammation. Whary et al. examined Helicobacter rodentium infection and the resulting ulcerative typhlocolitis, sepsis, and morbidity. Using ASF mice, they showed a decrease in disease progression due to colonization resistance in the lower bowel conferred by the normal anaerobic flora. In another summary, Fox examined the relationship between the gut microbiome and the onset of inflammatory bowel disease (IBD) with H. bilis infection. H. bilis is noted to elicit a heterologous immune response to lower gut flora, both activating pro-inflammatory cytokine and dendritic cell activity and promoting probiotic anti-inflammatory activity due to the presentation of mutualist antigens. ASF Lactobacilli and Bacteroides help moderate bowel inflammation in a balanced manner in pathogen infection studies. 
Beyond the study of bacterial pathogens, microflora communities, intestinal immune system interactions and diseases, ASF has been used in experiments examining the transmission of retroviruses. Kane et al. found that the mouse mammary tumor virus is transmitted most efficiently through bacteria-colonized mucosal surfaces. The retrovirus evolved to rely on the interaction with microbiota and toll-like receptors to evade immune pathways. Problems ASF is not a comprehensive representation of the over 400 diverse bacterial species that normally occupy the mouse GI tract. Even in SPF mice, there are many Helicobacter and filamentous species not included in ASF, not to mention the many bacteria that cannot be cultured under laboratory settings due to inadequate environment and symbiosis needs. The gut bacteria make up a complex microbial community that supports its members, the development of the host GI tract, and the immune system. Many bacteria are associated specifically with the production of certain metabolites or signaling pathways that maintain the survival of the microflora. For example, hippurate and chlorogenic acid metabolite levels in mice change due to microflora. The synthesis pathway depends on multiple bacterial species, which are not all present in ASF. This limits the bioavailability of nutrients to both host and microbe. Additional strains of bacteria might need to be added for certain studies of metabolism, pathogenesis, or microbe interactions. It is impossible to study the complete organization of the gut microbiome and all its contributions to the host system, especially in relation to disease development and nutrition, with only eight microbes. Furthermore, there are differences between mouse and human microflora, so there are limitations to studies using ASF mice to depict human inflammatory diseases like IBD, arthritis, and cancer. ASF is only a basis for developing hypotheses for mice with complex microflora. 
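The 16S rRNA comparisons described above come down to measuring how similar two aligned sequences are. A minimal sketch of the idea follows; the sequences and the ~97% species threshold are illustrative assumptions for the demo, not values taken from the ASF studies, and real 16S genes are about 1,500 bp and must be aligned first.

```python
# Toy sketch of 16S rRNA-based identification: species calls are often
# based on percent identity between aligned sequences.  The sequences
# below are invented and far shorter than a real ~1,500 bp 16S gene.

def percent_identity(a, b):
    """Percent of matching positions between two pre-aligned sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / len(a)

reference = "ACGTACGTACGTACGTACGT"   # hypothetical type-strain fragment
isolate = "ACGTACGTACGAACGTACGT"     # same fragment with one substitution

pid = percent_identity(reference, isolate)   # 19 of 20 positions match

# A rough, commonly used convention: >= ~97% identity suggests the same
# species, while lower identity suggests a related but distinct taxon;
# this is the kind of result reported for ASF 360 versus L. acidophilus.
same_species = pid >= 97.0
```

With real data, the same comparison is run against curated reference databases rather than a single hand-picked sequence, which is how strains such as ASF 519 were reassigned from Bacteroides to Parabacteroides.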
See also Microbiome References Bacteria
Altered Schaedler flora
[ "Biology" ]
3,090
[ "Prokaryotes", "Microorganisms", "Bacteria" ]
41,479,303
https://en.wikipedia.org/wiki/Judith%20Pipher
Judith Lynn Pipher (June 18, 1940 – February 21, 2022) was a Canadian-born American astrophysicist and observational astronomer. She was Professor Emerita of Astronomy at the University of Rochester and directed the C. E. K. Mees Observatory from 1979 to 1994. She made important contributions to the development of infrared detector arrays in space telescopes. Early life and education Judith Lynn Bancroft was born on June 18, 1940, in Toronto, Ontario, to Earl Lester Alexander Bancroft and Agnes May Kathleen (née McGowan) Bancroft. She was named Junior Miss Homemaker of Ontario when she was sixteen years old. She graduated from Leaside High School in 1958 and earned a B.A. in astronomy from the University of Toronto in 1962. Following her graduation, she moved to the Finger Lakes region of upstate New York, where she taught science and attended Cornell University. In the late 1960s, she worked as a graduate student of Martin Harwit on a cryogenic rocket telescope experiment. She received her Ph.D. from Cornell in 1971. Her dissertation, Rocket Submillimeter Observations of the Galaxy and Background, led her into research in the nascent fields of submillimeter and infrared astronomy. Career and research Pipher joined the faculty of the University of Rochester's Physics and Astronomy Department in 1971 as an instructor. From 1979 to 1994, Pipher was director of the University of Rochester's C. E. K. Mees Observatory. In the 1970s and 1980s, she made observations from the Kuiper Airborne Observatory. Pipher and William J. Forrest achieved promising results with a 32×32-pixel array of indium antimonide (InSb) detectors at a NASA Ames workshop, reporting their results in 1983. That year Pipher and her colleagues were among the first to use an infrared array camera to capture starburst galaxies. For the next two decades, Pipher continued to develop ultra-sensitive infrared InSb arrays with Forrest. 
The Infrared Array Camera (IRAC) for the Spitzer Space Telescope was launched in August 2003. She also worked with Dan Watson on the development of mercury cadmium telluride (HgCdTe) arrays. Pipher's observational research concentrated on star formation studies, and the arrays she designed have been used to observe astronomical phenomena such as planetary nebulae, brown dwarfs, and the Galactic Center. She authored over 200 papers and scientific articles. Pipher was a member of a team at the University of Rochester that developed the NEOCam sensor, a HgCdTe infrared-light sensor intended for the proposed Near-Earth Object Camera. The sensor improves the ability to detect potentially hazardous objects such as asteroids. Honors and awards Pipher received the Susan B. Anthony Lifetime Achievement Award from the University of Rochester in 2002. She was inducted into the National Women's Hall of Fame in 2007 and became involved with its administration. A 2009 article in Discover magazine indicated that Pipher was "considered by many to be the mother of infrared astronomy." Asteroid 306128 Pipher was named in her honor; the official naming citation was published by the Minor Planet Center on January 31, 2018. She was elected a Legacy Fellow of the American Astronomical Society in 2020. Personal life and death While at Cornell, Judith met Robert E. Pipher (1934–2007), whose four children became her stepchildren when the couple married in 1965. The Piphers lived at Cayuga Lake in Seneca Falls, New York, where she was vice president of the Seneca Museum board of directors. On the occasion of her 80th birthday, June 18, 2020, was proclaimed to be "Dr. Judy Pipher Day" in the Town of Seneca Falls. She died on February 21, 2022, at the age of 81. 
References Further reading 1940 births 2022 deaths Scientists from Toronto American astrophysicists Cornell University alumni University of Rochester faculty University of Toronto alumni Women astronomers People from Seneca Falls, New York Scientists from New York (state) Fellows of the American Astronomical Society 20th-century American women scientists 21st-century American women scientists
Judith Pipher
[ "Astronomy" ]
840
[ "Women astronomers", "Astronomers" ]
41,479,526
https://en.wikipedia.org/wiki/BSAFE
Dell BSAFE, formerly known as RSA BSAFE, is a FIPS 140-2 validated cryptography library, available in both C and Java. BSAFE was initially created by RSA Security, which was purchased by EMC and then, in turn, by Dell. When Dell sold the RSA business to Symphony Technology Group in 2020, Dell elected to retain the BSAFE product line. BSAFE was one of the most common encryption toolkits before the RSA patent expired in September 2000. It also contained implementations of the RCx ciphers, with the most common one being RC4. From 2004 to 2013 the default random number generator in the library was a NIST-approved RNG standard, widely known to be insecure from at least 2006, containing a kleptographic backdoor from the American National Security Agency (NSA), as part of its secret Bullrun program. In 2013 Reuters revealed that RSA had received a payment of $10 million to set the compromised algorithm as the default option. The RNG standard was subsequently withdrawn in 2014, and the RNG removed from BSAFE beginning in 2015. Cryptography backdoors Dual_EC_DRBG random number generator From 2004 to 2013, the default cryptographically secure pseudorandom number generator (CSPRNG) in BSAFE was Dual_EC_DRBG, which contained an alleged backdoor from NSA, in addition to being a biased and slow CSPRNG. The cryptographic community had been aware that Dual_EC_DRBG was a very poor CSPRNG since shortly after the specification was posted in 2005, and by 2007 it had become apparent that the CSPRNG seemed to be designed to contain a hidden backdoor for NSA, usable only by NSA via a secret key. In 2007, Bruce Schneier described the backdoor as "too obvious to trick anyone to use it." The backdoor was confirmed in the Snowden leaks in 2013, and it was insinuated that NSA had paid RSA Security US$10 million to use Dual_EC_DRBG by default in 2004, though RSA Security denied that they knew about the backdoor in 2004. 
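The structure of the trapdoor can be sketched with a toy analogue. The code below is a hedged illustration, not the real Dual_EC_DRBG: modular exponentiation stands in for elliptic-curve point multiplication, and every constant is invented for the demo. The point is that whoever knows the hidden relation between the two public constants can recover the generator's internal state from a single output and then predict everything that follows.

```python
# Toy analogue of the Dual_EC_DRBG backdoor (NOT the real algorithm):
# the elliptic-curve group is replaced by exponentiation modulo a small
# prime so the trapdoor relation is easy to see, and all constants
# below are invented for the demonstration.

p = 2**31 - 1             # small prime standing in for the curve's field
P = 5                     # public constant "P"
e = 1234577               # designer's secret: Q is derived from P ...
Q = pow(P, e, p)          # ... as Q = P^e (analogue of Q = e*P on a curve)
d = pow(e, -1, p - 1)     # backdoor key: Q^d = P, since e*d = 1 (mod p-1)

def drbg_step(state):
    """One generator step: the next state comes from P, the output from Q."""
    next_state = pow(P, state, p)    # s' = P^s  (internal state update)
    output = pow(Q, state, p)        # r  = Q^s  (value handed to the user)
    return next_state, output

s0 = 987654321                       # initial secret seed
s1, r1 = drbg_step(s0)
s2, r2 = drbg_step(s1)

# An attacker who knows d recovers the internal state from ONE output:
# r1^d = Q^(s0*d) = P^s0 = s1.
recovered = pow(r1, d, p)
assert recovered == s1

# With the state in hand, all future "random" output is predictable.
_, predicted_r2 = drbg_step(recovered)
assert predicted_r2 == r2
```

In the real generator the outputs are truncated x-coordinates of curve points, so an attacker must additionally brute-force the truncated bits, but the secret relation between the standardized points P and Q plays exactly the role of e above.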
The Reuters article which revealed the secret $10 million contract to use Dual_EC_DRBG described the deal as "handled by business leaders rather than pure technologists". RSA Security has largely declined to explain their choice to continue using Dual_EC_DRBG even after the defects and potential backdoor were discovered in 2006 and 2007, and has denied knowingly inserting the backdoor. As a cryptographically secure random number generator is often the basis of cryptography, much data encrypted with BSAFE was not secure against NSA. Specifically it has been shown that the backdoor makes SSL/TLS completely breakable by the party having the private key to the backdoor (i.e. NSA). Since the US government and US companies have also used the vulnerable BSAFE, NSA can potentially have made US data less safe, if NSA's secret key to the backdoor had been stolen. It is also possible to derive the secret key by solving a single instance of the algorithm's elliptic curve problem (breaking an instance of elliptic curve cryptography is considered unlikely with current computers and algorithms, but a breakthrough may occur). In June 2013, Edward Snowden began leaking NSA documents. In November 2013, RSA switched the default to HMAC DRBG with SHA-256 as the default option. The following month, Reuters published the report based on the Snowden leaks stating that RSA had received a payment of $10 million to set Dual_EC_DRBG as the default. With subsequent releases of Crypto-C Micro Edition 4.1.2 (April 2016), Micro Edition Suite 4.1.5 (April 2016) and Crypto-J 6.2 (March 2015), Dual_EC_DRBG was removed entirely. Extended Random TLS extension "Extended Random" was a proposed extension for the Transport Layer Security (TLS) protocol, submitted for standardization to IETF by an NSA employee, although it never became a standard. The extension would otherwise be harmless, but together with the Dual_EC_DRBG, it would make it easier to take advantage of the backdoor. 
The extension was previously not known to be enabled in any implementations, but in December 2017, it was found enabled on some Canon printer models, which use the RSA BSAFE library, because the extension number conflicted with a part of TLS version 1.3. Varieties Crypto-J is a Java encryption library. In 1997, RSA Data Security licensed Baltimore Technologies' J/CRYPTO library, with plans to integrate it as part of its new JSAFE encryption toolkit, and released the first version of JSAFE the same year. JSAFE 1.0 was featured in the January 1998 edition of Byte magazine. Cert-J is a Public Key Infrastructure API software library, written in Java. It contains the cryptographic support necessary to generate certificate requests, create and sign digital certificates, and create and distribute certificate revocation lists. As of Cert-J 6.2.4, the entire API has been deprecated in favor of similar functionality provided by the BSAFE Crypto-J JCE API. BSAFE Crypto-C Micro Edition (Crypto-C ME) was initially released in June 2001 under the name "RSA BSAFE Wireless Core 1.0". The initial release targeted Microsoft Windows, EPOC, Linux, Solaris and Palm OS. BSAFE Micro Edition Suite is a cryptography SDK in C. BSAFE Micro Edition Suite was initially announced in February 2002 as a combined offering of BSAFE SSL-C Micro Edition, BSAFE Cert-C Micro Edition and BSAFE Crypto-C Micro Edition. Both SSL-C Micro Edition and Cert-C Micro Edition reached EOL in September 2014, while Micro Edition Suite remains supported, with Crypto-C Micro Edition as its FIPS-validated cryptographic provider. SSL-C is an SSL toolkit in the BSAFE suite. It was originally written by Eric A. Young and Tim J. Hudson as a fork of the open library SSLeay, which they developed prior to joining RSA. SSL-C reached End Of Life in December 2016. SSL-J is a Java toolkit that implements TLS. SSL-J was released as part of RSA's initial JSAFE product offering in 1997. Crypto-J is the default cryptographic provider of SSL-J. 
Product suite support status On November 25, 2015, RSA announced End of Life (EOL) dates for BSAFE. The End of Primary Support (EOPS) was to be reached on January 31, 2017, and the End of Extended Support (EOXS) was originally set to be January 31, 2019. That date was later extended by RSA for some versions until January 31, 2022. During Extended Support, even though the support policy stated that only the most severe problems would be patched, new versions were released containing bugfixes, security fixes and new algorithms. On December 12, 2020, Dell announced the reversal of RSA's past decision, allowing BSAFE product support beyond January 2022 as well as the possibility of soon acquiring new licenses. Dell also announced it was rebranding the toolkits to Dell BSAFE. References External links BSAFE Cert-J Support Page BSAFE Crypto-J Support Page BSAFE SSL-J Support Page BSAFE Crypto-C Micro Edition Support Page BSAFE Micro Edition Suite Support Page C (programming language) libraries Cryptographic software Transport Layer Security implementation 1996 software
BSAFE
[ "Mathematics" ]
1,630
[ "Cryptographic software", "Mathematical software" ]
41,479,528
https://en.wikipedia.org/wiki/Haselgebirge
Haselgebirge is an evaporite sedimentary rock type composed of 10–70 wt% halite, with clasts of anhydrite, mudrock, and polyhalite in a halite matrix. Bodies of pure rock salt within the haselgebirge display red to dark color bands of foliation. Tectonic deformation occurred between the Late Jurassic and the Neogene, resulting in a two-component tectonite, haselgebirge and kerngebirge (more than 70 wt% halite). Within the fault zones the haselgebirge forms protocataclasites, while the kerngebirge, and pure rock salt, form mylonites and ultramylonites. The Haselgebirge Formation is mined in Altaussee, Berchtesgaden, and Dürrnberg. References Sedimentary rocks Salt production
Haselgebirge
[ "Chemistry" ]
186
[ "Salt production", "Salts" ]
41,479,531
https://en.wikipedia.org/wiki/Sink%20works
Sink works or sinkworks (from German Sinkwerke) is a method of salt mining from salt deposits in mountainous areas. It is similar to brine wells in that salt is extracted by dissolving it in water. Both approaches simulate natural brine springs. It is one of the earliest methods of salt extraction from salt domes. A sinkwerk is a chamber in a salt mine filled with water to dissolve salt. The resulting brine is then pumped via brine pipelines to saltworks. This approach is commonly used when salt deposits are heavily contaminated (or, alternatively, when the salt content of the deposit is low), so that mining of rock salt is not feasible. This method is common in most salt mines in the Alps, where the saltrock-mudrock-tectonite known as Haselgebirge is widespread, with an average halite content of 30–65%. In industrial cases, a complex structure of underground chambers interconnected by tunnels is created. References Mining techniques Salt production
Sink works
[ "Chemistry" ]
203
[ "Salt production", "Salts" ]
41,479,533
https://en.wikipedia.org/wiki/Brine%20pipeline
A brine pipeline is a pipeline to transport brine. It is a common way to transport salt from salt mines, salt wells and sink works to the places of salt evaporation (salterns, salt pans). Brine pipelines are also used in the oil and gas industries, and to remove salts and contaminants from water supplies. Salt mining Brine pipelines were originally made of hollowed wood. One of the earliest known wooden pipelines ran from Bad Reichenhall to Traunstein to Rosenheim, Germany, in 1619. An ancient brine pipeline may be traced along the Sentier du Sel, a 12.5 km trail in Chablais vaudois, Switzerland. References Salt production
Brine pipeline
[ "Chemistry" ]
153
[ "Salt production", "Salts" ]
41,479,658
https://en.wikipedia.org/wiki/Brine%20spring
A brine spring or salt spring is a saltwater spring. Brine springs are not necessarily associated with halite deposits in the immediate vicinity. They may occur at valley bottoms made of clay and gravel that became soggy with brine seeping downslope from the valley sides. Historically, brine springs were early sources of U.S. salt production, as in the case of the salterns in Syracuse, New York and at the Illinois Salines. See also Saline seep Salt lick Mineral spring References Salts Springs (hydrology)
Brine spring
[ "Chemistry", "Environmental_science" ]
113
[ "Hydrology", "Hydrology stubs", "Springs (hydrology)", "Salts" ]
61,964,293
https://en.wikipedia.org/wiki/Mozilla%20Corp.%20v.%20FCC
Mozilla Corp. v. FCC, 940 F. 3d 1 (D.C. Cir., 2019) was a 2019 ruling by the United States Court of Appeals for the District of Columbia Circuit related to net neutrality in the United States. The case centered on the Federal Communications Commission (FCC)'s decision in 2017 to roll back its prior 2015 Open Internet Order, reclassifying Internet services as an information service rather than as a common carrier and deregulating the principles of net neutrality that had been put in place with the 2015 order. The proposed rollback had been publicly criticized during the open period of discussion, and following the FCC's issuing of the rollback, several states and Internet companies sued the FCC. These cases were consolidated into the one led by the Mozilla Corporation. The Appeals Court ruled in favor of the FCC in October 2019, relying on the Supreme Court decision in National Cable & Telecommunications Ass'n v. Brand X Internet Services (2005) holding that the FCC has the authority to reclassify Internet services, and thus allowed the rollback. However, the Court ruled against the FCC's attempt to block state- and local-level laws regulating the Internet, enabling states to pass net neutrality legislation. Background Net neutrality in the United States has been of concern since the Internet became open to public use through Internet service providers (ISPs). Net neutrality broadly encompasses the idea that all data traffic on the Internet should be treated equally, counter to past and planned actions of ISPs to offer tiered service plans that block or throttle access to selected sites at lower payment tiers, among other provisions. Within the United States, proponents of net neutrality, who typically include Democrats and large technology companies, believe it is necessary to prevent ISPs from blocking access to Internet services or throttling the connection to these services, so as to maintain an open flow of information across the Internet without discrimination. 
Opponents of net neutrality, principally Republicans and ISPs, argue that the ability to offer tiered rates of service would help promote competition in providing Internet service across America, and that enforced net neutrality regulations would stifle growth. A central facet of the debate around net neutrality is how ISPs are classified under the Communications Act of 1934. Under this act, they may be treated either as an "information service" under Title I of the Act, or as a "common carrier service" under Title II. Common carrier classification under Title II would mean that the FCC, which is granted authority to oversee communication services in the United States, could apply regulations to ISPs, including enforcing the principles of net neutrality. But under Title I, the FCC would not have significant authority to regulate ISPs. The FCC's authority to classify ISPs in this manner was upheld in the Supreme Court case National Cable & Telecommunications Ass'n v. Brand X Internet Services in 2005, based principally on the Chevron deference, under which the judiciary defers to federal agencies' interpretations of ambiguous Congressional language. The Brand X ruling left in place the FCC's decision to classify cable-based ISPs as information services under Title I. Case background The FCC in the early 2010s, under the Barack Obama administration, was favorable to implementing net neutrality positions. The first FCC chairman of Obama's term, Julius Genachowski, was a strong proponent of net neutrality. Tom Wheeler, appointed as the new FCC chairman in 2013, had stated he was for an open Internet but favored the notion of "fast lanes" for some traffic; he eventually agreed to full net neutrality goals so as to maintain an open Internet. The FCC introduced the FCC Open Internet Order 2010, which enshrined principles of net neutrality. The order was challenged by ISPs, and in 2014, the DC Appeals Court ruled in Verizon Communications Inc. v. 
FCC that the FCC did not have the authority to set net neutrality requirements on ISPs unless they were classified as a common carrier. This led the FCC to issue a new Open Internet Order in 2015 that not only continued to enshrine net neutrality principles but also classified ISPs as Title II common carriers. Again, ISPs sued the FCC over this. In 2016, the DC Appeals Court ruled in United States Telecom Ass'n v. FCC in favor of the FCC, upholding the reclassification of ISPs and the neutrality rules. Following Donald Trump's election as president in 2016, Ajit Pai, a sitting member of the FCC, was elevated to FCC chairman. Pai had been a strong critic of the FCC having oversight of net neutrality prior to this position, believing that the FCC should allow ISPs to self-regulate, and that if there was to be regulation of net neutrality, it needed to come from Congress. With Pai as chairman, and with Republican-appointed members outnumbering the Democrat-appointed ones 3–2, Pai led the FCC to propose a rollback of the 2015 Open Internet Order. By May 2017, the FCC voted to proceed with the rollback of the 2015 Order by issuing a public notice of the FCC's intended rule change, titled "Restoring Internet Freedom". In addition to the rollback, the proposed rule would prevent states and local governments from passing legislation on net neutrality. The proposed change drew heavy criticism from proponents of net neutrality, with several organized activism events created to draw the public's attention to it. During the public commenting period, running from May to August 2017, the FCC received more than 21 million comments related to the change. Analysis of the comments following their release after the public period found a near majority of these opposed to the rule change, even after accounting for potential fraud that had been discovered. Despite the public stance on the rule change, the FCC voted on December 14, 2017, to enforce the rollback. 
Appeals Court Immediately after the FCC vote in December, several states issued statements of intent to sue the FCC over the change. Once the rule change was published in the Federal Register in February 2018, twenty-two states and several other state and local agencies joined in the complaint lodged by Mozilla Corporation and Vimeo against the FCC, to be heard in the DC Appeals Court. The suit asserted that the FCC had been "arbitrary and capricious" in handling net neutrality, and had disregarded "critical record evidence on industry practices and harm to consumers and businesses". The case was heard before the DC Appeals Court on February 1, 2019, before Senior Circuit Judge Stephen F. Williams and Circuit Judges Patricia Millett and Robert L. Wilkins. Mozilla and the states argued that the FCC, in making the rollback rule, had failed to analyze current market conditions correctly, failed to account for public safety, and erred in how it classified ISPs as an information service, and asserted that the FCC did not have authority to block states from passing net neutrality laws. The FCC argued that the former net neutrality rules had harmed broadband investment, and that consumers would be protected by required transparency reports from ISPs. Several ISPs and trade groups representing them also joined the FCC to argue that there was no evidence to suggest that net neutrality rules were required to prevent consumers from being harmed by aggressive throttling and blocking. A major point brought up by the judges related to the 2018 California wildfires, in which it had been discovered that first responders' communications, operating through Verizon, were being throttled after exceeding a certain bandwidth capacity, which significantly reduced the responders' ability to react to the changing fire conditions. 
Concurrent actions While the case was pending in the Appeals Court, California passed the California Internet Consumer Protection and Net Neutrality Act of 2018 in September 2018. The law was written to enforce net neutrality principles for ISPs operating in California. The law was passed despite the fact that the earlier FCC order asserted that states could not set net neutrality regulations themselves. Shortly after passage, the FCC sued California over this law on the basis of the 2017 rule; the FCC was joined by several ISPs and agencies representing them in suing the state. Both sides agreed to hold off any further litigation or enforcement of the law pending the decision of the Mozilla case. Decision The DC Appeals Court issued its decision on October 1, 2019. In the per curiam decision, the Court voted to uphold the FCC's rollback of the 2015 Open Internet Order, ruling that the agency's authority to reclassify had been affirmed through the prior Brand X case and that the Court was not at liberty to challenge the Supreme Court's ruling there. The decision affirmed several of the points made by the FCC in relation to the potential harm to ISPs of imposing net neutrality, agreeing that ISPs are more akin to information services than to common carriers. However, on the matter of limiting state- and local-level enforcement of net neutrality, the Court ruled that the FCC had overstepped its bounds, and reversed that part of the rule. The opinion denied the claim that the FCC had implied authority on this matter, stating "If Congress wanted Title I to vest the Commission with some form of Dormant-Commerce-Clause-like power to negate States' statutory (and sovereign) authority just by washing its hands of its own regulatory authority, Congress could have said so." In addition, the Court ordered the FCC to review its rule change regarding the effect on public safety, as related to the California wildfire incident, and the impact on low-income telecommunications services. 
Each of the three judges wrote an additional opinion. Both Circuit Judges Millett and Wilkins wrote in concurrence with the per curiam decision, while Senior Circuit Judge Williams wrote in part concurrence and in part dissent. All three reflected on the change in the Internet's structure since Brand X was decided, pointing out that today, ISPs are better classified as information services than common carriers. At the time of Brand X, the Supreme Court had relied on the existence of core technologies like caching at the ISP level and the Domain Name Service to justify treating ISPs as information services. In this ruling, the judges noted that now, Internet services are defined by the information that consumers can access, and as such, ISPs should be able to offer those consumers who want faster access to that information tiered service without regulation. Subsequent actions Because the Appeals Court decision only partially upheld the FCC's ruling, either side may seek appeals with the Supreme Court. Parties that supported the plaintiffs, including the Computer & Communications Industry Association, other trade groups, and some of the states that joined with them, sought an en banc hearing from the full Appeals Court, but this request was denied in February 2020. While the decision was seen as a short-term defeat for net neutrality proponents, the part of the decision clearing the way for state and local regulation was seen as a net positive. While California's net neutrality law is still barred from enforcement until the FCC's case against the state is complete, state lawmakers saw the decision as a victory, since they expect the courts to clear the state of any wrongdoing in passing the legislation. Four other states have passed or have pending legislation to enforce net neutrality at the state level, while an additional 33 states along with the District of Columbia have proposed bills favoring net neutrality. 
However, it is expected that there will be ongoing legal challenges from ISPs to these state bills that may limit their enforcement. Another potential route would be for Congress to pass legislation overriding the FCC's decision, classifying ISPs as common carriers or otherwise enforcing net neutrality. Some lawmakers have attempted to pass such laws, but none has succeeded. There are also concerns that in its haste to pass legislation, Congress may support poorly thought-out legislation, or legislation that contains "trojan horse" clauses favorable to special interests. Mark Stanley of Demand Progress suggested that ISPs and telecom groups may seek to lobby Congress to pass a weaker form of net neutrality that would minimally satisfy its proponents while still benefiting ISPs. The FCC issued another public commenting period from February to March 2020 to gain input on the factors of public safety, reduced infrastructure spending, and impact on the Lifeline program identified in the Appeals Court ruling. Following the public comment period, the FCC voted again in October 2020 along the same 3–2 lines to uphold its prior decision, with Pai, in a separate post, stating that based on the public comments he was "confident that the regulatory framework we set forth in the Restoring Internet Freedom Order appropriately and adequately addresses each issue." References Further reading External links - "Restoring Internet Freedom" FCC Order, published February 22, 2018 DC Circuit decision Federal Communications Commission litigation History of the Internet 2019 in United States case law Net neutrality Computer case law United States Court of Appeals for the District of Columbia Circuit cases Mozilla
Mozilla Corp. v. FCC
[ "Engineering" ]
2,585
[ "Net neutrality", "Computer networks engineering" ]
61,964,928
https://en.wikipedia.org/wiki/Cockade%20of%20France
The cockade of France is the national ornament of France, obtained by circularly pleating a blue, white and red ribbon. It is composed of the three colors of the French flag, with blue in the center, white immediately outside and red on the edge. History The French tricolor cockade was devised at the beginning of the French Revolution. On 12 July 1789 – two days before the storming of the Bastille – the revolutionary journalist Camille Desmoulins, calling on the Parisian crowd to revolt, asked the protesters what color to adopt as a symbol of the revolution, proposing either green (representing hope) or the blue of the American revolution, symbol of freedom and democracy. The protesters responded "The green! The green! We want green cockades!" Desmoulins then took a green leaf from the ground and pinned it to his hat. However, the green was abandoned after just one day because it was also the color of the king's brother, the reactionary Count of Artois, later King Charles X. The following day, 13 July, an opportunity arose to create a cockade of different colors when those bourgeois who hoped to limit revolutionary excesses established a citizen militia. It was decided that the militia should be given a distinctive badge in the form of a two-colored cockade in the ancient colors of Paris, blue and red. On 17 July, King Louis XVI went to Paris to meet the new French National Guard: its members wore the blue and red cockade of the militia, to which it would appear that the Marquis of Lafayette, commander of the Guard, had added a white band representing loyalty to the Sovereign. Louis XVI put it on his hat and – with some reluctance – approved the appointment of the revolutionary Jean Sylvain Bailly as mayor of Paris, and the formation of the National Guard led by Lafayette. Thus was born the French tricolor cockade. On the same day, the Count of Artois left France, along with members of the nobility supportive of absolute monarchy.
The tricolor cockade became the official symbol of the revolution in 1792, with the three colors now said to represent the three estates of French society: the clergy (blue), the nobility (white) and the third estate (red). The use of the three colors spread, and a law of 15 February 1794 made them the colors of the French national flag. From August 1789, Italian demonstrators in sympathy with the French revolution began to use simple cockades of green leaves inspired by the primitive French cockade. From these evolved the red, white and green Italian tricolor cockade. Use Use on institutional vehicles Decree no. 89-655 of 13 September 1989 forbids the use of the tricolor cockade on all land, sea and air vehicles, with the following exceptions: by the president of the French Republic; by members of the government of France; by members of French Parliament; by the president of the Constitutional Council; by the vice president of the Council of State; by the President of the Economic, Social and Environmental Council; by prefects in their own departments, and by sub-prefects on official duties in their arrondissements. The use of the tricolor cockade is not permitted for mayors' vehicles, and offenders risk up to one year's imprisonment and a fine of €15,000. Use on state aircraft The use of the cockade on French military aircraft was first mandated by the Aéronautique Militaire in 1912, and subsequently became widespread during World War I. The French practice inspired the adoption of a similar roundel (with colours reversed) by the British Royal Flying Corps, and of comparable insignia by other nations. Cockades were, and still are, painted on the aircraft fuselages as the primary military aircraft insignia of the French Air Force; modified designs are used for other French government aircraft. Cockades continue to be used on French state aircraft. After World War II a yellow border was added to the cockade, which was removed in 1984. 
Other uses The tricolor cockade is also used on certain elite uniforms, both military and civilian, which include headwear decorated with it. It is likewise an attribute of Marianne, the national allegorical representation of France, who is conventionally depicted wearing a Phrygian cap, sometimes decorated with a tricolor cockade. The cockade appears on mayors' badges; and on the sash worn by Miss France, as well as French-made "méduses" (jellyfish in English) plastic beach sandals. See also Flag of France Democratic-Republican Party, whose logo was nearly identical to the Cockade Citations National symbols of France France
Cockade of France
[ "Mathematics" ]
946
[ "Cockades", "Symbols" ]
61,969,031
https://en.wikipedia.org/wiki/Odile%20Favaron
Odile Zink-Favaron (born May 3, 1938) is a French mathematician known for her research in graph theory, including work on well-covered graphs, factor-critical graphs, spectral graph theory, Hamiltonian decomposition, and dominating sets. She is retired from the Laboratory for Computer Science (LRI) at the University of Paris-Sud. Favaron earned a doctorate at Paris-Sud University in 1986. Her dissertation, Stabilité, domination, irrédondance et autres paramètres de graphes [Independence, domination, irredundance, and other parameters of graphs], was supervised by Jean-Claude Bermond. Personal life Her father was a poet and professor. Michel Zink and Anne Zink are her siblings. References French mathematicians French women mathematicians Graph theorists Paris-Sud University alumni 1938 births Living people
Odile Favaron
[ "Mathematics" ]
175
[ "Mathematical relations", "Graph theory", "Graph theorists" ]
61,969,687
https://en.wikipedia.org/wiki/Friedberg%E2%80%93Muchnik%20theorem
In mathematical logic, the Friedberg–Muchnik theorem is a theorem about Turing reductions that was proven independently by Albert Muchnik and Richard Friedberg in the middle of the 1950s. It is a more general view of the Kleene–Post theorem, which states that there exist incomparable languages A and B below K. The Friedberg–Muchnik theorem states that there exist incomparable, computably enumerable languages A and B; incomparable here means that there is no Turing reduction from A to B and no Turing reduction from B to A. It is notable for its use of the finite-injury priority method. See also Post's problem References Notes Mathematical logic
Friedberg–Muchnik theorem
[ "Mathematics" ]
148
[ "Foundations of mathematics", "Mathematical logic", "Mathematical problems", "Mathematical theorems", "Theorems in the foundations of mathematics" ]
61,970,353
https://en.wikipedia.org/wiki/Mac%20Lane%20coherence%20theorem
In category theory, a branch of mathematics, Mac Lane's coherence theorem states, in the words of Saunders Mac Lane, "every diagram commutes". But, as Kelly notes regarding such a result about certain commutative diagrams, it should "no longer be seen as constituting the essence of a coherence theorem". More precisely (see the counter-example below), it states that every formal diagram commutes, where a "formal diagram" is an analog of well-formed formulae and terms in proof theory. The theorem can be stated as a strictification result; namely, every monoidal category is monoidally equivalent to a strict monoidal category. Counter-example It is not reasonable to expect that we can show literally every diagram commutes, due to the following example of Isbell. Let $\mathsf{Set}_0 \subset \mathsf{Set}$ be a skeleton of the category of sets and $D$ a unique countable set in it; note $D \times D = D$ by uniqueness. Let $\pi : D \times D = D \to D$ be the projection onto the first factor. For any functions $f, g : D \to D$, we have $f \circ \pi = \pi \circ (f \times g)$. Now, suppose the natural isomorphisms $\alpha : X \times (Y \times Z) \simeq (X \times Y) \times Z$ are the identity; in particular, that is the case for $X = Y = Z = D$. Then for any $f, g, h : D \to D$, since $\alpha$ is the identity and $\pi$ is natural, $f \circ \pi = \pi \circ (f \times (g \times h)) = \pi \circ ((f \times g) \times h) = (f \times g) \circ \pi$. Since $\pi$ is an epimorphism, this implies $f = f \times g$. Similarly, using the projection onto the second factor, we get $g = f \times g$, and so $f = g$, which is absurd. Proof Coherence conditions (monoidal category) In a monoidal category, the following are called the coherence conditions: Let $\otimes : C \times C \to C$ be a bifunctor called the tensor product, and $\alpha$ a natural isomorphism called the associator, with components $\alpha_{A,B,C} : (A \otimes B) \otimes C \simeq A \otimes (B \otimes C)$. Also, let $I$ be an identity object with a left identity, a natural isomorphism called the left unitor, $\lambda_A : I \otimes A \simeq A$, as well as a right identity, a natural isomorphism called the right unitor, $\rho_A : A \otimes I \simeq A$. Pentagon and triangle identity To satisfy the coherence conditions, it is enough to prove just the pentagon and triangle identities, which is essentially the same as what is stated in Kelly's (1964) paper.
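For reference, with the associator and unitors named above, the pentagon and triangle identities take the following standard textbook form, writing $\alpha_{A,B,C} : (A \otimes B) \otimes C \to A \otimes (B \otimes C)$:

```latex
% Pentagon identity: the two paths ((A⊗B)⊗C)⊗D → A⊗(B⊗(C⊗D)) agree
(1_A \otimes \alpha_{B,C,D}) \circ \alpha_{A,\,B \otimes C,\,D} \circ (\alpha_{A,B,C} \otimes 1_D)
    = \alpha_{A,\,B,\,C \otimes D} \circ \alpha_{A \otimes B,\,C,\,D}

% Triangle identity: the unitors are compatible with the associator
(1_A \otimes \lambda_B) \circ \alpha_{A,I,B} = \rho_A \otimes 1_B
```

The content of the theorem is that these two identities suffice: every other formal diagram built from $\alpha$, $\lambda$ and $\rho$ then commutes automatically.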
See also Coherency (homotopy theory) Monoidal category Symmetric monoidal category Coherence condition Notes References Section 5 of Saunders Mac Lane, Further reading External links Category theory
Mac Lane coherence theorem
[ "Mathematics" ]
444
[ "Functions and mappings", "Mathematical structures", "Category theory stubs", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Category theory" ]
61,972,259
https://en.wikipedia.org/wiki/Photonic%20topological%20insulator
Photonic topological insulators are artificial electromagnetic materials that support topologically non-trivial, unidirectional states of light. Photonic topological phases are classical electromagnetic wave analogues of electronic topological phases studied in condensed matter physics. Similar to their electronic counterparts, they can provide robust unidirectional channels for light propagation. The field that studies these phases of light is referred to as topological photonics. History Topological order in solid state systems has been studied in condensed matter physics since the discovery of the integer quantum Hall effect. But topological matter attracted considerable interest from the physics community after the proposals for possible observation of symmetry-protected topological phases (or so-called topological insulators) in graphene, and the experimental observation of a 2D topological insulator in CdTe/HgTe/CdTe quantum wells in 2007. In 2008, Haldane and Raghu proposed that unidirectional electromagnetic states analogous to (integer) quantum Hall states can be realized in nonreciprocal magnetic photonic crystals. This prediction was first realized in 2009 in the microwave frequency regime. This was followed by proposals for analogous quantum spin Hall states of electromagnetic waves that are now known as photonic topological insulators. It was later found that topological electromagnetic states can exist in continuous media as well; theoretical and numerical studies have confirmed the existence of topological Langmuir-cyclotron waves in continuous magnetized plasmas. Platforms Photonic topological insulators are designed using various photonic platforms including optical waveguide arrays, coupled ring resonators, bi-anisotropic meta-materials, and photonic crystals. More recently, they have been realized in 2D dielectric and plasmonic meta-surfaces.
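The Chern number discussed below as the key invariant in photonic-topological-insulator design is typically computed numerically on a discretized Brillouin zone. As an illustrative sketch (not code from the article's FDFD program), the standard Fukui–Hatsugai–Suzuki lattice method is shown here for the Qi–Wu–Zhang two-band toy model, an electronic model used purely for demonstration:

```python
import numpy as np

# Pauli matrices
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def qwz_hamiltonian(kx, ky, m):
    """Qi-Wu-Zhang two-band Bloch Hamiltonian (illustrative toy model)."""
    return np.sin(kx) * SX + np.sin(ky) * SY + (m + np.cos(kx) + np.cos(ky)) * SZ

def chern_number(m, n_k=40):
    """Chern number of the lower band via the Fukui-Hatsugai-Suzuki method."""
    ks = np.linspace(0.0, 2.0 * np.pi, n_k, endpoint=False)
    # Lower-band eigenvector at every point of the discretized Brillouin zone
    u = np.empty((n_k, n_k, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(qwz_hamiltonian(kx, ky, m))
            u[i, j] = vecs[:, 0]  # eigh sorts eigenvalues ascending

    def link(a, b):
        # U(1) link variable between neighbouring k-points; the arbitrary
        # phases of the eigenvectors cancel around each closed plaquette
        z = np.vdot(a, b)
        return z / abs(z)

    total_flux = 0.0
    for i in range(n_k):
        for j in range(n_k):
            ip, jp = (i + 1) % n_k, (j + 1) % n_k
            # Berry flux through one plaquette: phase of the small Wilson loop
            loop = (link(u[i, j], u[ip, j]) * link(u[ip, j], u[ip, jp])
                    * link(u[ip, jp], u[i, jp]) * link(u[i, jp], u[i, j]))
            total_flux += np.angle(loop)
    return round(total_flux / (2.0 * np.pi))
```

For |m| > 2 the model is trivial (Chern number 0), while for 0 < |m| < 2 the lower band carries a Chern number of magnitude 1 whose sign flips with the sign of m; the lattice method returns this integer exactly even on a modest grid.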
Despite the theoretical prediction, no experimental demonstration of a photonic topological insulator in continuous media has been reported. Chern number As an important figure of merit for characterizing the quantized collective behavior of the wavefunction, the Chern number is the topological invariant of quantum Hall insulators. The Chern number also identifies the topological properties of photonic topological insulators (PTIs), and is thus of crucial importance in PTI design. A MATLAB program based on the full-wave finite-difference frequency-domain (FDFD) method has been written for computing the Chern number. Recently, the finite-difference method has been extended to analyze the topological invariants of non-Hermitian topological dielectric photonic crystals by first-principles Wilson-loop calculation. The MATLAB code can be found on GitHub. See also Symmetry-protected topological order Metamaterial References Photonics Electromagnetism
Photonic topological insulator
[ "Physics" ]
540
[ "Electromagnetism", "Physical phenomena", "Fundamental interactions" ]
61,972,973
https://en.wikipedia.org/wiki/Age%20of%20consent%20by%20country
The age of consent is the age at which a person is considered to be legally competent to consent to sexual acts and is thus the minimum age of a person with whom another person is legally permitted to engage in sexual activity. The distinguishing aspect of age of consent laws is that the person below the minimum age is regarded as the victim, and their sex partner is regarded as the offender, unless both are underage. Definitions Restricted by age difference: the younger partner is deemed able to consent to having sex with an older one as long as their age difference does not exceed a specified amount. Restricted by authority: the younger partner is deemed able to consent to having sex with an older one as long as the latter is not in a position of trust or authority, or is not recognised to be abusing the inexperience of the younger one. Unrestricted: the age from which one is deemed able to consent to having sex with anyone else at or above the age of consent, or the marriageable age if they must be married. Different jurisdictions express these definitions differently; a jurisdiction may, for example, set a general age of consent of 18 but allow an exception at a lower age if the older partner is not in a position of authority over the younger one. The data below reflects what each jurisdiction's legislation actually means, rather than what it states on the surface.
Governed by State or Province Australia Mexico United States Rest of the world See also Age of consent Ages of consent in Africa Ages of consent in Asia Ages of consent in Europe Ages of consent in North America Ages of consent in the United States Ages of consent in Oceania Ages of consent in South America Age of consent reform Age of consent reform in Canada Age of consent reform in the United Kingdom Age of Consent Act, 1891 French petition against age of consent laws Youth Youth suffrage Youth rights Legal age Legal drinking age Age of majority Age of reason (canon law) Age of criminal responsibility Mature minor doctrine Emancipation of minors Fitness to plead, law of England and Wales Minors and abortion Convention on the Rights of the Child Child sexual abuse Sex-positive movement Age disparity in sexual relationships Comprehensive sex education Adult film industry regulations Sodomy law The Maiden Tribute of Modern Babylon References Sex laws Sexuality Minimum ages
Age of consent by country
[ "Biology" ]
448
[ "Behavior", "Sexuality", "Sex" ]
61,975,691
https://en.wikipedia.org/wiki/C12H16O6
The molecular formula C12H16O6 (molar mass: 256.25 g/mol, exact mass: 256.0947 u) may refer to: Diethylsuccinoylsuccinate Phenyl-β-D-galactopyranoside
C12H16O6
[ "Chemistry" ]
71
[ "Isomerism", "Set index articles on molecular formulas" ]
61,977,183
https://en.wikipedia.org/wiki/Nokia%206.2
The Nokia 6.2 is an Android smartphone designed by HMD Global. It was announced at IFA Berlin on 6 September 2019, with two models launching at costs of €199 and €249. Specifications Design The Nokia 6.2 has a 6.3-inch display with a 19:9 aspect ratio and is nearly bezel-less, with a dewdrop notch at the top. The display is HDR10 certified with 1 billion colours and real-time SDR to HDR conversion. The Nokia 6.2 has three rear cameras: a main sensor, an ultrawide sensor, and a depth sensor. The phone has a dedicated night mode, which helps improve low-light photography. The smartphone also features Nokia face iD, which is made by Truly Secure. Another change on the exterior of the phone is the addition of a dedicated Google Assistant button on the left of the phone. This can be pressed to quickly activate the Google Assistant, or held and released for the Google Assistant to start and stop listening. It can also be remapped in Settings to open an app of the user's choice. The phone has a smooth metallic finish and a satin glass back, rather than the milled 6000 series aluminium back found in the Nokia 6.1 and 6. Internal components The Nokia 6.2 comes with the Snapdragon 636 system on a chip (SoC). It has a 3,500 mAh battery and supports storage of up to 512 GB. Models The €199 model comes with 3 GB RAM (random-access memory) and 32 GB storage, whereas the €249 model comes with 4 GB RAM and 64 GB storage. A more expensive model with 4 GB RAM and 128 GB storage will be available in some countries in the future. The Nokia 6.2 comes in two colours: Ceramic Black and Ice. Reception Reviews for the Nokia 6.2 were mostly positive. TechRadar stated, "The Nokia 6.2 looks to be a strong contender at the affordable end of the Android smartphone market". Gizmochina gave the phone an 8.4 out of 10.
Trusted Reviews said, "the screen looks great – especially for this price" and "With its three cameras on the back, nice-looking display and Android One software, the Nokia 6.2 looks like it could be a rival to the Moto G series and the likes of Redmi, Realme and Honor." GSMArena said that "the colours looked nice and outdoor visibility was okay" and said it's "reasonably well specced for the segment it needs to fight in". They also commented on the charging, saying "We're not too thrilled about the 5V/2A charging capabilities - it'll take a while to fill...[the] 3,500mAh batteries." See also Differences between Nokia 6.2 and 7.2 References 6.2 Mobile phones introduced in 2019 Phablets Mobile phones with multiple rear cameras Mobile phones with 4K video recording
Nokia 6.2
[ "Technology" ]
614
[ "Crossover devices", "Phablets" ]
61,977,369
https://en.wikipedia.org/wiki/Xiaomi%20Mi%209%20Pro
The Xiaomi Mi 9 Pro 5G is a flagship Android smartphone developed by Xiaomi. It was announced in September 2019 as an upgraded version of the Mi 9. Specifications Design The Xiaomi Mi 9 Pro 5G is similar to the Mi 9 externally, with Gorilla Glass 6 on both the front and rear and a 7000 series aluminum frame. It is available in Dream White or Titanium Black. Hardware The Xiaomi Mi 9 Pro 5G is powered by the Qualcomm Snapdragon 855+ SoC, with 8 GB or 12 GB of LPDDR4X RAM and the Adreno 640 GPU. Storage options include 128 GB, 256 GB, or 512 GB. The display remains the same, with a 6.39-inch (162.3 mm) 1080p (1080 × 2340) AMOLED panel and an 85.5% screen-to-body ratio. The battery is larger at 4000 mAh, and supports 40 W fast charging over USB-C with 30 W fast wireless charging. It can also charge other Qi-compatible smartphones at 10 W. The under-display optical fingerprint sensor is carried over from the Mi 9. The cameras are unchanged as well, with a 48 MP main camera, a 12 MP telephoto lens with 2x optical zoom and a 16 MP ultrawide lens at the rear, with a 20 MP front camera. Software It runs on Android 9 Pie, with Xiaomi's custom MIUI 10 skin. Later it was updated to MIUI 13 based on Android 11. References Android (operating system) devices Xiaomi smartphones Mobile phones introduced in 2019 Mobile phones with multiple rear cameras Mobile phones with 4K video recording Mobile phones with infrared transmitter Discontinued flagship smartphones
Xiaomi Mi 9 Pro
[ "Technology" ]
358
[ "Discontinued flagship smartphones", "Flagship smartphones" ]
60,784,711
https://en.wikipedia.org/wiki/Digital%20sublime
The digital sublime is the mythologization of the impact of computers and cyberspace on human experiences of time, space and power. It is also known as the cyber sublime or algorithmic sublime. It is a philosophical conception of emotions that captivate the collective conscience with the emergence of these new technologies and the promises and predictions that emerge from them. These emotions are the awe, the astonishment, the rationality-subsuming glory, and the generally intense spiritual experience. This feeling is essentially provoked by intentionally black-boxed algorithms or by the lack of knowledge about algorithms. The sublime can be either utopian or dystopian depending on the individual's interpretation of their emotional response. The utopian interpretation of the digital sublime is known as digital utopianism and the dystopian is referred to as digital dystopia. Classical notion of the sublime The classical notion of the sublime was fathered by Immanuel Kant in his work Observations on the Feeling of the Beautiful and Sublime (1764). He defined the sublime in his piece Critique of Judgment (1790) as: "an object (of nature) the presentation of which determines the mind to think of nature's inability to attain to an exhibition of ideas." The nature of the classical sublime according to Kant was the sensation produced in the individual when confronted with something that: Was beyond the realms of the mind's comprehension Overawed the imagination The result was an overwhelming sense of empowerment at being able to stand before such a spectacle and exhilaration at how fragile a person is in the face of such tremendous power and immensity. Examples for Kant were standing before a mountain or overlooking the raging sea. Edmund Burke's (1756) work, "Philosophical Enquiry into the Origin of Our Ideas of the Sublime and Beautiful", is another contribution to this classical notion, written at a similar time to Kant's.
For him, the sublime emerges from the terrible, or that which invokes terror. Origins of the digital sublime Vincent Mosco, a highly respected academic internationally and currently a professor at Queen's University in Canada, is one of the leading thinkers in the development and distinction of the digital sublime. His seminal work "The Digital Sublime: Myth, Power, and Cyberspace" explains that the digital sublime did not have a definite beginning. However, he outlines how it emerged as a progression from the technological sublime, which was the beginning of a shift in conceptions of the sublime connected to the industrial revolutions of the late 19th century and early 20th century. Inventions such as the railroad, electricity, the radio, and the aeroplane all captivated the collective conscience with the possibility of ushering in a global village. Mosco argues that there has not been a significant change in our approach to the appearance of new technologies, with the same prophecies of revolutionizing the human experience of time, space and power, even to the extent of ending world conflict. Mosco identifies these same promises heralded by computers and cyberspace. The digital revolution and information revolution are captivating the imagination and attention of global onlookers in an almost identical fashion. Skowronska, an emerging academic from the University of Sydney and a contemporary artist, proposes that the emergence of new technologies such as graphics cards for video games, open-source programs, three-dimensional computer processing engines, and the digital video screen opened up whole new possibilities of virtuality as a creative tool.
She proposes that it was in the ability of these technologies to represent the invisible that truly distinguished the digital sublime from its classical notion and that it did so "through a virtual channel of mathematical coding, or algorithms, that act as correlates for this invisible world, translating it into a visual field perceptible by human optics". Artistic expression The classical notion of the sublime has been represented by various artists such as J. M. W. Turner drawing on the inspiration described by both Kant and Burke: the natural world. Artwork on the digital sublime has now emerged attempting to capture our awe and excitement around Big Data, New Media and Web 2.0. Huang proposes that the digital sublime in artistic expression is the representation of something unpresentable. A prime example of this is digital composite-photography which involves stitching photographs together to create images that would not be possible without recent technology in order to conceptualise complex ideas through image. Skowronska, however, associates the digital sublime in art as a move away from the massive to the minutiae. She proposes that the representation of what is not physically perceptible to the eye, but that has a representation in the virtual facilitated by new technology is the distinctive mark of this art form. She has done significant work of her own in this field as well as developing the digital sublime conceptually through her thesis. Her individual work focuses on the manipulation, projection and representation of data through different digital forms. Criticisms The digital sublime, for some theorists, is not only unhelpful in understanding and engaging with new technology, but they believe that Mosco's myths inhibit and endanger cyberculture in a way that Burkart likens to the endangerment of biodiversity. Such theorists argue that the digital sublime blinds users to the risks and vulnerabilities of cyberspace. 
The digital sublime and political economy Media theorists have argued that the digital sublime obscures and obfuscates the inner workings of Web 2.0. They have worked to critically analyse and evaluate the processes, algorithms, and functions behind the user interface in order to unveil the driving forces of development and updates online. Political economic theorists have proposed that the myths espoused by the digital sublime, of the internet providing a faultless user experience with everything desired at the user's fingertips, are inaccurate. Underneath the surface, business owners are manipulating the infrastructure and digital architecture of platforms in order to create the most profit. The digital sublime and the music industry The digital sublime has seemingly encouraged the narrative that streaming services and cloud-based storage will lead to unprecedented freedom and access to music. While it is true that the physical limitations on access to music content have been reduced to the requirement of having an electronic device with an internet connection, the truthfulness of this emancipation has been brought under scrutiny. Patrick Burkart proposes that the emancipation of content is limited to the mobilisation of content. As opposed to freeing up content, access is still limited by algorithms giving preference to more popular content and consequently further obscuring the greater diversity of content that is actually available. Instead, he argues that it is those who disrupt our perception of the seamless and all-encompassing nature of music streaming services who reveal to us the technical and legal barriers that benefit content providers and that are limiting, even shepherding, user experience so as to meet their goals. He sees pirates of media content as the key dissenters to this otherwise invisible vertical integration, and as symptomatic of the fragility of cyber-communities.
See also Cybernetic art Digital empathy Techno-animism Sublime (philosophy) References Concepts in aesthetics Information technology Digital art
Digital sublime
[ "Technology" ]
1,438
[ "Information and communications technology", "Information technology" ]
60,784,756
https://en.wikipedia.org/wiki/Pulse%20electrolysis
Pulse electrolysis is an alternative electrolysis method that utilises a pulsed direct current to initiate non-spontaneous chemical reactions. Also known as pulsed direct current (PDC) electrolysis, it introduces an increased number of variables to the electrolysis method that can change the application of the current to the electrodes and the resulting outcome. This varies from direct current (DC) electrolysis, which only allows the variation of one value, the voltage applied. By utilising conventional pulse width modulation (PWM), multiple dependent variables can be altered, including the type of waveform, typically a rectangular pulse wave, the duty cycle, and the frequency. Currently, there has been a focus on theoretical and experimental research into PDC electrolysis in terms of the electrolysis of water to produce hydrogen. Claims have been made that it could result in a higher electrical efficiency than DC water electrolysis, but past research has shown this is not the case. The varying voltage and current added on top of the DC cause additional energy consumption with no effect on the hydrogen production. Because of this increased energy consumption, attempts to replicate the claimed benefits experimentally have not succeeded, and have found negative effects on electrolyser longevity instead. PDC electrolysis is not confined to the electrolysis of water. Uses in industry such as electroplating and electrocrystallisation are also undergoing research due to the wider range of properties that can be achieved. The various and alterable effects of using intermittent pulses in PDC electrolysis have resulted in an area of interest that could benefit industry.
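To make the role of these waveform parameters concrete, here is a minimal sketch (an illustration under the stated definitions, not code from any cited work) of how the duty cycle and peak voltage of an ideal rectangular pulse train determine the mean and RMS voltage applied to the cell:

```python
import math

def rectangular_pulse_stats(v_peak, duty_cycle):
    """Mean and RMS voltage of an ideal rectangular pulse train.

    The waveform equals v_peak for a fraction `duty_cycle` of each period
    and 0 otherwise; the pulse frequency does not affect these averages.
    """
    v_mean = v_peak * duty_cycle
    v_rms = v_peak * math.sqrt(duty_cycle)
    return v_mean, v_rms

# Example: 2 V pulses at a 25% duty cycle
v_mean, v_rms = rectangular_pulse_stats(2.0, 0.25)
print(v_mean, v_rms)  # 0.5 V mean, 1.0 V RMS
```

For a fixed average current (which sets the hydrogen production rate by Faraday's law), a pulsed waveform has a higher RMS value than steady DC, so resistive losses are larger; this is consistent with the finding above that pulsing adds energy consumption without increasing hydrogen output.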
However, as it is still being researched and has produced conflicting results, a consistent and reliable answer to how dependent electrolysis efficiency is on the properties of an electrical pulse has not been determined; hence, other forms of electrolysis such as polymer electrolyte membrane and alkaline water electrolysis are being used in industry. Research history PDC electrolysis was first considered theoretically in 1952, and experimental research began as early as 1960; however, it was originally focused on its technical applications to industry and the possibilities of improving the quality and rate of metal deposition. It partially succeeded, providing promising results in its ability to create smoother, denser deposits and to reduce the amount of metal required in electroplating. The first instance in which it was considered for initialising the electrolysis of water was from the perspective of magnetolysis in 1985, where high-strength magnets, or in this case electromagnets, are used in conjunction with homopolar propellers. Ghoroghchian and Bockris conducted this experimental research to determine how a pulsed current can impact the rate of hydrogen production and provide economic advantages. A current density ratio of 2.07 was observed, demonstrating for the first time that a pulsed current can double the production of hydrogen in comparison to a steady-state current. Since hydrogen gas cannot be collected in its free form, and it can be used to provide a source of renewable and clean energy through fuel cells, discovering an electrolysis method with the greatest efficiency is valued. With early experimental and theoretical success, patents continued to be developed until as recently as 2002, but since 1985 it has only been researched intermittently, with varying levels of success.
Experimental research
With the perspective that the current use of non-renewable fuel sources is a main cause of global environmental problems, hydrogen is being viewed as a possible renewable fuel replacement. For this to be feasible, the production of hydrogen, through methods such as electrolysis, must be efficient in terms of the energy, cost and time required. Whilst multiple methods of pulse electrolysis have been studied, and experimental results are mixed, the underlying theory behind the approach remains consistent.

Theoretical concept
When a voltage is applied to an electrolysis cell, an electric double layer (EDL), or diffusion layer, is theoretically formed immediately afterwards. This creates a capacitance, causing the electrolyser to act as a capacitor. When this is present, excess voltage must be supplied by the direct current to compensate for the loss in the 'capacitor', which raises the required voltage to what is called the thermo-neutral voltage. One of the aims of PDC electrolysis is to overcome this: theoretically, when the PWM switches the current on, charge is stored in this capacitance, and when the duty cycle is over, it is released, continuing the flow of current whilst reducing the EDL that is formed. Poláčik and Pospíšil believe that manipulating variables such as the duty cycle can increase or decrease the effectiveness of pulse electrolysis at reducing this layer. A theoretical relation, the Sand equation, is used to calculate the time required for the EDL to fall to zero, allowing PDC electrolysis to achieve its highest efficiencies.

Use in magnetolysis
Electrolysers require high currents produced at very low voltages. A homopolar generator has the ability to do this, so in Bockris and Ghoroghchian's original experiment in 1985, they followed Faraday's idea.
Using a magnetic field of 0.86 T produced by permanent magnets, they placed a stainless-steel disc in between. The disc needed a rotation speed of 2000 rpm to reach the electrical potential required for electrolysis. The difference between Faraday's original model and Bockris and Ghoroghchian's is that their disc rotates while in contact with an electrolyte. They encountered one large problem: a viscous force created by the electrolyte that slowed the motion of the disc. The two ways to fix this were to rotate the disc and solution together or to increase the magnetic field used. The latter being more practicable, the required magnetic field was calculated according to the power consumption rate of producing a cubic metre of hydrogen. It was discovered that a magnetic field of 11 T was needed for effective electrolysis, more than 12 times greater than what was originally used. Superconducting magnets would be required, and their expense ruled this out as a possible method.

Their final decision was to use a homopolar generator as an external source of power, which follows Faraday's method more closely. In this method, a pulsed potential was created to take advantage of previous studies that give an effectiveness factor of 2 when either a nickel electrode or a Teflon-bonded platinum electrode is used. The generator was constructed with a magnetic flux density of 0.6 T, a rotor radius of 30 cm and a loop coated with copper strips. These units were connected in series to increase the output potential and reduce the rotation speed required. Pulses of 2-3 V sustained for 1 ms were achieved. This was the first successful application of pulse electrolysis for the production of hydrogen, although it still presents limitations on its possible use in industry.
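The standard ideal Faraday-disc (homopolar generator) relation, V = ½Bωr², gives a feel for the voltages involved. B and r below are the values of the generator described above (0.6 T, 30 cm); the rotation speed is an illustrative assumption, and eddy-current and contact losses in a real device are ignored:

```python
import math

# EMF of an ideal homopolar (Faraday disc) generator: V = 1/2 * B * omega * r^2.
# B and r follow the generator described in the text; rpm is an assumed value.

def homopolar_emf(b_tesla, radius_m, rpm):
    omega = rpm * 2.0 * math.pi / 60.0     # angular velocity in rad/s
    return 0.5 * b_tesla * omega * radius_m ** 2

single_unit = homopolar_emf(0.6, 0.30, 1000)   # ~2.83 V at an assumed 1000 rpm
print(round(single_unit, 2))
```

Connecting several such units in series multiplies the output, which is why the series arrangement described above reduces the rotation speed needed to reach 2-3 V pulses.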
Conflicting research
A comparison between pulsed and non-pulsed DC current electrolysers was explored in 1993 by Shaaban, which demonstrated that a non-pulsed current used the least electrical power. The experimental electrolyser separated the anolyte and catholyte compartments and used a Nafion 324 membrane to allow ion exchange. The distance between the anode, made of platinum-coated titanium, and the cathode, stainless steel, was 3 mm, and both were immersed in a 10 weight percent sulfuric acid electrolyte. He conducted tests at several frequencies, 0.01 Hz, 0.5 kHz, 1 kHz, 5 kHz, 10 kHz, 25 kHz, and 40 kHz, and with four duty cycles: 10, 25, 50, and 80%.

Initial observations revealed that the off-period resulted in a reversal of polarity, causing the reaction to reverse. This affected the cathode, which displayed a 2 g loss after experimentation. A diode was inserted into the circuit to rectify the polarity. However, the cell was then prevented from dropping to 0 V during the off-period, maintaining a higher value of 2.3 V. This further impacted the experiment, distorting the square wave produced by the function generator Shaaban used, as the electrical potential supplied needed to overcome the cell voltage of 2.3 V before current could flow. Bockris et al. record that current would continue to flow, discharging ions from the EDL, but this was contradicted in this experiment; it only occurred when the diode was in place, which also prevented a current spike in the duty cycle. With a 10% duty cycle at a 1 kHz pulse, temperature increases nearly 7 °C greater than in the non-pulsed electrolysis were found. Calculating the power consumption, it was determined that a non-pulsed current had power demand losses of 3.5%, while a pulsed current resulted in 13-16% losses. This also opposes the idea from Bockris et al.
that the effectiveness of non-pulsed DC electrolysis increases by a factor of 2 when a pulsed current is applied.

Industrial uses
The possible increased effect of a pulsed current on the corrodibility of metals was first examined by de la Rive in 1837. It was investigated around 60 years later by Coehn, regarding the effect of a current with a rectangular waveform on the plating of zinc deposits, resulting in a successful patent application. A full review of using PDC electrolysis in electroplating, also known as electrodeposition or 'pulse plating', was only published in 1954 by Baeyens, this being the first area of research into the use of pulse electrolysis in industry.

A pulsed current can be varied in many ways, which increases the possible outcomes and can vary the properties of deposited metals during electroplating. Hansal and Roy, in their review of the third European Pulse Plating Seminar, concluded that each deposition system must have a unique pulse sequence developed in order to optimise the process and gain the desired results, in contrast to traditional plating, which cannot be as freely tailored to a situation. The nucleation and crystallisation of the deposited metal are directly affected and can meet favourable or unfavourable circumstances if specific conditions are not met. It is reported that pulse plating can encourage nucleation, causing grain refinement and reducing grain size, as well as increasing deposit density, which can improve microhardness. These effects were first researched on zinc by Coehn. It was discovered that a pulsed current at high frequency can produce deposits of higher quality, with properties ranging from a smoother finish, through the reduction in grain size, to a lower corrosion rate. This is beneficial as zinc is mainly used as a sacrificial anode in industry.

Claimed advantages
In theoretical electrolysis of water, a voltage of only 1.23 V is required to split water into hydrogen and oxygen.
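The 1.23 V figure is the reversible cell voltage obtained from the standard Gibbs free energy of water splitting, E = ΔG/(zF). A quick check with standard-state values at 25 °C:

```python
# Reversible cell voltage of water electrolysis from standard thermodynamic data.
F = 96485.0          # Faraday constant, C/mol
dG = 237130.0        # standard Gibbs free energy of water formation, J/mol
z = 2                # electrons transferred per H2 molecule

E_rev = dG / (z * F)
print(round(E_rev, 2))  # 1.23
```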
The formation of an EDL increases this to its thermo-neutral voltage of 1.45 V. It is claimed that minimising the EDL formed during pulse electrolysis is advantageous, as it can reduce the thermo-neutral voltage and the energy input required, increasing energy efficiency. However, this claim follows from a misconception regarding energy consumption in the system when varying current and voltage waveforms are applied. The hydrogen production rate is determined by the mean of the current waveform, according to Faraday's law of electrolysis, but the mean of the voltage waveform is not sufficient to evaluate the rate of energy consumption. Instead, the mean of the product of instantaneous current and voltage should be assessed, which reveals increased energy consumption due to the alternating current and voltage waveforms, in comparison to DC water electrolysis with an equal hydrogen production rate.

Disadvantages
Whilst the method of PDC electrolysis was claimed by Ghoroghchian and Bockris in 1952 and 1985 to work extremely well in theory, it is difficult to replicate with consistently positive results in practical experimentation. As further research on the dynamic operation of water electrolysis has found only negative impacts from alternating the current and voltage supplied to the system, from both an energy and a longevity point of view, the claimed benefits of pulsed electrolysis may have no basis in reality. The energy consumption of a system with only positive resistance (cf. negative resistance) can only increase as a function of current and voltage amplitude. According to Shaaban, during the pulse-off period, if the electrolytic cell is not constructed properly, the current polarity can reverse. This can cause the cathode to deteriorate. In electrolysis, the cathode is where the reduction of hydrogen occurs, forming the desired hydrogen gas.
Any loss in mass can reduce the speed and effectiveness of the electrolytic reaction, reducing the overall efficiency of the pulse electrolysis method. Shaaban also states that due to expected internal losses, such as heat, the current density required will increase, which increases the required voltage. As a result, greater overpotentials are needed, which are further converted to heat.
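The energy-accounting argument made in the sections above can be illustrated with a minimal ohmic cell model, V = E_rev + IR (all parameter values here are illustrative assumptions, not measured data). Equal mean current gives equal hydrogen output by Faraday's law, yet the pulsed waveform's mean of V·I exceeds the DC value:

```python
# Pulsed vs DC power consumption at equal mean current (equal H2 output).
# Simple ohmic cell model V = E_rev + I*R; parameter values are illustrative.

E_rev, R = 1.23, 0.5          # reversible voltage (V) and cell resistance (ohm)
I_dc = 2.0                    # steady DC current, A
duty = 0.25
I_peak = I_dc / duty          # peak pulse current giving the same mean current

p_dc = (E_rev + I_dc * R) * I_dc                  # mean(V*I) for DC
p_pulse = duty * (E_rev + I_peak * R) * I_peak    # mean(V*I) over one pulse cycle

print(p_dc, p_pulse)          # the pulsed case consumes more power
```

The ohmic I²R term scales with the square of the instantaneous current, so concentrating the same charge into short pulses always raises mean(V·I) above the DC case, consistent with the losses Shaaban measured.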
Design infringement
Design is a form of intellectual property right concerned with the visual appearance of articles which have commercial or industrial use. The visual form of the product is what is protected, rather than the product itself. The visual features protected are the shape, configuration, pattern or ornamentation. A design infringement occurs when a person infringes a registered design during the period of registration. The definition of a design infringement differs in each jurisdiction but typically encompasses the purported use and making of the design, as well as whether the design is imported or sold during registration. To understand whether a person has infringed the monopoly of the registered design, the design is assessed under each jurisdiction's provisions. The infringement is of the visual appearance of the manufactured product rather than its function, which is covered under patents. Often infringement decisions focus more on the similarities between the two designs than on the differences.

Legislation

Australia
In Australia, a person infringes a registered design if a party manufactures and sells, uses or imports the same or a similar design to the registered design without permission of the registered owner. This is held under s 71 of the Designs Act 2003 (Cth).
The following is an extract of s 71 of the Designs Act 2003 (Cth), under Infringement of Design:

"(1) A person infringes a registered design if, during the term of registration of the design, and without the licence or authority of the registered owner of the design, the person: (a) makes or offers to make a product, in relation to which the design is registered, which embodies a design that is identical to, or substantially similar in overall impression to, the registered design; or (b) imports such a product into Australia for sale, or for use for the purposes of any trade or business; or (c) sells, hires or otherwise disposes of, or offers to sell, hire or otherwise dispose of, such a product; or (d) uses such a product in any way for the purposes of any trade or business; or (e) keeps such a product for the purpose of doing any of the things mentioned in paragraph (c) or (d)"

The Designs Act recognises two types of infringement: primary and secondary. A primary infringement relates to s 71(1)(a), where a person directs, causes or procures the product to be made by a third party. Secondary infringement relates to ss 71(1)(b), (c), (d) and (e), where a person infringes a registered design if no licence or authority has been given. A parallel import of a registered design is allowed in Australia.

The Designs Act 2003 replaced the Designs Act 1906, with particular changes to the way design infringements are identified. Key changes included removing the tests of obvious and fraudulent imitation. It was also introduced that a certificate of examination must be issued prior to infringement proceedings. The test for infringement is significantly broader, as it expressly requires an assessment of the similarities and differences between the registered design and the purported infringing design.
United Kingdom
Under the Registered Designs Act 1949, a design right is infringed when a person, without consent from the registered design holder, makes, offers, imports or exports the product. The infringement of rights in registered designs is laid out within section 7A of the Registered Designs Act 1949. An infringement of the right in a registered design is actionable by the registered proprietor. The Act advises that the right in a registered design is not infringed if the act is done in private and is not commercial in nature, is experimental, or is a reproduction for teaching purposes. The UK Court of Appeal has confirmed that in determining an infringing article, the registered design, the alleged infringing object, and the prior art must be evaluated. Put simply, the test is a visual comparison between the two designs. The Act also provides an exemption for innocent infringers: damages are not awarded against a defendant if there is sufficient evidence to prove that he or she was not aware that the design was registered.

An alternative protection that United Kingdom legislation offers is the principle of an unregistered design. To sue for infringement, the unregistered design right holder must prove that they created the design in the first place and that the infringing article is a deliberate duplication. Further, it must be proved that the shape and overall configuration of the protected product is not the same as any products that were publicised before the design was created.

United States
In the United States, designs are governed by the patent statute, with design patents set out in 35 U.S.C. § 171 (Chapter 16). Here, protection is given for a new, original and ornamental design for an article. As in other jurisdictions, a US design patent only provides protection for the visual design aspects of the article, rather than its function.
Infringement of patents is covered by 35 U.S.C. § 271 (Chapter 28), which defines an infringement as, without authority, making, using, offering to sell, or selling a patented design. An infringement also covers any attempt at infringing a design, and selling components of patented articles. Section 271 outlines both direct and indirect infringement. Direct infringement encompasses the unauthorised importation of patented products (35 U.S.C. § 271(a)) and the unauthorised importation of products of a patented process (35 U.S.C. § 271(g)). Indirect infringement imposes liability upon those who have aided another in direct infringement of a registered design (35 U.S.C. § 271(b)) or contributed to infringement (35 U.S.C. § 271(c)). This highlights the most common type of infringement, where the infringer's knowledge of the actions taken is established to confirm whether there is an infringement. Proof of intent is also necessary to show contributory infringement. The statute does not provide protection for unregistered designs; to gain protection from infringement, and any design patent right, it is necessary to file a patent application.

Testing infringement
Infringement of a registered design can be identified through the 'eyes of an ordinary observer' test. This means that the appearance of an accused design is held to be an infringement if the design is significantly similar, such that one might purchase the accused product thinking that it is the patented design. This test is based on an ordinary observer being familiar with a product and being able to distinguish between the registered design and prior art designs. The case of Egyptian Goddess Inc. v Swisa Inc. was key in adopting the ordinary observer test. The Court found that for a design to be infringed, the accused design must have appropriated the registered design. The infringement lies in the similarity between the designs that distinguishes them from the prior art base.
In assessing the overall similarity of designs, s 19 of the Designs Act 2003 (Cth), for example, provides a list of factors to consider in testing infringement. This includes giving more weight to the similarities between the designs than to the differences. If one aspect of a design is substantially similar, weight must be given to the importance of that aspect of the design. The decision maker must also take the point of view of a person who is familiar with the products: the informed user.

The 'informed user' test can also be used to identify an infringement of a registered or unregistered design. An informed user can range from a consumer to a sectoral expert with technical proficiency. The user will be able to notice small differences between designs and can be seen as particularly observant, having had personal experience with, or key knowledge of, the product. The informed user need not be an expert or a consumer in every case; the selected informed user must be a person who has significant familiarity with the product's appearance, use and nature.

Audiences
Within design and patent law, experts are seen as the primary decision makers in assessing the similarities and differences of designs. As infringement is judged by different audiences of the design, e.g. consumers and experts, the motivations for infringement are distinguished. Consumers regard accused designs as substitutes where they function in similar ways; to consumers, the designs would be interchangeable. The test for infringement of a design patent draws much more from trademark law than from patent law, as the test evokes an audience of reasonable purchasers of the design or product, similar to that of the trademark test. As mentioned, infringement is judged "in the eye of an ordinary observer".
From this, the audience for the test of infringement is an ordinary observer who is placed in the position to determine the similarities of the designs.

Enforcement
To commence enforcement proceedings, the Court must establish that the product infringes the registered design and that the design is a valid registration. The registered design owner can only consider enforcement proceedings once a certificate of examination has been provided; to enforce design rights against an infringing designer, the owner of the registered design must initiate the process of examination. A design Registrar will not grant a certificate of examination if the design is found to be invalid because there is no newness or distinctiveness to it. The examination consists of a comparison with the designs that existed prior to the lodgement of the design application. The test of the ordinary observer enables registered designs with significant similarities to have a broader scope of enforcement. In many jurisdictions it is common for a party threatened with infringement to be allowed to seek relief even though the design has not yet been certified. Action may be taken to protect the goodwill and reputation of the design holder.

Courts
Courts are an essential aspect of the enforcement of design infringement. Court-appointed experts are beneficial to enforcement proceedings, as a panel of assessors such as patent attorneys, designers and engineers enhances the limited technical knowledge a judge may have in a certain area. The Courts will assess damages based on the loss of profit and reputation of the design holder, and the profits made by the infringer. Case management is supported by the Court to enable the most economic and efficient method of bringing infringement proceedings to trial.
Alternative dispute resolution
Alternative dispute resolution can be a more effective way of resolving design infringement, as enforcement mechanisms are often not suited to the common disputes that arise. Design disputes can involve complex technical and commercial issues that can be better determined by an expert within alternative dispute resolution than by employing witnesses in the courts. Arbitration and mediation are suitable for resolving intellectual property disputes, as most common disputes involve small claims for damages. For intellectual property disputes, alternative dispute resolution provides benefits including confidentiality, greater control over the process and a more neutral outcome. It is also more cost-effective than litigation, and therefore more attractive to smaller companies and individuals without the resources, time and funding to resolve cases in court.
Nuclear acoustic resonance
Nuclear acoustic resonance is a phenomenon closely related to nuclear magnetic resonance. It involves utilizing ultrasonic acoustic waves of frequencies between 1 MHz and 100 MHz to determine the acoustic absorption resulting from interactions of particles whose nuclear spins respond to magnetic and/or electric fields. The principles of nuclear acoustic resonance are often compared with nuclear magnetic resonance, specifically its usage in conjunction with nuclear magnetic resonance systems for spectroscopy and related imaging methodologies. For this reason, nuclear acoustic resonance can in principle be used for the imaging of objects as well. However, in most cases nuclear acoustic resonance requires the presence of nuclear magnetic resonance to induce nuclear spin transitions within specimens in order for the absorption of acoustic waves to occur. Experimental and theoretical investigations of the absorption of acoustic radiation by different materials, ranging from metals to subatomic particles, have shown that nuclear acoustic resonance has specific usages in fields other than imaging. Experimental observation of nuclear acoustic resonance was first obtained in 1963 by Alers and Fleury in solid aluminum.

History
Nuclear acoustic resonance was first discussed in 1952, when Semen Altshuler proposed that acoustic coupling to nuclear spins should be observable. This was also proposed by Alfred Kastler around the same time. From his specialization in the field, Altshuler theorized the nuclear spin-acoustic phonon interactions, which led to experimentation in 1955. The experiments led physicists to suggest that nuclear acoustic resonance coupling in metals could be formulated and observed, with modern physicists discussing the many properties of nuclear acoustic resonance, although it is not a widely known concept.
Concepts of nuclear acoustic resonance in objects had been theorized and predicted by many physicists, but it was not until 1963 that the first observation of the phenomenon occurred, in solid aluminum, followed by observation of its dispersion in 1973 and, subsequently, the first experimental nuclear acoustic resonance in a liquid, gallium, in 1975. The related phenomenon of acoustic spin resonance was observed by Bolef and Menes in 1966 through samples of indium antimonide, where nuclear spins were shown to absorb acoustic energy exhibited by the sample.

Theory of nuclear acoustic resonance

Nuclear Spin and Acoustic Radiation
Nuclei spin as a consequence of the magnetic and electric properties of the different nuclei within atoms. Commonly this spin is utilized within the field of nuclear magnetic resonance, where an external RF (or ultra-high frequency) magnetic field is used to excite and resonate with the nuclear spins of the internal system. This in turn allows the absorption or dispersion of electromagnetic radiation to occur, and allows magnetic resonance imaging equipment to detect and produce images. In nuclear acoustic resonance, however, the transitions between the energy levels that determine the orientation of the spin under internal or external fields are driven by acoustic radiation. As the acoustic waves used are of frequencies between 1 MHz and 100 MHz, they are characterized as ultrasound or ultrasonic (sound of frequencies above the audible range, i.e. above about 20 kHz).

Comparison with Nuclear Magnetic Resonance
Both phenomena introduce and utilize external sources, such as a DC magnetic field, at various frequencies, and results from both methods produce similar data sets and trends in different variables. However, there are distinct differences in the methodologies of the two concepts.
Nuclear acoustic resonance involves inducing internal spin-dependent interactions, while nuclear magnetic resonance denotes interactions with external magnetic fields. Due to this, nuclear acoustic resonance is not solely dependent on nuclear magnetic resonance, and can be operated independently. Cases where nuclear acoustic resonance is a better substitute for nuclear magnetic resonance include resonance in metals that electromagnetic waves find difficult to penetrate, such as amorphous metals and alloys, whereas acoustic waves can easily pass through. However, the suitability of nuclear acoustic resonance or nuclear magnetic resonance depends on the material to be used, in order to achieve the most efficient and evident results.

Physics of Nuclear Acoustic Resonance
Nuclear acoustic resonance implements physics from both nuclear magnetic resonance and acoustics, involving the laws of quantum mechanics to derive the theory of acoustic resonance in objects whose nuclei have a nonzero spin angular momentum I, with magnitude ħ√(I(I+1)). In elements where I ≥ 1, the characteristics of the nuclear spin also include electric moments, the lowest of which is the electric quadrupole moment, denoted Q. This moment interacts with the electric field gradients at the nucleus produced by the surrounding charges. In effect, the nuclear magnetic resonance used to induce nuclear acoustic resonance is affected. By utilizing the magnetic spin of nuclei under RF magnetic fields, and their spin-lattice relaxation after excitation by the external field to higher energy states, it is possible for acoustic waves to interact with nuclear spins, which often involves externally generated phonons. However, interactions of acoustic waves with nuclear spins do not guarantee the observation of acoustic resonance in objects.
During the interactions, the acoustic waves experience a slight change in magnitude caused by absorption by the object's nuclear spins, and measurement of this change is crucial to observe and detect nuclear acoustic resonance in the object. Hence, due to the difficulty of analyzing nuclear acoustic resonance, it is only observed indirectly. As further propositions were made, ultrasonic pulse-echo techniques were introduced to detect changes in acoustic attenuation in specimens during experiments, being capable of detecting changes in solids of around 1 part in …, which suffices for the background attenuation but not for the nuclear spin-phonon coupling, whose attenuation coefficients range from … to … dB/cm. Hence a combination of a continuous wave (CW) ultrasonic composite-resonator technique and nuclear magnetic resonance techniques is required to actually detect nuclear acoustic resonance.

Nuclear Acoustic Resonance in Metals
Coherently or incoherently generated phonons excite the nuclear spins in nuclear acoustic resonance processes, and the result is compared with the direct spin-lattice relaxation mechanism. In that mechanism, spins are de-excited by interactions with resonant thermal phonons at low frequencies, an effect often denoted insignificant, certainly when compared with the indirect or Raman process, in which multiple phonons are involved. However, as direct spin-lattice relaxation characterizes solids at specific temperatures, because the resonant phonons form only a small percentage of the lattice vibration spectrum, it was proposed that solids could be subjected to acoustic energy using ultrasound with an energy density 10^10 to 10^12 times greater than that of the incoherent thermal phonons.
From this theory, it was predicted that observations of nuclear spin could be achieved at high temperatures using nuclear acoustic resonance principles and techniques, unlike normal circumstances where they are only visible at low temperatures. The first direct observation of nuclear acoustic resonance occurred in 1963 with samples of aluminum under an applied magnetic field, which minimally affected the properties of the sound waves being used, specifically their velocity and attenuation. The experimental analysis deduced that the effects of the external magnetic field on velocity and attenuation were proportional to its square, which allowed the acoustic attenuation coefficient to be calculated for any nuclear spin system undergoing absorption of acoustic energy. This coefficient can be written α = W/2Φ, where Φ = ½ρv³ε₀² is the incident acoustic power per unit area, determined by ρ, the density of the metal, v, the velocity of the propagated sound wave, and ε₀, the peak value of the strain. Furthermore, W, the power per unit volume absorbed by the system undergoing nuclear spin, is characterized by an expression in which N is the count of nuclear spins per unit volume of the metal, ν is the frequency, and the magnetic dipole coupling value enters. However, this formula does not factor in the effect on the metal of eddy currents caused by the magnetic fields. Nevertheless, the experimental observation of nuclear acoustic resonance in aluminum prompted proposals for further investigations in the field, such as single crystals of metals with weak quadrupole moments and nuclear spins of 1/2.

Nuclear Acoustic Resonance in Liquids
Due to the different properties of liquids compared to solids, it is typically impossible to detect nuclear acoustic resonance in liquids because of the difficulty of inducing resonance in them.
In solids, the spin transitions of nuclear acoustic resonance are induced by two different coupling mechanisms. Objects in the liquid state, however, are strongly affected by their thermal properties, which also influence the dynamic electric field gradient, making it nearly impossible to induce nuclear acoustic resonance in liquids via the coupling method. Hence in the first experimental attempt to observe nuclear acoustic resonance in a liquid sample, a metallic specimen was used as the object of interest. Further experimentation led to the use of external factors, such as piezoelectric nano-particles, to detect nuclear acoustic resonance in liquids, particularly in fluids. In the first successful experimental investigation of nuclear acoustic resonance in a liquid, a coherent electromagnetic wave inside the metal sample was produced by sound waves in the presence of an external DC magnetic field; the generated wave resonates with the nuclear spins of the object, allowing nuclear acoustic resonance to be theoretically observed. The theoretical predictions were confirmed when samples of liquid gallium were observed and measured. From this experimental observation, it was proposed that nuclear acoustic resonance in liquid metals requires magnetic dipole interactions due to the properties of liquids, which creates a dependence on the distance between particles in the liquid metal instead of on the ultrasonic displacement field as in solids. Because of this, and the fact that the total displacement field for the generated electromagnetic field is the superposition of the displacement fields, the electromagnetic field can be modeled, via Maxwell's equations, as a sum of coherent and incoherent parts.
Hence Unterhorst, Muller, and Schanz deduced that nuclear acoustic resonance in liquid metals can be achieved and observed if the diffusion length during the relaxation time is small compared to the ultrasonic wavelength of the sound wave. Imaging By propagating ultrasound acoustic waves onto objects such as patients, imaging is possible when resonance is achieved. The signal is then processed by a system of equipment that combines techniques and concepts from both ultrasound and magnetic resonance imaging to produce images for medical purposes. However, owing to the specific requirements for attaining nuclear acoustic resonance and the characteristics of ultrasound and magnetic resonance imaging, imaging via nuclear acoustic resonance, while achievable, faces experimental limitations. Typical ultrasound imaging techniques can detect acoustic attenuation differences of approximately 1 part in 1000, which falls short of the detection capability required for nuclear spin systems, whose acoustic attenuation coefficients (in dB/cm) are orders of magnitude smaller. Harmonic Correlation Although experimental nuclear acoustic resonance techniques can achieve acoustic resonance in objects such as metals, they are not a viable option for medical imaging, although they may be useful for spectroscopy of non-organic compounds. Hence the concept of harmonic correlation is introduced, providing a new method of obtaining, amplifying, and analyzing acoustic signals. This method enhances the sensitivity of the detection technique by converting broadband signals into narrow-band signals for analysis.
Harmonic correlation in general determines the correlation between the amplitude functions of two harmonically related narrow-band signals directed towards a patient. The assumption that both signals originate from the same source allows the processing algorithm that collects the data to boost the sensitivity of the signal detection. Harmonic correlation thus clarifies the consequences of the absorption process of the induced nuclear spin phonon; however, the process is very complicated and requires rigorous treatment of the collected data. See also Nuclear magnetic resonance Ultrasound Magnetic resonance imaging Resonance Relaxation (NMR) Electromagnetic radiation Spectroscopy Acoustic Resonance Acoustic Resonance Spectroscopy References Nuclear magnetic resonance spectroscopy Ultrasound Acoustics
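The envelope-correlation idea described above can be demonstrated with a toy simulation. This is only a sketch under stated assumptions: the signal model (two harmonics sharing one slow amplitude modulation plus broadband noise), the band frequencies, and the FFT-mask band filter are all illustrative choices, not the clinical processing algorithm.

```python
import numpy as np

# Toy harmonic correlation: a fundamental at f0 and its second harmonic share
# one slow amplitude modulation (a stand-in for a common physical source).
# Correlating the two narrow-band amplitude envelopes recovers that common
# origin even in broadband noise. All parameters are illustrative.
rng = np.random.default_rng(0)
fs, f0, n = 50_000, 1_000.0, 50_000            # sample rate (Hz), fundamental, samples
t = np.arange(n) / fs
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 5.0 * t)   # shared 5 Hz modulation
signal = (envelope * np.sin(2 * np.pi * f0 * t)
          + 0.3 * envelope * np.sin(2 * np.pi * 2 * f0 * t)
          + 0.3 * rng.standard_normal(n))            # broadband noise

def band_envelope(x, fc, fs, bw=100.0):
    """Amplitude envelope of the narrow band around +fc (analytic-signal method)."""
    f = np.fft.fftfreq(len(x), 1.0 / fs)
    spec = np.fft.fft(x)
    # Keep only the positive-frequency band around fc and double it, giving
    # the analytic signal of that band; its magnitude is the envelope.
    analytic = np.fft.ifft(np.where(np.abs(f - fc) <= bw, 2.0 * spec, 0.0))
    return np.abs(analytic)

env1 = band_envelope(signal, f0, fs)
env2 = band_envelope(signal, 2 * f0, fs)
r = float(np.corrcoef(env1, env2)[0, 1])
print(f"correlation between harmonic envelopes: r = {r:.2f}")
```

Because the two envelopes trace the same modulation, r comes out close to 1, whereas uncorrelated noise bands would give r near 0; that discrimination is what boosts detection sensitivity.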
Nuclear acoustic resonance
[ "Physics", "Chemistry" ]
2,354
[ "Nuclear magnetic resonance", "Spectrum (physical sciences)", "Nuclear magnetic resonance spectroscopy", "Classical mechanics", "Acoustics", "Spectroscopy" ]
60,785,472
https://en.wikipedia.org/wiki/Shoulder%20injury%20related%20to%20vaccine%20administration
Shoulder injury related to vaccine administration (SIRVA) is "shoulder pain and limited range of motion occurring after the administration of a vaccine intended for intramuscular administration in the upper arm... thought to occur as a result of unintended injection of vaccine antigen or trauma from the needle into and around the underlying bursa of the shoulder". Cause SIRVA is caused by improper insertion of the needle used in injections. It is "a preventable occurrence caused by the injection of a vaccine into the shoulder capsule rather than the deltoid muscle. As a result, inflammation of the shoulder structures causes patients to experience pain, a decreased range of motion, and a decreased quality of life." A 2022 review of the literature suggested that SIRVA was a possible complication of COVID-19 vaccination, and that needle technique should be carefully monitored in view of the scale of the COVID-19 vaccination programme. Treatment "Treatment for SIRVA is the same as treatment for routine inflammatory injuries." People who suffer from SIRVA typically require physical therapy, pain management medications, and in some severe cases, surgery. Compensation In the United States, SIRVA was added to the list of compensable injuries on the Vaccine Injury Table used by the National Vaccine Injury Compensation Program in 2017. This inclusion allowed persons claiming an injury to seek compensation from a government fund set up under the program, while immunizing vaccine manufacturers and administrators from legal liability. By 2020, SIRVA injuries amounted to 54% of filings for vaccine injury compensation. In April 2020, the U.S. 
Department of Health and Human Services began considering a proposal to remove the injury from that table, following a substantial increase in the number of claims asserting this injury, and on July 20, 2020, the department posted its official notice that it would seek to remove SIRVA (as well as vasovagal syncope) from the vaccine injury compensation scheme. In support of its proposed removal of the injury from the table, the department asserted: A number of contrary opinions were filed in response to the proposal, but the removal was made final on January 21, 2021. This removal was, in turn, reversed by a rule promulgated on April 21, 2021, restoring SIRVA to the table. References Vaccine controversies Injuries of shoulder and upper arm
Shoulder injury related to vaccine administration
[ "Chemistry", "Biology" ]
473
[ "Vaccination", "Drug safety", "Vaccine controversies" ]
60,786,023
https://en.wikipedia.org/wiki/Chrysanthemum%20stone
Chrysanthemum stone, sometimes called "flower stone," is a stone "flower" produced millions of years ago by geological movement and natural formation in rock. The stone's pattern resembles the chrysanthemum flower. The flower is milky white and its grain is clear. Chrysanthemum stone is generally dark-gray or black and does not contain radioactive elements, so it has a high collection value. Although the composition of chrysanthemum stone itself is not very rare, the formation is uncommon, so the stone is listed as a gem. Vancouver Island in British Columbia, Canada, is a well-known place to find flower stone, mainly on the east coast. It was once commercially mined on Texada Island, which lies off the east coast of Vancouver Island, but there is now a moratorium on mining there, although rockhounds may still hand-pick it. Walking on the beach at low tide on either island, flower stone can be found, along with dallasite and red jasper. The basic features The main component of chrysanthemum stone is andalusite, so the basic composition of the rock is very similar to that of andalusite. The main component of the mineral is Al2SiO5, and it typically forms an orthorhombic crystal system with columnar crystals. The cross section is close to a regular quadrilateral, and twinned crystals are rare. Chrysanthemum stone has a hard texture. The stone is dark-gray to black in color with naturally formed white chrysanthemum-shaped crystals. The "chrysanthemum" part of the "flower" is a collection of crystalline minerals. Each "petal" is a rhombohedral crystal form. The mineral composition varies according to the variety. Chrysanthemum stone from Liuyang City, Hunan Province is mainly calcite and chalcedony (quartz), some with strontianite and lapis lazuli. Chrysanthemum stone carving is a unique handicraft in Liuyang City. Work is created from stone formed approximately 270 million years ago.
The physical and chemical analysis Assays have proved that chrysanthemum stone does not contain radioactive elements, so it is harmless to the human body. The content of beneficial elements such as Fe, Zn, Ca and Se is high. According to archaeological findings, about 200 million years ago the home region of the chrysanthemum stone was still a vast ocean. Later, due to changes in the earth, Liuyang in Hunan entered a period of recession, and the sea water that accumulated in low-lying places on the surface continued to evaporate. When the concentration of strontium sulfate in the seawater increased to a certain extent, crystals formed and gradually attached to cores of flint. Color classification Colored chrysanthemum stone Colored chrysanthemum stone is a natural flower in geology. It is produced only in Liuyang, Hunan province. The shape of the petals is lifelike and rich in layers, the texture hard and fine, and the jade crystal clear, like an autumn chrysanthemum in full bloom against the frost. It is rich in stone color, tough and fine in texture, and contains strontium, selenium and other trace elements beneficial to the human body, which highlights the rarity of chrysanthemum stone. Brown chrysanthemum stone Brown chrysanthemum stone is mainly produced in Liuyang, Hunan province. It is light gray-brown and off-white, and its matrix color is brown, light gray-black or light gray-brown. Generally, it needs coloring. Brown chrysanthemum stone with a smaller flower shape is harder, has fewer flower layers, and is easily polished. Brown chrysanthemum stone with a larger flower shape has moderate hardness and is easy to sculpt. Black chrysanthemum stone Black chrysanthemum stone is mainly produced in Xuanen, Hubei province. Hubei chrysanthemum stone flowers take the shape of irises, claw-shaped flowers, and cylindrical flowers. The cylindrical flower-shaped stamens are obvious and three-dimensional, shaped into a rod with a certain bending.
The iris-shaped ones are more common but less conspicuous. Petals generally range from 10 to 40 and are not uniform in size; the branching is compound, with interpenetration and other phenomena present. Authenticity identification Intact chrysanthemum stones from naturally exposed environments are no longer found, and in general the stone is fragile and easily damaged after being altered by chemical methods. Using chemical or physical methods to consolidate loose stone material only strengthens the surface. The crystal in the heart of the chrysanthemum stone is its soul. In addition, authenticity can also be judged from the chrysanthemum stone carving as a whole. The crystal in the center of a real chrysanthemum stone is essentially the same color as the petals, and the petals diffuse radially around it. In theory, a complete three-dimensional chrysanthemum can be obtained by grinding along the central axis of the petals in any direction. Some chrysanthemum stone carvings combine real chrysanthemum stone flowers with ordinary stone bodies. The value of such a chrysanthemum stone is much lower than that of the real thing, and such a finished product is also easy to spot and identify with the naked eye. The difference with peony stone People often confuse chrysanthemum stone with peony stone. Peony stone is found in the Luoyang area and is also a kind of natural stone; the material is black, with white or green flowers. The stone's pattern is like a peony in full bloom: peony flower petals are fuller and more even in size, unlike the strip-shaped petals of chrysanthemum stone. Chrysanthemum stone and peony stone are both known as "strange stones". Peony stone, like chrysanthemum stone, is a natural mineral that cannot be regenerated, so it also has a high collection value. Worldwide, peony stone is likewise recognized as rare, with collection significance and ornamental value.
Peony stone originated in Luoyang, China, and its composition is that of a neutral salt rock. Although chrysanthemum stone is as rare as peony stone and the two are often regarded as the same thing, they are completely different. First of all, their compositions differ: peony stone is said to be more delicate, with more obvious color. However, the flower pattern of chrysanthemum stone is more three-dimensional and lifelike, so some collectors are more inclined to collect chrysanthemum stone. In addition, when ground, chrysanthemum stone takes shape faster and is not easily destroyed. The meaning of culture It is said that in ancient times a pair of immortals in heaven fell in love with each other. They sprinkled chrysanthemums which fell into the Liuyang river and, over time, turned into today's chrysanthemum stone. Another telling holds that a pair of lovers fell in love; one of them turned into a stone, the other into a chrysanthemum. They loved each other and did not wish to part even in death, so they finally became today's chrysanthemum stone. As Hunan's golden card, chrysanthemum stone carving technology came into being in 1740 and has a history of some 270 years. Because chrysanthemum stone is a non-renewable resource, and Liuyang is the only concentrated place of origin in the world, it holds the title of "the first stone in the world". In 2008, Liuyang chrysanthemum stone carving technology, with its exquisite craftsmanship, ingenious conception and unique natural existence, became part of the second batch of national intangible cultural heritage projects. The value and significance of chrysanthemum stone collection lies in its absolute naturalness. Historical development The earliest chrysanthemum stone found in China was from the underlying rocks of the Liuyang river.
According to the records in the Liuyang county annals, during the reign of the Qianlong Emperor of the Qing dynasty, a man called Ouxifan accidentally found chrysanthemum stone. Chrysanthemum stone is collected and exhibited in the state guesthouse, the China Art Museum, the Hunan Art Museum, and elsewhere. Chairman Mao, the revolutionary martyr Tan Sitong and others counted chrysanthemum stone items among their favorite things; these items are now displayed in the memorial hall. In 1915, at the Panama Pacific International Exposition, the exhibition of chrysanthemum stone carvings surprised the world: the "stones that can bloom" won the "rare treasures gold award" and have been preserved in the United Nations Museum. In 1959, for the tenth anniversary of the founding of the People's Republic of China, the people of Liuyang presented a huge three-dimensional sculpture, "Shi Jusen Mountain", to the Great Hall of the People in Beijing for viewing by the people of all ethnic groups. From 1997 to 1999, the whole country rejoiced, celebrating the return of Hong Kong and Macao; the people of Liuyang specially created two commemorative chrysanthemum stone carvings dedicated to the Hong Kong and Macao SAR governments. Because the formation of chrysanthemum stone requires specific physical and chemical conditions, and time, chrysanthemum stone is very scarce in nature and rare in the world, so the chrysanthemum stone industry is a typical resource-constrained industry. References Stones
Chrysanthemum stone
[ "Physics" ]
2,025
[ "Stones", "Physical objects", "Matter" ]
60,786,095
https://en.wikipedia.org/wiki/Trifluoromethyl%20cation
The trifluoromethyl cation is a molecular cation with a formula of . It is a carbocation owing to its positively charged carbon atom. It belongs to the family of carbenium ions, with three fluorine atoms as substituents in place of the hydrogen atoms. Stability Compared to methenium (the simplest carbenium ion), the trifluoromethyl cation is more stable due to the presence of fluorine atoms. The fluorine atoms have lone pairs of electrons that overlap with the empty p orbital on the carbon atom. These electrons delocalize the positive charge of the central carbon atom, stabilizing the molecule as a whole. The overlap is effective because of the size of fluorine's p orbital in the molecule. Synthesis Although the electron-donating fluorine lone pairs are present, the cation does not exist on its own, and its production has been described as "extremely hard". The first relevant reagent, a diaryl(trifluoromethyl)sulfonium salt (), was developed in 1984 by reaction of an aryl trifluoromethyl sulfoxide 1 with followed by reaction with an electron-rich arene. Today, 5-(trifluoromethyl)dibenzothiophenium tetrafluoroborate is usually used as the reagent serving as the source of the cation. References Cations
Trifluoromethyl cation
[ "Physics", "Chemistry" ]
296
[ "Cations", "Ions", "Matter" ]
60,786,367
https://en.wikipedia.org/wiki/NeuroVault
NeuroVault is an open-science neuroinformatics online repository of brain statistical maps, atlases and parcellations. Neuroimaging researchers, having performed neuroimaging studies, may upload their data to the site. Third-party researchers may download the data and use it, e.g., for re-analysis. NeuroVault has been widely acknowledged as a trustworthy destination for scientists to deposit neuroimaging data associated with scholarly articles. In 2019 it ranked 5th among all scientific data repositories in terms of the number of journals’ and publishers’ policies recommending it. Deposition of data in NeuroVault has also been recommended by the Organization for Human Brain Mapping. NeuroVault was created by Chris Gorgolewski but is currently maintained by the research group around Russell Poldrack. The system was described in the scientific article "NeuroVault.org: a web-based repository for collecting and sharing unthresholded statistical maps of the human brain" from 2015, and later in "NeuroVault.org: A repository for sharing unthresholded statistical maps, parcellations, and atlases of the human brain" from 2016. See also Metascience References Neuroinformatics
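Uploaded maps can also be retrieved programmatically through NeuroVault's public REST API (documented at https://neurovault.org/api/). The sketch below only builds query URLs and parses a paginated response offline; the sample payload and its field names are illustrative assumptions rather than captured server output.

```python
import json
from urllib.parse import urlencode

# Minimal, offline sketch of working with NeuroVault's REST API. The
# /api/images/ endpoint and the paginated {"count", "next", "results"}
# response shape follow the public API docs; the payload below is made up
# for demonstration purposes.
BASE = "https://neurovault.org/api"

def images_url(**params):
    """Build a query URL for the images endpoint."""
    query = f"?{urlencode(params)}" if params else ""
    return f"{BASE}/images/{query}"

def map_names(page):
    """Extract statistical-map names from one page of a paginated response."""
    return [item.get("name") for item in page.get("results", [])]

# Offline demonstration with an illustrative payload:
sample_page = json.loads(
    '{"count": 2, "next": null, "results": '
    '[{"name": "task > baseline t-map"}, {"name": "group parcellation"}]}'
)
print(images_url(limit=2))
print(map_names(sample_page))
```

In a real session one would fetch `images_url(...)` with an HTTP client and follow the `next` link until it is null to page through all results.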
NeuroVault
[ "Biology" ]
263
[ "Bioinformatics", "Neuroinformatics" ]
60,786,392
https://en.wikipedia.org/wiki/Nanotechnology%20in%20warfare
Nanotechnology in warfare is a branch of nano-science in which molecular systems are designed, produced and created at the nano-scale (1-100 nm). The application of such technology, specifically in the area of warfare and defence, has paved the way for future research in the context of weaponisation. Nanotechnology unites a variety of scientific fields including materials science, chemistry, physics, biology and engineering. Advancements in this area have led to categorized development of nano-weapons, with classifications ranging from small robotic machines and hyper-reactive explosives to electromagnetic super-materials. This technological growth has brought with it implications of associated risks and repercussions, as well as regulation to combat these effects. These impacts give rise to issues concerning global security, the safety of society, and the environment. Nanotechnology has the ability to dramatically escalate the destructive capacity of pre-existing weaponry. Legislation may need to be constantly monitored to keep up with the dynamic growth and development of nano-science, given the potential benefits and dangers of its use. Anticipating such impacts through regulation would 'prevent irreversible damages' of implementing defence-related nanotechnology in warfare. Origins Historical use of nanotechnology in the area of warfare and defence has been rapid and expansive. Over the past two decades, numerous countries have funded military applications of this technology, including China, the United Kingdom, Russia, and most notably the United States. The US government has been considered a national leader of research and development in this area, though it is now rivalled by international competition as appreciation of nanotechnology's eminence increases. The growth of this area therefore has a dominant platform at the front line of military interests. U.S.
National Nanotechnology Initiative In 2000, the United States government established the National Nanotechnology Initiative to focus funding on the development of nano-science and its technology, with a heavy emphasis on utilizing the potential of nano-weapons. This initial US proposal has since grown to coordinate the application of nanotechnology in numerous defence programs, as well as in all military branches including the Air Force, Army and Navy. From financial year 2001 through 2014, the US government contributed around $19.4 billion to nano-science, including the development and manufacturing of nano-weapons for military defence. The 21st Century Nanotechnology Research and Development Act (2003) envisions the United States continuing its leadership in the field of nanotechnology through national collaboration, productivity and competitiveness, to maintain this dominance. Developments Successful transitions of nanotechnology into defence products include: lifetimes of material coatings increased from hours to years, with further development continuing (see below); nano-structured silicate manipulation reducing insulation weight by 980 lbs; and High Power Microwave (HPM) devices with reduced weight, size and power consumption. The United States government kept military-purposed development of nanotechnology at the forefront of its national budget and policy throughout the Clinton and Bush administrations, with the Department of Defense planning to continue this priority throughout the 21st century. In response to America's assertive public funding of defence-purposed nanotechnology, numerous global actors have since created similar programmes. China In the sub-category of nano-materials, China secures second place behind the United States in the number of research publications released. Conjecture surrounds the purpose of China's quick development to rival the U.S., with one fifth of their government budget spent on research (US$337 million).
In 2018, Tsinghua University, Beijing, released findings in which carbon nanotubes were enhanced to withstand the weight of over 800 tonnes, requiring just 1 of material. The nanotechnology team hinted at aerospace and armour-boosting applications, showing promise for defence-related nano-weapons. The Chinese Academy of Sciences' Vice President Chunli Bai has stated the need to focus on closing the gap between "basic research and application" in order for China to advance its global competitiveness in nanotechnology. Between 2001 and 2004, approximately 60 countries globally implemented national nanotechnology programmes. According to R.D. Shelton, an international technology assessor, research and development in this area "has now become a socio-economic target...an area of intense international collaboration and competition." As of 2017, data showed 4725 patents published in the USPTO by the USA alone, maintaining its position as a leader in nanotechnology for over 20 years. Current research The most recent research into military nanotechnological weapons includes production of defensive military apparatus, with the objective of enhancing existing designs for lightweight, flexible and durable materials. These innovative designs are equipped with features to also enhance offensive strategy through sensing devices and manipulation of electromechanical properties. Soldier battlesuit The Institute for Soldier Nanotechnologies (ISN), a partnership between the United States Army and MIT, provided an opportunity to focus funding and research activities purely on developing armour to increase soldier survival. Each of seven teams produces innovative enhancements for a different aspect of a future U.S. soldier bodysuit.
These additional characteristics include energy-absorbing material protecting against blast or ammunition shocks, engineered sensors to detect chemicals and toxins, and built-in nano-devices to identify personal medical issues such as haemorrhages and fractures. Such a suit would be made possible with advanced nano-materials such as carbon nanotubes woven into fibres, providing strengthened structural capacity and flexibility; however, preparation remains an issue due to the inability to use automated manufacturing. Adaptive concealment and stealth With the use of nanoparticles, it is evidently possible to produce "invisible" suits for soldiers that would act as the ultimate camouflage. This possibility is rooted in the fact that objects are visible only because of how light reflects off them: if a particle is smaller than the wavelength of the light, it is not visible. Visible light wavelengths fall within the range of approximately 400-700 nanometers. To achieve such an invisibility cloak, the particles that make up the suit would need to vary in size. Another approach to concealment involves cloaks that act like a chameleon. Instead of being completely invisible, this nanoparticle-coated cloak adapts to the colors surrounding it in order to blend in. Enhanced materials Creation of sol-gel ceramic coatings has protected metals from wear, fractures and moisture, allowing adjustment to numerous shapes and sizes, as well as aiding "materials that cannot withstand high temperature". Current research focuses on resolving durability issues, where stress cracks between the coating and the material limit its use and longevity. The drive for this research is finding more efficient and cost-effective applications of nanotechnology for Air Force and Navy military groups.
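The sub-wavelength size argument in the concealment discussion above can be made quantitative with the Rayleigh scattering law: for a particle much smaller than the wavelength, the scattered intensity scales as d^6/λ^4, so shrinking the particle rapidly suppresses visible scattering. The particle sizes in this sketch are illustrative.

```python
# Rayleigh-regime scaling: scattering strength ~ d**6 / lam**4 for particle
# diameter d well below the wavelength lam. Demonstrates why particles far
# smaller than visible wavelengths (~400-700 nm) scatter almost no light.

def rayleigh_weight(d_nm, lam_nm):
    """Relative Rayleigh scattering strength (arbitrary units)."""
    return d_nm**6 / lam_nm**4

lam = 550.0                          # middle of the visible band, nm
ref = rayleigh_weight(400.0, lam)    # particle comparable to the wavelength
for d in (400.0, 100.0, 40.0):
    rel = rayleigh_weight(d, lam) / ref
    print(f"{d:5.0f} nm particle: {rel:.1e} x the 400 nm reference")
```

A tenfold reduction in diameter cuts scattering by a factor of a million, which is why suitably small, size-graded particles could in principle appear transparent.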
Integration of fibre-reinforced nano-materials into structural features, such as missile casings, can limit overheating and increase the reliability, strength and ductility of the materials used. Communication devices Nanotechnology designed for advanced communication is expected to equip soldiers and vehicles with micro antenna arrays, tags for remote identification, acoustic arrays, micro GPS receivers and wireless communication. Nanotech facilitates defence-related communications through lower energy consumption, lighter weight and greater power efficiency, while being smaller and cheaper to manufacture. Specific military uses of this technology include aerospace applications such as solid oxide fuel cells providing three times the energy, surveillance cameras on microchips, performance monitors, and cameras as light as 18 g. Mini-nukes The United States, along with countries such as Russia and Germany, is using the convenience of small nanotechnologies in nuclear "mini-nuke" explosive devices. Such a weapon would weigh 5 lbs with the force of 100 tonnes of TNT, giving it the potential to threaten humanity. The structural principle would remain the same as for nuclear bombs, but manufacturing with nano-materials would allow production at a smaller scale. Engineers and scientists alike realise that some of these proposed developments may not be feasible within the next two decades, as more research needs to be undertaken to make models quicker and more efficient. Molecular nanotechnology in particular requires further understanding of manipulation and reaction in order to adapt it to a military arena. Implications Nanotechnology and its use in warfare promise economic growth but come with an increased threat to international security and peacekeeping. The rapid emergence of new nanotechnologies has sparked discussion surrounding the impacts such developments will have on geo-politics, ethics, and the environment.
Geo-political Difficulty in categorising nano-weapons and their intended purposes (defensive or offensive) compromises the balance of stability and trust in the global environment. "A lack of transparency about an emerging technology not only negatively effects public perception but also negatively impacts the perceived balance of powers in the existing security environment." The peace and cohesion of the international structure may be negatively affected by a continuing military-focused development of nanotechnology in warfare. Ambiguity and a lack of transparency in research increase the difficulty of regulation in this area. Similarly, arguments put forward from a scientific standpoint highlight how little is known about the implications of creating such powerful technology, particularly regarding the behaviour of the nano-particles themselves. "Although great scientific and technological progress has been made, many questions about the behaviour of matter at the nanoscale level remain, and considerable scientific knowledge has yet to be learned." Environmental The introduction of nanotechnology into everyday life enables potential benefits of use, yet carries the possibility of unknown consequences for the environment and safety. Possible positive developments include the creation of nano-devices to decrease residual radioactivity in affected areas, as well as sensors to detect pollutants and adjust fuel-air mixtures. Associated risks may involve military personnel inhaling nanoparticles added to fuel; absorption of nanoparticles from sensors into the skin, water, air or soil; dispersion of particles from blasts through the environment (via wind); and disposal of nano-tech batteries potentially affecting ecosystems. Applications for materials or explosive devices allow a greater volume of nano-powders to be packed into a smaller weapon, resulting in a stronger and possibly lethally toxic effect.
Social and ethical The full extent of the consequences that may arise in social and ethical areas is unknown. Estimates of the associated impacts can be made, as they may mirror the progression of similar technological developments and affect all areas. The main ethical uncertainties concern the degree to which modern nanotechnology will threaten privacy, global equity and fairness, while giving rise to patent and property-right disputes. An overarching social and humanitarian issue stems from the intended purpose of these developments. The 'power to kill or capture' debate highlights the unethical purpose and destructive function that these nanotechnological weapons supply to the user. Controversy surrounding the innovation and application of nanotechnology in warfare highlights the dangers of not pre-determining risks, or of failing to account for the possible impacts of such technology. "The threat of nuclear weapons led to the cold war. The same trend is foreseen with nanotechnology, which may lead to the so-called nanowars, a new age of destruction", stated the U.S. Department of Defense. Similarly, a report released by Oxford University warns of the possible extinction of the human race, placing a 5% risk on this occurring through the development of 'molecular nanotech weapons'. Regulation International regulation addressing such concerns about nanotechnology and its military application is non-existent. There is currently no framework to enforce or support international cooperation to limit production or monitor research and development of nanotechnology for defensive use. "Even if a transnational regulatory framework is established, it is impossible to determine if a nation is non-compliant if one is unable to determine the entire scope of research, development, or manufacturing." Producing legislation that keeps up with the rapid development of products and new materials in the scientific sphere remains a hindrance to constructing workable and relevant regulation.
Productive regulation should assure public health and safety and account for environmental and international concerns, yet not restrict innovation in emerging ideas and applications of nanotechnology. Proposed regulation Approaches to the development of legislation could include moving towards classifying and restricting disclosure of information pertaining to military use of nanotechnology. A paper in the Harvard Journal of Law and Technology discusses laws that would revolve around specific export controls and discourage civilian or private research into nano-materials. This proposal suggests mimicking the U.S. Atomic Energy Act of 1954 by restricting any distribution of information regarding the properties and features of the nanotechnology at its creation. The Nanomaterial Registry A United States national registry for nanotechnology has provided a public sphere where reports are available with curated data on the physico-chemical characteristics and interactions of nanomaterials. Though requiring further development and more frequent voluntary additions, the registry could initiate global regulation and cooperation regarding nanotechnology in warfare. The registry was developed to assist in the standardisation, formatting, and sharing of data. With more compliance and cooperation, this data-sharing model may "simplify the community level of effort in assessing nanomaterial data from environmental and biological interaction studies." Analysis of such a registry would be carried out by professional nano-scientists, creating a filtering mechanism for any potentially new or dangerous materials. However, this idea of a dedicated nanomaterial registry is not original, as several databases have been developed previously, including caNanoLab and InterNano, which are both engaging and accessible to the public, informatively curated by experts, and detail tools of nano-manufacturing.
The National Nanomaterial Registry is a more up-to-date version in which information is collated from a range of these sources and multiple additional data resources. It offers a greater range of content, including comparison tools with other materials, encouragement of standard methods, and compliance-rating features. References
Nanotechnology in warfare
[ "Materials_science", "Engineering" ]
2,842
[ "Nanotechnology", "Materials science" ]
60,788,648
https://en.wikipedia.org/wiki/Costa%20Rica%20Thermal%20Dome
The Costa Rica Thermal Dome (CRTD; also called the Costa Rica Dome) is an oceanographic feature and marine biodiversity hotspot that varies in size from 300 to 1,000 km in diameter. The dome is located off the western coast of Central America in the Tropical Eastern Pacific. Through the interaction of wind and ocean currents, deeper waters are drawn towards the surface in a dome-like shape at this location, displacing the warmer, nutrient-poor surface waters with colder, nutrient-richer waters. A 2016 investigation by UNESCO and the International Union for Conservation of Nature (IUCN) considered it eligible to become a World Heritage Site in the near future. The average position of the center of the Costa Rica Dome is at latitude 9°N, longitude 90°W, off the coast of Costa Rica. The dome is positioned above the Cocos underwater tectonic valley and carries a subaquatic cyclonic current that moves in sync with the air flows above. The Costa Rica Thermal Dome is full of biodiversity and many forms of marine life: the nutrient hotspot hosts animals ranging from zooplankton to the blue whale. The location is also within one of the largest tuna catchment areas in the world. The Costa Rica Dome also lies on the major seaway to the Panama Canal. The dome and its marine life provide economic benefits for countries such as Panama and Costa Rica, and the ecological significance of the dome is important to many Costa Ricans. At the 12th Conference of the Parties to the Convention on Biological Diversity (CBD) in South Korea, the dome was recognised as a marine zone of biological or ecological importance. Geology and geography The Costa Rica Dome operates beyond national jurisdiction, and its diameter and position change yearly in a characteristic annual cycle. The dome's upwelling consists of vertical movement: water from oceanic depths rises towards the surface.
Strong winds push warm water away from the coast; these warm waters are then replaced by cold, nutrient-rich waters from the depths. The upwelling within the dome is caused by the circulation of the Costa Rica Current, the northeastern Equatorial Counter Current and the Northwestern Equatorial Current combining with the Papagayo jet stream that crosses over Lake Nicaragua and the northern plains of Costa Rica. The Costa Rica Dome is similar to other thermocline domes in that it has an east–west thermocline ridge with a seasonal evolution affected by large-scale wind patterns, but it has the unique property of being formed by a coastal wind jet. The cycle of the dome can be described as four wind-forcing stages: coastal shoaling of the thermocline off the Gulf of Papagayo in February–April, separation from the coast during May–June, countercurrent thermocline ridging during July–November, and deepening during December–January. The axis of the Costa Rica Dome rotates far west of Isla del Coco. The inter-tropical convergence zone lies in the middle of the dome and approaches close to the shores of Costa Rica. The distribution of nutrients and oxygen within the Costa Rica Dome is determined primarily by the localised upwelling of nutrient-rich, oxygen-poor water from 65 m and deeper, and by substantial mixing with oxygen-rich, low-nutrient water. The huge marine current that drives the displacement of deep, cold, nutrient-rich water makes the dome one of the richest places on the planet. The oceanic floor of the dome is abundant with methane clathrate, known as 'fire ice'. Ecology Marine Life The Costa Rica Dome is a biodiversity hotspot for marine life. The presence of a seasonally predictable, strong and shallow thermocline associated with cyclonic circulation and upwelling makes the Costa Rica Dome a distinct biological habitat, with zooplankton and phytoplankton biomass significantly higher than in the surrounding tropical waters.
The algae flourish because of the enormous amount of cold water combined with sunlight. The dome contains the highest concentration of chlorophyll in the world, at approximately 60 milligrams per cubic centimetre of sea water. This feeds zooplankton consisting of larvae and larger animals such as sea sponges, worms, echinoderms, mollusks, crustaceans, and other arthropods. Another important aspect of the Costa Rica Dome is its dense patches of krill at various depths. The dome is also a habitat for sharks, dolphins, eels, tuna, billfish, sea turtles (olive ridley and leatherback), stingrays, octopuses, colonies of seabirds, blue whales and killer whales. The dome is a unique year-round habitat for blue whales. Of the nine distinct blue whale populations around the world, this area's is the largest, with an estimated 3,000 whales. The dome provides an area for mating, feeding, breeding, calving and raising calves. Satellite tracking showed the Costa Rica Dome to be a calving and breeding area for North Pacific blue whales as well as a major migration corridor for the whales. Leatherback sea turtles are the widest-ranging marine turtle species and are critically endangered. The Costa Rica Dome may be a significant migratory path for the endangered turtles nesting in Costa Rica. In 2008, a large multi-year satellite tracking data set for the leatherback sea turtle was analysed; the results showed that the migration path of these turtles took them through the southern edge of the Costa Rica Dome and the Costa Rica Coastal Current, indicating that the dome is part of the migration corridor for the endangered leatherback turtle. An ecological feature of the dome is the "tuna-dolphin-seabird assemblage", a feeding relationship between yellowfin tuna, spotted and spinner dolphins, and large numbers of seabird species.
Flocks of seabirds feed on prey driven to the surface by the subsurface tuna and dolphin predators. Because of the tuna's near-surface occurrence, large size, and visibility due to their association with the dolphins and seabirds, the area forms one of the world's largest yellowfin tuna fisheries. History The Costa Rica Thermal Dome was detected off the Central American country in 1948 through the use of bathythermographs, thermometers that measure and graphically represent the temperature of seawater at different depths, used on ships that traveled from California to Panama. The dome was found and named by Townsend Cromwell. Human use Ecotourism Because of the Costa Rica Thermal Dome, several species that benefit from its habitat move close to the coasts and help sustain economic activities for the nearby countries. The countries within this region contribute greatly to the fishing industry in Central America, estimated to have generated $750 million for the industry in 2009. Another example of tourism is sea turtle nesting within this region; this activity helped generate $2,113,176 for tour operators and related businesses near Las Baulas Marine National Park in 2004. The Costa Rica Dome is estimated to bring in more than $20 million a year for at least 10 fishing communities. Sport fishing also benefits greatly from the dome's biodiversity: in Costa Rica, sport fishing generated close to $599 million in 2008, or 2.13% of that year's GDP, while in Panama it was estimated to have generated $170.4 million in 2013. Additionally, in September 2014, the first day of the annual Festival of Whales and Dolphins, which is dedicated to whale watching in southeast Costa Rica, earned $40,000. Conservation The Costa Rica Dome was declared a marine zone of ecological or biological importance at the 12th Conference of the Parties to the Convention on Biological Diversity (CBD).
The designation promotes maritime conservation and highlights the migration and feeding of species such as the blue whale, leatherback sea turtle and common dolphin. Additionally, the Ministry of Environment and Energy of Costa Rica prepared a zoning decree at the same time to help control the fishing of tuna (oceanic fish). In January 2015 a consensus recommendation was agreed by the Ad-Hoc Working Group at the United Nations. This historic decision was a step towards the development of an international legally binding instrument on the conservation and sustainable use of marine biodiversity in areas beyond national jurisdiction. There have been several conservation efforts concerning the Costa Rica Dome. Mission Blue, a group consisting of more than 200 respected ocean conservation groups and like-minded organisations, has led extensive research into the Costa Rica Thermal Dome. Similarly, since 2012 the MarViva foundation has promoted the Costa Rica Thermal Dome Initiative in order to ensure sustainability. To study the cold upwellings in the dome, two methods were implemented: a zooplankton net dropped to 80 meters to pull up samples at multiple locations, and a Niskin bottle, a device that can be deployed to any depth to capture a water sample from that specific level of the ocean. The Costa Rica Dome operates in areas beyond national jurisdiction, which are areas that lie further than 200 nautical miles from shore and beyond a country's Exclusive Economic Zone. The United Nations Convention on the Law of the Sea provides the legal framework that helps regulate areas beyond national jurisdiction. However, human technological advancements now make the majority of the ocean accessible, and areas beyond national jurisdiction are constantly exploited for their resources.
Global Ocean Biodiversity Initiative GOBI consists of an international partnership of institutions committed to conserving biological diversity within the marine environment. GOBI provides expertise, data and knowledge to support the identification of ecologically and biologically significant marine areas by the Convention on Biological Diversity. GOBI's work intends to catalogue all available information on the physical and biological characteristics of the Costa Rica Dome. The approach to this work is: Publication of a spatial and temporal species distribution atlas for the Costa Rica Thermal Dome, including visualisations of the oceanographic features and ecological attributes, the value of the resource, and the economic and biological links to Central America. Raising awareness of the value and relevance of the biodiversity sustained by the CRTD and its linkages to human wellbeing and to the conservation of areas beyond national jurisdiction, through an extensive public outreach programme as well as a media campaign. Creation of a multisectoral recommendation for a potential macro-zonation plan for the CRTD, leading to the development of a potential governance model for the part of the CRTD that lies in international waters. Promotion of the agreed recommendations and governance model amongst decision makers within Central America. References Biogeography Bodies of water of Costa Rica Ecotourism Fisheries law Ocean currents
Costa Rica Thermal Dome
[ "Chemistry", "Biology" ]
2,182
[ "Ocean currents", "Biogeography", "Fluid dynamics" ]
60,789,000
https://en.wikipedia.org/wiki/Photopyroelectric
Photopyroelectric can be parsed as "photo" + "pyroelectric": the term covers any optical system using a pyroelectric detector or imaging system. Pyroelectricity is the ability of certain materials to generate a transient voltage when they are heated or cooled. When the temperature changes, the positions of the atoms within the crystal structure shift slightly; this change can be described as a change in the polarization of the material, and it gives rise to a voltage across the crystal. If the temperature is then held constant, the pyroelectric voltage gradually disappears due to leakage current. Leakage arises in several ways, for example electrons moving through the crystal, ions moving through the air, or current leaking through a voltmeter connected to the crystal. Technical Base of Photopyroelectric The photopyroelectric technique is based mainly on an imaging system and a pyroelectric detector. Pyroelectric detector The pyroelectric detector is used as the sensor of the system. Because pyroelectric crystals possess a unique polar axis, they are structurally asymmetric. Polarization due to changes in temperature, the so-called pyroelectric effect, is widely used in sensor technology. Pyroelectric elements are prepared as very thin chips, with electrodes plated perpendicular to the polar axis, and an absorbing (blackening) layer is required on the upper electrode. When this absorbing layer is exposed to infrared radiation, the pyroelectric chip is heated and surface charge appears on the electrodes. If the radiation is interrupted, a charge of the opposite sign to the direction of polarization is generated.
However, this charge is very small, so it is converted to a signal voltage by ultra-low-noise, ultra-low-leakage junction field-effect transistors (JFETs) or operational amplifiers (op-amps) before it is neutralized by the internal resistance of the crystal. Pyroelectric detectors maintain a high signal-to-noise ratio even at 4 kHz; in a Fourier-transform infrared spectrometer, for example, a thermopile performs well only at a few hertz. Imaging system The imaging system is a general term for the various types of remote sensor systems that acquire remote-sensing images of objects without photography. Scanning is usually used for imaging, with tape recording or indirect recording on film. According to the structure of the system, the scanning method and the detector, such systems are roughly divided into: 1. Optomechanical scanning, such as multi-spectral scanners: a mirror scans the object surface, and the image data are output after being split, detected and photoelectrically converted. 2. Electronic scanning, such as a return-beam vidicon TV camera, an image-side scanning method: the scene is optically imaged on the target surface of the light guide, and the signal is amplified and output after being scanned by an electron beam. 3. Solid-state self-scanning, such as the photoelectric scanning sensor of the French SPOT satellite, also an image-side scanning method: the object is imaged by an objective lens onto a detector array consisting of many charge-coupled devices (CCDs), which photoelectrically convert and output it. 4. Antenna scanning, such as side-looking radar, an active remote-sensing imaging system using a surface scanning method: it transmits a microwave beam through the antenna and receives the echo reflected by the scene, which is demodulated and output.
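The pyroelectric detector behaviour described earlier (charge generated by modulated heating, read out as a voltage) can be illustrated with a small numerical sketch. All values here (pyroelectric coefficient, element area, capacitance, modulation depth and frequency) are assumed typical literature figures, not data from this article:

```python
import math

# Assumed, typical values -- for illustration only.
p = 230e-6     # pyroelectric coefficient, C/(m^2*K) (e.g. LiTaO3)
A = 1e-6       # illuminated element area, m^2 (1 mm^2)
dT = 1e-3      # temperature modulation amplitude, K
f = 10.0       # modulation frequency, Hz
omega = 2 * math.pi * f

# Modulated heating T(t) = dT*sin(w*t) drives a pyroelectric current
# i(t) = p*A*dT/dt, whose amplitude is:
i_amp = p * A * omega * dT          # amperes (~1.4e-11 A here)

# In voltage mode the charge accumulates on the element capacitance C,
# so the voltage amplitude is |V| = i_amp/(omega*C) = p*A*dT/C.
C = 30e-12                          # element capacitance, F (assumed)
v_amp = i_amp / (omega * C)         # volts (a few millivolts here)
```

The closing relation, |V| = pAΔT/C, also shows why the voltage-mode signal grows when the element capacitance (and hence the electrode area) is reduced, which is the rationale behind the optimized electrode configuration discussed later in this article.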
The Use of Photopyroelectric Photopyroelectric calorimetry of composite materials Photopyroelectric structures have been used to measure the thermal effusivity of composite materials inserted into the detection cell as a liner. The technique relies on scanning the thickness of the coupling fluid (the TWRC method). Two composites were chosen for this study: (i) a liquid, a water-based nanofluid containing gold nanoparticles, and (ii) a solid, a urea–fumaric acid eutectic in a 1:1 ratio. The thermal effusivity was found to be independent of the volume concentration of the gold particles. For the urea–fumaric acid eutectic, the thermal effusivity of the compound differs considerably from that of the pure raw materials, indicating the formation of a compound. Self-consistent photopyroelectric calorimetry for liquids The front photopyroelectric (FPPE) configuration, combined with the thermal-wave resonator cavity (TWRC) method, can be used to measure the thermal effusivity and diffusivity of liquids. The same technique is capable of producing a variety of static and dynamic thermal parameters: two of these parameters are measured directly, while the other two are calculated indirectly. The method is self-consistent and has been applied to liquids such as various oils, water, glycerin and ethylene glycol.
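The TWRC thickness scan mentioned above works because a thermal wave decays exponentially across the coupling-liquid layer, with a decay length set by the liquid's thermal diffusivity. A minimal sketch, using a standard literature value for water and an assumed modulation frequency (the scan thicknesses are likewise illustrative):

```python
import math

# Standard literature value (assumed here, not from this article).
alpha_water = 1.43e-7          # thermal diffusivity of water, m^2/s
f = 1.0                        # modulation frequency, Hz (assumed)

# Thermal diffusion length: the 1/e decay length of the thermal wave.
mu = math.sqrt(alpha_water / (math.pi * f))    # ~0.2 mm at 1 Hz

# In the cavity the thermal-wave amplitude falls off roughly as
# exp(-L/mu) with liquid thickness L, so scanning L gives mu (and
# hence alpha) from the slope of ln(amplitude) versus L.
thicknesses = [i * 50e-6 for i in range(1, 6)]   # 50..250 micrometres
amps = [math.exp(-L / mu) for L in thicknesses]

# Recover mu from the log-slope of the simulated scan:
slope = (math.log(amps[-1]) - math.log(amps[0])) / (
    thicknesses[-1] - thicknesses[0])
mu_fit = -1.0 / slope
alpha_fit = math.pi * f * mu_fit ** 2            # recovers alpha_water
```

Fitting the log-slope of the amplitude against cavity thickness recovers the diffusion length, and hence the diffusivity, which is the essence of the TWRC scan.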
Photopyroelectric Effect and Pyroelectric Measurement In the standard configuration, the coupling fluid between the sample and the detector causes photopyroelectric measurements to systematically underestimate the thermal diffusivity of solid samples. To overcome this, a new method has been proposed based on a transparent pyroelectric sensor, a transparent coupling fluid and a self-normalization procedure. With it, the thermal diffusivity of opaque solid samples, as well as the light absorption coefficient of translucent solid samples, can be measured accurately. Photopyroelectric calorimetry for simultaneous thermal characterization Thermophysical studies are important in many fields of science. The specific heat is closely related to the microstructure of the material and is important in monitoring the energy content of a system; calorimetry therefore plays an important role in characterising physical systems, especially near phase transitions, where energy fluctuations are very important. Photothermal techniques can follow the variation of the specific heat and other thermal parameters with temperature through such transitions. The working principle and theoretical basis have been set out, and the experimental configuration and the advantages of the technique over traditional methods have been described in detail. Integration into a calorimetric setting makes it possible to perform calorimetric studies while also characterising complementary optical, structural and electrical properties.
High-temperature-resolution results have been reported for several phase-transition parameters in different systems under various configurations. Optimized configuration of the pyroelectric sensor in photopyroelectric technique It has been shown that, for constant laser power, the response of the pyroelectric sensor does not depend on the spatial distribution of the intensity of the laser beam. In voltage mode, the signal amplitude is therefore inversely proportional to the effective area of the sensor, and the pyroelectric signal increases as the effective area decreases while the total area of the sensor remains constant. On this basis, a method has been proposed to improve the PPE signal measured in voltage mode by optimizing the metal electrode structure of the sensor. Experiments show that this method can increase the signal amplitude by a factor of 10 without increasing the electrical noise. Deficiency in the photopyroelectric Types of deficiency Surface defects of optical components mainly comprise surface flaws and surface contaminants. Surface flaws are processing defects such as pitting, scratches, open bubbles, chipped edges and broken spots on the surface of polished optical components, caused mainly during processing or subsequent handling. Scratches are marks on the surface of an optical component. By length they are divided into long scratches (longer than 2 mm) and short scratches (2 mm or less); for short scratches, the evaluation criterion is their cumulative length. Scratches are generally easier to detect than defects such as pitting. Pitting refers to pits and similar defects on the surface of an optical component.
The surface roughness inside a pit is large, its width and depth are approximately equal, and its edges are irregular. Typically, defects with an aspect ratio greater than 4:1 are classified as scratches, while those with a ratio below 4:1 are pitting. Bubbles are formed by gases that are not removed in time during the manufacture or processing of the optical component; since the pressure of the gas is evenly distributed in all directions, bubbles are usually spherical. Chipped edges are defects at the rim of optical components; although outside the effective aperture, they are also a source of light scattering and thus affect optical performance. Negative impact caused by the deficiency Surface flaws, as microscopic local defects introduced by processing, affect the surface properties of optical components and may lead to serious consequences such as operating errors in optical instruments. In short, surface defects are detrimental to the performance of optical systems, and the root cause is the scattering of light. The damage that surface defects cause to the component itself and to the entire optical system is manifested in the following ways: (1) Degraded beam quality. Scattering at surface defects consumes a large part of the beam's energy as it passes through the defect, reducing the quality of the beam. (2) Thermal effects of defects. Because the area around a surface defect absorbs more energy than other areas, thermal effects can locally deform the component, damage coatings, and so damage the entire optical system. (3) Damage to other optical components in the system.
In a laser system, under illumination by a high-energy laser beam, the light scattered by the surface of a component is absorbed by other optical components in the system, so that those components receive uneven illumination. When the damage threshold of the optical material is reached, the quality of the transmitted light is affected and the optical components are damaged, which can cause serious damage to the whole optical system. (4) Reduced cleanliness of the field of view. Too many flaws on an optical component spoil its microscopic appearance, and flaws can trap fine dust, microorganisms, polishing powder and other impurities, which cause the component to corrode, grow mold and fog up, significantly affecting its basic performance. References Spectroscopy
Photopyroelectric
[ "Physics", "Chemistry" ]
2,424
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
60,789,568
https://en.wikipedia.org/wiki/Bagger%201473
Bagger 1473 (christened The Blue Wonder) is a Type SRs 1500-series bucket-wheel excavator prototype left abandoned in a field in the municipality of Schipkau in Germany. Technical data The bucket-wheel excavator had a power of 5555 kW during operation and was supplied through a 6 kV power cable. Overall, the excavator is about 50 meters high and about 171.5 meters long. The six-section crawler undercarriage, with a maximum travel speed of 6 meters per minute, carries the total mass of 3850 tons. With 10 buckets of 1.5 cubic meters each and 57 pours per minute (the pour rate), the excavator achieves a theoretical mining capacity of 5130 cubic meters per hour. The bucket-wheel diameter is 12.5 meters and the cutting speed is 3.73 meters per second. The design of the wheel boom, with a length of 67 meters, enables an excavation height of up to 35 meters and an excavation depth of up to 15 meters. History The excavator was used at the Tagebau Meuro mine from 1965 to 2002. After it was withdrawn from service, the municipalities of Senftenberg, Großräschen and Schipkau decided in a joint action to preserve the opencast mining machine. Between 29 August and 15 September 2003, Bagger 1473 was moved from the Meuro mine to a site near the EuroSpeedway Lausitz, where it would serve as a monument to the area's former lignite mining. The machine was moved across industrial roads and railways owned by the LMBV, and public traffic was not affected. When Bagger 1473 became popular with urban explorers, it was misidentified as Bagger 258 because of markings found on its information plate. Scrapping In January 2019 the municipalities that had supported its move announced that the excavator was to be scrapped. Their decision was mainly due to the machine's dilapidation and damage: it had become financially impossible to maintain, and vandalism and theft had become so extensive that the structure was no longer safe for people.
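The technical data quoted above are mutually consistent, which can be checked with a few lines of arithmetic (units as quoted in the text):

```python
import math

# Figures quoted in the technical data above.
bucket_volume_m3 = 1.5      # volume of each bucket
pours_per_min = 57          # bucket discharges per minute (pour rate)
n_buckets = 10
wheel_diameter_m = 12.5

# Theoretical mining capacity: volume discharged per hour.
capacity_m3_per_h = bucket_volume_m3 * pours_per_min * 60
# -> 5130.0, matching the stated 5130 cubic meters per hour

# 57 pours per minute with 10 buckets on the wheel means 5.7 wheel
# revolutions per minute; the rim (cutting) speed follows from the
# wheel circumference.
rev_per_min = pours_per_min / n_buckets
cutting_speed_m_s = math.pi * wheel_diameter_m * rev_per_min / 60
# -> about 3.73 m/s, matching the stated cutting speed
```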
Parts of the excavator, such as its wheel, would be preserved. However, when the Brandenburg Landesamt für Denkmalpflege (State Office for the Preservation of Monuments) and state archaeological museum learned of the planned demolition from news reports, they issued a statement that the excavator had been recognised as a protected historical structure since 2002/2003, although at that time it had simply been assumed unnecessary to formally place it on the list of such structures; the fact that the structure had been identified as historically significant was considered sufficient to declare it protected. The Landesamt quickly made the protection official in February. Constructive discussions about the future of Bagger 1473 are ongoing with the local municipalities and communities. Although imminent demolition work was prevented, it remains unclear how the structure can be preserved and who will bear financial responsibility. See also Overburden Conveyor Bridge F60 EuroSpeedway Lausitz References External links Tagebau Meuro at ostkohle.de "Tagebau Meuro: 1958–1999" (in German) Engineering vehicles Bucket-wheel excavators Mining in Germany Takraf GmbH
Bagger 1473
[ "Engineering" ]
679
[ "Engineering vehicles", "Mining equipment", "Bucket-wheel excavators" ]
60,789,813
https://en.wikipedia.org/wiki/PRNET
The Packet Radio Network (PRNET) was a set of early, experimental mobile ad hoc networks whose technologies evolved over time. It was funded by the Advanced Research Projects Agency (ARPA). Major participants in the project included BBN Technologies, Hazeltine Corporation, Rockwell International's Collins division, and SRI International. History ARPA initiated the PRNET project in 1973, funding both theoretical and experimental research. Its goals were outlined in a 1975 paper by Bob Kahn, namely, to investigate the feasibility of using packet-switched, store-and-forward radio communications to provide reliable computer communications in a mobile environment. The earlier ALOHAnet served as an inspiration, but PRNET tackled a significantly harder set of problems, namely, multi-hop communications between mobile vehicles without a central station. In Kahn's initial conception, the overall system design was "predicated upon the existence of an array of low cost repeaters", where he defines the term to mean "a particular kind of packet radio which is equipped to retransmit by radio some or all packets which it receives by radio". In today's terminology, this might be called a router or a packet switch, rather than a radio repeater. The first PRNET was established under the auspices of SRI in the San Francisco Bay Area, with BBN contributing network technology and Collins creating the Experimental Packet Radios (EPRs), which implemented L-band spread-spectrum waveforms and supported half-duplex communications at 100 or 400 kilobits/second. There was also a smaller network at BBN, for software development and testing. The first packet radios were delivered in mid-1975 for initial testing and a quasi-operational network capability was established for the first time in September 1976, shortly after the prototype networking software was developed. 
By 1977, this software included radio network routing control; a gateway to other networks; network measurement; debugging tools; and configuration tools. PRNET was sufficiently advanced by 1977 to participate in the initial three-way internetworking demonstration, which linked a mobile vehicle in PRNET with nodes in the ARPANET, and via SATNET, to nodes in London run by Peter Kirstein's research group at University College London. Afterwards, it was usually attached to the ARPANET so that BBN software developers could access and update it from Cambridge. By June 1978, about 25 radio nodes were available. By September 1979, "Ron [Kunzelman] reported that SRI is now operating two PRNETs in the San Francisco bay area, and one PRNET at Ft. Bragg, North Carolina. The net at Ft. Bragg is now eight terminals on two TIUs, and will grow to forty terminals." The Experimental Packet Radios were later replaced by Upgraded Packet Radios (UPRs), circa 1978, and in 1986 by the Low-Cost Packet Radio (LPR) as part of DARPA's follow-on SURAN project. See also History of the Internet References Robert E. Kahn, "The organization of computer resources into a packet radio network", AFIPS '75 Proceedings, May 19–22, 1975, pages 177–186. J. Burchfiel, R. Tomlinson and M. Beeler, "Functions and Structure of a Packet Radio Station", AFIPS Conference Proceedings, Volume 44, 1975, AFIPS Press, Montvale, N.J. Darryl E. Rubin, "Army Packet Radio Network Protocol Study", SRI International, November 1977. Robert E. Kahn, Steven A. Gronemeyer, Jerry Burchfiel, Ronald C. Kunzelman, "Advances in Packet Radio Technology", Proceedings of the IEEE, vol. 66, no. 11, November 1978, pages 1468–1498. J. Jubin, "Current packet radio network protocols", INFOCOM '85 Proceedings, March 1985. J. Westcott and J. Jubin, "A distributed routing design for a broadcast environment", MILCOM '82 Proceedings, 1982. John Jubin and Janet D. Tornow, "The DARPA Packet Radio Network Protocols", Proceedings of the IEEE, vol. 75, no.
1, January 1987, pages 21–32. Packet radio
PRNET
[ "Technology" ]
870
[ "Wireless networking", "Packet radio" ]
60,789,918
https://en.wikipedia.org/wiki/Kioxia
Kioxia Holdings Corporation, simply known as Kioxia (stylized in all-uppercase), is a Japanese multinational computer memory manufacturer headquartered in Tokyo, Japan. The company was spun off from the Toshiba conglomerate in June 2018 and gained its current name in October 2019; it is currently majority-owned by Bain Capital, which holds a 56% stake, while Toshiba holds a 41% stake. In the early 1980s, while still part of Toshiba, the company was credited with inventing flash memory. As of the second quarter of 2021, the company was estimated to have 18.3% of the global revenue share for NAND flash solid-state drives. Name Kioxia is a combination of the Japanese word kioku, meaning memory, and the Greek word axia, meaning value. History In 1980, Fujio Masuoka, an engineer at Kioxia predecessor Toshiba, invented flash memory, and in 1984, Masuoka and his colleagues presented their invention of NOR flash. In January 2014, the Toshiba Corporation completed its acquisition of OCZ Storage Solutions, renaming it OCZ and making it a brand of Toshiba. On June 1, 2018, the company was spun off from the Toshiba Corporation as a result of the heavy losses caused by the 2016 bankruptcy of Toshiba's Westinghouse subsidiary over nuclear power plant construction at the Vogtle Electric Generating Plant. Toshiba maintained a 40.2% equity stake in the new company, which consisted of all of Toshiba's memory businesses. Toshiba Memory Corporation became a subsidiary of the newly formed Toshiba Memory Holdings Corporation on March 1, 2019. In June 2019, Toshiba Memory Holdings Corporation experienced a power cut at one of its factories in Yokkaichi, Japan, resulting in the loss of at least 6 exabytes of flash memory, with some sources estimating the loss as high as 15 exabytes. Western Digital used (and still uses) Kioxia's facilities for making its own flash memory chips.
On August 30, 2019, the company announced that it had signed a definitive agreement to acquire Taiwanese electronics manufacturer Lite-On's SSD business for . The acquisition closed on July 1, 2020. On July 18, 2019, Toshiba Memory Holdings Corporation announced it would change its name to Kioxia on October 1, 2019, including all Toshiba memory companies. On October 1, 2019, Toshiba Memory Holdings Corporation was renamed Kioxia Holdings Corporation and Toshiba Memory Corporation was renamed Kioxia Corporation. The renaming also covered the companies associated with Toshiba's solid-state drive brand OCZ. In February 2022, Kioxia and Western Digital reported that contamination issues had affected the output of their jointly operated flash memory factories, with Western Digital stating that at least 6.5 exabytes of its memory output were affected. The Kitakami and Yokkaichi factories in Japan halted production because of the contamination. Corporate affairs As of April 14, 2024, Kioxia's ownership is as follows: Bain Capital Private Equity (56.24%) Toshiba (40.64%) Hoya Corporation (3.13%) Subsidiaries Kioxia Holdings Corporation is the holding company of Kioxia Corporation. Kioxia Corporation is the parent company of several companies, including Kioxia Systems Company, Kioxia Advanced Package Corporation, Kioxia America, and Kioxia Europe. References External links Toshiba Computer hardware companies Computer memory companies Consumer electronics brands Electrical engineering companies of Japan Electronics companies of Japan Electronics companies established in 2018 Corporate spin-offs Japanese brands Japanese companies established in 2018 Manufacturing companies based in Tokyo Multinational companies headquartered in Japan Joint ventures Technology companies of Japan Computer storage companies Bain Capital companies Companies listed on the Tokyo Stock Exchange 2024 initial public offerings
Kioxia
[ "Technology" ]
786
[ "Computer hardware companies", "Computers" ]
60,792,132
https://en.wikipedia.org/wiki/Holos%20%28political%20party%29
Holos, translated as Voice or Vote, is a liberal and pro-European political party in Ukraine, led by Kira Rudik. The group was founded by Ukrainian musician Svyatoslav Vakarchuk in May 2019. The party won 20 seats in the 2019 parliamentary election and became part of the opposition in the current Ukrainian parliament. Holos is the first party with a liberal ideology to enter the Ukrainian Parliament and form a faction. After the local elections in 2020, under the leadership of Kira Rudik, the party gained 350 seats in local councils. Name The name of the party has two meanings in the Ukrainian language: voice and vote. The founder of the party, Svyatoslav Vakarchuk, said the name was chosen because the word characterizes Ukrainian voters and their desire for change in the country. History Establishment and entry to parliament The party was presented to the public on 16 May 2019 in Kyiv and announced it would run in the upcoming July 2019 Ukrainian parliamentary election. Legally, Holos is an "upgraded" version of the party "Platform of Initiatives" that was founded in 2015 to take part in the 2015 Ukrainian local elections. The creation of the party marks Vakarchuk's second venture into politics – he previously served as an MP for almost a year after being elected in the 2007 Ukrainian parliamentary election. The party was first featured in opinion polling carried out from 16 to 21 May 2019, in which its rating was 4.6%. The party held its first congress on 8 June 2019, at which part of its party list for the forthcoming elections was announced. The top 10 candidates were as follows: 1) Svyatoslav Vakarchuk, 2) Yulia Klimenko, 3) Kira Rudik, 4) Yaroslav Zheleznyak, 5) Oleksandra Ustinova, 6) Oleh Makarov, 7) Yaroslav Yurchyshyn, 8) Serhiy Rakhmanin, 9) Solomiya Bobrovska, and 10) Olha Stefanyshyna. 
Prior to the congress, the Ukrainian Galician Party and Voice agreed to cooperate: Ukrainian Galician Party members ran as Voice candidates in single-member constituencies and were added to Voice's national electoral list. On 12 June, the party withdrew two of its constituency candidates because they had "affiliated with or co-operated with pro-Russian forces", namely Ukrainian Choice and the Opposition Bloc. Party leader Vakarchuk had assured on 19 May 2019 that no incumbent MPs would be on the party's list for the 2019 parliamentary election. However, such deputies, whose political views coincided with the party's ideology, were allowed to stand in majority constituencies. In the election, four incumbent MPs stood as candidates for the party in majority constituencies: Victoria Voytsitska, Pavlo Rizanenko, Victoria Ptashnyk, and Leonid Yemets. None of them were elected. In the 2019 parliamentary election, Voice finished with 5.82% of the vote, with 17 MPs elected nationwide and three MPs elected in constituencies. 47.6% of the party's elected deputies were women. On 11 March 2020, Vakarchuk stepped aside as head of the party. Voice selected Kira Rudik as its new leader. Vakarchuk left parliament in June 2020, stating that his "mission" (bringing new people with new politics into parliament) was complete. The party was admitted to the Alliance of Liberals and Democrats for Europe (ALDE) on 18 November 2020. In the October 2020 Ukrainian local elections, Voice had some local success, forming factions in the Lviv Oblast Council and in the city councils of Kyiv, Lviv, Cherkasy, and a number of other cities, including in the Donbas. The party's mayoral candidate in Cherkasy made it to the second round of the election, which he lost. Disunion In June 2021, 10 of the party's 20 MPs announced that they would create a separate association named Justice (; Spravedlyvist) and expressed their lack of confidence in faction leader Yaroslav Zhelezniak. 
In addition, three of the party's regional branches called on party leader Kira Rudyk to resign and demanded that the party hold a congress to select a new leader. The party responded by admitting that it was "going through a difficult period" and announcing a congress at which the conflict within Voice would be addressed. The split was triggered by the decision of five of the party's MPs (including Rudyk and Zhelezniak) to vote in favour of the ruling party's initiative to delay the introduction of Ukrainian language quotas for the country's film industry; the dissenting MPs called this a "betrayal" and a "vote for Russification". In late July 2021, a group made up of 11 of the party's MPs attempted to replace faction leader Yaroslav Zhelezniak with his deputy, Roman Kostenko, as acting head of the faction. This move failed because, for the faction leader to be replaced, firstly two-thirds of the faction have to vote for the change (in this case, 14 MPs), and secondly such a decision can only be made at a faction meeting. A statement countering the attempt was signed by the remaining MPs, including Zhelezniak and Kostenko. At the party congress of 29 July 2021 it was decided to expel seven of the party's MPs. Five of the expelled MPs had already written statements to leave the party. The expelled members were dissatisfied with what they called the "cementing of Kira Rudyk's control over the party." At the same meeting, 86% of the delegates expressed their confidence in Rudyk as the party leader. As of September 2021, only nine of the party's 20 seats in the Verkhovna Rada (Ukraine's national parliament) are held by MPs who are loyal to the party; the remaining 11 are held by MPs who are part of Spravedlyvist. National (state) funding of the party On 15 December 2021, Voice lost its appeal to the Supreme Court of Ukraine against the January 2021 blocking of the payment of state funds to the party's accounts by the National Agency for Prevention of Corruption (NAPC). 
The court verdict remanded the case for retrial, but allowed the NAPC to continue blocking the funds. After the violations were eliminated, the NAPC resumed the funding. On 2 February 2022, party leader Kira Rudyk stated that a criminal case had been opened against the party regarding its economic activities. In comments on the party's official site, Rudyk stated that the case was politically motivated, and that Ukraine's State Bureau of Investigation was "systematically attempting to muzzle those who criticise the government while ignoring cases involving 'Ze-friends'". In June 2024, the NAPC stopped the national (state) funding of the party due to allegedly incorrect information in its reports, amounting to almost 5 million hryvnias. The Supreme Court of Ukraine confirmed the legality of the NAPC's decision to stop state funding of the party on 11 October 2024. Political positions The party declares a democratic approach, supporting the separation of money from politics. In economic matters, the party favors introducing a tax on withdrawn capital, a land market, privatization of state-owned enterprises, and the fight against illegal customs and tax schemes. Party leader Vakarchuk stated on 10 June 2019 that the party wants to abolish the current Ukrainian election constituencies (in which 225 seats are elected in constituencies with a first-past-the-post electoral system in one round), instead favoring a shift to full open-list proportional representation. According to an analysis by human rights activist Volodymyr Yavorsky, the party's program pays great attention to human rights and contains no populist statements. According to Center for Economic Strategy experts Dmytro Yablonovsky and Daria Mikhailishin, the program focuses on combating corruption through de-oligarchization and on increasing the efficiency of the state through the introduction of modern technologies. 
In November 2019, the party's parliamentary faction stated that because Ukraine could not regain control of the separatist Donetsk and Luhansk People's Republics and Russian-annexed Crimea, it should "freeze the conflicts", abandon the Minsk Agreements and focus on strengthening its own positions. In October 2020, the party claimed that Russia must pay $500 billion in reparations to cover the damage it has caused to the Donbas since 2014. Russian invasion of Ukraine On the morning of February 24, 2022, the Holos faction was present in the Verkhovna Rada in full (except for MP Oleksandra Ustinova, who was on a business trip at the time) and voted for draft laws important for the security and defense of Ukraine. In addition, two of the faction's MPs, Roman Kostenko and Roman Lozynskyi, were the first parliamentarians to go to the front, from the very first day of the full-scale Russian invasion. For security reasons, the party does not provide information about other Holos members who joined the Defense Forces of Ukraine but did not announce it publicly. Since February 24, Holos has been actively working in the Verkhovna Rada, in local representative offices, and at the level of inter-parliamentary diplomacy. In addition, representatives of Holos conduct active volunteer work and attract international aid. Work with liberal democrats in Europe and across the world In early May 2023, Holos joined Liberal International, the world federation of liberal-democratic parties. According to party leader Kira Rudik, Holos has been working with many countries (in particular, the African Liberal Alliance, Taiwan, and countries of Latin America) to ensure long-term support for Ukraine. In 2022, Kira Rudik was elected Vice President of the Alliance of Liberals and Democrats for Europe (ALDE), becoming the first representative of a non-EU country in this position. 
In this role, Rudik set the following tasks: ensuring the interaction of liberal parties with Ukraine and promoting Ukraine's interests in the European Parliament; representing countries outside the EU (Ukraine, Georgia, Moldova, Iceland, and Norway); accelerating the EU enlargement process (foremost, the European integration of Ukraine); and advocating women's leadership within the framework of the “Alliance of Her” program. During its ALDE membership, Holos has initiated resolutions in support of Ukraine. In particular, in June 2024, the ALDE Council in Vilnius adopted a document on unwavering support for Ukraine, in which European liberals called for: supporting the implementation of the Peace Formula and its individual points as part of European security; increasing and accelerating military assistance to Ukraine and its Defense Forces; strengthening sanctions pressure on Russia and eliminating ways of circumventing economic restrictions (measures against the shadow fleet, strengthening export controls, etc.); finding effective mechanisms for the use of frozen Russian assets for the needs of Ukraine; and strengthening cooperation for the return of Ukrainian children kidnapped by the occupiers. Leadership Party Leaders Faction Leaders Electoral performance Verkhovna Rada See also :Category:Holos (political party) politicians Notes References External links 2019 establishments in Ukraine Anti-corruption parties in Ukraine Centrist parties in Ukraine E-democracy Liberal parties in Ukraine Ukraine Parliamentary factions in Ukraine Political parties established in 2019 Pro-European political parties in Ukraine
Holos (political party)
[ "Technology" ]
2,373
[ "E-democracy", "Computing and society" ]
60,792,194
https://en.wikipedia.org/wiki/CAS%20parameters
The CAS (concentration, asymmetry, smoothness) parameters are a tool originally developed in astronomy to characterize the shapes and images of objects with some central concentration. Each parameter is a single number that represents some aspect of the structure of the object under study. These parameters were originally developed by astronomers to quantify the light distribution of galaxies, as a way to avoid relying on visual estimates of galaxy morphological classification. They have also been used in biological imaging and other areas of image analysis. Each parameter is measured in a well-defined way and within a well-defined radius. Asymmetry (A) The asymmetry index is a measure of how symmetric an object is. It is computed by rotating an image 180° about its center and subtracting the rotated image from the original. This parameter has been used in astronomy to identify galaxy mergers and trace merger history. The asymmetry is a measure of skewness in terms of the 2-dimensional distribution of light. Concentration (C) The concentration index is used to measure how concentrated the light is within the object under study. It is an analog of the mean for a spatial distribution in two dimensions. It is usually measured using the radii that contain 80% and 20% of the object's light. Other methods are also used, such as the fraction of light within set radii. Smoothness (S) The smoothness (also called clumpiness) is a measure of the fraction of light in an object that is in small-scale structures. It is an analog of the standard deviation for a 2-dimensional image. In astronomy, elliptical galaxies have low smoothness, as there is little small-scale structure within these types of galaxies. References Astronomy software
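The three indices described above can be sketched numerically as follows. This is a minimal illustration, assuming a centered object, circular apertures for C, and a 3×3 boxcar smoothing scale for S; none of these choices (nor the normalizations) are fixed by the definitions above, and published recipes differ in detail.

```python
import numpy as np

def cas_parameters(image):
    """Illustrative CAS indices for a 2-D image centered on the object.

    Simplifying assumptions: the object sits at the array center,
    C uses circular apertures, and S uses a 3x3 boxcar smoothing with
    edge-replicated padding.
    """
    image = np.asarray(image, dtype=float)
    ny, nx = image.shape
    total = image.sum()

    # Asymmetry (A): rotate 180 degrees about the center, subtract,
    # and normalize by the total absolute flux.
    rotated = np.rot90(image, 2)
    asymmetry = np.abs(image - rotated).sum() / (2.0 * np.abs(image).sum())

    # Concentration (C): 5 * log10(r80 / r20), where r20 and r80 are the
    # radii of circular apertures containing 20% and 80% of the flux.
    yy, xx = np.indices(image.shape)
    r = np.hypot(yy - (ny - 1) / 2.0, xx - (nx - 1) / 2.0).ravel()
    order = np.argsort(r)
    cumflux = np.cumsum(image.ravel()[order])
    r20 = r[order][np.searchsorted(cumflux, 0.2 * total)]
    r80 = r[order][np.searchsorted(cumflux, 0.8 * total)]
    concentration = 5.0 * np.log10(r80 / r20)

    # Smoothness (S): fraction of the flux left in small-scale (clumpy)
    # positive residuals after subtracting a boxcar-smoothed copy.
    k = 3
    padded = np.pad(image, k // 2, mode="edge")
    smoothed = np.zeros_like(image)
    for dy in range(k):
        for dx in range(k):
            smoothed += padded[dy:dy + ny, dx:dx + nx]
    smoothed /= k * k
    smoothness = np.clip(image - smoothed, 0.0, None).sum() / total

    return concentration, asymmetry, smoothness

# Demo on a perfectly symmetric synthetic "galaxy" (a 2-D Gaussian):
# A and S come out near zero, and C lands near the analytic value
# (roughly 2) for a Gaussian light profile.
yy, xx = np.indices((65, 65))
gauss = np.exp(-((yy - 32.0) ** 2 + (xx - 32.0) ** 2) / (2.0 * 5.0 ** 2))
C, A, S = cas_parameters(gauss)
```

A clumpy or disturbed image raises S and A respectively, while a steeper central light profile raises C, matching the qualitative behavior described in the text.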
CAS parameters
[ "Astronomy" ]
353
[ "Astronomy software", "Works about astronomy" ]
60,792,670
https://en.wikipedia.org/wiki/Lisa%20Alvarez-Cohen
Lisa Alvarez-Cohen is the vice provost for academic planning and the Fred and Claire Sauer Professor at the University of California, Berkeley. She was elected a member of the National Academy of Engineering in 2010 for the discovery and application of novel microorganisms and biochemical pathways for microbial degradation of environmental contaminants. She is also a Fellow of the American Society for Microbiology. Early life and education Alvarez-Cohen studied engineering and applied science at Harvard University and graduated in 1984. She was a postgraduate student at Stanford University, where she earned her master's degree in 1985 and a PhD in 1991. Research and career Alvarez-Cohen works in environmental microbiology and ecology. She joined the faculty at the University of California, Berkeley in 1991, and was the first woman to achieve tenure in Berkeley's Civil & Environmental Engineering Department. She is interested in species that can perform environmentally relevant functions, including studies of biotransformation and the fate of environmental water contaminants. Alvarez-Cohen uses omics-based molecular tools to optimise bioremediation. Amongst other contaminants, Alvarez-Cohen's lab has studied the remediation of trichloroethene, aqueous film forming foams and arsenic: Trichloroethene is a contaminant that is routinely found at Superfund sites. Trichloroethene is often included on the United States Environmental Protection Agency priorities list, and is typically dechlorinated using Dehalococcoides. Aqueous film forming foams have been used since the 1960s to extinguish hydrocarbon fuel fires. They contain perfluoroalkoxy alkanes, which can be contaminants due to their impact on human health. Perfluoroalkoxy alkanes are known to bioaccumulate and exhibit toxicity in animals. Arsenic occurs regularly on the National Priorities List in various chlorinated solvents. Alvarez-Cohen uses anammox to remove nitrogen from wastewater. 
Anammox is cheaper and more efficient than conventional nitrogen sequestration. She uses stable isotope tracers to study the fundamental mechanisms of anammox. Academic service In 2007 Alvarez-Cohen became chair of the Department of Civil and Environmental Engineering, a position she held until 2012. She has served as the diversity director of the Stanford University Engineering Research Center and as elected chair of the Berkeley Division of the Academic Senate. She was appointed the vice provost for academic planning in July 2018. Alvarez-Cohen has appeared on NPR and served on the editorial advisory board of Environmental Science & Technology. She has represented the United States at the National Academy of Engineering Frontiers of Engineering in India, Arlington County, Virginia, and Irvine, California. Selected publications Awards and honours 1994 W. M. Keck Foundation Award for Engineering Teaching Excellence 2002 Elected a fellow of the American Society for Microbiology 2003 Association of Environmental Engineering and Science Professors Distinguished Service Award 2010 Elected to the National Academy of Engineering 2014 American Society of Civil Engineers Simon W. Freese Environmental Engineering Award 2018 Association of Environmental Engineering and Science Professors Fellow Personal life Alvarez-Cohen is married to Mike Dean Alvarez Cohen, with whom she has two children, Jason and Ryan. References Living people American women environmentalists American environmentalists American environmental scientists American women chemists Harvard John A. Paulson School of Engineering and Applied Sciences alumni Stanford University School of Engineering alumni Members of the United States National Academy of Engineering Year of birth missing (living people) Fellows of the Association of Environmental Engineering and Science Professors Fellows of the American Academy of Microbiology 21st-century American women
Lisa Alvarez-Cohen
[ "Environmental_science" ]
727
[ "American environmental scientists", "Environmental scientists" ]
60,794,195
https://en.wikipedia.org/wiki/Journal%20of%20Computational%20and%20Nonlinear%20Dynamics
The Journal of Computational and Nonlinear Dynamics is a quarterly peer-reviewed multidisciplinary scientific journal covering the study of nonlinear dynamics. It was established in 2006 and is published by the American Society of Mechanical Engineers. The editor-in-chief is Balakumar Balachandran (University of Maryland). According to the Journal Citation Reports, the journal has a 2017 impact factor of 1.996. References External links Multidisciplinary scientific journals Academic journals established in 2006 Quarterly journals Dynamics (mechanics) English-language journals Systems science literature American Society of Mechanical Engineers academic journals Mechanics journals Dynamical systems journals
Journal of Computational and Nonlinear Dynamics
[ "Physics", "Mathematics" ]
124
[ "Physical phenomena", "Classical mechanics", "Motion (physics)", "Mechanics", "Dynamics (mechanics)", "Dynamical systems journals", "Mechanics journals", "Dynamical systems" ]
42,886,528
https://en.wikipedia.org/wiki/Spent%20potlining
Spent Potlining (SPL) is a waste material generated in the primary aluminium smelting industry. Spent Potlining is also known as Spent Potliner and Spent Cell Liner. Primary aluminium smelting is the process of extracting aluminium from aluminium oxide (also known as alumina). The process takes place in electrolytic cells that are known as pots. The pots are made up of steel shells with two linings, an outer insulating or refractory lining and an inner carbon lining that acts as the cathode of the electrolytic cell. During the operation of the cell, substances, including aluminium and fluorides, are absorbed into the cell lining. After some years of operation, the pot lining fails and is removed. The removed material is spent potlining (SPL). SPL was listed by the United States Environmental Protection Agency in 1988 as a hazardous waste. Hazardous properties of SPL are: Toxic fluoride and cyanide compounds that are leachable in water Corrosive - exhibiting high pH due to alkali metals and oxides Reactive with water - producing inflammable, toxic and explosive gases. The toxic, corrosive and reactive nature of SPL means that particular care must be taken in its handling, transportation and storage. SPL from aluminium reduction cell cathodes is becoming one of the aluminium industry's major environmental concerns. On the other hand, it also represents a major recovery potential because of its fluoride and energy content. Most SPL is currently stored at the aluminium smelter sites or placed in landfills. Dissolved fluorides and cyanides from SPL that are placed in landfills, along with other leachates may have environmental impacts. Environmentally safe storage methods include secure landfills or permanent storage buildings. However, many of the environmentally safe solutions are expensive and may develop unforeseen problems in the future. 
Background Production of primary aluminium metal with the Hall–Héroult process involves the electrolytic reduction of alumina in cells or pots. The electrolyte is made up of molten cryolite and other additives. The electrolyte is contained in a carbon and refractory lining in a steel potshell. The pots typically have a life of 2 to 6 years. Eventually the cell fails and the potlining (SPL) is removed and replaced. The SPL generated is listed by various environmental bodies as hazardous waste. Due to the concentrations of fluorides and cyanides in spent potlining, and the tendency to leach in contact with water, the US Environmental Protection Agency (USEPA) listed the materials on 13 September 1988 (53 Fed. Reg. 35412) as a hazardous waste (K088) under 40 C.F.R., Part 261, Subpart D. International shipment of SPL is subject to the protocols of the Basel Convention on the Transboundary Movement of Hazardous Wastes and Their Disposal. As the environmental regulation agencies in an increasing number of countries define SPL as a hazardous material, the disposal costs can easily run to more than $1000 per tonne SPL. World production of primary aluminium was 67 million tonnes in 2021. The world's aluminium smelters also produce about 1.6 million tonnes of toxic SPL waste. Past industry practice has been to landfill this waste. This must change if the aluminium industry wants to claim a reasonable degree of sustainability and environmentally tolerable emissions. Landfill of unreacted SPL is considered a practice of the past. Chemical Properties of SPL There is variation in composition of SPL depending on such factors as the type of aluminium smelting technology used, the initial components of the cell lining and dismantling procedures. Indicative composition of SPL for three different technologies is shown in the following table. 
SPL is hazardous due to: Toxicity from fluoride and cyanide compounds that are leachable in water Corrosive - exhibiting high pH due to alkali metals and oxides Reactive with water in a way that produces inflammable, toxic and explosive gases. An example of the potential consequences of SPL reaction with water is the death of two workers and reported damage costs of $30 million due to an explosion of flammable gases from SPL in the hold of a cargo ship. The leachable fluorides in SPL come from the cryolite (Na3AlF6) and sodium fluoride (NaF) that are used as a flux in the smelting process. Cyanide compounds form in the pot lining when nitrogen from air reacts with other substances. For example, nitrogen reacting with sodium and carbon according to the equation - 1.5N2 + 3Na + 3C → 3NaCN. Aluminium carbide forms in the potlining from the reaction of aluminium and carbon according to the equation – 4Al + 3C → Al4C3. Aluminium nitride forms from a number of reactions including the reaction of cryolite with nitrogen and sodium according to the equation - Na3AlF6 + 0.5N2 + 3Na → AlN + 6NaF Gases are generated from reactions of water with compounds such as un-oxidised aluminium, un-oxidised sodium metal, aluminium carbide and aluminium nitride. Typical gases from the reaction of SPL with water are: Hydrogen from aluminium and water – 2Al + 3H2O → 3H2 + Al2O3 Hydrogen from sodium metal and water – 2Na + 2H2O → H2 + 2NaOH Methane from aluminium carbide and water - Al4C3 + 6H2O → 3CH4 + 2Al2O3 Ammonia from aluminium nitride and water – 2AlN + 3H2O → 2NH3 + Al2O3 Toxicity of SPL A number of research studies included biological tests to evaluate the toxicity of SPL on plants and humans. Aluminium, cyanide and fluoride salts were identified as the major toxic agents in SPL. The genotoxic potential of SPL and its main chemical components was evaluated on vegetal and human cells. 
Observed effects on vegetal cells included reduction in mitotic index and an increase in the frequency of chromosome alterations. Fluoride was the main genotoxic component for human leukocytes. The observed effects induced by SPL suggest its mutagenic potential on plant and animal cells, confirming its noxiousness to the environment and human beings. The studies consistently recommend that handling measures and appropriate disposal of SPL are extremely important and indispensable to avoid its dispersion to the environment and that the storage and disposal of SPL should be supervised closely in order to reduce the risk. Issues with Landfilling SPL Past practices for dealing with Spent Potlining (SPL) include dumping it in rivers or in the sea or storing it in open dumps or landfilling. These methods are not environmentally acceptable because of the leachability of cyanides and fluorides. More recently SPL has been stored in secure landfills where it is placed on an impermeable base and covered with an impermeable cap. The amount of detailed information available on the quality of percolate from existing SPL landfills is very limited. A particular problem with SPL in landfills is the long-term liabilities that result from limited effective life of landfills based on current technology when compared with the long-lived contaminating properties of SPL. Lee and Jones-Lee describe the evolution and technical aspects of “dry-tomb” landfilling and why they consider it a seriously flawed technology citing problems such as: eventual failure of composite liner systems failure of cover systems to prevent ingress of water low probability of ground water monitoring systems to detect polluted leachate inadequate post closure funding and management arrangements. A 2004 study of a landfill containing SPL located in North America identified four chemical species as priority contaminants: cyanide, fluoride, iron and aluminium. 
Life-cycle assessment and groundwater transport modelling were used to characterize the situation, identifying environmental issues and significant potential ecotoxicological impacts. The study observed that, while the confinement of soil and waste was assumed to be perfect, these sites could in fact themselves become sources of contamination. The study states that the most advantageous option is the total destruction of the SPL fraction if concerns about the quality of long-term confinement are considered. The major objection to the sealed type of disposal is that it will need to be monitored indefinitely. There is, therefore, a real need to find safe, acceptable alternatives to landfill disposal. SPL was dumped by previous owners in an unlined waste repository at the Kurri Kurri smelter in Australia, resulting in contamination of the local groundwater aquifer with high levels of fluoride, cyanide, sodium sulphate and chloride. An Interim Action conducted under Agreed Order No. DE-5698 between the Port of Tacoma and the Washington State Department of Ecology addresses the removal, through excavation and offsite disposal, of SPL zone material and associated contaminated soil at an old aluminium smelter site. The background to this situation is that from 1941 to 1947, the US Department of Defense built and operated an aluminum smelter at the Site. In 1947, Kaiser Aluminum & Chemical Corporation (Kaiser Aluminum) purchased the Site and operated the aluminum production facility until 2001. In 2002, Kaiser Aluminum closed the plant and, in 2003, the Port of Tacoma purchased the smelter property from Kaiser Aluminum for redevelopment. SPL Treatment Options A number of alternatives have been proposed for treatment of SPL. 
The alternatives can be classified as follows: disposal techniques where all or part of the SPL is destroyed or utilized by another industry including: combustion for power generation slag additives in iron and steel industry fuel and mineral supplement in cement manufacture red brick industry conversion to inert landfill materials recovery or recycling techniques where some of the SPL can be recovered for use in primary aluminium smelting: fluoride recovery from leaching processes pyrohydrolysis pyrosulfolysis silicopyrohydrolysis graphite recovery cathode carbon additives anode carbon additives selective recovery of aluminium. Recycling through other industries is an attractive and proven option; however, classification of SPL as a hazardous waste has greatly discouraged other industries from utilizing SPL, due to the burdensome and expensive environmental regulations. The Arkansas Pollution Control and Ecology Commission noted that treated SPL used to construct roads was recovered and placed in secure landfill. References Bibliography Andrade-Vieira, L.F., Palmieri, M.J. & Davide, L. F. (2017), Effects of long exposure to spent potliner on seeds, root tips, and meristematic cells of Allium cepa, Environmental Monitoring and Assessment,189:489 Arkansas Pollution Control and Ecology Commission (1998), Subject: Reynolds Metals Company Gum Springs and Hurricane Creek. Minute Order 98-28 Holywell, G. and Breault, R. (2013). An Overview of Useful Methods to Treat, Recover or Recycle Spent Potlining. JOM, Vol. 65, No. 11. International Aluminium Institute (2010). Aluminium Industry Benchmarking 2010. International Aluminium Institute, New Zealand House, Haymarket, London, UK. Godin, J., Ménard, J-F., Hains, S., Deschênes, L. and Samson, R. (2004). Combined Use of Life Cycle Assessment and Groundwater Transport Modelling to Support Contaminated Site Management. Human and Ecological Risk Assessment, 10:1099-1116. Kumar, B., Sen, P. K. and Sing, G. (1992). 
Environmental Aspects of Spent Pot Linings from Aluminium Smelters and its Disposal – An Appraisal, Indian Journal of Environmental Protection, Vol. 12, No. 8. Palmieri, M. J., Andrade-Vieira, L. F., Trento, M. V. C., de Faria, Eleutério, M. W., Luber, J., Davide, L. C., & Marcussi, S. (2016). Cytogenotoxic effects of spent pot liner (SPL) and its main components on human leukocytes and meristematic cells of Allium cepa. Water, Air, and Soil Pollution, 227(5), 1–10. Palmieri, M. J., Andrade-Vieira, L. F., Campos, J. M. S., Gedraite, L. S., & Davide, L. C. (2016). Cytotoxicity of spent pot liner on Allium cepa root tip cells: a comparative analysis in meristematic cell type on toxicity bioassays, Ecotoxicology and Environmental Safety, 133, 442–447. Palmieri, M. J., Luber, J., Andrade-Vieira, L. F., & Davide, L. C. (2014). Cytotoxic and phytotoxic effects of the main chemical components of spent pot-liner: a comparative approach, Mutation Research, 763, 30–35. Pawlek, R.P. (2012). Spent potlining: an update. In Suarez C. E. (Editor). Light Metals. The Minerals, Metals and Materials Society. Pong, T.K., Adrien, R.J., Besdia, J., O’Donnell, T.A. and Wood, D. G. (2000). Spent Potlining – A Hazardous Waste Made Safe. Transactions of the Institution of Chemical Engineers, Vol 78 Part B, May 2000. Rustad, I., Kastensen, K.H. and Ødegård, K.E. (2000). Disposal Options for Spent Potlining. In Wolley, G.R., Goumans, J.J.J.M. and Wainwright, P. J. (Editors). Waste Materials in Construction. Shipowners Club (2010). Flammable Gas Causes Explosion, in Loss Prevention Case Studies, The Shipowners’ Protection Limited, 2010. http://www.shipownersclub.com/media/433198/spl_ebook_021010.pdf Silveira, B.I., Danta, A.E., Blasquez, A.E. and Santos, R.K.P. (2002). Characterization of Inorganic Fraction of Spent Potliners: Evaluation of the Cyanides and Fluorides Content. Journal of Hazardous Materials B89 177–183. Sørlie, M. and Øye, H. A. (2010). Cathodes in Aluminium Electrolysis. 
Aluminium-Verlag Marketing and Kommunication, Düsseldorf. Turner, B.D., Binning, P.J. and Sloan, S.W. (2008). A Calcite Permeable Barrier for The Remediation of Fluoride from Spent Potliner (SPL) Contaminated Groundwater. Journal of Containment Hydrology 95 110-120 Washington Department of Ecology (2013). Final SPL Area Interim Action Work Plan Former Kaiser Aluminum Property 3400 Taylor Way Tacoma, Washington. Prepared for Port of Tacoma, Tacoma Washington by Landau & Associates, Edmonds, WA. Retrieved from Department of Ecology website https://fortress.wa.gov/ecy/gsp/CleanupSiteDocuments.aspx?csid=2215 Aluminium Smelting
Spent potlining
[ "Chemistry" ]
3,247
[ "Metallurgical processes", "Smelting" ]
42,887,127
https://en.wikipedia.org/wiki/Hanseniaspora%20guilliermondii
Hanseniaspora guilliermondii is a species of yeast in the family Saccharomycetaceae. In its anamorph form, it is called Kloeckera apis. Taxonomy The initial sample of the species was isolated by South African pathologist Adrianus Pijper from an infected nail from a patient and assigned the name H. guilliermondii. In 1952, the species was placed in synonymy with Hanseniaspora valbyensis. In 1968, N. J. W. Kreger-van Rij and Donald G. Ahearn observed physiological and morphological differences between H. valbyensis and H. guilliermondii and proposed a resumed separation of the two species. Their study identified that a third strain, originally described as H. melligeri by J. Lodder in 1932, isolated from dates and later synonymized with H. valbyensis, was synonymous with H. guilliermondii. Further testing by Meyer, Brown, and Smith in 1977 confirmed the findings of the 1968 study using DNA testing. Further DNA examination in 1978 demonstrated that yeast samples originally collected from grape juice and identified as the unique species H. apuliensis by Castelli in 1948, later synonymized with H. valbyensis in 1958, were actually synonymous with H. guilliermondii. Yeast samples that had been obtained from a bee by P. Lavie in 1954 and later designated as Kloeckera apis were found to be the anamorph form of H. guilliermondii and placed in synonymy. Description Microscopic examination of the yeast cells in YM liquid medium after 48 hours at 25°C reveals cells that are 2.2 to 5.8 μm by 4.5 to 10.2 μm in size or occasionally longer, apiculate, ovoid to elongate, appearing singly or in pairs. Reproduction is by budding, which occurs at both poles of the cell. In broth culture, sediment is present, and after one month a very thin ring is formed. Colonies grown on malt agar for one month at 25°C appear white to cream-colored, glossy, and smooth. Growth is slightly raised at the center.
On potato agar, pseudohyphae are poorly developed or absent. The yeast has been observed to form one to four (mostly four) hat-shaped ascospores when grown for at least one week on 5% Difco malt extract agar or on potato dextrose agar. When released, the ascospores tend to clump together. The yeast can ferment glucose, but not galactose, sucrose, maltose, lactose, raffinose or trehalose. It grows at 37°C, but not at 40°C. It can grow on agar media containing 0.1% cycloheximide and utilize 2-keto-d-gluconate as a sole source of carbon. Ecology Although the original sample of the species was obtained in a clinical medical setting, the yeast is primarily associated with fruits, plants, fermenting musts, and insects. Strains of this species produce acetoin, a chemical found in many food products and fragrances. References Saccharomycetes Yeasts Fungi described in 1928 Fungus species
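The physiological characters above function as a set of diagnostic traits. As an illustration only (not a published identification key), the profile can be encoded and an isolate's test results checked against it:

```python
# Illustrative sketch: encode the physiological profile of
# H. guilliermondii described above and compare an isolate's test
# results against it. The data structure and threshold logic are
# assumptions for the example, not a validated taxonomic key.

H_GUILLIERMONDII = {
    "ferments": {"glucose"},
    "does_not_ferment": {"galactose", "sucrose", "maltose",
                         "lactose", "raffinose", "trehalose"},
    "growth_37C": True,
    "growth_40C": False,
    "cycloheximide_0.1pct": True,   # grows on 0.1% cycloheximide
}

def matches_profile(isolate: dict, profile: dict = H_GUILLIERMONDII) -> bool:
    """Return True if the isolate is consistent with the reference
    profile on every recorded character."""
    fermented = isolate.get("ferments", set())
    if not profile["ferments"] <= fermented:
        return False                       # fails a required fermentation
    if profile["does_not_ferment"] & fermented:
        return False                       # ferments a sugar it should not
    for key in ("growth_37C", "growth_40C", "cycloheximide_0.1pct"):
        if isolate.get(key) != profile[key]:
            return False
    return True

isolate = {"ferments": {"glucose"}, "growth_37C": True,
           "growth_40C": False, "cycloheximide_0.1pct": True}
print(matches_profile(isolate))  # → True
```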
Hanseniaspora guilliermondii
[ "Biology" ]
709
[ "Yeasts", "Fungi", "Fungus species" ]
42,887,965
https://en.wikipedia.org/wiki/Toxgnostics
Toxgnostics is part of personalized medicine as it describes the guiding principles for the discovery of pharmacogenomic biomarker tests, also referred to as companion diagnostic tests, which identify if an individual patient is likely to suffer severe drug toxicity from treatment with a specific therapeutic agent. Once at-risk individuals are identified, drug toxicity can be prevented using elective dose reduction or prescription of a different medication. Background The majority of toxgnostic studies have been candidate gene studies restricted to the known Absorption, Distribution, Metabolism, and Excretion (ADME) genes of drug-treated patients. The PharmaADME consortium identified 32 core genes containing 184 variants within common pathways that should be included in ADME candidate gene studies of toxicity biomarkers. Toxicity biomarkers that have been clinically validated using this restricted panel of genes include the P450 cytochrome assay that is currently recommended for routine clinical use of the oral anticoagulant warfarin. Using next-generation sequencing methods and genome-wide association studies, a more comprehensive toxgnostic approach can be utilized through unbiased analysis of several million variants across the whole human genome, including introns and exons, for pharmacogenomic markers of drug-induced toxicity. Cancer drugs have been highlighted as particularly appropriate candidates for toxgnostic studies due to the significant toxicity profiles associated with both targeted therapies and chemotherapy. Most cancer patients obtain only modest benefit from treatment, whereas toxicity is common and often associated with severe side effects that include considerable morbidity and mortality. One of the most commonly used chemotherapy drugs, 5-fluorouracil (5FU), prescribed as adjuvant therapy following surgical resection of early-stage colorectal cancer, benefits only approx.
4% of patients, whereas 30–40% of those treated will suffer severe toxicity such as neutropenia, mucositis, hand-foot syndrome, diarrhoea, and stomatitis; fatal toxicities kill 0.5–1% of people treated. Through the use of toxgnostic screens, a number of genetic variants have now been identified that can be used to predict 5FU toxicity prior to treatment. These genetic variants can be used to identify the individuals predisposed to severe drug toxicity, and the dose of 5FU chemotherapy can be reduced to prevent severe toxic side effects. Toxgnostic biomarker tests currently available for use in clinical practice include markers for irinotecan, thioguanine, warfarin and 5FU. Toxgnostic principles Toxgnostic studies are defined by four key elements:
1. Analysis should be embedded within large, prospective, randomized, controlled clinical trials (i.e. Phase III clinical trials).
2. The phenotype of interest should be clinically relevant, clearly defined using internationally standardized criteria, and systematically captured, such as US National Cancer Institute (NCI) Common Terminology Criteria for Adverse Events (CTCAE) grade 3–5 toxicity.
3. Analysis should be unbiased, to encompass the maximum relevant genomic diversity rather than being limited by what is often a superficial understanding of the pathways involved in the pharmacokinetics and pharmacodynamics of the agent.
4. The performance of individual variants should be compared with that of a combined risk score, which may outperform each individual variant when they are analysed separately.
Appropriate analytical approaches for toxgnostic studies include candidate gene studies, GWAS and whole genome sequencing. GWAS and whole genome sequencing are the most comprehensive approaches, though careful consideration must be applied to the relevance, analysis and interpretation of the results to prevent over-fitting, which produces false-positive results; a proposed GWAS workflow is shown below.
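The combined risk score mentioned above can be sketched as a simple unweighted count of risk alleles across a variant panel. The variant identifiers and decision threshold below are hypothetical illustrations, not validated markers:

```python
# Minimal sketch of a combined toxicity risk score: sum the number of
# risk alleles a patient carries across a panel of variants (a simple
# unweighted polygenic score). Variant IDs and the threshold are
# invented for the example.

RISK_VARIANTS = ["rsA", "rsB", "rsC"]  # hypothetical variant IDs

def combined_risk_score(genotypes: dict) -> int:
    """genotypes maps variant ID -> number of risk alleles (0, 1 or 2)."""
    return sum(genotypes.get(v, 0) for v in RISK_VARIANTS)

def flag_for_dose_reduction(genotypes: dict, threshold: int = 3) -> bool:
    """Flag patients whose combined score meets a (hypothetical)
    threshold for elective dose reduction."""
    return combined_risk_score(genotypes) >= threshold

patient = {"rsA": 2, "rsB": 1, "rsC": 0}
print(combined_risk_score(patient))      # → 3
print(flag_for_dose_reduction(patient))  # → True
```

In practice such scores would be weighted by per-variant effect sizes estimated within the trial, which is why the principles above call for comparing the combined score against each variant separately.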
Regulatory oversight of toxgnostic tests Toxicity biomarkers can be co-developed and co-approved with the respective drug as a companion diagnostic test; this requires premarket approval (PMA). The Food and Drug Administration (FDA) IVD Companion Diagnostic Device guidance, issued in draft on 14 July 2011, states that a companion diagnostic test can be used to “Identify patients likely to be at increased risk for serious adverse reactions as a result of treatment with a particular therapeutic product”. Additionally, there are guidelines for the submission of pharmacogenomic studies from the FDA and draft guidance from the European Medicines Agency (EMA). References Genomics Toxicology
Toxgnostics
[ "Environmental_science" ]
897
[ "Toxicology" ]
42,888,123
https://en.wikipedia.org/wiki/Genomatica
Genomatica is a San Diego–based biotechnology company that develops and licenses biological manufacturing processes for the production of intermediate and basic chemicals. Genomatica’s process technology for the chemical 1,4-Butanediol (BDO) is now commercial. Genomatica produced 5 million pounds of renewable BDO in five weeks at a DuPont Tate & Lyle plant in Tennessee. Its GENO BDO process has been licensed by BASF and by Novamont. History Genomatica was founded in San Diego in 1998 by Christophe Schilling and Bernhard Palsson. Schilling's goal was to use biotechnology to make more sustainable choices in manufacturing. In 2021, Lululemon partnered with Genomatica to create a plant-based nylon material, which was launched in 2023. In 2023, L'Oréal along with Unilever and Kao Corporation invested in Genomatica. The investment will go toward developing plant-based personal care and cosmetics products. References Biotechnology companies of the United States
Genomatica
[ "Biology" ]
207
[ "Biotechnology stubs" ]
42,888,422
https://en.wikipedia.org/wiki/Rock%20shed
A rock shed is a civil engineering structure used in mountainous areas where rockslides and landslides create highway closure problems. A rock shed is built over a roadway that is in the path of the slide. They are equally used to protect railroads. They are usually designed as a heavy reinforced concrete covering over the road, protecting the surface and vehicles from damage due to falling rocks, with a sloping surface to deflect slip material beyond the road; an alternative is to include an impact-absorbing layer above the ceiling. A further use of this type of structure may be seen protecting the A4 road; although constructed primarily to alleviate risk from falling rocks from a limestone seam, it also serves to protect against objects or persons falling from the Clifton Suspension Bridge, where the height differential of approximately 70 metres from the bridge to the bottom of the Avon Gorge would give even a relatively small item sufficient kinetic energy to cause injury on impact. Examples of rock sheds
- A4 road where it passes under the Clifton Suspension Bridge, Bristol, England, constructed in 1980
- California State Route 1 at Pitkins Curve, just north of Limekiln State Park, constructed in 2014
- Ferguson Rock Shed, to rectify a closure of California State Route 140 by a landslide in 2006, completion expected in the mid-2020s
External links See also Avalanche dam Rock shelter Snow shed References Civil engineering Infrastructure
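The injury-risk claim for the 70 m drop can be checked with the standard free-fall relations E = mgh and v = √(2gh), neglecting air resistance; the 100 g object mass here is an assumed example value:

```python
# Free-fall energy of a small dropped object (air resistance neglected):
# impact energy E = m * g * h, impact speed v = sqrt(2 * g * h).
import math

g = 9.81   # m/s^2, standard gravity
h = 70.0   # m, approximate drop from the Clifton Suspension Bridge
m = 0.1    # kg, an assumed small 100 g object

energy_j = m * g * h             # ≈ 69 J
speed_ms = math.sqrt(2 * g * h)  # ≈ 37 m/s (about 133 km/h)

print(round(energy_j), round(speed_ms))  # → 69 37
```

Even a 100 g object thus arrives with tens of joules of kinetic energy at roughly motorway speed, which is consistent with the article's point about injury risk.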
Rock shed
[ "Engineering" ]
271
[ "Construction", "Civil engineering", "Infrastructure" ]
42,889,377
https://en.wikipedia.org/wiki/Generator%20%28circuit%20theory%29
A generator in electrical circuit theory is one of two ideal elements: an ideal voltage source, or an ideal current source. These are two of the fundamental elements in circuit theory. Real electrical generators are most commonly modelled as a non-ideal source consisting of a combination of an ideal source and a resistor. Voltage generators are modelled as an ideal voltage source in series with a resistor. Current generators are modelled as an ideal current source in parallel with a resistor. The resistor is referred to as the internal resistance of the source. Real-world equipment may not perfectly follow these models, especially at extremes of loading (both high and low), but for most purposes, they suffice. The two models of non-ideal generators are interchangeable; either can be used for any given generator. Thévenin's theorem allows a non-ideal current source model to be converted to a non-ideal voltage source model and Norton's theorem allows a non-ideal voltage source model to be converted to a non-ideal current source model. Both models are equally valid, but the voltage source model is more applicable when the internal resistance is low (that is, much lower than the load impedance), and the current source model is more applicable when the internal resistance is high (compared to the load). Symbols Symbols commonly used for ideal sources are shown in the figure: the ideal voltage source, ideal current source, controlled voltage source, controlled current source, battery of cells, and single cell. Symbols do vary from region to region and time period to time period. Another common symbol for a current source is two interlocking circles.
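The Thévenin-Norton interconversion described above amounts to Ohm's-law arithmetic: the internal resistance is the same in both models, and the ideal-source values are related by V = I·R. A minimal sketch, with arbitrary example values:

```python
# Thevenin <-> Norton conversion for a non-ideal source: the internal
# resistance R is unchanged, and V = I * R links the ideal-source values.

def thevenin_to_norton(v_th: float, r: float) -> tuple:
    """Voltage source V_th in series with R -> current source
    I_n = V_th / R in parallel with the same R."""
    return v_th / r, r

def norton_to_thevenin(i_n: float, r: float) -> tuple:
    """Current source I_n in parallel with R -> voltage source
    V_th = I_n * R in series with the same R."""
    return i_n * r, r

# A 12 V source with a 3 ohm internal resistance:
i_n, r = thevenin_to_norton(12.0, 3.0)
print(i_n, r)                      # → 4.0 3.0
print(norton_to_thevenin(i_n, r))  # → (12.0, 3.0)
```

The round trip returns the original values, reflecting the article's point that the two models are interchangeable descriptions of the same source.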
Dependent sources A dependent source is one in which the voltage or current of the source output is dependent on another voltage or current elsewhere in the circuit. There are thus four possible types: current-dependent voltage source, voltage-dependent voltage source, current-dependent current source, and voltage-dependent current source. Non-ideal dependent sources can be modelled with the addition of an impedance in the same way as independent sources. These elements are widely used to model the function of two-port networks; one generator is needed for each port, and it is dependent on either voltage or current at the other port. The models are an example of black box modelling; that is, they are quite unrelated to what is physically inside the device but correctly model the device's function. There are a number of these two-port models, differing only in the type of generator required to represent them. This kind of model is particularly useful for modelling the behaviour of transistors. The model used to represent h-parameters is shown in the figure. h-parameters are frequently used in transistor data sheets to specify the device. The h-parameters are defined by the matrix equations v1 = h11 i1 + h12 v2 and i2 = h21 i1 + h22 v2, where the voltage and current variables (v1 and i1 at the input port, v2 and i2 at the output port) are as shown in the figure. The circuit model using dependent generators is just an alternative way of representing this matrix. References Circuit theorems
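The hybrid-parameter relations v1 = h11·i1 + h12·v2 and i2 = h21·i1 + h22·v2 can be evaluated directly. A sketch with illustrative parameter values (not taken from any particular transistor data sheet):

```python
# Evaluate the two-port hybrid (h) parameter equations:
#   v1 = h11*i1 + h12*v2
#   i2 = h21*i1 + h22*v2

def h_two_port(h11, h12, h21, h22, i1, v2):
    """Return (v1, i2) for a given input current i1 and output voltage v2."""
    v1 = h11 * i1 + h12 * v2
    i2 = h21 * i1 + h22 * v2
    return v1, i2

# Illustrative values only: h11 = 1 kohm input resistance, h12 = 0
# (reverse transfer neglected), h21 = 100 forward current gain,
# h22 = 1e-5 S output admittance; drive with 1 mA input, 5 V output.
v1, i2 = h_two_port(1e3, 0.0, 100.0, 1e-5, i1=1e-3, v2=5.0)
print(v1, i2)  # v1 should be 1.0 V, i2 just over 0.1 A
```

Note the mixed units of the h-parameters (ohms, dimensionless ratios, and siemens), which is why the representation is called "hybrid".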
Generator (circuit theory)
[ "Physics" ]
730
[ "Circuit theorems", "Equations of physics", "Physics theorems" ]
42,889,559
https://en.wikipedia.org/wiki/Hydronalium
Hydronalium is a family of aluminium–magnesium alloys. It is predominantly an alloy of aluminium, with between 1% and 12% magnesium as the primary alloying ingredient. It also includes a secondary addition of manganese, usually between 0.4% and 1%. The Hydronalium alloys originated in Germany in the 1930s and are best known, at least by that name, in Eastern Europe. They were widely used for shipbuilding in Poland. There are many alloys within this family, one standard reference listing over twenty. Applications The alloy family is noted for its resistance to seawater corrosion. As such it is used in sheet form for boatbuilding and light shipbuilding. As castings it is used for marine fittings. The reliable strength of some grades is sufficient for aerospace use and so they are used for wetted components of seaplane aircraft, such as floats and propellers, where marine corrosion resistance is also needed. Some variants of the alloy are ductile enough to be drawn into wire. This, combined with their resistance to corrosion by salty sweat, has led to an application for violin strings as an alternative to silver. See also 5083 aluminium alloy References Aluminium–magnesium alloys Aluminium alloys
Hydronalium
[ "Chemistry" ]
242
[ "Alloys", "Aluminium alloys" ]
42,890,548
https://en.wikipedia.org/wiki/DORN1
DORN1 refers to a purinergic receptor found in green plants, which is involved in extracellular ATP detection. Through the process of signal transduction, DORN1 couples extracellular ATP binding (which occurs during cellular stress) to downstream signalling and ultimately gene expression, which is thought to aid in plant survival. Molecular properties In contrast to animal purinergic receptors (which are G protein-coupled receptors), DORN1 is a lectin receptor kinase (LecRK), and is part of the L type lectin receptor kinases, due to its legume-like extracellular domain. In green plants such as Arabidopsis thaliana, several mutants lacking the DORN1 receptors are unable to phosphorylate mitogen-activated protein kinases after ATP stimulation. Function DORN1 receptors may play a role in mediating wound-induced inflammatory responses in green plants, with ATP acting as a damage-associated molecular pattern molecule. In response to cell lysis, ATP is discharged and binds onto the extracellular lectin domain of the DORN1 receptor. The intracellular DORN1 kinase domain is subsequently activated, resulting in several cellular responses such as mitogen-activated protein kinase activation, increased cytosolic calcium concentration and reactive oxygen species (ROS) production, ultimately leading to the induction of defence gene expression. See also Plant perception (physiology) Receptor protein serine/threonine kinase References Plant physiology
DORN1
[ "Biology" ]
301
[ "Plant physiology", "Plants" ]
42,890,663
https://en.wikipedia.org/wiki/BIBFRAME
BIBFRAME (Bibliographic Framework) is a data model for bibliographic description. BIBFRAME was designed to replace the MARC standards, and to use linked data principles to make bibliographic data more useful both within and outside the library community. History The MARC Standards, which BIBFRAME seeks to replace, were developed by Henriette Avram at the U.S. Library of Congress during the 1960s. By 1971, MARC formats had become the national standard for dissemination of bibliographic data in the United States, and the international standard by 1973. In a provocatively titled 2002 article, library technologist Roy Tennant argued that "MARC Must Die", noting that the standard was old; used only within the library community; and designed to be a display, rather than a storage or retrieval format. A 2008 report from the Library of Congress stated that MARC is "based on forty-year old techniques for data management and is out of step with programming styles of today." In 2012, the Library of Congress announced that it had contracted with Zepheira, a data management company, to develop a linked data alternative to MARC. Later that year, the library announced a new model called MARC Resources (MARCR). That November, the library released a more complete draft of the model, renamed BIBFRAME. The Library of Congress released version 2.0 of BIBFRAME in 2016. Design BIBFRAME is expressed in RDF and based on three categories of abstraction (work, instance, item), with three additional classes (agent, subject, event) that relate to the core categories. While the work entity in BIBFRAME may be "considered as the union of the disjoint work and expression entities" in IFLA's Functional Requirements for Bibliographic Records (FRBR) entity relationship model, BIBFRAME's instance entity is analogous to the FRBR manifestation entity. This represents an apparent break with FRBR and the FRBR-based Resource Description and Access (RDA) cataloging code.
However, the original BIBFRAME model argues that the new model "can reflect the FRBR relationships in terms of a graph rather than as hierarchical relationships, after applying a reductionist technique." Since both FRBR and BIBFRAME have been expressed in RDF, interoperability between the two models is technically possible. Specific formats The BIBFRAME model includes a serial entity for journals, magazines, and other periodicals. Several issues have prevented the model from being used for serials cataloging. BIBFRAME lacks several serials-related data fields available in MARC. A 2014 report was optimistic about BIBFRAME's suitability for describing audio and video resources, but also expressed concern about the high-level Work entity, which is unsuitable for modeling certain audio resources. Implementations The Library of Congress has a web page that lists institutions that have experimented with implementing BIBFRAME. Colorado College's Tutt Library created several experimental apps using BIBFRAME, reported in 2013. Ex Libris published a roadmap in 2017 to implement BIBFRAME in its library systems, which includes a MARC-to-BIBFRAME transformation. The National Library of Sweden was the first national library to fully transition to BIBFRAME in 2018. University of Concepción's library, in Chile, is the first Latin American university library to start transitioning to BIBFRAME, in 2023. Related initiatives and standards RDA, FRBR, FRBRoo, FRAD, and FRSAD are available in RDF in the Open Metadata Registry, a metadata registry. The Schema Bib Extend project, a W3C-sponsored community group, has worked to extend Schema.org to make it suitable for bibliographic description.
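The work-instance-item layering can be illustrated with a minimal hand-built Turtle description. This is a sketch only: the example.org URIs and the title are invented, while the bf: prefix is BIBFRAME 2.0's published namespace:

```python
# Sketch of a minimal BIBFRAME 2.0 description serialized as Turtle,
# built as a plain string (no RDF library needed for illustration).
# URIs under example.org are invented for the example.

PREFIXES = "@prefix bf: <http://id.loc.gov/ontologies/bibframe/> .\n"

def describe(title: str, base: str = "http://example.org/") -> str:
    """Return a Turtle snippet linking a Work to an Instance to an Item."""
    return PREFIXES + f"""
<{base}work/1> a bf:Work ;
    bf:title [ a bf:Title ; bf:mainTitle "{title}" ] ;
    bf:hasInstance <{base}instance/1> .

<{base}instance/1> a bf:Instance ;
    bf:hasItem <{base}item/1> .

<{base}item/1> a bf:Item .
"""

print(describe("Moby-Dick"))
```

A real cataloging application would add many more properties (agents, subjects, identifiers) and would typically use an RDF library rather than string templates, but the three-level graph shape is the core of the model.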
See also Europeana Functional Requirements for Bibliographic Records (FRBR) Functional Requirements for Authority Data (FRAD) Functional Requirements for Subject Authority Data (FRSAD) International Standard Bibliographic Description (ISBD) IFLA Library Reference Model (LRM) Linked data Metadata Authority Description Schema (MADS) Metadata Object Description Schema (MODS) Open Library Resource Description and Access Schema.org Notes References External links Official Website Current BIBFRAME vocabularies Bibliography file formats Library automation Library cataloging and classification Library of Congress Metadata publishing Metadata standards Semantic Web
BIBFRAME
[ "Engineering" ]
907
[ "Library automation", "Automation" ]
42,890,785
https://en.wikipedia.org/wiki/OrionVM
OrionVM Wholesale Pty Limited (trading as OrionVM) is an Australian infrastructure as a service provider and white-label cloud platform. Resellers present customers with a rebranded interface for deploying virtual machine instances, and are only billed for what their customers use. Cloud Harmony benchmarked the OrionVM Cloud Platform's InfiniBand-backed network storage as the world's fastest in 2011. The company was founded and is headquartered in Sydney, Australia, with offices in San Francisco, California. History OrionVM was founded in a dorm by Sheng Yeo, Alex Sharp and Joseph Glanville in 2010. The company's cloud platform was developed while the founders were still students at the University of Technology, Sydney and University of Sydney. After fifteen months of development, their cloud platform entered a Public Beta programme, with a full launch on 1 April 2011. In 2011, the company received angel investments from Australian entrepreneur and PIPE Networks co-founder Stephen Baxter and American Gordon Bell of DEC and Microsoft Research. For his work at OrionVM, CEO Sheng Yeo was nominated for the 2012 Australian Entrepreneur of the Year and the 2013 Ernst & Young Entrepreneur of the Year. In 2014, OrionVM received a State Merit award and a National Finalist nomination in the 2014 iAwards, with CTO Alex Sharp winning the Hills YIA Cloud award. The company was nominated for a Stevie Award for New Product or Service of the Year in Cloud Infrastructure Software, and an Australian Startup Awards nomination. In 2016, Yeo and Sharp were named in the Forbes 30 Under 30 Asia list. Products OrionVM sells a wholesale cloud infrastructure platform for public, private and hybrid cloud deployments. Vendors can white-label the platform for resale, or for internal use. Prominent resellers include Australian telephone company AAPT, BizCloud, and IT broker StrataCore.
Technology OrionVM uses the Xen hypervisor to virtualise multiple machines (referred to as "instances") on the same hardware. Linux instances use paravirtualisation for reduced overhead by default, with Windows Server being deployed using hardware-assisted virtualisation (HVM). Traditional virtual private server and infrastructure as a service providers consolidate storage into a storage area network, which is limited by Ethernet network speeds and best-effort reliability. OrionVM's platform took design cues from supercomputers by placing hypervisor storage and compute on the same physical servers. These are backed by a decentralised InfiniBand fabric. This improves network reliability and performance, and allows for rapid rollover between physical hosts for high availability. Features Rebranded panel To end users, the base of the platform consists of a web panel, where customers are able to deploy virtual machines. For resellers, the logos and theme can be modified to suit their own branding. Instances From the panel, users can deploy preconfigured instances with their chosen operating system and required memory. Additional storage disks and IP addresses can be created separately, then assigned to new or existing instances. After shutting down, further resources can be allocated or scaled down. Access Instances can be accessed out-of-band via a web-based serial console or VNC session. Access is also available via ovm_ctl, an open source command line interface available from GitHub and the pip package manager. Linux machines come preconfigured with SSH, and Windows with RDP for remote access. Templates Instances can be provisioned from a series of predefined templates, which can be customised if required. They include CentOS, Debian Linux, FreeBSD, Slackware Linux, Ubuntu Server (long-term support releases), and Microsoft Windows Server. Vyatta templates are officially supported for software-defined logical networking between instances.
API A public API exists for controlling instances. Open source Python bindings are available on GitHub. See also Cloud computing InfiniBand References External links Cloud infrastructure Cloud computing providers Companies based in Sydney Australian companies established in 2010 Internet hosting Web services Internet technology companies of Australia Cloud platforms Australian brands
OrionVM
[ "Technology" ]
837
[ "Cloud infrastructure", "Cloud platforms", "Computing platforms", "IT infrastructure" ]