https://en.wikipedia.org/wiki/Collaborative%20decision-making%20software
Collaborative decision-making (CDM) software is a software application or module that helps to coordinate and disseminate data and reach consensus among work groups. CDM software coordinates the functions and features required to arrive at timely collective decisions, enabling all relevant stakeholders to participate in the process. The selection of communication tools is very important for high-end collaborative efforts, and online collaboration tools differ considerably from one another; some use older forms of Internet-based communication. Managing and working in virtual teams is no simple task, but it has been done for decades. The most important activity for any virtual team is decision making: virtual teams must discuss, analyze, and solve problems collectively through continuous brainstorming sessions. An emerging enhancement, the integration of social networking with business intelligence (BI), has markedly improved decision making by directly linking the information in BI systems with input gathered collectively through social software. Many organizations now depend on BI tools so that their employees can make better decisions based on the processed information those tools provide. Applying social software to the BI decision-making process provides a significant opportunity to tie information directly to the decisions made throughout the company.

History

Technology scientists and researchers have worked on and explored automated decision support systems (DSS) for around 40 years. The research began with model-driven DSS in the late 1960s and advanced with financial planning systems, spreadsheet-based DSS, and group decision support systems (GDSS) in the early and mid-1980s. Data warehouses, management information systems, online analytical processing (OLAP), and business intelligence emerged in the late 1980s and mid-1990s, around the same time that knowledge-driven DSS and web-based DSS were evolving significantly. The field of automated decision support continues to take advantage of new technology and to create new applications. In the 1960s, scientists deliberately began examining the use of automated quantitative models to assist with basic decision making and planning. Automated decision support systems became practical in real time with the advent of minicomputers, time-sharing operating systems, and distributed computing. The history of the implementation of such systems begins in the mid-1960s. In a technology field as diverse as DSS, chronicling the history is neither neat nor linear: different people view the field of decision support systems from different vantage points and report different accounts of what happened and what was important. As technology evolved, new automated decision support applications were developed and studied, and researchers used multiple frameworks to build and understand these applications. Today the history of DSS can be organized into five broad categories: communications-driven, data-driven, document-driven, knowledge-driven, and model-driven decision support systems. Model-driven spatial decision support systems (SDSS) were developed in the late 1980s, and by 1995 the SDSS concept had become recognized in the literature. Data-driven spatial DSS are also quite common.
In general, a data-driven DSS emphasizes access to and manipulation of a time series of internal company data, and sometimes external and current data as well. Executive information systems are examples of data-driven DSS; the earliest such systems were called data-oriented DSS, analysis information systems, and retrieval systems. Communications-driven DSS use network and communications technologies to facilitate decision-relevant collaboration and communication. In these systems, communications technologies are the dominant architectural component; tools used include groupware, video conferencing, and computer-based bulletin boards. In 1989, Lotus introduced a groupware application called Notes and broadened the focus of GDSS to include enhancing communication, collaboration, and coordination among groups of people. In general, groupware, bulletin boards, and audio and videoconferencing are the primary technologies for communications-driven decision support. In recent years, voice and video delivered over the Internet protocol have greatly expanded the possibilities for synchronous communications-driven DSS. A document-driven DSS uses computer storage and processing technologies to provide document retrieval and analysis. Large document databases may include scanned documents, hypertext documents, images, sounds, and video. Content and document management expanded in the 1970s and 1980s as an important, widely used automated means of presenting and processing pieces of text. Examples of documents that might be retrieved by a document-driven DSS are policies and procedures, product specifications, catalogs, and corporate historical documents, including minutes of meetings and correspondence. A search engine is an essential decision-aiding tool associated with document-driven DSS. Knowledge-driven DSS can suggest or recommend actions to managers. These DSS are person-computer systems with specialized problem-solving expertise: the "expertise" consists of knowledge about a particular domain, understanding of problems within that domain, and "skill" at solving some of those problems. These systems have been called suggestion DSS and knowledge-based DSS. Web-based DSS began in roughly 1995, when the widespread World Wide Web and global Internet provided a technology platform for further extending the capabilities and deployment of automated decision support. The release of the HTML 2.0 specification, with form tags and tables, was a turning point in the development of web-based DSS. In 1995, a number of papers on using the Web and Internet for decision support were presented at the third International Conference of the International Society for Decision Support Systems (ISDSS). In addition to web-based, model-driven DSS, researchers were reporting web access to data warehouses. DSS Research Resources was started as a web-based collection of bookmarks. By 1995, the World Wide Web was recognized by a number of software developers and academics as a serious platform for implementing all kinds of decision support systems. In 1996-97, corporate intranets were developed to support information exchange and knowledge management. The primary decision-support tools included ad hoc query and reporting tools, optimization and simulation models, online analytical processing (OLAP), data mining, and data visualization. Enterprise-wide DSS built on database technologies were especially popular among large organizations.
In 1999, vendors introduced new web-based analytical applications, and many DBMS vendors shifted their focus to web-based analytical applications and business intelligence solutions. In 2000, application service providers (ASPs) began hosting the application software and technical infrastructure for decision support capabilities. The year 2000 also saw the rise of portals: vendors introduced more sophisticated "enterprise knowledge portals" that combined information portals, knowledge management, business intelligence, and communications-driven DSS in an integrated web environment. Decision support applications and research now concentrate on data-oriented systems, management expert systems, multidimensional data analysis, query and reporting tools, online analytical processing (OLAP), business intelligence, group DSS, conferencing and groupware, document management, spatial DSS, and executive information systems as the technologies emerge, converge, and diverge. The study of decision support systems is an applied discipline that draws on knowledge, and particularly theory, from other disciplines. Consequently, many DSS researchers investigate questions that were of concern to the people building and using specific DSS, and much of the broad DSS knowledge base therefore provides generalizations and directions for building more effective DSS.

CDM and Business Intelligence

Web 2.0 collaboration tools have met expectations for mass collaboration by crossing the limits of Web 1.0 collaboration tools. These tools provide a user-controlled environment with social software in an inexpensive and flexible way, and collaboration 2.0 technologies are being adopted quickly in the corporate world. Social and collaborative business intelligence (BI) became widely recognized as a subcategory within the BI space in 2009. Social and collaborative BI, a type of CDM software, harnesses the functions and philosophies of social networking and social Web 2.0 technologies, applying them to reporting and analytics at the enterprise level, to facilitate better and faster fact-based decision-making. Like Web 2.0 technologies, the platform is designed around the premise that anyone should be able to share content and contribute to discussion, anywhere and anytime. Since 2010 there has been a tendency to incorporate features from social networks into business intelligence solutions, and all kinds of business applications are expected to follow this fundamental change in the coming years. International Data Corporation (IDC) predicted that 2011 would be the year the trend of embedding social media-style features into BI solutions made its mark, and that virtually all types of business applications would undergo a fundamental transformation. IDC also believed the emerging CDM software market would grow quickly, forecasting revenues of nearly $2 billion by 2014, with a compound annual growth rate of 38.2 percent between 2009 and 2014. CDM software, in the context of BI, provides the ability to share and institutionalize information, analysis, and insight that would otherwise be lost. Business intelligence has been used broadly to manage and refine vast stores of information, and many organizations have applied BI to refine their own data for better understanding and decision making.
BI also has applications in statistical analysis, predictive modelling, and optimization, and the reports generated by these products play a major role in decision making. Decision making is an important task, as the consequences of a decision affect the growth and performance of the organization. Collaborative decision making (CDM) joins social software with business intelligence; this combination can dramatically improve the quality of decision making by directly linking the information contained in BI systems with collaborative input gathered through social software. User organizations could assemble such a system from existing social software, BI platforms, and basic tagging functionality. CDM is an emerging element of many application types, including BI, human resources (HR), talent management, and application suites, but it is also a behaviour brought about by the use of Web 2.0 applications. At the forefront of this trend is the integration of BI with shared, cloud-based applications. The virtual world Second Life is also emerging as a platform for collaborative decision making. Its key advantages are "breaking down space" and the ability to blend synchronous and asynchronous activities: for meetings and events, it offers the benefit of having all the relevant information and people on demand, removing the constraints of schedule and geography. Service-oriented architecture (SOA) has played an essential part in making this a reality. BI pervades an entire organization and, if used effectively, can positively influence decisions that affect every functional area. Separately, collaborative decision making (CDM) is also the name of a joint government/industry initiative aimed at improving air traffic flow management through increased information exchange among aviation community stakeholders. That CDM comprises representatives from government, general aviation, airlines, private industry, and academia who work together to create technological and procedural solutions to the air traffic flow management (ATFM) challenges faced by the national airspace system (NAS). New techniques are being used to maximize understanding and improve collaborative decision making in areas such as design reviews, construction planning, and integrated operations. Today's BI tools do a good job of extracting the right information for the right people, but a lack of accountability in the decision-making process leads organizations into poor choices. Although a great deal of money is invested in business intelligence software and data warehouse technology, the output still yields bad business choices: there is a gap between the level of information in business intelligence and the quality and transparency of decision making. The problem became so prevalent that the need emerged for collaborative decision-making (CDM) software, a new approach to making complex business decisions that closely links information and reports with input gathered from social media collaboration tools. CDM platforms give users easy access to relevant BI data sources as well as the ability to tag and search those sources for future reference and accountability. The decision itself is linked to the BI software inputs, the collaboration tools, and the methods and practices used to make that decision.
The need to make complex and efficient decisions with the power of information systems drove the use of business intelligence in collaborative decision making. The quality of the decisions depends on the effective use of BI and information integration in the business, which includes capturing BI value, practising BI applications effectively, and having knowledgeable business staff with expertise in BI and IT.

Benefits and potential

The concept of social and collaborative BI has been hailed by many as the answer to the persistent problem that, despite increasing investment in BI, many organizations fail to use reporting and analytics effectively and continue to make poor business decisions, resulting in low ROI. Gartner predicts that CDM platforms will stimulate a new approach to complex decision making by linking the information and reports gleaned from BI software with the latest social media collaboration tools. Gartner's prognostic report, The Rise of Collaborative Decision Making, predicts that this technology will reduce the cost of, and lag in, the decision-making process, leading to improved productivity, operational efficiency and, ultimately, better and more timely decisions. Research by McKinsey Global and the Aberdeen Group has indicated that organizations with collaborative technologies respond to business threats and complete key projects faster, experience decreased time to market for new products, and report improved employee satisfaction.

Components

Three major functions combine to enable effective enterprise collaboration and networking based on reporting and analytics, and they form the basis of a CDM platform. These are the ability to:

Discuss and overlay knowledge on business data
Share knowledge and content
Collectively decide the best course of action

Discussing and overlaying knowledge on business data

Most decision-making and discussion surrounding business processes occurs outside organizational BI platforms, opening a gap between human insight and the business data itself. Business decisions should be made alongside business data to ensure steadfast, fact-based decision-making. An open-access discussion forum integrated into the BI solution allows users to discuss the results of data analysis, connecting the right people with the right data. Users can overlay human knowledge and insight and provide context to the data in reports. A social layer within a BI solution improves the efficiency of business interaction around reporting and analytics, compared to traditional avenues of communication such as faxes, phone calls, and face-to-face meetings, by:

Being recordable: conversations are automatically recorded, creating a searchable history of all interaction and eliminating the need to revisit points previously made
Eliminating logistical hurdles: the need for complex and costly travel arrangements is significantly reduced, and geographically dispersed stakeholders can participate in the exchange of information faster
Enabling all relevant stakeholders to participate: all relevant stakeholders can contribute to discussion at their convenience

Key features of a CDM forum

Collaborative decision-making (CDM) software is defined by social media features which, combined with BI applications, allow wider distribution and discussion of information. These key features include annotations, discussions, tagging, embedding, and decision support.
Annotations help others accept and interpret the data, making it more meaningful. For instance, when users create or analyze reports within the BI environment, they can add commentary and annotations to give the data context. Business leaders can then be confident that they fully understand the information on which decisions are based. Open-access discussions allow contributors to post their ideas as well as to read, consider, and build on the proposals of others. This feature can be a valuable tool for soliciting the input of other stakeholders, because integrating CDM tools within the BI environment makes it possible to hold discussions in full view of the relevant data. Tagging enables users to highlight related information flexibly, making it easy for other users to examine and retrieve useful data. The ability to embed information contained in a BI solution into other applications is vital for making sure that accurate information reaches decision-makers in a timely manner. When information is embedded, it can be seen and commented on by several users, so ideas and suggestions can be shared and discussed in real time. Finally, BI solutions can support decision-making that helps groups attain explicit, measurable goals and objectives, such as an improved product overview or a more profitable supply chain.

Sharing knowledge and content

The digital era is often described as the Information Age, but the value of information resides in its ability to be shared. A CDM module allows information relating to reporting and analytics to be shared in three ways:

Cataloguing: a social layer within a BI solution allows users to create a searchable history by tagging and cataloguing past discussions and reports within shared folders inside the BI portal. Tagging allows users to file report, annotation, and discussion content under multiple categories for quick and easy retrieval (a toy sketch of tag-based cataloguing appears at the end of this section).
Distributing: the ability to export entire files/reports from the BI portal keeps all relevant decision-makers properly informed. Likewise, sharing direct links to external information in a threaded discussion within the CDM platform adds necessary detail, context, and perspective to discussion.
Embedding: a CDM layer within a BI tool enables users to embed reports and vital contextual content across platforms, wherever they are needed for decision-making. A CDM module does this in two ways:

Within the BI tool's social layer or enterprise portals (intranet systems), via a web services application programming interface (API)
Outside the enterprise, on any platform, via YouTube-style JavaScript export, enabling users to embed live interactive reports or other information simply by copying the JavaScript fragment into any HTML page

Collectively deciding the best course of action

Collaborative decision making (CDM) systems are defined as cooperative computer-based systems that assist a set of decision makers, working together as a team, in solving ill-structured problems. Their main objective is to increase the effectiveness of decision groups through the cooperative sharing of information between group members and the computer.
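To make the cataloguing and tagging idea from the list above concrete, here is a minimal sketch of tag-based report indexing and retrieval in Python. It is purely illustrative: the class and method names are invented for the example and do not reflect any CDM product's API.

```python
from collections import defaultdict

class ReportCatalog:
    """Toy catalogue that files items under multiple tags for later retrieval."""

    def __init__(self):
        self._by_tag = defaultdict(set)   # tag -> set of item ids
        self._items = {}                  # item id -> content

    def add(self, item_id: str, content: str, tags: list) -> None:
        self._items[item_id] = content
        for tag in tags:
            self._by_tag[tag.lower()].add(item_id)

    def search(self, *tags: str) -> list:
        """Return ids of items carrying all of the given tags."""
        sets = [self._by_tag[t.lower()] for t in tags]
        return sorted(set.intersection(*sets)) if sets else []

catalog = ReportCatalog()
catalog.add("q3-sales", "Q3 sales report", ["sales", "americas", "2011"])
catalog.add("q3-forecast", "Q3 forecast discussion", ["sales", "forecast"])
print(catalog.search("sales", "americas"))   # ['q3-sales']
```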
CDM combines social software with business intelligence; this amalgamation can radically improve the quality of decision-making by directly connecting the information held in BI systems with collaborative input gathered through social software. This has also been called collaborative BI, delivered as a collaborative decision-making (CDM) module. It applies the purposes and philosophies of social networking and Web 2.0 technologies to reporting and analytics. Implemented properly, collaborative BI can form important connections between people, data, process, and technology, closing the gap between insight and action by supporting people's natural decision-making processes. For an organization to achieve truly collaborative BI, it must also adopt a collaborative mentality and maintain a culture of organization-wide data sharing and data access. This breaks down departmental silos, enabling faster, better, and more effective decision-making. A collaborative culture is also a strict precondition for success: if an organization has a culture where people are rewarded for hoarding information and being experts without sharing, then that organization is not ready. Technology will not make an organization collaborative if it does not already support the practice of teams from different business units working together on shared projects.

Technology factors that underpin enterprise CDM

A BI CDM module is underpinned by three factors.

1 Ease of use: CDM software follows the Web 2.0 self-service mindset. The collaborative components within the BI solution cater for a diversity of user ability and skill levels to ensure knowledge does not remain departmentalized.

2 Fully integrated: users must be able to discuss their analysis alongside their BI content. Picture this scenario: you are using your BI tool to search for data on last month's sales results from the Americas. You find a startling anomaly: sales have skyrocketed compared to previous months. Why? What has been done differently? How can you replicate the results? If the CDM platform is within the BI tool, you can immediately start the investigation, inviting others into the conversation in full view of the data. There is no need to set up meetings and discussions in isolation from your data set. The collaborative process remains clearly documented in a single open-access space, and discussion stays on topic, because the underlying data is right there. To enable successful CDM, both your collaborative platform and your information should be in one place.

3 Web-based: being web-based, the collaborative platform allows all relevant stakeholders to follow and contribute to discussion as it unfolds, regardless of location, time difference, or the device used to access it.

Notable CDM modules in the Business Intelligence space

Social BI and CDM software is still in its infancy according to Gartner, and remains underutilized.
However, a handful of vendors in the BI marketplace offer CDM modules, including:

IBM Cognos (optional add-on)

While the offerings listed above are larger BI systems with upgrades for CDM features, some dedicated web-based, software-as-a-service CDM offerings have emerged, including:

1000minds
Altova MetaTeam
D-Sight
Loomio
https://en.wikipedia.org/wiki/TYC%209486-927-1
TYC 9486-927-1 (also known as 2MASS J21252752-8138278) is the primary of a possible trinary star system located at a distance of 34.5 parsecs from Earth in the southern constellation of Octans. It is a BY Draconis variable, with large starspots causing it to change brightness as it rotates every 13 hours. TYC 9486-927-1 shows rapid rotation and coronal and chromospheric activity suggestive of a young age. Observations and multi-epoch radial velocity data suggest that TYC 9486-927-1 is a single, rapidly rotating star rather than a spectroscopic or tight visual binary; however, it is still possible that it is an equal-mass binary with a face-on orbit and close separation. The candidate secondary stellar companion is 2MASS J21121598–8128452, a red dwarf star of spectral class M5.5 whose projected separation from the primary would be 62,700 AU. The candidate tertiary companion is 2MASS J21192028–8145446, of spectral class M6 or M7 and at a projected separation of 31,000 AU from the primary.

Planetary system

The planet 2MASS J21265040-8140293 orbits TYC 9486-927-1 at a projected separation of about 6,900 AU (roughly one trillion kilometres). With a mass of 11.6 to 15 Jupiter masses, it is considered to be either a brown dwarf or a giant planet.
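For orientation, a projected separation follows from the angular separation on the sky and the distance via the standard small-angle relation (numbers rounded here for illustration):

$$ s\,[\mathrm{AU}] \;\approx\; \theta\,[\mathrm{arcsec}] \times d\,[\mathrm{pc}], \qquad \theta \;\approx\; \frac{6900\ \mathrm{AU}}{34.5\ \mathrm{pc}} \;\approx\; 200'' $$

so at the system's distance of 34.5 pc, the planet's roughly 6,900 AU projected separation corresponds to an angular separation of about 200 arcseconds.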
https://en.wikipedia.org/wiki/Hydrogenothermaceae
The Hydrogenothermaceae are a family of bacteria that live in harsh environmental settings. They have been found in hot springs, sulfur pools, and thermal ocean vents. They are true bacteria, as opposed to the other inhabitants of extreme environments, the Archaea. An example of extremophiles in this family are organisms of the genus Sulfurihydrogenibium, which are capable of surviving in extremely hot environments such as Hveragerði, Iceland.

Obtaining energy

The family Hydrogenothermaceae consists of aerobic or microaerophilic bacteria, which generally obtain energy by the oxidation of hydrogen or reduced sulfur compounds with molecular oxygen.

Phylogeny

The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and the National Center for Biotechnology Information (NCBI).

See also

List of bacterial orders
List of bacteria genera

References

Hedlund, Brian P., et al. "Isolation of Diverse Members of the Aquificales from Geothermal Springs in Tengchong, China." Frontiers in Microbiology, vol. 6, 2015.
https://en.wikipedia.org/wiki/Privilege%20escalation
Privilege escalation is the act of exploiting a bug, a design flaw, or a configuration oversight in an operating system or software application to gain elevated access to resources that are normally protected from an application or user. The result is that an application or user with more privileges than intended by the application developer or system administrator can perform unauthorized actions.

Background

Most computer systems are designed for use with multiple user accounts, each of which has abilities known as privileges. Common privileges include viewing and editing files or modifying system files. Privilege escalation means users receive privileges they are not entitled to. These privileges can be used to delete files, view private information, or install unwanted programs such as viruses. It usually occurs when a system has a bug that allows security to be bypassed or, alternatively, has flawed design assumptions about how it will be used. Privilege escalation occurs in two forms:

Vertical privilege escalation, also known as privilege elevation, where a lower-privilege user or application accesses functions or content reserved for higher-privilege users or applications (e.g. Internet banking users can access site administrative functions, or the password for a smartphone can be bypassed)
Horizontal privilege escalation, where a normal user accesses functions or content reserved for other normal users (e.g. Internet banking user A accesses the Internet bank account of user B)

Vertical

This type of privilege escalation occurs when the user or process is able to obtain a higher level of access than an administrator or system developer intended, possibly by performing kernel-level operations.

Examples

In some cases, a high-privilege application assumes that it would only be provided with input matching its interface specification, and thus does not validate this input. An attacker may then be able to exploit this assumption in order to run unauthorized code with the application's privileges:

Some Windows services are configured to run under the Local System user account. A vulnerability such as a buffer overflow may be used to execute arbitrary code with privilege elevated to Local System. Alternatively, a system service that is impersonating a lesser user can elevate that user's privileges if errors are not handled correctly while the user is being impersonated (e.g. if the user has introduced a malicious error handler).
Under some legacy versions of the Microsoft Windows operating system, the All Users screensaver runs under the Local System account; any account that can replace the current screensaver binary in the file system or Registry can therefore elevate privileges.
A Windows driver, for example kprocesshacker.sys, can be used to run programs such as cmd.exe as internal accounts, also providing access to LocalSystem.
In certain versions of the Linux kernel it was possible to write a program that would set its current directory to /etc/cron.d, request that a core dump be performed if it crashed, and then have itself killed by another process. The core dump file would be placed in the program's current directory, that is, /etc/cron.d, and cron would treat it as a text file instructing it to run programs on schedule. Because the contents of the file would be under the attacker's control, the attacker would be able to execute any program with root privileges. (A conceptual sketch follows.)
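A conceptual Python sketch of the historical core-dump trick described in the last bullet. This is illustrative only: the underlying kernel behaviour was patched long ago, and modern kernels do not dump core into directories the crashing process cannot legitimately write to.

```python
import os
import resource
import signal

# Run from an unprivileged process on a vulnerable (historical) kernel.
os.chdir("/etc/cron.d")                   # cwd is root-owned but scanned by cron
resource.setrlimit(resource.RLIMIT_CORE,  # allow core dumps of any size
                   (resource.RLIM_INFINITY, resource.RLIM_INFINITY))
# Crash the process; the old kernel wrote the core file into the cwd,
# and cron would later parse that file as if it were a crontab.
os.kill(os.getpid(), signal.SIGSEGV)
```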
Cross-zone scripting is a type of privilege escalation attack in which a website subverts the security model of web browsers, allowing it to run malicious code on client computers. There are also situations where an application can use other high-privilege services and has incorrect assumptions about how a client could manipulate its use of these services. An application that can execute command-line or shell commands could have a shell injection vulnerability if it uses unvalidated input as part of an executed command. An attacker would then be able to run system commands using the application's privileges. Texas Instruments calculators (particularly the TI-85 and TI-82) were originally designed to use only interpreted programs written in dialects of TI-BASIC; however, after users discovered bugs that could be exploited to allow native Z-80 code to run on the calculator hardware, TI released programming data to support third-party development. (This did not carry on to the ARM-based TI-Nspire, for which jailbreaks using Ndless have been found but are still actively fought against by Texas Instruments.) Some versions of the iPhone allow an unauthorised user to access the phone while it is locked.

Jailbreaking

In computer security, jailbreaking is defined as the act of removing limitations that a vendor attempted to hard-code into its software or services. A common example is the use of toolsets to break out of a chroot or jail in UNIX-like operating systems or bypassing digital rights management (DRM). In the former case, it allows the user to see files outside of the filesystem that the administrator intends to make available to the application or user in question. In the context of DRM, this allows the user to run arbitrarily defined code on devices with DRM as well as break out of chroot-like restrictions. The term originated with the iPhone/iOS jailbreaking community and has also been used as a term for PlayStation Portable hacking; these devices have repeatedly been subject to jailbreaks, allowing the execution of arbitrary code, and have sometimes had those jailbreaks disabled by vendor updates. iOS systems including the iPhone, iPad, and iPod Touch have been subject to iOS jailbreaking efforts since they were released, continuing with each firmware update. iOS jailbreaking tools include the option to install package frontends such as Cydia and Installer.app, third-party alternatives to the App Store, as a way to find and install system tweaks and binaries. To prevent iOS jailbreaking, Apple has made the device boot ROM execute checks for SHSH blobs in order to disallow uploads of custom kernels and prevent software downgrades to earlier, jailbreakable firmware. In an "untethered" jailbreak, the iBoot environment is changed to execute a boot ROM exploit and allow submission of a patched low-level bootloader, or the kernel is hacked to submit the jailbroken kernel after the SHSH check. A similar method of jailbreaking exists for S60 Platform smartphones, where utilities such as HelloOX allow the execution of unsigned code and full access to system files, or edited firmware (similar to the M33 hacked firmware used for the PlayStation Portable) can be used to circumvent restrictions on unsigned code. Nokia has since issued updates to curb unauthorized jailbreaking, in a manner similar to Apple. In the case of gaming consoles, jailbreaking is often used to execute homebrew games.
In 2011, Sony, with assistance from the law firm Kilpatrick Stockton, sued 21-year-old George Hotz and associates of the group fail0verflow for jailbreaking the PlayStation 3 (see Sony Computer Entertainment America v. George Hotz and PlayStation Jailbreak). Jailbreaking can also occur in systems and software that use generative artificial intelligence models, such as ChatGPT. In jailbreaking attacks on artificial intelligence systems, users are able to manipulate the model to behave differently than it was programmed, making it possible to reveal information about how the model was instructed and to induce it to respond in an anomalous or harmful way.

Android

Android phones can be officially rooted either by going through a manufacturer-controlled process, by using an exploit to gain root, or by installing a rooting modification. Manufacturers allow rooting through a process they control, while some allow the phone to be rooted simply by pressing specific key combinations at boot time, or by other self-administered methods. Using a manufacturer's method almost always factory-resets the device, making rooting useless to people who want to view the data, and it also voids the warranty permanently, even if the device is derooted and reflashed. Software exploits commonly either target a root-level process that is accessible to the user, using an exploit specific to the phone's kernel, or use a known Android exploit that has been patched in newer versions, relying on the phone not being upgraded or being intentionally downgraded.

Mitigation strategies

Operating systems and users can use the following strategies to reduce the risk of privilege escalation:

Data Execution Prevention
Address space layout randomization (to make it harder for buffer overruns to execute privileged instructions at known addresses in memory)
Running applications with least privilege (for example, by running Internet Explorer with the Administrator SID disabled in the process token), in order to reduce the ability of buffer overrun exploits to abuse the privileges of an elevated user; a sketch of this principle follows the list
Requiring kernel-mode code to be digitally signed
Patching
Use of compilers that trap buffer overruns
Encryption of software and/or firmware components
Use of an operating system with mandatory access controls (MAC), such as SELinux
Kernel Data Relocation Mechanism (dynamically relocates privilege information in the running kernel, preventing privilege escalation attacks that use memory corruption)

Recent research has shown which measures can effectively provide protection against privilege escalation attacks. These include the proposal of the additional kernel observer (AKO), which specifically prevents attacks focused on OS vulnerabilities. Research shows that AKO is in fact effective against privilege escalation attacks.
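As an illustration of the least-privilege item above, a Unix daemon that must start as root (for example, to bind a low-numbered port) can permanently drop its privileges before doing any real work. A minimal sketch in Python, assuming an unprivileged account named "nobody" exists on the system:

```python
import os
import pwd

def drop_privileges(username: str = "nobody") -> None:
    """Permanently drop root privileges to the given user."""
    if os.getuid() != 0:
        return  # already unprivileged; nothing to drop
    user = pwd.getpwnam(username)
    os.setgroups([])          # drop supplementary groups first
    os.setgid(user.pw_gid)    # then the group ID
    os.setuid(user.pw_uid)    # finally the user ID (order matters)
    # From here on, a bug in the program can no longer be
    # leveraged into root-level access.

drop_privileges()
```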
Horizontal

Horizontal privilege escalation occurs when an application allows the attacker to gain access to resources which normally would have been protected from an application or user. The result is that the application performs actions as the same user level but in a different security context than intended by the application developer or system administrator; this is effectively a limited form of privilege escalation (specifically, the unauthorized assumption of the capability of impersonating other users). Unlike vertical privilege escalation, horizontal escalation requires no upgrading of account privileges; it often relies on bugs in the system.

Examples

This problem often occurs in web applications. Consider the following example: user A has access to their own bank account in an Internet banking application, and user B has access to their own bank account in the same application. The vulnerability occurs when user A is able to access user B's bank account by performing some sort of malicious activity. This activity may be possible due to common web application weaknesses or vulnerabilities. Potential web application vulnerabilities or situations that may lead to this condition include:

Predictable session IDs in the user's HTTP cookie
Session fixation
Cross-site scripting
Easily guessable passwords
Theft or hijacking of session cookies
Keystroke logging

See also

Cybersecurity
Defensive programming
Hacking of consumer electronics
Illegal number
Principle of least privilege
Privilege revocation (computing)
Privilege separation
Rooting (Android OS)
Row hammer
https://en.wikipedia.org/wiki/Babycurus%20toxin%201
Babycurus-toxin 1 (BcTx1) is a component of the venom of the East African scorpion Babycurus centrurimorphus. The toxin modifies both the activation and the inactivation properties of insect sodium channels.

Sources

The toxin is a component of the venom secreted by the East African scorpion Babycurus centrurimorphus of the scorpion family Buthidae, and it is more specific in envenoming insects, for example the cockroach, than humans.

Chemistry

The molecular weight of BcTx1 is 3,248 Da. It belongs to the long (4 C-C) scorpion toxin superfamily and can be classified as a β-toxin. Only the first 30 amino acids of the BcTx1 protein have been sequenced so far. The toxin shares similarities in protein structure with other toxins found in scorpions of North Africa and the Middle East, including the genera Buthus and Centruroides.

Target

The toxin affects sodium channel properties. It can bind to many different sodium channels, including mammalian channels, although its effects on non-insect sodium channels have not yet been tested.

Mode of action

BcTx1 acts on insect axonal sodium channels by lowering their activation threshold (by 5-10 mV), resulting in an increase of Na+ conductance. It also shifts the steady-state inactivation curve in the negative direction (by ~15 mV). In addition, BcTx1 slows down the activation of sodium channels. As a result of its action on the sodium channels, neurons depolarize and can no longer fire action potentials, leading to flaccid paralysis.
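The shifts described under Mode of action can be pictured with the textbook Boltzmann description of channel gating (a standard formalism, not taken from the BcTx1 literature itself). Steady-state inactivation is commonly fitted as

$$ h_\infty(V) \;=\; \frac{1}{1 + \exp\!\big((V - V_{1/2})/k\big)}, $$

so a ~15 mV negative shift corresponds to replacing $V_{1/2}$ by $V_{1/2} - 15\ \mathrm{mV}$: at any given membrane potential a larger fraction of channels sits in the inactivated state, while the lowered activation threshold simultaneously opens channels at potentials 5-10 mV more negative than normal.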
https://en.wikipedia.org/wiki/Methomyl
Methomyl is a carbamate insecticide introduced in 1966. It is highly toxic to humans, livestock, pets, and wildlife. The EU imposed a pesticide residue limit of 0.01 mg/kg for all fruit and vegetables. Methomyl is a common active ingredient in commercial fly bait, for which the label instructions in the United States warn that "It is a violation of Federal Law to use this product in a manner inconsistent with its labeling." "Off-label" uses and other uses not specifically targeted at problem insects are illegal, dangerous, and ill-advised.

Use

Methomyl is a broad-spectrum insecticide used to kill insect pests. It is registered for commercial/professional use under certain conditions on sites including field, vegetable, and orchard crops; turf (sod farms only); livestock quarters; commercial premises; and refuse containers. Products containing 1% methomyl are available to the general public for retail sale, but more potent formulations are classified as restricted-use pesticides and are not registered for homeowner or non-professional application. However, Heliothis virescens developed a resistance to methomyl within 5 years, and other species such as Helicoverpa assulta have also developed resistance after exposure.

Toxicity

In acute toxicity testing, methomyl is placed in EPA Toxicity Category I (the highest toxicity category out of four) via the oral route and in eye irritation studies. It is in lower toxicity categories for inhalation (Category II), acute dermal effects (Category III), and acute skin irritation (Category IV). Methomyl is not likely to be a carcinogen (EPA carcinogen Category E).

Ecotoxicity

Methomyl has low persistence in the soil environment, with a reported half-life of approximately 14 days. Because of its high solubility in water and low affinity for soil binding, methomyl may have potential for groundwater contamination. The estimated aqueous half-life for the insecticide is 6 days in surface water and over 25 weeks in groundwater.

Synthesis

The synthesis proceeds in three steps (the original reaction schemes are not reproduced here):

First, prepare the thioester.
Second, prepare the oxime from the thioester.
Third, prepare the product from methyl isocyanate and the finished oxime.

Trade names

Common names for methomyl include metomil and mesomile. Trade names include Agrinate, DuPont 1179, Flytek, Kipsin, Lannate, Lanox, Memilene, Methavin, Methomex, Nudrin, NuBait, Pillarmate and SD 14999.
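As a quick cheminformatics cross-check of the molecule discussed above, the molecular formula and weight can be computed from a SMILES string with RDKit. The SMILES below is my assumption of methomyl's structure; verify it against an authoritative database before relying on it.

```python
# pip install rdkit
from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

# Assumed SMILES for methomyl (oxime carbamate; structure per the article)
methomyl = Chem.MolFromSmiles("CNC(=O)ON=C(C)SC")

print(rdMolDescriptors.CalcMolFormula(methomyl))  # expected: C5H10N2O2S
print(round(Descriptors.MolWt(methomyl), 2))      # roughly 162.2 g/mol
```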
https://en.wikipedia.org/wiki/MACS%20J1149%20Lensed%20Star%201
MACS J1149 Lensed Star 1, also known as Icarus, is a blue supergiant star observed through a gravitational lens. It is the seventh most distant individual star to have been detected so far (after Earendel, Godzilla, Mothra, Quyllur, star-1 and star-2), at approximately 14 billion light-years from Earth (redshift z = 1.49; comoving distance of 14.4 billion light-years; lookback time of 9.34 billion years). Light from the star was emitted 4.4 billion years after the Big Bang. According to co-discoverer Patrick Kelly, the star is at least a hundred times more distant than the next-farthest non-supernova star observed, SDSS J1229+1122, and is the first magnified individual star seen.

History

In April and May 2018, the star was found in the course of studying the supernova SN Refsdal with the Hubble Space Telescope. Astronomer Patrick Kelly of the University of Minnesota is the lead author of the finding, published in the journal Nature Astronomy. While astronomers had been collecting images of this supernova from 2004 onward, they later discovered a point source that had appeared in their 2013 images and become much brighter by 2016. They determined that the point source was a solitary star being magnified more than 2,000 times by gravitational lensing. The light from LS1 was magnified not only by the huge total mass of the galaxy cluster MACS J1149+2223, located 5 billion light-years away, but also transiently by another compact object of about three solar masses within the galaxy cluster itself that passed through the line of sight, an effect known as gravitational microlensing. The galaxy cluster magnification is probably a factor of 600, while the microlensing event, which peaked in May 2016, brightened the image by an additional factor of ~4. There was a second peak near the brightness curve maximum, which may indicate that the star is binary. The microlensing body may have been a star or a black hole in the cluster. Continuous monitoring of Icarus may one day rule out the possibility that primordial black holes constitute a sizable fraction of dark matter. Normally, the only astronomical objects that can be detected at this range are whole galaxies, quasars, or supernovae, but the light from the star was magnified by the lensing effect. The team determined the light was from a stable star, not a supernova, as its temperature did not fluctuate; the temperature also allowed them to catalog the star as a blue supergiant. Because the visible light is the redshifted ultraviolet tail, the star does not appear blue to us but reddish or pink. The light observed from the star was emitted when the universe was about 30% of its current age of 13.8 billion years. Kelly suggested that similar microlensing discoveries could help identify the earliest stars in the universe.

Name

The formal name MACS J1149 is a reference to the MAssive Cluster Survey and the star's coordinates in the J2000 astronomical epoch. While Kelly had wanted to name the star Warhol, alluding to Andy Warhol's notion of having 15 minutes of fame, the team ended up naming the star Icarus after the Greek mythological figure.

Astrophysical implications

The discovery shows that astronomers can study the oldest stars in background galaxies of the early universe by combining the strong gravitational lensing effect from galaxy clusters with gravitational microlensing events caused by compact objects in these galaxy clusters. By using these events, astronomers can study and test models of dark matter in galaxy clusters and observe high-energy events (supernovae, variable stars) in young galaxies.
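As a rough consistency check on the figures reported above (assuming the two magnifications simply multiply):

$$ \mu_{\text{total}} \;\approx\; \mu_{\text{cluster}} \times \mu_{\text{micro}} \;\approx\; 600 \times 4 \;=\; 2400, $$

in line with the reported overall magnification of more than 2,000 during the May 2016 microlensing peak.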
See also

List of star extremes
List of the most distant astronomical objects
WHL0137-LS

External links

Hubble Discovers Supernova Split by Cosmic Lens – NASA (2017)
View of SN Refsdal – National Geographic Society (2015)
Images of SN Refsdal – HubbleSite (2015)
Hubble Uncovers the Farthest Star Ever Seen (2018)
https://en.wikipedia.org/wiki/Freshwater%20shoreline%20management
Freshwater shoreline management involves assessing and protecting lakes, rivers, and other freshwater shorelines from excessive development and other anthropogenic disturbances. It includes the long-term monitoring of watershed and shoreline revitalization projects. Freshwater shoreline management is frequently run by local conservation authorities through state, provincial, and federal lake partner programs. These programs have been used to track shoreline change over time, determine areas of concern, and educate shoreline property owners.

History

The concept of freshwater shoreline management evolved from ideas developed for Integrated Coastal Zone Management (ICZM), which emerged from the 1992 United Nations Conference on Environment and Development. In Canada, a coastal zone management plan was completed by 1996 using the ICZM framework. Freshwater management programs drew on the coastal zone management plan to create freshwater management plans addressing the growing environmental concerns that had been voiced in Canadian society since the 1960s. Anthropogenic effects on watersheds increased globally over the 1900s, with nutrient loading of phosphorus, nitrogen, and sulfur causing eutrophication and acidification of water bodies. These effects are primarily caused by human development of shorelines, agricultural runoff of chemicals and fertilizers, litter, and sewage/wastewater. To manage these impacts, local and regional organizations began conducting watershed monitoring programs to detect long-term environmental changes and establish their causes.

Usage

Anthropogenic pressures on lakes, such as freshwater usage, shoreline development, recreational use, agriculture, and retaining walls, can negatively impact aquatic and terrestrial organisms that rely on the shoreline of a lake for habitat. These pressures can also cause eutrophication and acidification of lakes, which harms organisms within the water itself and can also harm human health. They can have the added effect of decreasing property values and tourism in lake communities when pollutants make some beaches unsafe for swimming. Because it can be modified to match the needs of the watershed and applied to the current land use nearby, freshwater shoreline management is useful for community-based monitoring. The Lake Ontario Shoreline Management Plan is an example of how communities can use freshwater shoreline management. Programs such as this were developed by conservation authorities and citizens alongside regional and provincial governments to perform shoreline mapping and assessment, provide public consultation and education, and implement long-term monitoring of the watershed and shoreline. The Muskoka Watershed Council has also performed shoreline assessments, using the Love Your Lakes Program to survey the shoreline of Lake Bella in the Muskoka District. The survey showed that the natural shoreline decreased from 96% in 2002 to 80% in 2007, reducing overall water quality by allowing increased nutrient runoff and harming biodiversity by decreasing habitat for fish, insects, and birds. The program has increased local education on lake health and stewardship of shoreline revitalization.

Climate Change Impacts

Climate change has been found to affect freshwater shoreline communities.
Effects such as increased warming of water bodies, increased storm runoff, quicker yearly ice melt and limited winter ice, and increased wave height during storms (which raises the potential for erosion) have all been found to potentially affect lake shorelines. Shoreline management has been identified as a method to mitigate climate change impacts such as potential flooding and nutrient loading from more frequent, higher-intensity storms. This mitigation can occur as shorelines naturalize, which can increase filtration and decrease sediment and nutrient runoff.

Example: Love Your Lakes Program

The Love Your Lakes Program is an example of a shoreline assessment and revitalization program used in Canada. It was developed under the Canadian Ministry of Environment and Climate Change (MECC) Lake Partner Program as a joint effort between Watersheds Canada, the MECC, and the Canadian Wildlife Federation. The program allows lake owners and organizations to apply to have their shorelines assessed, and it discusses methods that individuals and the community can use to revitalize their shorelines. Naturalization, the use of native plant species along the shoreline to create a buffer, is often recommended because it limits erosion from wake action and can decrease nutrient runoff from lawn maintenance or farming activities. To date, almost 200 lakes have been assessed by the program. This has led to increased community awareness and shoreline naturalization, transforming up to 300 shoreline properties.
https://en.wikipedia.org/wiki/Indoor%E2%80%93outdoor%20thermometer
An indoor–outdoor thermometer is a thermometer that simultaneously provides a measurement of the indoor and outdoor temperatures. The outdoor part of the thermometer requires some kind of remote temperature-sensing device. Conventionally, this was done by extending the bulb of the thermometer to the remote site. Modern instruments are more likely to use some form of electronic transducer.

Glass thermometer

In an indoor–outdoor thermometer based on a conventional liquid-in-glass thermometer, the stem of the outdoor thermometer is connected to the bulb by a long, flexible or semi-rigid capillary. The temperature scale is marked on the stem as usual; however, the temperature that is actually measured is the temperature at the bulb. Ambient corrections are difficult to achieve with this system and are not usually made, so it is not as accurate as a conventional precision thermometer. Rather, it is typically used for low-cost applications such as private houses. The main issue with accuracy is that if the bulb and the stem are at different levels, there is a change in reading due to the change in pressure head. A further problem is that changes in the ambient temperature of the indoor part of the device can change the reading, as well as the temperature of the outdoor part. This effect can be minimised by making the bulb large and the capillary bore small. This ensures that changes in the outside temperature produce large changes in the column of liquid in the stem, which tend to swamp the smaller changes caused by changes in the indoor temperature. Common working liquids are toluene and alcohol. Both have large temperature coefficients of expansion and do not freeze or boil in the temperature range of interest.

Electronic types

The sensors can be any of the types used in electronic thermometers. Thermistors are common, and semiconductor junctions can also be used. Indoor–outdoor electronic thermometers are a frequent hobbyist project and are sometimes sold as kits. Many indoor–outdoor thermometers on sale are wireless devices requiring no physical connection to the sensor placed outside. In these cases the sensor needs to be battery-powered.

Applications

The primary purpose of the indoor–outdoor thermometer is to allow the outside temperature to be indicated inside a building, removing the need to go outside to take a temperature reading. They are also used in vehicles, and are particularly useful for municipal vehicles involved in snow and ice clearance. Building maintenance engineers can use an indoor–outdoor thermometer that has not been installed to get a quick reading of the air temperature at a location inside a building. This is done by swinging the bulb of the outdoor sensor in the air while it is still attached to the instrument. This gives a faster reading because the bulb comes up to temperature much more quickly than the indoor sensor built into the instrument.
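The design rule in the Glass thermometer section (large bulb, narrow capillary) can be made quantitative with a simplified first-order relation; this is a standard approximation that ignores the expansion of the glass itself. If the bulb has volume $V_b$, the working liquid has volumetric expansion coefficient $\beta$, and the capillary has cross-sectional area $A_c$, then a change $\Delta T$ in bulb temperature moves the liquid column by

$$ \Delta L \;\approx\; \frac{\beta\, V_b\, \Delta T}{A_c}, $$

so increasing $V_b$ or shrinking $A_c$ amplifies the response to the outdoor (bulb) temperature relative to the smaller signal from the liquid in the indoor stem and capillary.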
References

Bibliography

Curl, Robert S., Building Owner's and Manager's Guide: Optimizing Facility Performance, Fairmont Press, 1998.
Graf, Rudolf F.; Whalen, George J., "Build your own indoor-outdoor electronic thermometer", Popular Mechanics, vol. 133, no. 2, pp. 150–152, February 1970.
Hawkins, W. J., "Digital thermometer from a kit", Popular Science, vol. 204, no. 3, p. 90, March 1974.
Lamb, Robert, "How weather gadgets work", p. 2, How Stuff Works, retrieved and archived 3 January 2012.
McGee, Thomas Donald, Principles and Methods of Temperature Measurement, Wiley-IEEE, 1988.
Minsk, L. David, Snow and Ice Control Manual for Transportation Facilities, McGraw-Hill Professional, 1998.
https://en.wikipedia.org/wiki/Semi-automatic%20transmission
A semi-automatic transmission is a multiple-speed transmission where part of its operation is automated (typically the actuation of the clutch), but the driver's input is still required to launch the vehicle from a standstill and to manually change gears. Semi-automatic transmissions have been used almost exclusively in motorcycles and are generally based on conventional manual transmissions or sequential manual transmissions, but use an automatic clutch system. However, some semi-automatic transmissions have been based on standard hydraulic automatic transmissions with torque converters and planetary gearsets. Names for specific types of semi-automatic transmission include clutchless manual, auto-manual, auto-clutch manual, and paddle-shift transmission. These systems facilitate gear shifts for the driver by operating the clutch system automatically, usually via switches that trigger an actuator or servo, while still requiring the driver to shift gears manually. This contrasts with a preselector gearbox, in which the driver selects the next gear ratio and operates the pedal, but the gear change within the transmission is performed automatically. Semi-automatic transmissions were first used in automobiles, growing in popularity in the mid-1930s when they were offered by several American car manufacturers. Less common than traditional hydraulic automatic transmissions, semi-automatic transmissions have nonetheless been available on various car and motorcycle models and have remained in production throughout the 21st century. Semi-automatic transmissions with paddle-shift operation have been used in various racing cars and were first introduced to control the electro-hydraulic gear-shift mechanism of the Ferrari 640 Formula One car in 1989. These systems are currently used in a variety of top-tier racing classes, including Formula One, IndyCar, and touring car racing. Other applications include motorcycles, trucks, buses, and railway vehicles.

Design and operation

Semi-automatic transmissions make gear shifts easier by removing the need to depress a clutch pedal or lever at the same time as changing gears. Most cars with a semi-automatic transmission are not fitted with a standard clutch pedal, since the clutch is remotely controlled. Similarly, most motorcycles with a semi-automatic transmission are not fitted with a conventional clutch lever on the handlebar.

Clutchless manual transmissions

Most semi-automatic transmissions are based on a conventional manual transmission that has been partially automated; once the clutch is automated, the transmission becomes semi-automatic. However, these systems still require manual gear selection by the driver. This type of transmission is called a clutchless manual or an automated manual. Most semi-automatic transmissions in older passenger cars retain the normal H-pattern shifter of a manual transmission; similarly, semi-automatic transmissions on older motorcycles retain the conventional foot-shift lever, as on a motorcycle with a fully manual transmission. However, semi-automatic systems in newer motorcycles, racing cars, and other types of vehicles often use gear-selection methods such as shift paddles near the steering wheel or triggers near the handlebars. Several different forms of automation for clutch actuation have been used over the years, from hydraulic, pneumatic, and electromechanical clutches to vacuum-operated, electromagnetic, and even centrifugal clutches.
Fluid couplings (as formerly used in early automatic transmissions) have also been used by various manufacturers, usually alongside some form of mechanical friction clutch, to prevent the vehicle from stalling when coming to a standstill or at idle. A typical semi-automatic transmission design may work by using Hall effect sensors or microswitches to detect the direction of the requested shift when the gear stick is used. These sensors' output, combined with the output from a sensor connected to the gearbox which measures its current speed and gear, is fed into a transmission control unit, electronic control unit, engine control unit, microprocessor, or another type of electronic control system. This control system then determines the optimal timing and torque required for smooth clutch engagement. The electronic control unit powers an actuator, which engages and disengages the clutch smoothly. In some cases, the clutch is actuated by a servomotor coupled to a gear arrangement for a linear actuator, which, via a hydraulic cylinder filled with hydraulic fluid from the braking system, disengages the clutch. In other cases, the clutch actuator may be completely electric, where the main clutch actuator is powered by an electric motor or solenoid, or even pneumatic, where the main clutch actuator is a pneumatic actuator that disengages the clutch.
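The sense-decide-actuate flow just described can be sketched in a few lines of C. Everything below is hypothetical (invented stub names, thresholds and a toy slip model), intended only to illustrate the control loop, not any production transmission control unit:

/* Minimal sketch of the clutch-automation flow described above.
 * Everything here is hypothetical: real transmission control units
 * integrate many more sensors and safety interlocks. */
#include <stdio.h>

typedef enum { SHIFT_NONE, SHIFT_UP, SHIFT_DOWN } shift_request_t;

/* Stub "hardware" for the sketch; a real TCU would read Hall effect
 * sensors or microswitches and drive a hydraulic/electric actuator. */
static double engine_rpm = 2500.0, gearbox_rpm = 1900.0;
static shift_request_t lever = SHIFT_UP;

static shift_request_t read_gear_lever(void) { return lever; }

static void set_clutch(double open_fraction)   /* 1.0 = fully open */
{
    printf("clutch position: %.1f\n", open_fraction);
    /* Toy slip model: partial engagement pulls the gearbox input
     * speed toward the engine speed. */
    gearbox_rpm += (engine_rpm - gearbox_rpm) * (1.0 - open_fraction);
}

static void execute_shift(shift_request_t r)
{
    printf("shifting %s\n", r == SHIFT_UP ? "up" : "down");
}

int main(void)
{
    shift_request_t req = read_gear_lever();
    if (req != SHIFT_NONE) {
        set_clutch(1.0);       /* open the clutch via the actuator */
        execute_shift(req);    /* change gear while torque is cut  */
        /* Re-engage with controlled slip until speeds converge. */
        while (engine_rpm - gearbox_rpm > 50.0)
            set_clutch(0.3);
        set_clutch(0.0);       /* fully engaged again */
    }
    return 0;
}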
A clutchless manual system named the Autostick was a semi-automatic transmission introduced by Volkswagen for the 1968 model year. Marketed as the Volkswagen Automatic Stickshift, it connected a conventional three-speed manual transmission to a vacuum-operated automatic clutch system. The top of the gear stick was designed to depress and activate an electric switch when touched by the driver's hand. When pressed, the switch operated a 12-volt solenoid, which in turn operated the vacuum clutch actuator, thus disengaging the clutch and allowing shifting between gears. With the driver's hand removed from the gearshift, the clutch would re-engage automatically. The transmission was also equipped with a torque converter, allowing the car to idle in gear like an automatic, as well as stop and start from a standstill in any gear. Automated manual transmissions Starting in the late 1990s, automotive manufacturers introduced what is now called an automated manual transmission (AMT), which is mechanically similar to, and has its roots in, earlier clutchless manual transmission systems. An AMT functions in the same way as older semi-automatic and clutchless manual transmissions, but with two exceptions: it is able to both operate the clutch and shift automatically, and it does not use a torque converter. Shifting is done either automatically by a transmission control unit (TCU), or manually from either the shift knob or shift paddles mounted behind the steering wheel. AMTs combine the fuel efficiency of manual transmissions with the shifting ease of automatic transmissions. Their biggest disadvantage is poor shifting comfort, due to the mechanical clutch being disengaged by the TCU, which is easily noticeable as "jolting". Some transmission makers have tried solving this issue by using oversized synchronizer rings and not fully opening the clutch during shifting—which works in theory, but as of 2007, there have not been any series production cars with such functions. In passenger cars, modern AMTs generally have six speeds (though some have seven) and rather long gearing. In combination with a smart-shifting program, this can significantly reduce fuel consumption. In general, there are two types of AMTs: integrated AMTs and add-on AMTs. Integrated AMTs were designed from the outset as dedicated AMTs, whereas add-on AMTs are conversions of standard manual transmissions. An automated manual transmission may include a fully automatic mode in which the driver does not need to change gears at all. These transmissions can be described as a standard manual transmission with an automated clutch and automated gear shift control, allowing them to operate in the same manner as traditional automatic transmissions. The TCU automatically shifts gears if, for example, the engine is redlined. The AMT can be switched to a clutchless manual mode in which one can upshift or downshift using a console-mounted shift selector or paddle shifters. It has a lower cost than conventional automatic transmissions. The automated manual transmission (trade names include SMG-III) is not to be confused with the "manumatic" automatic transmission (marketed under trade names such as Tiptronic, Steptronic, Sportmatic, and Geartronic). While these systems seem superficially similar, a manumatic uses a torque converter like an automatic transmission, instead of the clutch used in the automated manual transmission. An automated manual can give the driver full control of the gear selection, whereas a manumatic will deny a gear change request that would result in the engine stalling (from too few RPM) or over-revving. The automatic mode of an automated manual transmission at low speeds or in frequent stop-start driving is less smooth than that of manumatics and other automatic transmissions. Sequential manual transmissions Several semi-automatic transmissions used by motorcycles and racing cars are mechanically based on sequential manual transmissions. Semi-automatic motorcycle transmissions generally omit the clutch lever but retain the conventional heel-and-toe foot shift lever. Semi-automatic motorcycle transmissions are based on conventional sequential manual transmissions and typically use a centrifugal clutch. At idle speed, the engine is disconnected from the gearbox input shaft, allowing both it and the bike to freewheel; unlike with torque converter automatics, there is no idle creep with a properly adjusted centrifugal clutch. As the engine speed rises, counterweights within the clutch assembly gradually pivot further outwards, until they start to make contact with the inside of the outer housing and transmit an increasing amount of engine power and torque. The effective "bite point" or "biting point" is found automatically by equilibrium, at the point where the power transmitted through the (still-slipping) clutch equals what the engine can provide. This allows relatively fast full-throttle takeoffs (with the clutch adjusted so the engine is at peak torque) without the engine slowing or being bogged down, as well as more relaxed starts and low-speed maneuvers at lower throttle and RPMs.
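The equilibrium described above lends itself to a back-of-the-envelope model: the outward force on each clutch shoe grows with the square of engine speed, so the torque the clutch can transmit rises steeply until it meets what the engine delivers. A minimal sketch in C, with all parameter values purely illustrative:

/* Back-of-the-envelope model of the centrifugal clutch equilibrium
 * described above. All parameter values are illustrative only. */
#include <stdio.h>

int main(void)
{
    const double pi = 3.141592653589793;
    const double m  = 0.15;      /* mass of one clutch shoe (kg)        */
    const double r  = 0.05;      /* radius of shoe centre of mass (m)   */
    const double R  = 0.06;      /* drum contact radius (m)             */
    const double mu = 0.30;      /* shoe/drum friction coefficient      */
    const double preload = 80.0; /* spring force holding shoes in (N)   */
    const int    shoes = 3;

    for (double rpm = 1000.0; rpm <= 6000.0; rpm += 1000.0) {
        double omega = rpm * 2.0 * pi / 60.0;     /* angular speed, rad/s  */
        double f_c   = m * omega * omega * r;     /* centrifugal force/shoe */
        double f_net = f_c > preload ? f_c - preload : 0.0;
        double capacity = shoes * mu * f_net * R; /* transmissible torque  */
        printf("%5.0f rpm -> torque capacity %6.1f N m\n", rpm, capacity);
    }
    /* The "bite point" is the speed at which this capacity first
     * matches the torque the engine is producing. */
    return 0;
}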
Usage in passenger cars 1900s–1920s In 1901, Amédée Bollée developed a method of shifting gears that did not require the use of a clutch and was activated by a ring mounted within the steering wheel. One car using this system was the 1912 Bollée Type F Torpedo. 1930s–1940s Prior to the arrival of the first mass-produced hydraulic automatic transmission (the General Motors Hydra-Matic) in 1940, several American manufacturers offered various devices to reduce the amount of clutch or shifting input required. These devices were intended to reduce the difficulty of operating the unsynchronised manual transmissions, or "crash gearboxes", that were commonly used, especially in stop-start driving. An early step towards automated transmissions was the 1933–1935 REO Self-Shifter, which automatically shifted between two forward gears in the "forward" mode (or between two shorter gear ratios in the "emergency low" mode). Standing starts required the driver to use the clutch pedal. The Self-Shifter first appeared in May 1933 and was offered as standard on the Royale and as an option on the Flying Cloud S-4. In 1937, the four-speed Oldsmobile Automatic Safety Transmission was introduced on the Oldsmobile Six and Oldsmobile Eight models. It used a planetary gearset, with a clutch pedal used for starting from a standstill and for switching between the "low" and "high" ranges. The Automatic Safety Transmission was replaced by the fully automatic Hydra-Matic for the 1940 model year. The 1938–1939 Buick Special was available with another Self-Shifter 4-speed semi-automatic transmission, which used a manual clutch for starting from a standstill and an automated clutch for gear changes. The 1941 Chrysler M4 Vacamatic transmission was a two-speed manual transmission with an integral underdrive unit, a traditional manual clutch, and a fluid coupling between the engine and the clutch. The two-speed transmission had "high" and "low" ranges, and the clutch was used when the driver wanted to switch between ranges. For normal driving, the driver would press the clutch, select the High range, and then release the clutch. Once the accelerator was pressed, the fluid coupling would engage and the car would begin moving forward, with the underdrive unit engaged to provide a lower gear ratio. At a certain road speed, the driver would lift off the accelerator and the underdrive unit would disengage. The Vacamatic was replaced by the similar M6 Presto-Matic transmission for the 1946 model year. Similar designs were used for the 1941–1950 Hudson Drive-Master and the ill-fated 1942 Lincoln Liquimatic. Both of these combined a 3-speed manual transmission with automated shifting between the 2nd and 3rd gears, instead of the Vacamatic's underdrive unit. The Packard Electro-Matic, introduced in the 1941 Packard Clipper and Packard 180, was an early clutchless manual transmission that used a traditional friction clutch with automatic vacuum operation, controlled by the position of the accelerator. 1950s–1960s The Automotive Products manumatic system, available on the 1953 Ford Anglia 100E, was a vacuum-powered automatic clutch system actuated by a switch that was triggered whenever the gear stick was moved. The system could control the throttle cable (to keep the engine at the required RPM for the gear change) and vary the rate of clutch engagement. The successive Newtondrive system, available on the 1957–1958 Ford Anglia, also had a provision for choke control. A similar product was the German Saxomat automatic clutch system, which was introduced in the mid-1950s and available on various European cars. The Citroën DS, introduced in 1955, used a hydraulic system with a hydraulically-operated speed controller and idle speed step-up device to select gears and operate the otherwise conventional clutch. This allowed clutchless shifting with a single column-mounted selector, while the driver simultaneously lifted off the accelerator to change gear. This system was nicknamed "Citro-Matic" in the U.S.
For the 1962 model year, American Motors introduced the E-Stick, which eliminated the clutch pedal in the Rambler American with standard three-speed manual transmissions. This automatic clutch used engine oil pressure as a hydraulic source and was available for less than $60. Compared to fully automatic transmissions of the time, the E-Stick offered the fuel economy of a stick-shift, with vacuum and electric switches controlling the clutch. The E-Stick three-speed transmission was offered on the larger Rambler Classic models, along with an overdrive unit. The system was only available with 6-cylinder engines, and the lack of a clutch proved unpopular, so it was discontinued after 1964. The 1967 Volkswagen WSK (Wandlerschaltkupplungsgetriebe; English: Torque converter shift/clutch gearbox), used in the Beetle, Type 3 and Karmann Ghia, was one of the first gearboxes of its kind, with an automatic mechanical clutch and a torque converter. It was also known as the Autostick. Shifting was done manually by the driver. The automatic mechanical clutch allowed the car to accelerate from a stop, whereas the torque converter enabled it to do so in any gear. Dampening engine vibrations and providing torque multiplication, it functioned as a sort of "reduction gearbox", so the actual mechanical gearbox only needed three forward gears (this is why conventional automatic transmissions with torque converters normally have fewer gears than manual transmissions). The WSK had no "first" gear; instead, the first gear was converted into reverse gear, and the second gear was labeled first (with the third and fourth gears respectively being labeled second and third). The Chevrolet Torque-Drive transmission, introduced on the 1968 Chevrolet Nova and Camaro, is one of a few examples where a semi-automatic transmission was based on a conventional hydraulic automatic transmission (rather than a standard manual transmission). The Torque-Drive was essentially a 2-speed Powerglide automatic transmission without the vacuum modulator, requiring the driver to manually shift gears between "Low" and "High". The quadrant indicator on Torque-Drive cars was "Park-R-N-Hi-1st". The driver would start the car in "1st," then move the lever to "Hi" when desired. The Torque-Drive was discontinued at the end of 1971 and replaced by a traditional hydraulic automatic transmission. Other examples of semi-automatic transmissions based on hydraulic automatics were the Ford 3-speed Semi-Automatic Transmission used in the 1970–1971 Ford Maverick, early versions of Honda's 1972–1988 Hondamatic 2-speed and 3-speed transmissions, and the Daihatsu Diamatic 2-speed transmission used in the 1985–1991 Daihatsu Charade. Other examples Usage in motorcycles An early example of a semi-automatic motorcycle transmission was the use of an automatic centrifugal clutch in the early 1960s by the Czechoslovakian manufacturer Jawa Moto. Their design was used without permission in the 1965 Honda Cub 50, which resulted in Jawa suing Honda for patent infringement and Honda agreeing to pay royalties for each motorcycle using the design. Other semi-automatic transmissions used in motorcycles include: Honda's Hondamatic two-speed transmission fitted with a torque converter (which shares its name with several fully-automatic transmissions), as used in its 1976 CB750A, 1977 CB400A Hawk, 1978 CM400A and 1982 CM450A. 
Those in various minibikes, including the Amstar Nostalgia 49, Honda CRF50F, Z series, and ST series, Kawasaki KLX-110, KLX-110R, and KSR110, KTM 65 SX, Suzuki DR-Z50, DR-Z70, and DR-Z125, SSR SR110TR, and Yamaha TT-R50E. Yamaha used an automatic clutch system called YCCS on motorcycles such as the 2006 Yamaha FJR1300AE sports-touring motorcycle. This system can be shifted either with the lever in the traditional position near the left foot or with a switch accessible to the left hand, where the clutch lever would be on a traditional motorcycle. The Can-Am Spyder Roadster's SE5 and SE6 5-speed and 6-speed transmissions. Those in several underbone motorcycles in the 1970s: the Suzuki FR50, Suzuki FR80, and Yamaha Townmate used 3-speed transmissions with a heel-and-toe gear shift. Some high-performance sport bikes use a trigger-shift system, with a handlebar-mounted trigger, paddle, switch, or button, and an automatically operated clutch. Some dirt bikes use this system, which is sometimes referred to as an auto-clutch transmission. These include the Honda CRF110F and Yamaha TT-R110E. The conventional motorcycle foot shifter is retained, but the manual hand-clutch lever is no longer required. Semi-automatic transmissions in dirt bikes may be referred to as "automatic" despite their lack of automatic shifting. Usage in motorsports Semi-automatic transmissions in racing cars are typically operated by shift paddles connected to a designated transmission control unit. The first Formula One car to use a semi-automatic transmission was the 1989 Ferrari 640. It used hydraulic actuators and electrical solenoids for clutch control and shifting, and was shifted via two paddles mounted behind the steering wheel. Another paddle on the steering wheel controlled the clutch, which was only needed when starting from a standstill. The car won its debut race at the Brazilian Grand Prix, but suffered from reliability problems for much of the season. Other teams began switching to similar semi-automatic transmissions; the 1991 Williams FW14 was the first to use a sequential drum-rotation mechanism (similar to those used in motorcycle transmissions), which allowed for a more compact design that required only one actuator to rotate the drum and change gears. A further development was made possible by the introduction of electronic throttle control soon after, which made it possible for the car to automatically rev-match during downshifts. By 1993, most teams were using semi-automatic transmissions. The last F1 car fitted with a conventional manual gearbox, the Forti FG01, raced in 1995. Following concerns about the potential for Formula One cars to shift gears automatically without any driver input, mandatory software was introduced in 1994 that ensured gear changes only occurred when instructed by the driver. Pre-programmed, computer-controlled, fully automatic upshifts and downshifts were re-introduced in 2001, permitted from that year's Spanish Grand Prix onwards, but were banned again in 2004. Buttons on the steering wheel to shift directly to a particular gear (instead of having to shift sequentially using the paddles) are permitted. The 2005 Minardi PS05, Renault R25, and Williams FW27 were the last Formula One cars to use a 6-speed gearbox before the switch to a mandatory 7-speed gearbox for the 2006 season. Since the 2014 season, Formula One cars have used mandatory 8-speed paddle-shift gearboxes.
The now-defunct CART Champ Car Series switched from a lever-shift sequential system to a 7-speed paddle-shift system for the 2007 season. This transmission was introduced with the new-for-2007 Panoz DP01 chassis. The rival IndyCar Series introduced their 6-speed semi-automatic paddle-shift system for the 2008 season, also replacing the previous lever-shifted sequential transmission, introduced with the Dallara IR-05 chassis for 2008. IndyCars currently use the Xtrac P1011 sequential transmission, which uses a semi-automatic paddle shift system supplied by Mega-Line called AGS (Assisted Gearshift System). AGS uses a pneumatic gearshift and clutch actuator controlled by an internal transmission control unit. Both the FIA Formula 2 and Formula 3 Championships currently use 6-speed sequential gearboxes with electro-hydraulic operation via shift paddles. Manual control of the multi-plate clutch systems via a lever behind the steering wheel is used to launch the cars. DTM currently uses a Hewland DTT-200 6-speed sequential transmission with steering-wheel-mounted shift paddles, which was introduced for the 2012 season with the new rule change. This new system replaced the older lever-shifted sequential transmission, which had been used for the previous 12 seasons (since 2000). Usage in other vehicles Other notable uses for semi-automatic transmissions include: During the 1940s to 1960s, many small diesel shunting locomotives used epicyclic semi-automatic transmissions. For example, the British Rail Class 03 and British Rail Class 04 used the Wilson-Drewry CA5 R7 transmission. The Sinclair S.S.S. Powerflow, used from the 1950s to the early 1960s in Huwood-Hudswell diesel mining locomotives, the British Rail Class D2/7 and the British Rail Class D2/12. The Powerflow design is of the layshaft type with constant-mesh gears and dog clutch engagement, allowing it to provide seamless power delivery during upshifts. This transmission was also used in some road vehicles. The Self-Changing Gears Pneumocyclic, an epicyclic transmission built in the United Kingdom from the 1960s to the 1980s. Using a similar design to the company's previous preselector gearboxes, the Pneumocyclic transmission was used in several buses, such as the Leyland Leopard, Panther, and Tiger. It was also fitted to several thousand British diesel railcars during this time. All-terrain vehicles, such as the Honda ATC185, Honda ATC200, Honda TRX90X and TRX250X (Honda SportClutch), Suzuki LT125D Quadrunner (also known as the Suzuki QuadRunner 125), Suzuki LT 230, Suzuki Eiger 400, Yamaha Big Bear 250, 350, and 400, Yamaha Grizzly 80, Yamaha Grizzly 700, Yamaha Raptor 80, Yamaha YFB250 Timberwolf, the Yamaha Moto-4 ATV range, and the Yamaha Tri-Moto range. The Honda Electric Shift Program is used in ATVs such as the 1998 Honda TRX450FE (also called the Foreman 450ES ESP) and first-generation Honda Rincon. Shifting is accomplished by pressing one of the two gear selector arrows on the left handlebar, which activates an electric shifting system. See also Dual-clutch transmission (DCT) Manumatic Saxomat Shift time References Automotive transmission technologies Automobile transmissions Motorcycle transmissions Mechanical power control
Semi-automatic transmission
Physics
4,899
1,362,652
https://en.wikipedia.org/wiki/Kleinian%20group
In mathematics, a Kleinian group is a discrete subgroup of the group of orientation-preserving isometries of hyperbolic 3-space H³. The latter, identifiable with PSL(2, C), is the quotient group of the 2 by 2 complex matrices of determinant 1 by their center, which consists of the identity matrix and its product by −1. PSL(2, C) has a natural representation as orientation-preserving conformal transformations of the Riemann sphere, and as orientation-preserving conformal transformations of the open unit ball B³ in R³. The group of Möbius transformations is also related, as the full (not necessarily orientation-preserving) isometry group of H³, PGL(2, C). So, a Kleinian group can be regarded as a discrete subgroup acting on one of these spaces. History The theory of general Kleinian groups was founded by Felix Klein (1883) and Henri Poincaré (1883), who named them after Felix Klein. The special case of Schottky groups had been studied a few years earlier, in 1877, by Schottky. Definitions One modern definition of a Kleinian group is as a group which acts on the 3-ball B³ as a discrete group of hyperbolic isometries. Hyperbolic 3-space has a natural boundary; in the ball model, this can be identified with the 2-sphere. We call it the sphere at infinity, and denote it by S²∞. A hyperbolic isometry extends to a conformal homeomorphism of the sphere at infinity (and conversely, every conformal homeomorphism on the sphere at infinity extends uniquely to a hyperbolic isometry on the ball by Poincaré extension). It is a standard result from complex analysis that conformal homeomorphisms on the Riemann sphere are exactly the Möbius transformations, which can further be identified as elements of the projective linear group PGL(2, C). Thus, a Kleinian group can also be defined as a subgroup Γ of PGL(2, C). Classically, a Kleinian group was required to act properly discontinuously on a non-empty open subset of the Riemann sphere, but modern usage allows any discrete subgroup. When Γ is isomorphic to the fundamental group of a hyperbolic 3-manifold, the quotient space H³/Γ becomes a Kleinian model of the manifold. Many authors use the terms Kleinian model and Kleinian group interchangeably, letting the one stand for the other. Discreteness implies that points in the interior of hyperbolic 3-space have finite stabilizers and discrete orbits under the group Γ. On the other hand, the orbit Γp of a point p will typically accumulate on the boundary of the closed ball. The set of accumulation points of Γp in S²∞ is called the limit set of Γ, and is usually denoted Λ(Γ). The complement Ω(Γ) = S²∞ − Λ(Γ) is called the domain of discontinuity or the ordinary set or the regular set. Ahlfors' finiteness theorem implies that if the group is finitely generated then Ω(Γ)/Γ is a Riemann surface orbifold of finite type. The unit ball B³ with its conformal structure is the Poincaré model of hyperbolic 3-space. When we think of it metrically, with the metric ds = 2|dx|/(1 − |x|²), it is a model of 3-dimensional hyperbolic space H³. The set of conformal self-maps of B³ becomes the set of isometries (i.e., distance-preserving maps) of H³ under this identification. Such maps restrict to conformal self-maps of S²∞, which are Möbius transformations. There are isomorphisms Möb(S²∞) ≅ Conf(B³) ≅ Isom(H³). The subgroups of these groups consisting of orientation-preserving transformations are all isomorphic to the projective matrix group PSL(2, C), via the usual identification of the unit sphere with the complex projective line P¹(C).
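For reference, the identifications used in this section can be written out explicitly. The following is standard notation rather than a quotation from the article's sources: an element of PSL(2, C) acts on the Riemann sphere as a Möbius transformation, and Poincaré extension carries this action to the ball model.

% An element of PSL(2,C), represented by a matrix of determinant 1,
% acts on the Riemann sphere as a Moebius transformation:
\[
  z \;\longmapsto\; \frac{az + b}{cz + d},
  \qquad a, b, c, d \in \mathbb{C}, \quad ad - bc = 1,
\]
% and by Poincare extension this action extends to the ball model,
% giving the chain of isomorphisms used above:
\[
  \operatorname{M\ddot{o}b}(S^2_\infty)
  \;\cong\; \operatorname{Conf}(B^3)
  \;\cong\; \operatorname{Isom}(\mathbf{H}^3),
  \qquad
  \operatorname{PSL}(2,\mathbb{C}) \;\cong\; \operatorname{Isom}^{+}(\mathbf{H}^3).
\]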
Variations There are some variations of the definition of a Kleinian group: sometimes Kleinian groups are allowed to be subgroups of PSL(2, C).2 (that is, of PSL(2, C) extended by complex conjugations), in other words to have orientation-reversing elements; sometimes they are assumed to be finitely generated; and sometimes they are required to act properly discontinuously on a non-empty open subset of the Riemann sphere. Types A Kleinian group is said to be of finite type if its region of discontinuity has a finite number of orbits of components under the group action, the quotient of each component by its stabilizer is a compact Riemann surface with finitely many points removed, and the covering is ramified at finitely many points. A Kleinian group is called finitely generated if it has a finite number of generators. The Ahlfors finiteness theorem says that such a group is of finite type. A Kleinian group Γ has finite covolume if H³/Γ has finite volume. Any Kleinian group of finite covolume is finitely generated. A Kleinian group is called geometrically finite if it has a fundamental polyhedron (in hyperbolic 3-space) with finitely many sides. Ahlfors showed that if the limit set of such a group is not the whole Riemann sphere then it has measure 0. A Kleinian group Γ is called arithmetic if it is commensurable with the group of norm-1 elements of an order of a quaternion algebra A ramified at all real places over a number field k with exactly one complex place. Arithmetic Kleinian groups have finite covolume. A Kleinian group Γ is called cocompact if H³/Γ is compact, or equivalently if SL(2, C)/Γ is compact. Cocompact Kleinian groups have finite covolume. A Kleinian group is called topologically tame if it is finitely generated and its hyperbolic manifold is homeomorphic to the interior of a compact manifold with boundary. A Kleinian group is called geometrically tame if its ends are either geometrically finite or simply degenerate. A Kleinian group is said to be of type 1 if the limit set is the whole Riemann sphere, and of type 2 otherwise. Examples Bianchi groups A Bianchi group is a Kleinian group of the form PSL(2, Od), where Od is the ring of integers of the imaginary quadratic field Q(√−d), for d a positive square-free integer. Elementary and reducible Kleinian groups A Kleinian group is called elementary if its limit set is finite, in which case the limit set has 0, 1, or 2 points. Examples of elementary Kleinian groups include finite Kleinian groups (with empty limit set) and infinite cyclic Kleinian groups. A Kleinian group is called reducible if all its elements have a common fixed point on the Riemann sphere. Reducible Kleinian groups are elementary, but some elementary finite Kleinian groups are not reducible. Fuchsian groups Any Fuchsian group (a discrete subgroup of PSL(2, R)) is a Kleinian group, and conversely any Kleinian group preserving the real line (in its action on the Riemann sphere) is a Fuchsian group. More generally, every Kleinian group preserving a circle or straight line in the Riemann sphere is conjugate to a Fuchsian group. Koebe groups A factor of a Kleinian group G is a subgroup H maximal subject to the following properties: H has a simply connected invariant component D; a conjugate of an element h of H by a conformal bijection is parabolic or elliptic if and only if h is; and any parabolic element of G fixing a boundary point of D is in H. A Kleinian group is called a Koebe group if all its factors are elementary or Fuchsian.
Quasi-Fuchsian groups A Kleinian group that preserves a Jordan curve is called a quasi-Fuchsian group. When the Jordan curve is a circle or a straight line, these are just conjugate to Fuchsian groups under conformal transformations. Finitely generated quasi-Fuchsian groups are conjugate to Fuchsian groups under quasi-conformal transformations. The limit set is contained in the invariant Jordan curve; if it is equal to the Jordan curve, the group is said to be of the first kind, and otherwise it is said to be of the second kind. Schottky groups Let Ci be the boundary circles of a finite collection of disjoint closed disks. The group generated by inversion in each circle has limit set a Cantor set, and the quotient H³/G is a mirror orbifold with underlying space a ball. It is double covered by a handlebody; the corresponding index 2 subgroup is a Kleinian group called a Schottky group. Crystallographic groups Let T be a periodic tessellation of hyperbolic 3-space. The group of symmetries of the tessellation is a Kleinian group. Fundamental groups of hyperbolic 3-manifolds The fundamental group of any oriented hyperbolic 3-manifold is a Kleinian group. There are many examples of these, such as the complement of a figure-8 knot or the Seifert–Weber space. Conversely, if a Kleinian group has no nontrivial torsion elements, then it is the fundamental group of a hyperbolic 3-manifold. Degenerate Kleinian groups A Kleinian group is called degenerate if it is not elementary and its limit set is simply connected. Such groups can be constructed by taking a suitable limit of quasi-Fuchsian groups such that one of the two components of the regular points contracts down to the empty set; these groups are called singly degenerate. If both components of the regular set contract down to the empty set, then the limit set becomes a space-filling curve and the group is called doubly degenerate. The existence of degenerate Kleinian groups was first shown indirectly by Bers, and the first explicit example was found by Jørgensen. Cannon and Thurston gave examples of doubly degenerate groups and space-filling curves associated to pseudo-Anosov maps. See also Ahlfors measure conjecture Density theorem for Kleinian groups Ending lamination theorem Tameness theorem (Marden's conjecture) References External links A picture of the limit set of a quasi-Fuchsian group. A picture of the limit set of a Kleinian group; this was one of the first pictures of a limit set. A computer drawing of the same limit set. Animations of Kleinian group limit sets. Images related to Kleinian groups by McMullen. Discrete groups Lie groups Automorphic forms 3-manifolds
Kleinian group
Mathematics
2,165
1,364,232
https://en.wikipedia.org/wiki/Multi-Environment%20Real-Time
Multi-Environment Real-Time (MERT), later renamed UNIX Real-Time (UNIX-RT), is a hybrid time-sharing and real-time operating system developed in the 1970s at Bell Labs for use in embedded minicomputers (especially PDP-11s). A version named Duplex Multi Environment Real Time (DMERT) was the operating system for the AT&T 3B20D telephone switching minicomputer, designed for high availability; DMERT was later renamed Unix RTR (Real-Time Reliable). A generalization of Bell Labs' time-sharing operating system Unix, MERT featured a redesigned, modular kernel that was able to run Unix programs and privileged real-time computing processes. These processes' data structures were isolated from other processes, with message passing being the preferred form of interprocess communication (IPC), although shared memory was also implemented. MERT also had a custom file system with special support for large, contiguous, statically sized files, as used in real-time database applications. The design of MERT was influenced by Dijkstra's THE, Hansen's Monitor, and IBM's CP-67. The MERT operating system was a four-layer design, in decreasing order of protection:
Kernel: resource allocation of memory, CPU time and interrupts
Kernel-mode processes, including input/output (I/O) device drivers, the file manager, the swap manager, and the root process that connects the file manager to the disk (usually combined with the swap manager)
Operating system supervisor
User processes
The standard supervisor was MERT/UNIX, a Unix emulator with an extended system call interface and shell that enabled the use of MERT's custom IPC mechanisms, although an RSX-11 emulator also existed. Kernel and non-kernel processes One interesting feature that DMERT – UNIX-RTR introduced was the notion of kernel processes. This is connected with its microkernelish architecture roots. In support, there is a separate command (/bin/kpkill), rather than /bin/kill, that is used to send signals to kernel processes. There are likely two different system calls as well (kill(2) and kpkill(2), the first to signal a user process and the second to signal a kernel process). It is unknown how much of the normal userland signaling mechanism is in place in /bin/kpkill; assuming there is a system call for it, it is not known whether one can send various signals or simply send one. Also unknown is whether a kernel process has a way of catching the signals that are delivered to it. It may be that the UNIX-RTR developers implemented an entire signal and messaging application programming interface (API) for kernel processes. File system bits If one has root on a UNIX-RTR system, they will surely soon find that their ls -l output is a bit different than expected. Namely, there are two completely new bits in the drwxr-xr-x field. They both take place in the first column, and are C (contiguous) and x (extents). Both of these have to do with contiguous data; however, one may have to do with inodes and the other with non-metadata. Example ls -l output:
drwxr-xr-x root  64 Sun Dec  4 2003 /cft
xrwxr-xr-x root  64 Mon Dec 11 2013 /no5text
Crwxr-xr-x root 256 Tue Dec 12 2014 /no5data
Lucent emulator and VCDX AT&T, then Lucent, and now Alcatel-Lucent, is the vendor of the SPARC-based, Solaris-OEM package ATT3bem (which lives on Solaris SPARC in /opt/ATT3bem). This is a full 3B21D emulator (known as the 3B21E, the system behind the Very Compact Digital eXchange, or VCDX), which is meant to provide a production environment for the Administrative Module (AM) portion of the 5ESS switch.
There are parts of the 5ESS that are not part of the 3B21D microcomputer at all: SMs and CMs. Under the emulator, the workstation is referred to as the 'AW' (Administrative Workstation). The emulator installs with Solaris 2.6/SPARC and also comes with Solstice X.25 9.1 (SUNWconn), formerly known as SunLink X.25. The reason for packaging the X.25 stack with the 3B21D emulator is that the Bell System, regional Bell operating companies, and ILECs still use X.25 networks for their most critical systems (telephone switches may live on X.25 or Datakit VCS II, a similar network developed at Bell Labs, but they do not have TCP/IP stacks). The AT&T/Alcatel-Lucent emulator is not an easy program to get working correctly, even if one manages to have an image from a pulled working 5ESS hard disk 'dd' output file. First, there are quite a few bugs the user must navigate around in the installation process. Once this is done, there is a configuration file which connects peripherals to emulated peripherals, but there is scant documentation on the CD describing it. The name of this file is em_devmap for SS5s, and em_devmap.ultra for Ultra60s. In addition, one of the bugs mentioned in the install process is a broken script to fdisk and image hard disks correctly: certain things need to be written to certain offsets, because the /opt/ATT3bem/bin/3bem process expects, or seems to need, these hard-coded locations. The emulator runs on SPARCstation-5s and UltraSPARC-60s. It is likely that the 3B21D is emulated faster on a modern SPARC, as measured in MIPS, than a 3B21D microcomputer's processor actually runs. The most difficult thing about having the emulator is acquiring a DMERT/UNIX-RTR hard disk image to actually run. The operating system for the 5ESS is restricted to a few people, employees and customers of the vendor, who either work on it or write the code for it. Having an image of a running system, which can be obtained on eBay, pulled from a working 3B21D, and imaged to a file or put into an Ultra60 or SPARCstation-5, provides the resources to attempt to run the UNIX-RTR system. The uname -a output of the Bourne shell running UNIX-RTR (Real-Time Reliable) is:
# uname -a
<3B21D> <3B21D>
On 3B20D systems it will print 20 instead of 21, though 3B20Ds are rare; nowadays most non-VCDX 5ESSs are 3B21D hardware, not 3B20D (although the 3B20D will run the software fine). The 3B20D uses the WE32000 processor while the 3B21D uses the WE32100; there may be some other differences as well. One thing unusual about the processor is the direction the stack grows: up. Manual page for falloc (which may be responsible for Contiguous or eXtent file space allocation):
FALLOC(1)            5ESS UNIX            FALLOC(1)
NAME
    falloc - allocate a contiguous file
SYNOPSIS
    falloc filename size
DESCRIPTION
    A contiguous file of the specified filename is allocated to be of 'size' (512 byte) blocks.
DIAGNOSTICS
    The command complains if a needed directory is not searchable, the final directory is not writable, the file already exists or there is not enough space for the file.
UNIX-RTR includes an atomic file swap command (atomsw; manual page below):
ATOMSW(1)            5ESS UNIX            ATOMSW(1)
NAME
    atomsw - Atomic switch files
SYNOPSIS
    atomsw file1 file2
DESCRIPTION
    Atomic switch of two files. The contents, permissions, and owners of two files are switched in a single operation. In case of a system fault during the operation of this command, file2 will either have its original contents, permissions and owner, or will have file1's contents, permissions and owner. Thus, file2 is considered precious. File1 may be truncated in case of a system fault.
RESTRICTIONS
    Both files must exist. Both files must reside on the same file system. Neither file may be a "special device" (for example, a TTY port).
    To enter this command from the craft shell, switching file "/tmp/abc" with file "/tmp/xyz", enter for MML:
    EXC:ENVIR:UPROC,FN="/bin/atomsw",ARGS="/tmp/abc"-"/tmp/xyz";
    For PDS enter:
    EXC:ENVIR:UPROC,FN"/bin/atomsw",ARGS("/tmp/abc","/tmp/xyz")!
NOTE
    File 1 may be lost during a system fault.
FILES
    /bin/atomsw
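The guarantee that atomsw documents (after a system fault, file2 holds one complete file or the other, never a mix) has a close modern analogue on Linux: renameat2() with the RENAME_EXCHANGE flag, available since glibc 2.28. The sketch below shows that analogue; it is not the UNIX-RTR implementation.

/* Atomically exchange two files, mirroring atomsw's restrictions:
 * both paths must exist and live on the same file system.
 * Linux-specific; requires glibc 2.28+ for renameat2(). */
#define _GNU_SOURCE
#include <fcntl.h>   /* AT_FDCWD */
#include <stdio.h>   /* renameat2, RENAME_EXCHANGE, perror */

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s file1 file2\n", argv[0]);
        return 2;
    }
    if (renameat2(AT_FDCWD, argv[1], AT_FDCWD, argv[2],
                  RENAME_EXCHANGE) != 0) {
        perror("renameat2");
        return 1;
    }
    return 0;
}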
References Real-time operating systems Bell Labs Unices Microkernel-based operating systems Microkernels
Multi-Environment Real-Time
Technology
1,980
27,721,602
https://en.wikipedia.org/wiki/Lana%20Skirboll
Lana Skirboll is the former director of the National Institutes of Health Office of Science Policy. Biography Skirboll is an international leader in science policy. She graduated from New York University in 1970 with a bachelor's degree in Biology and completed a master's degree in Physiology in 1972 at Miami University (Ohio). She received her Ph.D. with honors from the Department of Pharmacology, Georgetown University School of Medicine in 1977, and conducted her postdoctoral training in the Departments of Psychiatry and Pharmacology at the Yale School of Medicine. Following her postdoctoral training, she was a Fogarty Fellow at the Karolinska Institute in Stockholm, Sweden, in the laboratory of Tomas Hökfelt. Dr. Skirboll is the author of more than 75 scientific publications. After leaving Stockholm, Dr. Skirboll was chief of the Electrophysiology Unit in the Intramural Research Program of the U.S. National Institute of Mental Health (NIMH) prior to joining the U.S. Alcohol, Drug Abuse, and Mental Health Administration (ADAMHA) as the Deputy Science Advisor. She was subsequently appointed Chief of Staff to the Agency Administrator and Associate Administrator for Science, where she focused on animals in research and patent policy. In 1992, when ADAMHA was reorganized and its three research Institutes (NIMH, NIDA, and NIAAA) returned to the NIH, Dr. Skirboll was appointed Director of the Office of Science Policy in the NIMH. In 1995, Harold Varmus, Director of the U.S. National Institutes of Health (NIH), appointed Dr. Skirboll as Director of the NIH Office of Science Policy. During her tenure, she managed a wide range of policy issues, including the ethical, legal, social, and economic implications of biomedical research; human subject protections; the privacy and confidentiality of research records; conflicts of interest; genetics, health, and society; and dual use research, among others. Her office was responsible for NIH's oversight of gene therapy research, including the activities of the Recombinant DNA Advisory Committee (RAC), as well as for the activities of the HHS Secretary's Advisory Committee on Genetics, Health and Society; the Secretary's Advisory Committee on Xenotransplantation; the National Science Advisory Board for Biosecurity (NSABB); the Clinical Research Policy Analysis and Coordination Program (CRpac); and the NIH Office of Science Education. Dr. Skirboll was the NIH liaison to the U.S. Food and Drug Administration, the Foundation for the NIH, and the HHS Office for Human Research Protections. Her work involved collaboration within the U.S. Government and with industry and foreign governments and institutions. Under three Presidential Administrations, Dr. Skirboll was the agency's lead on policy issues related to fetal tissue, cloning, and stem cell research. She was responsible for drafting both the 2000 and 2009 NIH Guidelines for Research Using Human Embryonic Stem Cells. Starting in 2003, Dr. Skirboll worked with NIH Director Elias Zerhouni in creating the NIH Roadmap for Medical Research, the Trans-NIH Nanotechnology Task Force, and the NIH program on Public-Private Partnerships. In 2009, Zerhouni named Skirboll to serve as Acting NIH Deputy Director for, and Director of, the Division of Program Coordination, Planning, and Strategic Initiatives (DPCPSI), the NIH entity responsible for the NIH Common Fund. In this capacity, Dr.
Skirboll directed national efforts to identify and address emerging scientific opportunities and rising public health challenges through biomedical research. She also directed efforts to develop NIH's portfolio analysis capabilities and chaired the NIH Council of Councils. In addition, Dr. Skirboll oversaw NIH's office of evaluation and the program offices responsible for coordinating research and activities related to AIDS, behavioral and social sciences, women's health, disease prevention, rare diseases, and dietary supplements—efforts that reside in DPCPSI as a result of implementing requirements of the NIH Reform Act of 2006. Dr. Skirboll has received three DHHS Secretarial Awards for Distinguished Service and a Presidential Rank Award of Meritorious Executive. In May 2010, Skirboll joined former NIH Director Elias Zerhouni in a new global science and health consulting firm, the Zerhouni Group, LLC. She recently retired as Vice President and Head of Science Policy at the pharmaceutical company Sanofi after 10 years of service. Personal life Skirboll resides in Alexandria, VA, and is married to architect and hospital administrator Leonard Taylor, Jr., who was Senior Vice President for Asset Management at the University of Maryland Medical System. She has two grown children, Patrick and Eleanor, and four grandchildren. References External links NIH Office of Science Policy Recombinant DNA Advisory Committee (RAC) HHS Secretary's Advisory Committee on Genetics, Health and Society Foundation for the NIH Living people United States Department of Health and Human Services officials New York University alumni Georgetown University School of Medicine alumni Year of birth missing (living people) Place of birth missing (living people) People from Bethesda, Maryland Sanofi people Stem cell research National Institutes of Health people 20th-century American women scientists 21st-century American women scientists
Lana Skirboll
Chemistry,Biology
1,112
551,448
https://en.wikipedia.org/wiki/Liana
A liana is a long-stemmed woody vine that is rooted in the soil at ground level and uses trees, as well as other means of vertical support, to climb up to the canopy in search of direct sunlight. The word liana does not refer to a taxonomic grouping, but rather to a habit of plant growth – much like tree or shrub. It comes from standard French liane, itself from an Antilles French dialect word meaning to sheave. Ecology Lianas are characteristic of tropical moist broadleaf forests (especially seasonal forests), but may also be found in temperate rainforests and temperate deciduous forests. There are also temperate lianas, for example the members of the Clematis or Vitis (wild grape) genera. Lianas can form bridges amidst the forest canopy, providing arboreal animals, including ants and many other invertebrates, lizards, rodents, sloths, monkeys, and lemurs, with paths across the forest. For example, in the eastern tropical forests of Madagascar, many lemurs achieve higher mobility from the web of lianas draped amongst the vertical tree species. Many lemurs prefer trees with lianas because of their roots. Lianas do not derive nutrients directly from trees, but they live on trees and thrive at their expense. Specifically, they greatly reduce tree growth and tree reproduction, greatly increase tree mortality, prevent tree seedlings from establishing, alter the course of regeneration in forests, and ultimately decrease tree population growth rates. For example, forests without lianas grow 150% more fruit, and trees with lianas have twice the probability of dying. Lianas are uniquely adapted to living in such forests, as they use the host tree for stability to reach the top of the canopy. Lianas directly damage hosts by mechanical abrasion and strangulation, render hosts more susceptible to ice and wind damage, and increase the probability that the host tree falls. Lianas also provide support for weaker trees when strong winds blow, by laterally anchoring them to stronger trees. However, they may be destructive in that when one tree falls, the connections made by the lianas may cause many other trees to fall. Because of these negative effects, trees which remain free of lianas are at an advantage; some species have evolved characteristics which help them avoid or shed lianas. Some lianas attain great length, such as a Bauhinia sp. in Surinam which has grown as long as 600 meters (2000'). Hawkins has accepted a length of 1.5 km (1 mile) for an Entada phaseoloides. The longest monocot liana is Calamus manan (or Calamus ornatus) at 240 meters (787'). Dr. Francis E. Putz states that lianas (species not indicated) have weighed "hundreds of tons" and been a half mile (0.8 km) in length. One way of distinguishing lianas from trees and shrubs is based on stiffness, specifically the Young's modulus of various parts of the stem. Trees and shrubs have young twigs and smaller branches which are quite flexible, and older growth such as trunks and large branches which are stiffer. A liana often has stiff young growths and older, more flexible growth at the base of the stem. Examples Some families and genera containing liana species include: References External links Lianas and Climbing Plants of the Neotropics Lianas and Climbing Plants of the Neotropics: Family Treatments 'Vines and Lianas' by Rhett Butler, at http://rainforests.mongabay.com/0406.htm See also List of Longest Vines Plant morphology Biology terminology Plant life-forms Plants by habit
Liana
Biology
753
18,606,379
https://en.wikipedia.org/wiki/Arotinolol
Arotinolol (INN, marketed under the tradename Almarl) is a medication in the class of mixed alpha/beta blockers. It also acts as a β3 receptor agonist. A 1979 publication suggests that arotinolol was first described in the scientific literature by Sumitomo Chemical, as the "β-adrenergic blocking, antiarrhythmic compound S-596". Medical uses It is used in the treatment of high blood pressure and essential tremor. The recommended dosage is 10 to 30 mg per day. References External links Almarl Full Prescribing Information. Revised November 2009 Sumitomo Dainippon Pharma Co., Ltd. Official Sumitomo Dainippon Pharma Website Alpha-1 blockers Amines Beta blockers Beta3-adrenergic agonists Carboxamides Secondary alcohols Tert-butyl compounds Thiazoles Thioethers Thiophenes
Arotinolol
Chemistry
203
2,473,617
https://en.wikipedia.org/wiki/HD%20ready
HD ready is a certification program introduced in 2005 by EICTA (European Information, Communications and Consumer Electronics Technology Industry Associations), now DIGITALEUROPE. The "HD ready" label requires a minimum native resolution of 720 rows in a widescreen aspect ratio. There are currently four different labels: "HD ready", "HD TV", "HD ready 1080p", and "HD TV 1080p". The logos are assigned to television equipment capable of certain features. In the United States, a similar "HD Ready" term usually refers to any display that is capable of accepting and displaying a high-definition signal at either 720p, 1080i or 1080p using a component video or digital input, but does not have a built-in HD-capable tuner. History The "HD ready" certification program was introduced on January 19, 2005. The labels and relevant specifications are based on agreements between over 60 broadcasters and manufacturers of the European HDTV Forum at its second session in June 2004, held at the Betzdorf, Luxembourg headquarters of founding member SES Astra. The "HD ready" logo is used on television equipment capable of displaying High Definition (HD) pictures from an external source. However, such a device does not have to feature a digital tuner to decode an HD signal; devices with tuners were certified under a separate "HD TV" logo, which does not require an "HD ready" display device. Before the introduction of the "HD ready" certification, many TV sources and displays were being promoted as capable of displaying high definition pictures when they were in fact SDTV devices; according to Alexander Oudendijk, senior VP of marketing for Astra, in early 2005 there were 74 different devices being sold as ready for HD that were not. Devices advertised as HD-compatible or HD ready could take an HDTV signal as an input (via analog YPbPr, or digital DVI or HDMI), but they did not have enough pixels for true representation of even the lower HD resolution (1280 × 720) (plasma-based sets with 853 × 480 resolution, or CRT-based sets only capable of SDTV resolution or VGA resolution, 640 × 480 pixels), much less the higher HD resolution (1920 × 1080), and so were unable to display the HD picture without downscaling it to a lower resolution. Industry-sponsored labels such as "Full HD" were misleading as well, as they can refer to devices which do not fulfil some essential requirements, such as having 1:1 pixel mapping with no overscan or accepting a 1080p signal. A UK BBC television programme found that separate labels for display devices and TV tuners/decoders confused purchasers, many of whom bought HD-ready equipment expecting to be able to receive HD with no additional equipment; they were sometimes actively misled by salespeople—a 2007 Ofcom survey found that 12% were told explicitly that they could view analog SDTV transmissions in HD, 7% that no extra equipment was needed, and 14% that HD-ready sets would receive existing digital SDTV transmissions in HD. On August 30, 2007, 1080p versions of the logos and licensing agreements were introduced; as an improvement to the earlier scheme, the "HD TV 1080p" logo now requires "HD ready 1080p" certification. Requirements and logos The HD ready and HD ready 1080p logos are assigned to displays (including integrated television sets, computer monitors and projectors) which have certain capabilities to process and display high-definition source video signals. The HD TV logo is assigned to either integrated digital television sets (containing a display conforming to "HD ready" requirements) or standalone set-top boxes which are capable of receiving, decoding and outputting or displaying high-definition broadcasts (that is, they include a DVB tuner for cable, terrestrial or satellite broadcasting, a video decoder which supports H.264/MPEG-4 AVC compression in 720p and 1080i signal formats, and either video outputs or an integrated display capable of handling such signals). The HD TV 1080p logo is assigned to integrated digital television sets which have a display conforming to "HD ready 1080p" requirements, a DVB tuner, and a decoder capable of processing a 1080p signal. To carry the "HD ready" or "HD ready 1080p" logo, a display device has to meet a set of requirements covering its native resolution, input connectors, accepted signal formats, and content protection.
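The certification table itself does not survive in this text. As a rough, unofficial sketch, the commonly cited criteria for the two display labels can be encoded as data and checked; every identifier below is invented for illustration, and the EICTA licensing documents remain the authoritative source:

/* Rough, unofficial sketch of the commonly cited "HD ready" /
 * "HD ready 1080p" display criteria. All identifiers are invented;
 * consult the EICTA licensing documents for the authoritative table. */
#include <stdbool.h>
#include <stdio.h>

struct display {
    int  native_rows;        /* physical lines of the panel            */
    bool widescreen;         /* 16:9 (or wider) aspect ratio           */
    bool has_ypbpr;          /* analogue component input               */
    bool has_dvi_or_hdmi;    /* digital input                          */
    bool supports_hdcp;      /* content protection on digital input    */
    bool accepts_720p_1080i; /* 1280x720@50/60p and 1920x1080@50/60i   */
    bool accepts_1080p;      /* 1920x1080@50/60p with 1:1 pixel mapping */
};

static bool hd_ready(const struct display *d)
{
    return d->native_rows >= 720 && d->widescreen &&
           d->has_ypbpr && d->has_dvi_or_hdmi &&
           d->supports_hdcp && d->accepts_720p_1080i;
}

static bool hd_ready_1080p(const struct display *d)
{
    return hd_ready(d) && d->native_rows >= 1080 && d->accepts_1080p;
}

int main(void)
{
    /* A hypothetical 768-line panel: qualifies for the basic label only. */
    struct display tv = { 768, true, true, true, true, true, false };
    printf("HD ready: %s\n", hd_ready(&tv) ? "yes" : "no");
    printf("HD ready 1080p: %s\n", hd_ready_1080p(&tv) ? "yes" : "no");
    return 0;
}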
References External links HD ready official UK website High Definition Television and Logos - EICTA EICTA: Broadcast License agreement and HD Ready 1080p requirements HD Ready 1080p press release DVDActive article - Are You Ready for HDTV? Television technology High-definition television Audiovisual introductions in 2005 Symbols introduced in 2005 2005 establishments in the European Union
HD ready
Technology
984
655,822
https://en.wikipedia.org/wiki/Information%20appliance
An information appliance (IA) is an appliance that is designed to easily perform a specific electronic function such as playing music, photography, or editing text. Typical examples are smartphones and personal digital assistants (PDAs). Information appliances partially overlap in definition with, or are sometimes referred to as, smart devices, embedded systems, mobile devices or wireless devices. Appliance vs computer The term information appliance was coined by Jef Raskin around 1979. As later explained by Donald Norman in his influential The Invisible Computer, the main characteristics of an IA, as opposed to a normal computer, were that it is:
designed and pre-configured for a single application (like a toaster appliance, which is designed only to make toast);
so easy to use for untrained people that it effectively becomes unnoticeable, "invisible" to them;
able to automatically share information with any other IAs.
This definition of IA was different from today's. Jef Raskin initially tried to include such features in the Apple Macintosh, which he designed, but eventually the project went in quite a different direction. For a short while during the mid- and late 1980s, there were a few models of simple electronic typewriters with screens and some form of memory storage. These dedicated word processor machines had some of the attributes of an information appliance, and Raskin designed one of them, the Canon Cat. He described some properties of his definition of information appliance in his book The Humane Interface. Larry Ellison, Oracle Corporation CEO, predicted that information appliances and network computers would supersede personal computers (PCs). See also Archy Computer appliance Embedded system Internet appliance Mobile web Technological convergence Ubiquitous computing Smart speaker References External links Compact HTML for Small Information Appliances — W3C NOTE (9 February 1998)
Information appliance
Technology
374
238,748
https://en.wikipedia.org/wiki/Romanowsky%20stain
Romanowsky staining is a prototypical staining technique that was the forerunner of several distinct but similar stains widely used in hematology (the study of blood) and cytopathology (the study of diseased cells). Romanowsky-type stains are used to differentiate cells for microscopic examination in pathological specimens, especially blood and bone marrow films, and to detect parasites such as malaria within the blood. The staining technique is named after the Russian physician Dmitri Leonidovich Romanowsky (1861–1921), who was one of the first to recognize its potential for use as a blood stain. Stains that are related to or derived from the Romanowsky-type stains include the Giemsa, Jenner, Wright, Field, May–Grünwald, Pappenheim and Leishman stains. They differ in protocols and additives, and their names are often confused with one another in practice. Mechanism The value of Romanowsky staining lies in its ability to produce a wide range of hues, allowing cellular components to be easily differentiated. This phenomenon is referred to as the Romanowsky effect, or more generally as metachromasia. The eosin component of the stain is responsible for the pink-orange hue of erythrocytes and of the granules in the cytoplasm of eosinophilic leukocytes. Romanowsky effect In 1891 Romanowsky developed a stain using a mixture of eosin (typically eosin Y) and aged solutions of methylene blue that formed hues unattributable to the staining components alone: distinctive shades of purple in the chromatin of the cell nucleus and within granules in the cytoplasm of some leukocytes. This became known as the Romanowsky effect. Eosin and pure methylene blue alone (or in combination) do not produce the Romanowsky effect, and the active stains which produce the effect are now considered to be azure B and eosin. Polychromed methylene blue Romanowsky-type stains can be made either from a combination of pure dyes, or from methylene blue that has been subject to oxidative demethylation, which results in the breakdown of methylene blue into multiple other stains, some of which are necessary to produce the Romanowsky effect. Methylene blue that has undergone this oxidative process is known as "polychromed methylene blue". Polychromed methylene blue may contain up to 11 dyes, including methylene blue, azure A, azure B, azure C, thionine, methylene violet Bernthesen, methyl thionoline and thionoline. The exact composition of polychromed methylene blue depends on the method used, and even batches of the stain from the same manufacturer may vary in composition. A common method of rapid oxidation raises the pH of the solution with potassium carbonate and boils it, which introduces atmospheric oxygen. Other methods have been employed as well, such as oxidation in an acidic medium with the dichromate anion. Although azure B and eosin have been shown to be the required components to produce the Romanowsky effect, these stains in their pure forms have not always been used in the formulation of the staining solutions. The original sources of azure B (one of the oxidation products of methylene blue) were polychromed methylene blue solutions, which were treated with oxidizing agents or, in Romanowsky's case, allowed to age naturally. Ernst Malachowsky in 1891 was the first to purposely polychrome methylene blue for use in a Romanowsky-type stain. Types Wright stain Wright's stain can be used alone or in combination with the Giemsa stain, which is known as the Wright-Giemsa stain.
Wright's stain is named after James Homer Wright, who in 1902 published a method using heat to produce polychromed methylene blue, which is combined with eosin Y. The polychromed methylene blue is combined with eosin and allowed to precipitate, forming an eosinate which is redissolved in methanol. The addition of Giemsa to Wright's stain increases the brightness of the "reddish-purple" color of the cytoplasmic granules. The Wright's and Wright-Giemsa stains are two of the Romanowsky-type stains in common use in the United States and are mainly used for the staining of blood and bone marrow films. Jenner stain Jenner's stain is used in microscopy for staining blood smears. The stain is dark blue and produces a clearly stained, readily observable nucleus. Giemsa stain Giemsa stain is composed of "Azure II" and eosin Y with methanol and glycerol as the solvent. "Azure II" is thought to be a mixture of azure B (which Giemsa called "azure I") and methylene blue, although the exact composition of "azure I" is considered a trade secret. Comparable formulations using known dyes have been published and are commercially available. Giemsa stain is considered to be the standard stain for detection and identification of the malaria parasite. May-Grünwald stain The May-Grünwald-Giemsa stain is used for the staining of slides obtained by fine-needle aspiration in a histopathology lab for the diagnosis of tumorous cells. Pappenheim stain This method is a combination of May-Grünwald and Giemsa staining. Leishman stain In 1901 William Leishman developed a stain that was similar to Louis Jenner's, but with pure methylene blue replaced by polychromed methylene blue. Leishman's stain is prepared from the eosinate of polychromed methylene blue and eosin Y using methanol as the solvent. Field's stain Field stain is used for staining thick blood films in order to detect malarial parasites. Clinical importance Blood and bone marrow pathology Romanowsky-type stains are widely used in the examination of blood, in the form of blood films, and in the microscopic examination of bone marrow biopsies and aspirate smears. Examination of both blood and bone marrow can be of importance in the diagnosis of a variety of blood diseases. In the United States the Wright and Wright-Giemsa variants of the Romanowsky-type stains are widely used, while in Europe the Giemsa stain is commonly employed. Detection of malaria and other parasites Of the Romanowsky-type stains, the Giemsa stain is especially important in the detection and identification of malaria parasites in blood samples. Malaria antigen detection tests are an alternative to the staining and microscopic examination of blood films for the detection of malaria. Use in cytopathology Romanowsky-type stains are also used for the staining of cytopathologic specimens such as those produced from fine-needle aspirates and cerebrospinal fluid from lumbar punctures. History Although debate exists as to who deserves credit for this general staining method, popular usage has attributed it to Dmitri Leonidovich Romanowsky. In the 1870s Paul Ehrlich used a mixture of acidic and basic dyes including acid fuchsin (acid dye) and methylene blue (basic dye) to examine blood films. In 1888 Cheslav Ivanovich Chenzinsky used methylene blue, but substituted the acid fuchsin used by Ehrlich with eosin. Chenzinsky's stain combination was able to stain the malaria parasite (a member of the genus Plasmodium).
Neither Ehrlich's nor Chenzinsky's stain produced the Romanowsky effect, as the methylene blue they used was not polychromed. Dmitri Romanowsky in 1890 published preliminary findings of his blood stain (a combination of aged methylene blue and eosin), including the results when applied to malaria-infected blood. This use of polychromed methylene blue differentiated Romanowsky's stain (and the subsequent formulations) from those of Ehrlich and Chenzinsky, which lacked the purple hue associated with the Romanowsky effect. Romanowsky's 1890 publication did not include a description of how he modified his methylene blue solution, but in his 1891 doctoral thesis he described the methylene blue as best used after mold had begun forming on its surface. Other than the use of an aged methylene blue solution, Romanowsky's stain was based on Chenzinsky's stain technique. The continued interest in Romanowsky's staining method has been attributed to his use of it to study the malaria parasite. Ernst Malachowsky has been credited with independently observing the same stain combination as Dmitri Romanowsky in 1891, and some accounts credit him with being the first to do so. Malachowsky was the first to use a deliberately polychromed methylene blue solution, which he accomplished by the addition of borax to the staining mixture. Malachowsky is reported to have demonstrated the stain on June 15, 1890, and in the same year to have published a paper "describing his public demonstration". Both the Romanowsky and Malachowsky methods were able to stain the nucleus and cytoplasm of the malaria parasite, whereas until this point the stains used had only colored the cytoplasm. In 1899, Louis Leopold Jenner developed a more stable version of the methylene blue and eosin stain by collecting the precipitate that forms in water-based mixtures and redissolving it in methanol. Romanowsky-type stains prepared from the collected precipitates are sometimes known as eosinates. Besides increasing the stability of the stain, the use of methanol in Jenner's stain had the effect of fixing the blood samples, although Jenner's version of the stain does not produce the Romanowsky effect. Richard May and Ludwig Grünwald in 1892 published a version of the stain (now known as the May–Grünwald stain) which is similar to the version Jenner later proposed in 1899, and which likewise does not produce the Romanowsky effect. In 1901, both Karl Reuter and William Leishman developed stains that combined Louis Jenner's use of alcohol as the solvent and Malachowsky's use of polychromed methylene blue. Reuter's stain differed from Jenner's in using ethyl alcohol instead of methanol, and Leishman's differed from Jenner's by using eosin B instead of eosin Y. James Homer Wright in 1902 published a method using heat to polychrome the methylene blue, which he combined with eosin Y. This technique is known as Wright's stain. Gustav Giemsa's name has also become associated with the stain, as he is credited with publishing a useful formulation and protocol in 1902. Giemsa attempted to use combinations of pure dyes rather than polychromed methylene blue solutions, which are highly variable in composition. Giemsa sold the rights to produce his stain, but never fully published details on how he produced it, although it is thought that he used a combination of azure B and methylene blue. Giemsa published a number of modifications of his stains between 1902 and 1934.
In 1904 he suggested adding glycerin to his stain, along with the methanol, to increase its stability. Giemsa stain powders produced in Germany were widely used in the United States until the interruption of the supply during World War I, which caused increased utilization of James Homer Wright's method for polychroming methylene blue. See also Liu's stain Malaria antigen detection tests Papanicolaou stain Staining (biology) References Anatomical pathology Cytopathology Hematology Hematopathology Histology Romanowsky stains Staining
Romanowsky stain
Chemistry,Biology
2,463
41,964,210
https://en.wikipedia.org/wiki/Great%20120-cell%20honeycomb
In the geometry of hyperbolic 4-space, the great 120-cell honeycomb is one of four regular star-honeycombs. With Schläfli symbol {5,5/2,5,3}, it has three great 120-cells around each face. It is dual to the order-5 icosahedral 120-cell honeycomb. It can be seen as a greatening of the 120-cell honeycomb, and is thus analogous to the three-dimensional great dodecahedron {5,5/2} and the four-dimensional great 120-cell {5,5/2,5}. It has density 10. See also List of regular polytopes References Coxeter, Regular Polytopes, 3rd ed., Dover Publications, 1973. (Tables I and II: Regular polytopes and honeycombs, pp. 294–296) Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999 (Chapter 10: Regular honeycombs in hyperbolic space, Summary tables II, III, IV, V, pp. 212–213) Honeycombs (geometry) 5-polytopes
Great 120-cell honeycomb
Physics,Chemistry,Materials_science,Mathematics
239
912,076
https://en.wikipedia.org/wiki/Cajal%20body
Cajal bodies (CBs), also coiled bodies, are spherical nuclear bodies of 0.3–1.0 μm in diameter found in the nucleus of proliferative cells like embryonic cells and tumor cells, or metabolically active cells like neurons. CBs are membrane-less organelles and largely consist of proteins and RNA. They were first reported by Santiago Ramón y Cajal in 1903, who called them nucleolar accessory bodies due to their association with the nucleoli in neuronal cells. They were rediscovered with the use of the electron microscope (EM) and named coiled bodies, according to their appearance as coiled threads on EM images, and later renamed after their discoverer. Research on CBs accelerated after the discovery and cloning of the marker protein p80/coilin. CBs have been implicated in RNA-related metabolic processes such as the biogenesis, maturation and recycling of snRNPs, histone mRNA processing and telomere maintenance. CBs assemble RNA which is used by telomerase to add nucleotides to the ends of telomeres. History CBs were initially discovered by the neurobiologist Santiago Ramón y Cajal in 1903 as small argyrophilic (readily stained by silver salts, literally "silver loving") spots in the nuclei of silver-stained neuronal cells. Because of their close association with nucleoli, he named them nucleolar accessory bodies. Later on, they were forgotten and rediscovered multiple times independently, which led to a state where scientists from different research fields used different names for the same structure. Names used for CBs included "sphere organelles", "Binnenkörper", "nucleolar bodies" and "coiled bodies". The name coiled bodies comes from the observations of the electron microscopists Monneron and Bernhard. They described the bodies as aggregates composed of coiled threads with a thickness of 400–600 Å. At higher magnification, the threads appeared as tiny fibrils, about 50 Å thick, irregularly twisted along the axis of the threads. The bodies were predicted early on to consist of ribonucleoproteins, since treatment of cells with protease and RNase together, but with neither alone, caused dramatic changes to the structure of CBs. Localization Cajal bodies are found in all eukaryotes that have been carefully studied. The cells in which Cajal bodies are most apparent usually demonstrate high levels of transcriptional activity, and are often dividing rapidly. Cell cycle They are about 0.1–2.0 micrometres in diameter, and one to five are found per nucleus. The number varies in different types of cells and over the cell cycle. The maximum number is reached in mid G1 phase, and towards G2 they become larger and their number decreases. CBs disassemble during the M phase and reappear again later in G1 phase. Cajal bodies are possibly sites of assembly or modification of the transcription machinery of the nucleus. Functions CBs are bound to the nucleolus by coilin proteins. P80-coilin is a specific marker for coiled bodies, and demonstrates that these bodies tend to be associated with the nucleolus when cells are not dividing. CBs are associated with telomerase assembly and recruitment via a CAB box RNA sequence common to both CB-specific RNAs (scaRNAs) and the RNA component of telomerase (TERC). TCAB1 recognizes the CAB sequence in both and recruits telomerase to the CBs. CBs contain high concentrations of splicing small nuclear ribonucleoproteins (snRNPs), possibly indicating that they function to modify RNA after it has been transcribed from DNA.
Experimental evidence indicates that CBs contribute to the biogenesis of the enzyme telomerase, and assist in the subsequent transport of telomerase to telomeres. References Organelles Cell nucleus Cell anatomy Telomeres
Cajal body
Biology
786
7,973,428
https://en.wikipedia.org/wiki/Anfinsen%27s%20dogma
Anfinsen's dogma, also known as the thermodynamic hypothesis, is a postulate in molecular biology. It states that, at least for a small globular protein in its standard physiological environment, the native structure is determined only by the protein's amino acid sequence. The dogma was championed by the Nobel Prize laureate Christian B. Anfinsen from his research on the folding of ribonuclease A. His research was based on previous studies by the biochemist Lisa Steiner, whose superiors at the time did not recognize their significance. The postulate amounts to saying that, at the environmental conditions (temperature, solvent concentration and composition, etc.) at which folding occurs, the native structure is a unique, stable and kinetically accessible minimum of the free energy. In other words, there are three conditions for the formation of a unique protein structure: Uniqueness – Requires that the sequence does not have any other configuration with a comparable free energy. Hence the free energy minimum must be unchallenged. Stability – Small changes in the surrounding environment cannot give rise to changes in the minimum configuration. This can be pictured as a free energy surface that looks more like a funnel (with the native state in the bottom of it) rather than like a soup plate (with several closely related low-energy states); the free energy surface around the native state must be rather steep and high, in order to provide stability. Kinetic accessibility – Means that the path in the free energy surface from the unfolded to the folded state must be reasonably smooth or, in other words, that the folding of the chain must not involve highly complex changes in shape (like knots or other high-order conformations). In practice, proteins can change shape depending on their environment, shifting conformation to suit their context; this creates multiple configurations into which biomolecules can shift. Challenges to Anfinsen's dogma Protein folding in a cell is a highly complex process that involves transport of the newly synthesized proteins to appropriate cellular compartments through targeting, permanent misfolding, temporarily unfolded states, post-translational modifications, quality control, and formation of protein complexes facilitated by chaperones. Some proteins need the assistance of chaperone proteins to fold properly. It has been suggested that this disproves Anfinsen's dogma. However, the chaperones do not appear to affect the final state of the protein; they seem to work primarily by preventing aggregation of several protein molecules prior to the final folded state of the protein. However, at least some chaperones are required for the proper folding of their subject proteins. Many proteins can also undergo aggregation and misfolding. For example, prions are stable conformations of proteins which differ from the native folding state. In bovine spongiform encephalopathy, native proteins re-fold into a different stable conformation, which causes fatal amyloid buildup. Other amyloid diseases, including Alzheimer's disease and Parkinson's disease, are also exceptions to Anfinsen's dogma. Some proteins have multiple native structures, and change their fold based on external factors. For example, the KaiB protein complex switches fold throughout the day, acting as a clock for cyanobacteria. It has been estimated that around 0.5–4% of Protein Data Bank (PDB) proteins switch folds.
The switching between alternative structures is driven by interactions of the protein with small ligands or other proteins, by chemical modifications (such as phosphorylation) or by changed environmental conditions, such as temperature, pH or membrane potential. Each alternative structure may either correspond to the global minimum of free energy of the protein at the given conditions or be kinetically trapped in a higher local minimum of free energy. References Further reading Profiles in Science: The Christian B. Anfinsen Papers-Articles Molecular biology Protein structure Hypotheses
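The thermodynamic hypothesis is often summarized more formally. The following LaTeX snippet is a schematic formalization under illustrative notation (the symbols x_N, Ω and G are not drawn from the article, and this is not Anfinsen's own notation): under fixed folding conditions, the native conformation is the unique global minimizer of the free energy over the set of kinetically accessible conformations.

```latex
% Schematic statement of the thermodynamic hypothesis. Notation is
% illustrative, not Anfinsen's own: under fixed environmental
% conditions, the native state x_N uniquely minimizes the Gibbs free
% energy G over the set \Omega of kinetically accessible conformations.
\[
  x_N = \underset{x \in \Omega}{\arg\min}\, G(x),
  \qquad
  G(x_N) < G(x) \quad \text{for all } x \in \Omega,\ x \neq x_N .
\]
```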
Anfinsen's dogma
Chemistry,Biology
797
36,544,430
https://en.wikipedia.org/wiki/Amanita%20eliae
Amanita eliae is an inedible species of fungi in the family of Amanitaceae found in Europe. It was described by Lucien Quélet in 1872. Synonyms include A. eliae, A. godeyi, and A. cordae. Description Its cap is or in diameter and across. It has a white volva. Its warts correspond to easily removable, deep depressions in the cap of the species. Its stem is around tall and has a diameter of ; it is subcylindric and tapers upwards. The cap and stem have white flesh. The stem is initially entirely white, but browns with age with a narrow bulb. The stem ring is white. Its stem is smooth and has white gills on the hymenium. Its odour and taste are indistinct. Distribution and habitat It is commonly found in Europe in the summer and autumn near coniferous and deciduous trees. References Further reading Fungi described in 1872 eliae Inedible fungi Taxa named by Lucien Quélet Fungus species
Amanita eliae
Biology
213
77,423,961
https://en.wikipedia.org/wiki/Oklab%20color%20space
The Oklab color space is a uniform color space for device independent color designed to improve perceptual uniformity, hue and lightness prediction, color blending, and usability while ensuring numerical stability and ease of implementation. Introduced by Björn Ottosson in December 2020, Oklab and its cylindrical counterpart, Oklch, have been included in the CSS Color Level 4 and Level 5 drafts for device-independent web colors since December 2021. They are supported by recent versions of major web browsers and allow the specification of wide-gamut P3 colors. Oklab's model is fitted with improved color appearance data: CAM16 data for lightness and chroma, and IPT data for hue. The new fit addresses issues such as unexpected hue and lightness changes in blue colors present in the CIELAB color space, simplifying the creation of color schemes and smoother color gradients. Coordinates Oklab uses the same spatial structure as CIELAB, representing color using three components: L for perceptual lightness, ranging from 0 (pure black) to 1 (reference white, if achromatic), often denoted as a percentage a and b for opponent channels of the four unique hues, unbounded but in practice ranging from −0.5 to +0.5; CSS assigns ±100% to ±0.4 for both a for green (negative) to red (positive) b for blue (negative) to yellow (positive) Like CIELCh, Oklch represents colors using: L for perceptual lightness C for chroma representing chromatic intensity, with values from 0 (achromatic) with no upper limit, but in practice not exceeding +0.5; CSS treats +0.4 as 100% h for hue angle in a color wheel, typically denoted in decimal degrees Achromatic colors Neutral greys, pure black and the reference white are achromatic, that is, a = 0, b = 0, C = 0, and h is undefined. Assigning any real value to their hue component has no effect on conversions between color spaces. Color differences The perceptual color difference in Oklab is calculated as the Euclidean distance between the coordinates: ΔE = √((ΔL)² + (Δa)² + (Δb)²). Conversions between color spaces Conversion to and from Oklch Like CIELCh, the Cartesian coordinates a and b are converted to the polar coordinates C and h as follows: C = √(a² + b²), h = atan2(b, a). And the polar coordinates are converted back to the Cartesian coordinates as follows: a = C cos(h), b = C sin(h). Conversion from CIE XYZ Converting from CIE XYZ with a Standard Illuminant D65 involves three steps: converting to an LMS color space with a linear map M₁, applying a cube root non-linearity to each component, and converting to Oklab with another linear map M₂ (the numeric matrices are given in the reference implementation; see the sketch below). Conversion from sRGB Converting from sRGB requires first converting from sRGB to CIE XYZ with a Standard Illuminant D65. As the last step of this conversion is a linear map from linear RGB to CIE XYZ, the reference implementation directly employs the multiplied matrix representing the composition of the two linear maps. Conversion to CIE XYZ and sRGB Converting to CIE XYZ and sRGB simply involves applying the respective inverse functions in reverse order. Notes References Color space Color appearance models 2020 introductions
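As a concrete illustration of the conversion pipeline described above, here is a minimal Python sketch. The matrix coefficients are transcribed from Björn Ottosson's published reference implementation; they are reproduced here from memory of that source and should be verified against it before use.

```python
import math

# Linear maps from Bjorn Ottosson's published reference implementation
# (transcribed constants; verify against the original source).
M1 = [  # CIE XYZ (D65) -> approximate cone response (LMS)
    [0.8189330101, 0.3618667424, -0.1288597137],
    [0.0329845436, 0.9293118715,  0.0361456387],
    [0.0482003018, 0.2643662691,  0.6338517070],
]
M2 = [  # non-linear LMS -> Oklab
    [0.2104542553,  0.7936177850, -0.0040720468],
    [1.9779984951, -2.4285922050,  0.4505937099],
    [0.0259040371,  0.7827717662, -0.8086757660],
]

def _mul(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def xyz_to_oklab(x, y, z):
    lms = _mul(M1, [x, y, z])                                   # linear map to LMS
    lms_ = [math.copysign(abs(c) ** (1 / 3), c) for c in lms]   # cube root non-linearity
    return tuple(_mul(M2, lms_))                                # linear map to Oklab

def oklab_to_oklch(L, a, b):
    c = math.hypot(a, b)                        # chroma: length of the (a, b) vector
    h = math.degrees(math.atan2(b, a)) % 360    # hue angle in degrees
    return L, c, h

def oklch_to_oklab(L, c, h):
    return L, c * math.cos(math.radians(h)), c * math.sin(math.radians(h))

def delta_e(lab1, lab2):
    # Perceptual color difference: Euclidean distance in Oklab.
    return math.dist(lab1, lab2)

# Sanity check: the D65 reference white (X, Y, Z) should map to
# approximately L = 1, a = 0, b = 0.
print(xyz_to_oklab(0.9505, 1.0, 1.089))
```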
Oklab color space
Mathematics
667
47,289,701
https://en.wikipedia.org/wiki/Penicillium%20radicum
Penicillium radicum is an anamorph species of the genus Penicillium which was isolated from the rhizosphere of Australian wheat. This species has the ability to solubilise inorganic phosphates, which can promote plant growth. Penicillium radicum produces rugulosin. References Further reading radicum Fungi described in 1998 Fungus species
Penicillium radicum
Biology
79
14,359,155
https://en.wikipedia.org/wiki/Carcinoembryonic%20antigen%20peptide-1
Carcinoembryonic antigen peptide-1 is a nine amino acid peptide fragment of carcinoembryonic antigen (CEA), a protein that is overexpressed in several cancer cell types, including gastrointestinal, breast, and non-small-cell lung. Synonyms: CAP-1 Carcinoembryonic Antigen Peptide-1 Carcinoembryonic Peptide-1 CEA Peptide 1 CEA Peptide 9-mer External links National Cancer Institute Definition of carcinoembryonic antigen peptide 1 Tumor markers Peptides
Carcinoembryonic antigen peptide-1
Chemistry,Biology
111
62,478,791
https://en.wikipedia.org/wiki/Women%20in%20Cell%20Biology
Women in Cell Biology (WICB) is a subcommittee of the American Society for Cell Biology (ASCB) created to promote women in cell biology and present awards. History A group of women were unhappy with the lack of recognition of women in ASCB. In 1971, Virginia Walbot gathered a group of women to meet at the annual ASCB meetings, and WICB began. The goal was to provide a space for women to talk and network with other women in the field, learn about job opportunities, and promote women in academia. Newsletters were distributed containing job listings and news of powerful women in biology. Originally, WICB was not accepted by ASCB; the newsletter was not funded and was discontinued in the 1970s. WICB was established as a committee within ASCB in 1994. Activities Currently, WICB meets annually at ASCB meetings and has a column in the ASCB newsletter. The goals of WICB are to nominate candidates for and present awards, and to communicate through the newsletter. Awards WICB awards the following annually: WICB Junior Award for Excellence in Research WICB Mid-Career Award for Excellence in Research Sandra K. Masur Senior Leadership Award References American Society for Cell Biology Women in science and technology
Women in Cell Biology
Technology
240
36,927,475
https://en.wikipedia.org/wiki/Maxam%E2%80%93Gilbert%20sequencing
Maxam–Gilbert sequencing is a method of DNA sequencing developed by Allan Maxam and Walter Gilbert in 1976–1977. This method is based on nucleobase-specific partial chemical modification of DNA and subsequent cleavage of the DNA backbone at sites adjacent to the modified nucleotides. Maxam–Gilbert sequencing was the first widely adopted method for DNA sequencing, and, along with the Sanger dideoxy method, represents the first generation of DNA sequencing methods. Maxam–Gilbert sequencing is no longer in widespread use, having been supplanted by next-generation sequencing methods. History Although Maxam and Gilbert published their chemical sequencing method two years after Frederick Sanger and Alan Coulson published their work on plus-minus sequencing, Maxam–Gilbert sequencing rapidly became more popular, since purified DNA could be used directly, while the initial Sanger method required that each read start be cloned for production of single-stranded DNA. However, with the improvement of the chain-termination method (see below), Maxam–Gilbert sequencing has fallen out of favour due to its technical complexity prohibiting its use in standard molecular biology kits, extensive use of hazardous chemicals, and difficulties with scale-up. Allan Maxam and Walter Gilbert’s 1977 paper “A new method for sequencing DNA” was honored by a Citation for Chemical Breakthrough Award from the Division of History of Chemistry of the American Chemical Society for 2017. It was presented to the Department of Molecular & Cellular Biology, Harvard University. Procedure Maxam–Gilbert sequencing requires radioactive labeling at one 5′ end of the DNA fragment to be sequenced (typically by a kinase reaction using gamma-32P ATP) and purification of the DNA. Chemical treatment generates breaks at a small proportion of one or two of the four nucleotide bases in each of four reactions (G, A+G, C, C+T). For example, the purines (A+G) are depurinated using formic acid, the guanines (and to some extent the adenines) are methylated by dimethyl sulfate, and the pyrimidines (C+T) are hydrolysed using hydrazine. The addition of salt (sodium chloride) to the hydrazine reaction inhibits the reaction of thymine for the C-only reaction. The modified DNAs may then be cleaved by hot piperidine ((CH2)5NH) at the position of the modified base. The concentration of the modifying chemicals is controlled to introduce on average one modification per DNA molecule. Thus a series of labeled fragments is generated, from the radiolabeled end to the first "cut" site in each molecule. The fragments in the four reactions are electrophoresed side by side in denaturing acrylamide gels for size separation. To visualize the fragments, the gel is exposed to X-ray film for autoradiography, yielding a series of dark bands, each showing the location of identical radiolabeled DNA molecules. From the presence and absence of certain fragments the sequence may be inferred, as illustrated in the sketch below. Related methods This method led to the methylation interference assay, used to map DNA-binding sites for DNA-binding proteins. An automated Maxam–Gilbert sequencing protocol was developed in 1994. See also Sanger sequencing References DNA sequencing Molecular biology techniques 1977 in biotechnology
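To make the band-reading logic concrete, here is a small illustrative Python sketch (not part of the original protocol description). The lane encoding and the toy band data are hypothetical, but the decision rules follow the four-reaction scheme (G, A+G, C, C+T) described above: a G cleaves in both purine lanes, an A only in the A+G lane, a C in both pyrimidine lanes, and a T only in the C+T lane.

```python
# Illustrative sketch: inferring a sequence from Maxam-Gilbert gel lanes.
# Each lane is modeled as the set of fragment lengths (band positions)
# seen in that reaction; shorter fragments correspond to bases closer
# to the radiolabeled 5' end.

def read_maxam_gilbert(lanes, length):
    """lanes: dict with keys 'G', 'A+G', 'C', 'C+T' mapping to sets of
    band positions (1 = base closest to the radiolabeled end)."""
    sequence = []
    for pos in range(1, length + 1):
        in_g, in_ag = pos in lanes['G'], pos in lanes['A+G']
        in_c, in_ct = pos in lanes['C'], pos in lanes['C+T']
        if in_ag:
            sequence.append('G' if in_g else 'A')  # G appears in both purine lanes
        elif in_ct:
            sequence.append('C' if in_c else 'T')  # C appears in both pyrimidine lanes
        else:
            sequence.append('?')                   # missing or ambiguous band
    return ''.join(sequence)

# Hypothetical gel for the four-base sequence GATC:
lanes = {'G': {1}, 'A+G': {1, 2}, 'C': {4}, 'C+T': {3, 4}}
print(read_maxam_gilbert(lanes, 4))  # -> GATC
```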
Maxam–Gilbert sequencing
Chemistry,Biology
682
945,225
https://en.wikipedia.org/wiki/Isoperimetric%20dimension
In mathematics, the isoperimetric dimension of a manifold is a notion of dimension that tries to capture how the large-scale behavior of the manifold resembles that of a Euclidean space (unlike the topological dimension or the Hausdorff dimension, which compare different local behaviors against those of the Euclidean space). In the Euclidean space, the isoperimetric inequality says that of all bodies with the same volume, the ball has the smallest surface area. In other manifolds it is usually very difficult to find the precise body minimizing the surface area, and this is not what the isoperimetric dimension is about. The question we will ask is, what is approximately the minimal surface area, whatever the body realizing it might be. Formal definition We say that a differentiable manifold M satisfies a d-dimensional isoperimetric inequality if for any open set D in M with a smooth boundary one has area(∂D) ≥ C · vol(D)^((d−1)/d). The notations vol and area refer to the regular notions of volume and surface area on the manifold, or more precisely, if the manifold has n topological dimensions then vol refers to n-dimensional volume and area refers to (n − 1)-dimensional volume. C here refers to some constant, which does not depend on D (it may depend on the manifold and on d). The isoperimetric dimension of M is the supremum of all values of d such that M satisfies a d-dimensional isoperimetric inequality. Examples A d-dimensional Euclidean space has isoperimetric dimension d. This is the well known isoperimetric problem — as discussed above, for the Euclidean space the constant C is known precisely since the minimum is achieved for the ball. An infinite cylinder (i.e. a product of the circle and the line) has topological dimension 2 but isoperimetric dimension 1. Indeed, multiplying any manifold with a compact manifold does not change the isoperimetric dimension (it only changes the value of the constant C). Any compact manifold has isoperimetric dimension 0. It is also possible for the isoperimetric dimension to be larger than the topological dimension. The simplest example is the infinite jungle gym, which has topological dimension 2 and isoperimetric dimension 3. See for pictures and Mathematica code. The hyperbolic plane has topological dimension 2 and isoperimetric dimension infinity. In fact the hyperbolic plane has positive Cheeger constant. This means that it satisfies the inequality area(∂D) ≥ C · vol(D), which obviously implies infinite isoperimetric dimension. Consequences of isoperimetry A simple integration over r (or sum in the case of graphs) shows that a d-dimensional isoperimetric inequality implies a d-dimensional volume growth, namely vol B(x,r) ≥ C · r^d, where B(x,r) denotes the ball of radius r around the point x in the Riemannian distance or in the graph distance. In general, the opposite is not true, i.e. even uniformly exponential volume growth does not imply any kind of isoperimetric inequality. A simple example can be had by taking the graph Z (i.e. all the integers with edges between n and n + 1) and connecting to the vertex n a complete binary tree of height |n|. Both properties (exponential growth and 0 isoperimetric dimension) are easy to verify. An interesting exception is the case of groups. It turns out that a group with polynomial growth of order d has isoperimetric dimension d. This holds both for the case of Lie groups and for the Cayley graph of a finitely generated group.
The result, known as Varopoulos' theorem, states: if G is a graph satisfying a d-dimensional isoperimetric inequality, then p_n(x,y) ≤ C · n^(−d/2), where p_n(x,y) is the probability that a random walk on G starting from x will be at y after n steps, and C is some constant. References Isaac Chavel, Isoperimetric Inequalities: Differential geometric and analytic perspectives, Cambridge University Press, Cambridge, UK (2001). Discusses the topic in the context of manifolds, with no mention of graphs. N. Th. Varopoulos, Isoperimetric inequalities and Markov chains, J. Funct. Anal. 63:2 (1985), 215–239. Thierry Coulhon and Laurent Saloff-Coste, Isopérimétrie pour les groupes et les variétés, Rev. Mat. Iberoamericana 9:2 (1993), 293–314. This paper contains the result that on groups of polynomial growth, volume growth and isoperimetric inequalities are equivalent. In French. Fan Chung, Discrete Isoperimetric Inequalities. Surveys in Differential Geometry IX, International Press, (2004), 53–82. http://math.ucsd.edu/~fan/wp/iso.pdf. This paper contains a precise definition of the isoperimetric dimension of a graph, and establishes many of its properties. Mathematical analysis Dimension
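As a toy check of the volume-growth consequence above, the following Python sketch computes the size of graph balls in the Z² lattice by breadth-first search; |B(0,r)| grows like r², consistent with isoperimetric dimension 2. The function names are illustrative and not drawn from the references.

```python
from collections import deque

def ball_size(neighbors, start, r):
    """Size of the graph ball B(start, r) computed by breadth-first search."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        if dist[v] == r:
            continue  # do not expand past radius r
        for w in neighbors(v):
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return len(dist)

def grid_neighbors(v):
    """Neighbors of a vertex in the Z^2 lattice graph."""
    x, y = v
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

# Volume growth |B(0, r)| scales like r^2 for Z^2: exactly 2r^2 + 2r + 1,
# so the ratio printed below approaches 2 as r grows.
for r in (10, 20, 40):
    n = ball_size(grid_neighbors, (0, 0), r)
    print(r, n, n / r**2)
```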
Isoperimetric dimension
Physics,Mathematics
1,047
3,904,342
https://en.wikipedia.org/wiki/Zeotropic%20mixture
A zeotropic mixture, or non-azeotropic mixture, is a mixture with liquid components that have different boiling points. For example, nitrogen, methane, ethane, propane, and isobutane constitute a zeotropic mixture. Individual substances within the mixture do not evaporate or condense at a single shared temperature, as a pure substance would. In other words, the mixture has a temperature glide, as the phase change occurs in a temperature range of about four to seven degrees Celsius, rather than at a constant temperature. On temperature-composition graphs, this temperature glide can be seen as the temperature difference between the bubble point and dew point. For zeotropic mixtures, the temperatures on the bubble (boiling) curve lie between the individual components' boiling temperatures. When a zeotropic mixture is boiled or condensed, the composition of the liquid and the vapor changes according to the mixture's temperature-composition diagram. Zeotropic mixtures have different characteristics in nucleate and convective boiling, as well as in the organic Rankine cycle. Because zeotropic mixtures have different properties than pure fluids or azeotropic mixtures, zeotropic mixtures have many unique applications in industry, namely in distillation, refrigeration, and cleaning processes. Dew and bubble points In mixtures of substances, the bubble point is the saturated liquid temperature, whereas the saturated vapor temperature is called the dew point. Because the bubble and dew lines of a zeotropic mixture's temperature-composition diagram do not intersect, a zeotropic mixture in its liquid phase has a different fraction of a component than the gas phase of the mixture. On a temperature-composition diagram, after a mixture in its liquid phase is heated to the temperature at the bubble (boiling) curve, the fraction of a component in the mixture changes along an isothermal line connecting the dew curve to the boiling curve as the mixture boils. At any given temperature, the composition of the liquid is the composition at the bubble point, whereas the composition of the vapor is the composition at the dew point. Unlike azeotropic mixtures, there is no azeotropic point at any temperature on the diagram where the bubble and dew lines would intersect. Thus, the composition of the mixture will always change between the bubble and dew point component fractions upon boiling from a liquid to a gas until the mass fraction of a component reaches 1 (i.e. the zeotropic mixture is completely separated into its pure components). As shown in Figure 1, the mole fraction of component 1 decreases from 0.4 to around 0.15 as the liquid mixture boils to the gas phase. Temperature glides Different zeotropic mixtures have different temperature glides. For example, the zeotropic mixture R152a/R245fa has a higher temperature glide than R21/R245fa. A larger gap between the boiling points creates a larger temperature glide between the boiling curve and dew curve at a given mass fraction. However, with any zeotropic mixture, the temperature glide decreases when the mass fraction of a component approaches 1 or 0 (i.e. when the mixture is almost separated into its pure components) because the boiling and dew curves get closer near these mass fractions. A larger difference in boiling points between the substances also affects the dew and bubble curves of the graph. A larger difference in boiling points creates a larger shift in mass fractions when the mixture boils at a given temperature (a worked numerical sketch of bubble and dew points appears at the end of this article). Zeotropic vs.
azeotropic mixtures Azeotropic and zeotropic mixtures have different dew and bubble curve characteristics in a temperature-composition graph. Namely, azeotropic mixtures have dew and bubble curves that intersect, but zeotropic mixtures do not. In other words, zeotropic mixtures have no azeotropic points. An azeotropic mixture that is near its azeotropic point has negligible zeotropic behavior and is near-azeotropic rather than zeotropic. Zeotropic mixtures differ from azeotropic mixtures in that the vapor and liquid phases of an azeotropic mixture have the same fraction of constituents. This is due to the constant boiling point of the azeotropic mixture. Boiling When superheating a substance, nucleate pool boiling and convective flow boiling occur when the temperature of the surface used to heat a liquid is higher than the liquid's boiling point by the wall superheat. Nucleate pool boiling The characteristics of pool boiling are different for zeotropic mixtures than for pure liquids. For example, the minimum superheating needed to achieve this boiling is greater for zeotropic mixtures than for pure liquids because of the different proportions of individual substances in the liquid versus gas phases of the zeotropic mixture. Zeotropic mixtures and pure liquids also have different critical heat fluxes. In addition, the heat transfer coefficients of zeotropic mixtures are less than the ideal values predicted using the coefficients of pure liquids. This decrease in heat transfer occurs because the heat transfer coefficients of zeotropic mixtures do not increase proportionately with the mass fractions of the mixture's components. Convective flow boiling Zeotropic mixtures have different characteristics in convective boiling than pure substances or azeotropic mixtures. Overall, zeotropic mixtures transfer heat more efficiently at the bottom of the fluid, whereas pure and azeotropic substances transfer heat better at the top. During convective flow boiling, the thickness of the liquid film is less at the top of the film than at the bottom because of gravity. In the case of pure liquids and azeotropic mixtures, this decrease in thickness causes a decrease in the resistance to heat transfer. Thus, more heat is transferred and the heat transfer coefficient is higher at the top of the film. The opposite occurs for zeotropic mixtures. The decrease in film thickness near the top causes the component in the mixture with the higher boiling point to decrease in mass fraction. Thus, the resistance to mass transfer increases near the top of the liquid. Less heat is transferred, and the heat transfer coefficient is lower than at the bottom of the liquid film. Because the bottom of the liquid transfers heat better, boiling the zeotropic mixture requires a lower wall temperature near the bottom than at the top. Heat transfer coefficient From low cryogenic to room temperatures, the heat transfer coefficients of zeotropic mixtures are sensitive to the mixture's composition, the diameter of the boiling tube, heat and mass fluxes, and the roughness of the surface. In addition, diluting the zeotropic mixture reduces the heat transfer coefficient. Decreasing the pressure when boiling the mixture only increases the coefficient slightly. Using grooved rather than smooth boiling tubes increases the heat transfer coefficient. Distillation The ideal case of distillation uses zeotropic mixtures.
Zeotropic fluid and gaseous mixtures can be separated by distillation due to the difference in boiling points between the component mixtures. This process involves the use of vertically-arranged distillation columns (see Figure 2). Distillation columns When separating zeotropic mixtures with three or more liquid components, each distillation column removes only the lowest-boiling component and the highest-boiling component. In other words, each column separates two components purely. If three substances are separated with a single column, the substance with the intermediate boiling point will not be purely separated, and a second column would be needed. To separate mixtures consisting of multiple substances, a sequence of distillation columns must be used. This multi-step distillation process is also called rectification. In each distillation column, pure components form at the top (rectifying section) and bottom (stripping section) of the column when the starting liquid (called the feed composition) is released into the middle of the column. This is shown in Figure 2. At a certain temperature, the component with the lowest boiling point (called the distillate or overhead fraction) vaporizes and collects at the top of the column, whereas the component with the highest boiling point (called the bottoms or bottom fraction) collects at the bottom of the column. In a zeotropic mixture, where more than one component exists, individual components move relative to each other as vapor flows up and liquid falls down. The separation of mixtures can be seen in a concentration profile, in which the position of a vapor in the distillation column is plotted against the concentration of the vapor. The component with the highest boiling point has a maximum concentration at the bottom of the column, whereas the component with the lowest boiling point has a maximum concentration at the top of the column. The component with the intermediate boiling point has a maximum concentration in the middle of the distillation column. Because of how these mixtures separate, mixtures with more than three substances require more than one distillation column to separate the components. Distillation configurations Many configurations can be used to separate mixtures into the same products, though some schemes are more efficient, and different column sequencings are used to achieve different needs. For example, a zeotropic mixture ABC can first be separated into A and BC before separating BC into B and C. On the other hand, mixture ABC can first be separated into AB and C, and AB can then be separated into A and B. These two configurations are sharp-split configurations, in which the intermediate-boiling substance does not contaminate each separation step. On the other hand, the mixture ABC could first be separated into AB and BC, and then split into A, B, and C in the same column. This is a non-sharp split configuration, in which the substance with the intermediate boiling point is present in different mixtures after a separation step. Efficiency optimization When designing distillation processes for separating zeotropic mixtures, the sequencing of distillation columns is vital to saving energy and costs. In addition, other methods can be used to lower the energy or equipment costs required to distill zeotropic mixtures. These include combining distillation columns, using side columns, combining main columns with side columns, and re-using waste heat for the system.
When distillation columns are combined, the energy used is only that of a single column rather than that of both columns together. In addition, using side columns saves energy by preventing different columns from carrying out the same separation of mixtures. Combining main and side columns saves equipment costs by reducing the number of heat exchangers in the system. Re-using waste heat requires that the amount of heat and the temperature levels of the waste match those of the heat needed. Thus, using waste heat requires changing the pressure inside the evaporators and condensers of the distillation system in order to control the temperatures needed. Controlling the temperature levels in a part of a system is possible with pinch technology. These energy-saving techniques have wide application in the industrial distillation of zeotropic mixtures: side columns have been used to refine crude oil, and combining main and side columns is increasingly used. Examples of zeotropic mixtures Examples of distillation for zeotropic mixtures can be found in industry. Refining crude oil is an example of multi-component distillation in industry that has been used for more than 75 years. Crude oil is separated into five components with main and side columns in a sharp-split configuration. In addition, ethylene is separated from methane and ethane for industrial purposes using multi-component distillation. Separating aromatic substances requires extractive distillation, for example, distilling a zeotropic mixture of benzene, toluene, and p-xylene. Refrigeration Zeotropic mixtures used in refrigeration are assigned numbers in the 400 series to help identify their components and proportions as part of the nomenclature, whereas azeotropic mixtures are assigned numbers in the 500 series. According to ASHRAE, refrigerant names start with 'R' followed by a series of numbers—the 400 series if the mixture is zeotropic or the 500 series if it is azeotropic—followed by uppercase letters that denote the composition. Research has proposed using zeotropic mixtures as substitutes for halogenated refrigerants due to the harmful effects that hydrochlorofluorocarbons (HCFCs) and chlorofluorocarbons (CFCs) have on the ozone layer and global warming. Researchers have focused on new mixtures that have the same properties as past refrigerants in order to phase out harmful halogenated substances, in accordance with the Montreal Protocol and Kyoto Protocol. For example, researchers found that the zeotropic mixture R-404A can replace R-12, a CFC, in household refrigerators. However, there are some technical difficulties in using zeotropic mixtures. These include leakages, as well as the high temperature glide associated with substances of different boiling points, though the temperature glide can be matched to the temperature difference between the two refrigerants when exchanging heat to increase efficiency. Replacing pure refrigerants with mixtures calls for more research on the environmental impact as well as the flammability and safety of refrigerant mixtures. Organic Rankine cycle In the organic Rankine cycle (ORC), zeotropic mixtures are more thermally efficient than pure fluids. Due to their higher boiling points, zeotropic working fluids have higher net outputs of energy at the low temperatures of the Rankine cycle than pure substances. Zeotropic working fluids condense across a range of temperatures, allowing external heat exchangers to recover the heat of condensation as a heat source for the Rankine cycle.
The changing temperature of the zeotropic working fluid can be matched to that of the fluid being heated or cooled to recover waste heat, because the mixture's evaporation process occurs over a temperature glide (see Pinch Analysis). R21/R245fa and R152a/R245fa are two examples of zeotropic working fluids that can absorb more heat than pure R245fa due to their increased boiling points. The power output increases with the proportion of R152a in R152a/R245fa. R21/R245fa uses less heat and energy than R245fa. Overall, the zeotropic mixture R21/R245fa has better thermodynamic properties than pure R245fa and R152a/R245fa as a working fluid in the ORC. Cleaning processes Zeotropic mixtures can be used as solvents in cleaning processes in manufacturing. Cleaning processes that use zeotropic mixtures include cosolvent processes and bisolvent processes. Cosolvent and bisolvent processes In a cosolvent system, two miscible fluids with different boiling points are mixed to create a zeotropic mixture. The first fluid is a solvating agent that dissolves soil in the cleaning process. This fluid is an organic solvent with a low boiling point and a flash point greater than the system's operating temperature. After the solvent mixes with the oil, the second fluid, a hydrofluoroether rinsing agent (HFE), rinses off the solvating agent. The solvating agent can be flammable because its mixture with the HFE is nonflammable. In bisolvent cleaning processes, the rinsing agent is separated from the solvating agent. This makes the solvating and rinsing agents more effective because they are not diluted. Cosolvent systems are used for heavy oils, waxes, greases and fingerprints, and can remove heavier soils than processes that use pure or azeotropic solvents. Cosolvent systems are flexible in that different proportions of substances in the zeotropic mixture can be used to satisfy different cleaning purposes. For example, increasing the proportion of solvating agent to rinsing agent in the mixture increases the solvency, and thus is used for removing heavier soils. The operating temperature of the system depends on the boiling point of the mixture, which in turn depends on the composition of these agents in the zeotropic mixture. Since zeotropic mixtures have different boiling points, the cleaning and rinse sumps have different ratios of cleaning and solvating agents. The lower-boiling solvating agent is not found in the rinse sump due to the large difference in boiling points between the agents. Examples of zeotropic solvents Mixtures containing HFC-43-10mee can replace CFC-113 and perfluorocarbon (PFC) as solvents in cleaning systems because HFC-43-10mee does not harm the ozone layer, unlike CFC-113 and PFC. Various mixtures of HFC-43-10mee are commercially available for a variety of cleaning purposes. Examples of zeotropic solvents in cleaning processes include: Zeotropic mixtures of HFC-43-10mee and hexamethyldisiloxane can dissolve silicones and are highly compatible with polycarbonates and polyurethane. They can be used to remove silicone lubricant from medical devices. Zeotropic mixtures of HFC-43-10mee and isopropanol can remove ions and water from materials without porous surfaces. This zeotropic mixture helps with absorption drying. Zeotropic mixtures of HFC-43-10mee, fluorosurfactant, and antistatic additives are energy-efficient and environmentally safe drying fluids that provide spot-free drying. See also List of refrigerants Azeotrope References Chemical engineering thermodynamics
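To make the bubble point, dew point, and temperature glide concrete, here is a minimal Python sketch for an idealized binary mixture obeying Raoult's law. The benzene/toluene pair and its Antoine coefficients are illustrative values from standard tables, not from this article, and real refrigerant blends deviate from this ideal behavior.

```python
# Toy sketch of the temperature glide of an ideal binary zeotropic mixture
# (benzene + toluene) using Raoult's law. Antoine constants are approximate
# literature values (P_sat in mmHg, T in degrees Celsius) -- illustrative,
# not authoritative.
ANTOINE = {
    'benzene': (6.90565, 1211.033, 220.790),
    'toluene': (6.95464, 1344.800, 219.482),
}

def p_sat(component, t_c):
    """Saturation pressure from the Antoine equation, in mmHg."""
    a, b, c = ANTOINE[component]
    return 10 ** (a - b / (t_c + c))

def solve(f, lo=0.0, hi=200.0, tol=1e-6):
    """Bisection for a root of f(T) = 0 on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def bubble_point(x_benzene, p_total=760.0):
    # The liquid starts to boil when the sum of partial pressures reaches P.
    return solve(lambda t: x_benzene * p_sat('benzene', t)
                 + (1 - x_benzene) * p_sat('toluene', t) - p_total)

def dew_point(y_benzene, p_total=760.0):
    # The vapor starts to condense when sum(y_i * P / P_sat_i) = 1.
    return solve(lambda t: y_benzene * p_total / p_sat('benzene', t)
                 + (1 - y_benzene) * p_total / p_sat('toluene', t) - 1.0)

z = 0.4  # overall mole fraction of benzene
tb, td = bubble_point(z), dew_point(z)
# Prints a glide of roughly 6 C, consistent with the few-degree glides
# described in the article; the glide vanishes as z approaches 0 or 1.
print(f"bubble point: {tb:.1f} C, dew point: {td:.1f} C, glide: {td - tb:.1f} C")
```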
Zeotropic mixture
Chemistry,Engineering
3,748
269,794
https://en.wikipedia.org/wiki/Monomethylhydrazine
Monomethylhydrazine (MMH) is a highly toxic, volatile hydrazine derivative with the chemical formula CH3NHNH2. It is used as a rocket propellant in bipropellant rocket engines because it is hypergolic with various oxidizers such as nitrogen tetroxide (N2O4) and nitric acid (HNO3). As a propellant, it is described in specification MIL-PRF-27404. MMH is a hydrazine derivative that was once used in the orbital maneuvering system (OMS) and reaction control system (RCS) engines of NASA's Space Shuttle, which used MMH and MON-3 (a mixture of nitrogen tetroxide with approximately 3% nitric oxide). This chemical is toxic and carcinogenic, but it is easily stored in orbit, providing moderate performance for very low fuel tank system weight. MMH and its chemical relative unsymmetrical dimethylhydrazine (UDMH) have a key advantage in that they are stable enough to be used in regeneratively cooled rocket engines. The European Space Agency (ESA) has attempted to seek new options for bipropellant rocket combinations in order to avoid using highly toxic chemicals such as MMH and its relatives. MMH is believed to be the primary active mycotoxin found in mushrooms of the genus Gyromitra, especially the false morel (Gyromitra esculenta). In these cases, MMH is formed by the hydrolysis of gyromitrin. Monomethylhydrazine is considered to be a possible occupational carcinogen, and the occupational exposure limits for MMH are set at protective levels to account for its possible carcinogenicity. A known use of MMH is in the synthesis of suritozole. MMH is also assumed to be the active methylating agent in the drug temozolomide. References Further reading External links Rocket fuels Mycotoxins Hydrazines Monoamine oxidase inhibitors Vitamin B6 antagonists Organic compounds with 1 carbon atom Methyl compounds
Monomethylhydrazine
Chemistry
424
314,151
https://en.wikipedia.org/wiki/Moissanite
Moissanite () is naturally occurring silicon carbide and its various crystalline polymorphs. It has the chemical formula SiC and is a rare mineral, discovered by the French chemist Henri Moissan in 1893. Silicon carbide or moissanite is useful for commercial and industrial applications due to its hardness, optical properties and thermal conductivity. Background The mineral moissanite was discovered by Henri Moissan while examining rock samples from a meteor crater located in Canyon Diablo, Arizona, in 1893. At first, he mistakenly identified the crystals as diamonds, but in 1904 he identified the crystals as silicon carbide. Artificial silicon carbide had been synthesized in the lab by Edward G. Acheson in 1891, just two years before Moissan's discovery. The mineral form of silicon carbide was named in Moissan's honor later in his life. Geological occurrence In its natural form, moissanite remains very rare. Until the 1950s, no source of moissanite other than as presolar grains in carbonaceous chondrite meteorites had been encountered. Then, in 1958, moissanite was found in the Green River Formation in Wyoming and, the following year, as inclusions in the ultramafic rock kimberlite from a diamond mine in Yakutia in the Russian Far East. Yet the existence of moissanite in nature was questioned as late as 1986 by the American geologist Charles Milton. Discoveries show that it occurs naturally as inclusions in diamonds, xenoliths, and other ultramafic rocks such as lamproite. Meteorites Analysis of silicon carbide grains found in the Murchison meteorite has revealed anomalous isotopic ratios of carbon and silicon, indicating an extraterrestrial origin from outside the Solar System. 99% of these silicon carbide grains originate around carbon-rich asymptotic giant branch stars. Silicon carbide is commonly found around these stars, as deduced from their infrared spectra. The discovery of silicon carbide in the Canyon Diablo meteorite and other places was delayed for a long time because contamination with carborundum (SiC) from man-made abrasive tools had occurred. Physical properties The crystalline structure is held together with strong covalent bonding similar to that of diamond, which allows moissanite to withstand high pressures up to 52.1 gigapascals. Colors vary widely and are graded in the D to K range on the diamond color grading scale. Sources All applications of silicon carbide today use synthetic material, as the natural material is very scarce. The idea that a silicon-carbon bond might in fact exist in nature was first proposed by the Swedish chemist Jöns Jacob Berzelius as early as 1824 (Berzelius 1824). In 1891, Edward Goodrich Acheson produced viable minerals that could substitute for diamond as an abrasive and cutting material. This was possible because moissanite is one of the hardest substances known, with a hardness just below that of diamond and comparable with those of cubic boron nitride and boron. Pure synthetic moissanite can also be made from thermal decomposition of the preceramic polymer poly(methylsilyne), requiring no binding matrix, e.g., cobalt metal powder. Single-crystalline silicon carbide, in certain forms, has been used for the fabrication of high-performance semiconductor devices. As natural sources of silicon carbide are rare, and only certain atomic arrangements are useful for gemological applications, North Carolina–based Cree Research, Inc., founded in 1987, developed a commercial process for producing large single crystals of silicon carbide.
Cree is the world leader in the growth of single-crystal silicon carbide, mostly for electronics use. In 1995 C3 Inc., a company helmed by Charles Eric Hunter, formed Charles & Colvard to market gem-quality moissanite. Charles & Colvard was the first company to produce and sell synthetic moissanite, under U.S. patent US5723391 A, first filed by C3 Inc. in North Carolina. Applications Moissanite was introduced to the jewelry market as a diamond alternative in 1998 after Charles & Colvard (formerly known as C3 Inc.) received patents to create and market lab-grown silicon carbide gemstones, becoming the first firm to do so. By 2018 all patents on the original process worldwide had expired. Charles & Colvard currently makes and distributes moissanite jewelry and loose gems under the trademarks Forever One, Forever Brilliant, and Forever Classic. Other manufacturers market silicon carbide gemstones under trademarked names such as Amora. On the Mohs scale of mineral hardness (with diamond as the upper extreme, 10) moissanite is rated as 9.25. As a diamond alternative, moissanite has some optical properties exceeding those of diamond. It is marketed as a lower-price alternative to diamond that does not involve the expensive mining practices used for the extraction of natural diamonds. As some of its properties are quite similar to those of diamond, moissanite may be used as a counterfeit diamond. Testing equipment based on measuring thermal conductivity in particular may give results similar to diamond. In contrast to diamond, moissanite exhibits thermochromism, such that heating it gradually will cause it to temporarily change color, starting at around . A more practical test is a measurement of electrical conductivity, which will show higher values for moissanite. Moissanite is birefringent (i.e., light sent through the material splits into separate beams that depend on the source polarization), which can be easily seen, whereas diamond is not. Because of its hardness, it can be used in high-pressure experiments as a replacement for diamond (see diamond anvil cell). Since large diamonds are usually too expensive to be used as anvils, moissanite is more often used in large-volume experiments. Synthetic moissanite is also interesting for electronic and thermal applications because its thermal conductivity is similar to that of diamond. High-power silicon carbide electronic devices are expected to find use in the design of protection circuits used for motors, actuators, and energy storage or pulse power systems. It also exhibits thermoluminescence, making it useful in radiation dosimetry. See also Charles & Colvard Cubic zirconia Diamond Engagement ring Fair trade Glossary of meteoritics References External links Carbide minerals Hexagonal minerals Minerals in space group 186 Meteorite minerals Native element minerals Gemstones Green River Formation
Moissanite
Physics
1,373
37,689,509
https://en.wikipedia.org/wiki/Neuroscience%20of%20rhythm
The neuroscience of rhythm refers to the various forms of rhythm generated by the central nervous system (CNS). Nerve cells, also known as neurons, in the human brain are capable of firing in specific patterns which cause oscillations. The brain possesses many different types of oscillators with different periods. Oscillators simultaneously output frequencies from 0.02 Hz to 600 Hz. It is now well known that a computer is capable of running thousands of processes with just one high-frequency clock. Humans have many different clocks as a result of evolution; earlier organisms had no need for a fast-responding oscillator. This multi-clock system permits quick response to constantly changing sensory input while still maintaining the autonomic processes that sustain life. This system modulates and controls a great many bodily functions. Autonomic rhythms The autonomic nervous system is responsible for many of the regulatory processes that sustain human life. Autonomic regulation is involuntary, meaning we do not have to think about it for it to take place. Many of these processes depend on a certain rhythm, such as sleep, heart rate, and breathing. Circadian rhythms Circadian literally translates to "about a day" in Latin. This refers to the human 24-hour cycle of sleep and wakefulness. This cycle is driven by light. The human body must photoentrain, or synchronize itself with light, in order to make this happen. The rod cells are the photoreceptor cells in the retina capable of sensing light. However, they are not what sets the biological clock. The photosensitive retinal ganglion cells contain a pigment called melanopsin. This photopigment is depolarized in the presence of light, unlike the rods, which are hyperpolarized. Melanopsin encodes the day-night cycle to the suprachiasmatic nucleus (SCN) via the retinohypothalamic tract. The SCN evokes a response from the spinal cord. Preganglionic neurons in the spinal cord modulate the superior cervical ganglia, which synapse on the pineal gland. The pineal gland synthesizes the neurohormone melatonin from tryptophan. Melatonin is secreted into the bloodstream where it affects neural activity by interacting with melatonin receptors on the SCN. The SCN is then able to influence the sleep-wake cycle, acting as the "apex of a hierarchy" that governs physiological timing functions. "Rest and sleep are the best example of self-organized operations within neuronal circuits". Sleep and memory have been closely correlated for over a century. It seemed logical that the rehearsal during sleep of information learned during the day, such as in dreams, could be responsible for this consolidation. REM sleep was first studied in 1953. It was thought to be the sole contributor to memory due to its association with dreams. It has recently been suggested that if sleep and waking experience are found to be using the same neuronal content, it is reasonable to say that all sleep has a role in memory consolidation. This is supported by the rhythmic behavior of the brain. Harmonic oscillators have the capability to reproduce a perturbation that happened in previous cycles. It follows that when the brain is unperturbed, such as during sleep, it is in essence rehearsing the perturbations of the day. Recent studies have confirmed that offline states, such as slow-wave sleep, play a part in consolidation as well as REM sleep. There have even been studies implying that sleep can lead to insight or creativity.
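The photoentrainment loop described above — an internal clock whose intrinsic period is close to, but not exactly, 24 hours, continually pulled into phase with the day-night cycle by light — can be illustrated with a toy phase-oscillator model. The sketch below is only an illustration of that idea: the coupling form, the assumed 24.6-hour intrinsic period, and every parameter value are assumptions for demonstration, not figures from the article or the literature.

```python
import math

def entrain(days=40, dt=0.05, tau=24.6, k=0.2):
    """Euler-integrate a phase oscillator pulled toward a 24 h light cycle."""
    phi = 0.0                                       # internal clock phase (rad)
    for i in range(int(days * 24 / dt)):
        t = i * dt                                  # elapsed time in hours
        light = 1.0 if (t % 24.0) < 12.0 else 0.0   # 12 h light / 12 h dark
        theta = 2 * math.pi * (t % 24.0) / 24.0     # phase of the light cycle
        # Intrinsic ~24.6 h rhythm plus a light-gated pull toward the cycle
        phi += (2 * math.pi / tau + k * light * math.sin(theta - phi)) * dt
    return ((theta - phi) % (2 * math.pi)) * 24.0 / (2 * math.pi)

print(f"entrained lag behind the light cycle: {entrain():.2f} hours")
print(f"with light coupling removed (k=0):    {entrain(k=0.0):.2f} hours")
```

With the coupling active, the modeled clock locks to the external cycle at a small, stable lag; with it removed (k = 0), the clock free-runs at its intrinsic period and drifts relative to the light cycle, loosely analogous to the free-running circadian rhythm observed in the absence of light cues.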
In one study of the link between sleep and insight, Jan Born, from the University of Lübeck, showed subjects a number series with a hidden rule. He allowed one group to sleep for three hours, while the other group stayed awake. The awake group showed no progress, while most of the group that was allowed to sleep was able to solve the rule. This is just one example of how rhythm could contribute to humans' unique cognitive abilities. Central pattern generation A central pattern generator (CPG) is defined as a neural network that does not require sensory input to generate a rhythm. This rhythm can be used to regulate essential physiological processes. These networks are often found in the spinal cord. It has been hypothesized that certain CPGs are hardwired from birth. For example, an infant does not have to learn how to breathe, and yet it is a complicated action that involves a coordinated rhythm from the medulla. The first CPG was discovered by removing neurons from a locust. It was observed that the group of neurons was still firing as if the locust were in flight. In 1994, evidence of CPGs in humans was found. A former quadriplegic began to have some very limited movement in his lower legs. Upon lying down, he noticed that if he moved his hips just right his legs began making walking motions. The rhythmic motor patterns were enough to give the man painful muscle fatigue. A key component of CPGs is the half-center oscillator. In its simplest form, this refers to two neurons capable of rhythmogenesis when firing together. The generation of a biological rhythm, or rhythmogenesis, is accomplished through a series of inhibition and activation. For example, a first neuron inhibits a second one while it fires; however, it also induces slow depolarization in the second neuron. This is followed by the release of an action potential from the second neuron as a result of the depolarization, which acts on the first in a similar fashion. This allows for self-sustaining patterns of oscillation, as in the sketch below. Furthermore, new motor patterns, such as athletic skills or the ability to play an instrument, also use half-center oscillators and are simply learned perturbations to CPGs already in place. Respiration Ventilation requires periodic movements of the respiratory muscles. These muscles are controlled by a rhythm-generating network in the brain stem. These neurons comprise the ventral respiratory group (VRG). Although this process is not fully understood, it is believed to be governed by a CPG, and several models have been proposed. The classic three-phase model of respiration was proposed by D.W. Richter. It contains two stages of breathing, inspiration and expiration, that are controlled by three neural phases: inspiration, post-inspiration and expiration. Specific neural networks are dedicated to each phase. They are capable of maintaining a sustained level of oxygen in the blood by triggering the lungs to expand and contract at the correct time. This was seen by measuring action potentials. It was observed that certain groups of neurons synchronized with certain phases of respiration. The overall behavior was oscillatory in nature. This is an example of how an autonomous biorhythm can control a crucial bodily function. Cognition This refers to the types of rhythm that humans are able to generate, whether from recognition of others or sheer creativity. Sports Muscle coordination, muscle memory, and innate game awareness all rely on the nervous system to produce a specific firing pattern in response to either an efferent or afferent signal.
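The half-center oscillator described in the central pattern generation section above can be captured in a few lines with a Matsuoka-style firing-rate model: two units inhibit each other while slowly fatiguing, and settle into a self-sustaining, alternating rhythm. This is a rate-based sketch rather than the spiking, action-potential account given above, and every parameter value is an illustrative assumption.

```python
# Half-center oscillator sketch (Matsuoka-style firing-rate model).
# Two units inhibit each other and slowly fatigue, so neither can stay
# active forever and activity alternates between them.

def step(x, v, rival_out, dt, tau=0.25, tau_v=0.5, beta=2.5, w=2.5, drive=1.0):
    """Advance one unit by dt: x is its fast state, v its slow fatigue."""
    out = max(0.0, x)                                   # rectified firing rate
    dx = (-x - w * rival_out - beta * v + drive) / tau  # inhibition from rival
    dv = (-v + out) / tau_v                             # slow self-inhibition
    return x + dx * dt, v + dv * dt

x1, v1, x2, v2 = 0.1, 0.0, 0.0, 0.0   # slight asymmetry breaks the tie
dt = 0.005
for i in range(6000):
    o1, o2 = max(0.0, x1), max(0.0, x2)
    x1, v1 = step(x1, v1, o2, dt)
    x2, v2 = step(x2, v2, o1, dt)
    if i % 600 == 0:
        print(f"t={i * dt:5.2f}  unit1={o1:.2f}  unit2={o2:.2f}")
```

Setting beta to zero removes the slow fatigue, and one unit then suppresses the other permanently: the alternation depends on exactly the inhibition-plus-slow-recovery interplay described above.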
Sports are governed by the same production and perception of oscillations that govern much of human activity. For example, in basketball, in order to anticipate the game one must recognize rhythmic patterns of other players and perform actions calibrated to these movements. "The rhythm of a game of basketball emerges from the rhythm of individuals, the rhythm among team members, and the rhythmic contrasts between opposing teams". Although the exact oscillatory pattern that modulates different sports has not been found, studies have shown a correlation between athletic performance and circadian timing. It has been shown that certain times of the day are better for training and game-time performance. Training has the best results when done in the morning, while it is better to play a game at night. Music The ability to perceive and generate music is frequently studied as a way to further understand human rhythmic processing. Research projects, such as Brain Beats, are currently studying this by developing beat-tracking algorithms and designing experimental protocols to analyze human rhythmic processing. This is rhythm in its most obvious form. Human beings have an innate ability to listen to a rhythm and track the beat, as when listening to a tune such as "Dueling Banjos". This can be done by bobbing the head, tapping the feet, or even clapping. Jessica Grahn and Matthew Brett call this spontaneous movement "motor prediction". They hypothesized that it is caused by the basal ganglia and the supplementary motor area (SMA). This would mean that those areas of the brain would be responsible for spontaneous rhythm generation, although further research is required to prove this. However, they did show that the basal ganglia and SMA are highly involved in rhythm perception. In a study where patients' brain activity was recorded using fMRI, increased activity was seen in these areas both in patients moving spontaneously (bobbing their head) and in those who were told to stay still. Computational models Computational neuroscience is the theoretical study of the brain used to uncover the principles and mechanisms that guide the development, organization, information-processing and mental abilities of the nervous system. Many computational models have attempted to quantify the process by which various rhythms are created by humans. Avian song learning Juvenile avian song learning is one of the best animal models used to study the generation and recognition of rhythm. The ability of birds to process a tutor song and then generate a perfect replica of that song parallels our ability to learn rhythm. The computational neuroscientists Kenji Doya and Terrence J. Sejnowski created a model of this using the zebra finch as the target organism. The zebra finch is perhaps one of the most easily understood examples of this among birds. The young zebra finch is exposed to a "tutor song" from the adult during a critical period. This is defined as the time of life during which learning can take place; in other words, when the brain has the most plasticity. After this period, the bird is able to produce an adult song, which is said to be crystallized at this point. Doya and Sejnowski evaluated three possible ways that this learning could happen: an immediate, one-shot perfection of the tutor song; error learning; and reinforcement learning. They settled on the third scheme. Reinforcement learning consists of a "critic" in the brain capable of evaluating the difference between the tutor song and the template song.
Assuming the two are closer than in the last trial, this "critic" then sends a signal activating NMDA receptors on the articulator of the song. In the case of the zebra finch, this articulator is the robust nucleus of the archistriatum (RA). The NMDA receptors allow the RA to become more likely to produce this template of the tutor song, thus leading to learning of the correct song. Sam Sober explains the process of tutor song recognition and generation using error learning. This refers to a signal generated by the avian brain that corresponds to the error between the tutor song and the auditory feedback the bird receives. The signal is optimized so that the difference becomes as small as possible, which results in the learning of the song. Sober believes that this is also the mechanism employed in human speech learning, although humans constantly adjust their speech, while birds are believed to have crystallized their song upon reaching adulthood. He tested this idea by using headphones to alter a Bengalese finch's auditory feedback. The bird actually corrected for up to 40% of the perturbation. This provides strong support for the error-learning model. Macaque motor cortex This animal model has been said to be more similar to humans than birds are. It has been shown that humans demonstrate 15–30 Hz (beta) oscillations in the cortex while performing muscle coordination exercises. This was also seen in macaque monkey cortices. The cortical local field potentials (LFPs) of conscious monkeys were recorded while they performed a precision grip task. More specifically, the pyramidal tract neurons (PTNs) were targeted for measurement. The primary frequency recorded was between 15 and 30 Hz, the same oscillation found in humans. These findings indicate that the macaque monkey cortex could be a good model for rhythm perception and production. One example of how this model is used is the investigation of the role of motor cortex PTNs in "corticomuscular coherence" (muscle coordination). In a similar study in which LFPs were recorded from macaque monkeys performing a precision grip task, it was seen that disruption of the PTNs resulted in a greatly reduced oscillatory response. Stimulation of the PTNs impaired the monkeys' ability to perform the grip task. It was concluded that PTNs in the motor cortex directly influence the generation of beta rhythms. Imaging Current methods At the moment, recording methods are not capable of simultaneously measuring small and large areas with the temporal resolution that the circuitry of the brain requires. These techniques include EEG, MEG, fMRI, optical recordings, and single-cell recordings. Future Techniques such as large-scale single-cell recordings are moves in the direction of analyzing overall brain rhythms. However, these require invasive procedures, such as tetrode implantation, which do not allow a healthy brain to be studied. Also, pharmacological manipulation, cell-culture imaging and computational biology all attempt to do this, but in the end they are indirect. Frequency bands The classification of frequency borders allowed for a meaningful taxonomy capable of describing brain rhythms, known as neural oscillations. References Basic neuroscience research Rhythm and meter Articles containing video clips
Neuroscience of rhythm
Physics
2,835
32,074,177
https://en.wikipedia.org/wiki/ARIANNA%20Experiment
Antarctic Ross Ice-Shelf Antenna Neutrino Array (ARIANNA) is a proposed detector for ultra-high-energy astrophysical neutrinos. It will detect coherent radio Cherenkov emissions from the particle showers produced by neutrinos with energies above about 10^17 eV. ARIANNA will be built on the Ross Ice Shelf just off the coast of Antarctica, where it will eventually cover about 900 km^2 in surface area. There, the ice-water interface below the shelf reflects radio waves, giving ARIANNA sensitivity to downward-going neutrinos and improving its sensitivity to horizontally incident neutrinos. ARIANNA detector stations will each contain 4–8 antennas which search for brief pulses of 50 MHz to 1 GHz radio emission from neutrino interactions. As of 2016, a prototype array consisting of 7 stations had been deployed and was taking data. An initial search for neutrinos was made; none were found, and an upper limit was generated. References External links ARIANNA Home Page Neutrino experiments Astrophysics
ARIANNA Experiment
Physics,Astronomy
213
51,393
https://en.wikipedia.org/wiki/Czech%20Biomass%20Association
The Czech Biomass Association (CZ Biom) is an NGO which supports the development of phytoenergetics (energy from plant material) in the Czech Republic. Members of CZ BIOM are scientists, specialists, entrepreneurs, and activists interested in using biomass as an energy resource. CZ BIOM is a member of the European Biomass Association. References External links Biom.cz Website of CZ BIOM, 2002 archive Bioenergy organizations Science and technology in the Czech Republic Environmental organizations based in the Czech Republic Biomass Nature conservation organisations based in Europe Renewable energy organizations
Czech Biomass Association
Engineering
121
3,564,042
https://en.wikipedia.org/wiki/Madonna%E2%80%93whore%20complex
In psychoanalytic literature, a Madonna–whore complex (also called a Madonna–mistress complex) is the inability to maintain sexual arousal within a committed and loving relationship. First identified by Sigmund Freud, who called it psychic impotence, it is a psychological complex that is said to develop in men who see women as either saintly Madonnas or debased whores. Men with this complex desire a sexual partner who has been degraded (whore) while they cannot desire the respected partner (Madonna). Freud wrote, "Where such men love they have no desire, and where they desire they cannot love." Clinical psychologist Uwe Hartmann wrote in 2009 that the complex "is still highly prevalent in today's patients". In psychoanalysis Freud argued that the Madonna–whore complex was caused by a split between the affectionate and the sexual currents in male desire. Oedipal fears and castration anxiety prohibit the affection felt for past incestuous objects from being attached to women who are sensually desired: "The whole sphere of love in such persons remains divided in the two directions personified in art as sacred and profane (or animal) love". In order to minimize anxiety, the man categorizes women into two groups: women he can admire and women he finds sexually attractive. Whereas the man loves women in the former category, he despises and devalues the latter group. Psychoanalyst Richard Tuch suggests that Freud offered at least one alternative explanation for the Madonna–whore complex: This earlier theory is based not on oedipal-based castration anxiety but on man's primary hatred of women, stimulated by the child's sense that he had been made to experience intolerable frustration and/or narcissistic injury at the hands of his mother. According to this theory, in adulthood the boy-turned-man seeks to avenge these mistreatments through sadistic attacks on women who are stand-ins for mother. It is possible that such a split may be exacerbated when the sufferer is raised by a cold but overprotective mother, with the lack of emotional nurturing paradoxically strengthening an incestuous tie. Such a man will often court someone with maternal qualities, hoping to fulfill a need for maternal intimacy unmet in childhood, only for a return of the repressed feelings surrounding the earlier relationship to prevent sexual satisfaction in the new. Another theory claims that the Madonna–whore complex derives from the alleged representations of women as either madonnas or whores in mythology and Abrahamic theology rather than developmental disabilities of individual men. Feminist interpretations Feminist theory asserts that the male-written culture (MWC) perpetuates patriarchal norms by controlling women's sexual autonomy through shaming, reinforcing gender stereotypes, and allowing men to maintain power. Sexual script theory, as discussed by sociologists William Simon and John Gagnon, suggests that these scripts are primarily authored by heterosexual males, portraying men as sexual pursuers favoring casual sex and women as gatekeepers favoring relational sex. This limits women's sexual autonomy as assertiveness risks slut-shaming and being seen as unfit partners. Additionally, researchers including Emily Kane argue that assertive female sexuality threatens male social dominance, as men may fear manipulation, reducing female autonomy to preserve their power. Cultural representations Titian's Sacred and Profane Love (1514; the sacred-profane title is from 1693) has several interpretations.
The clothed woman has been said to be dressed as a bride and as a courtesan. The nude woman seems at first sight to be an allegory of profane love, but 20th-century assessments notice the incense in her hand and the church beyond her. James Joyce widely utilized the Madonna–whore polarity in his novel A Portrait of the Artist as a Young Man. His protagonist, Stephen Dedalus, sees girls whom he admires as ivory towers, and the repression of his sexual feelings for them eventually leads him to solicit a prostitute. This mortal sin drives Stephen's inner conflict and eventual transformation towards the end of the novel. In film, Alfred Hitchcock used the Madonna–whore complex as an important mode of representing women. In his film Vertigo, Kim Novak portrays two women that the hero cannot reconcile: a blonde, virtuous, sophisticated, repressed "Madonna" and a dark-haired, single, sensual "fallen woman". The Martin Scorsese films Taxi Driver and Raging Bull featured sexually obsessed protagonists, both played by Robert De Niro, who exhibit the Madonna–whore complex. The David Cronenberg film Spider focuses on the complex. The complex is also pictured in the series Sex and the City, season 3 (ep.16, "Frenemies", directed by Michael Spiller), as Charlotte (Kristin Davis) struggles to make her husband Trey (Kyle MacLachlan) see her in a sexual way. See also Ambivalence Coolidge effect Dichotomy Female Chauvinist Pigs Friend zone Gender norms in abstinence-only sex education Love and hate (psychoanalysis) Love–hate relationship Machismo Marianismo Misogyny Ni Putes Ni Soumises Neo-Freudianism Sexism Splitting (psychology) References Further reading Bareket, O., Kahalon, R., Shnabel, N., & Glick, P. (2018). The Madonna-Whore Dichotomy: Men who perceive women's nurturance and sexuality as mutually exclusive endorse patriarchy and show lower relationship satisfaction. Sex Roles, 79, 519-532. https://doi.org/10.1007/s11199-018-0895-7 Kahalon, R., Bareket, O., Vial, A. C., Sassenhagen, N., Becker, J. C., & Shnabel, N. (2019). The Madonna-whore dichotomy is associated with patriarchy endorsement: Evidence from Israel, the United States, and Germany. Psychology of Women Quarterly, 43(3), 348-367. https://doi.org/10.1177/0361684319843298 Klein, V., Kosman, E., & Kahalon, R. (2023). Devaluation of Women’s Sexual Pleasure: Role of Relationship Context and Endorsement of the Madonna-Whore Dichotomy. Sex Roles, 1-15. https://doi.org/10.1007/s11199-023-01424-3 External links Dichotomies Complex (psychology) Cognitive dissonance Object relations theory Psychoanalytic terminology Freudian psychology Problem behavior Violence against women Psychological abuse Misogyny Stereotypes of women
Madonna–whore complex
Biology
1,415
68,304,037
https://en.wikipedia.org/wiki/NGC%207544
NGC 7544 is a lenticular galaxy located in the constellation Pisces. It was discovered by the astronomer Albert Marth on November 18, 1864. References External links Pisces (constellation) 7544 Lenticular galaxies
NGC 7544
Astronomy
47
972,123
https://en.wikipedia.org/wiki/Digitized%20Sky%20Survey
The Digitized Sky Survey (DSS) is a digitized version of several photographic astronomical surveys of the night sky, produced by the Space Telescope Science Institute between 1983 and 2006. Versions and source material The term Digitized Sky Survey originally referred to the publication in 1994 of a digital version of an all-sky photographic atlas used to produce the first version of the Guide Star Catalog. For the northern sky, the National Geographic Society – Palomar Observatory Sky Survey E-band (red, named after the Eastman Kodak IIIa-E emulsion used) provided almost all of the source data (plate code "XE" in the survey). For the southern sky, the J-band (blue, Eastman Kodak IIIa-J) of the ESO/SERC Southern Sky Atlas (known as the SERC-J, code "S") and the "quick" V-band (blue or V in the Johnson–Kron–Cousins system, Eastman Kodak IIa-D) SERC-J Equatorial Extension (SERC-QV, code "XV"), from the UK Schmidt Telescope at the Australian Siding Spring Observatory, were used. Three supplemental plates in the V-band from the SERC and Palomar surveys are included (code "XX"), with shorter exposure times for the fields containing the Andromeda Galaxy, the Large and the Small Magellanic Cloud. The publication of a digital version of these photographic collections has subsequently become known as the First Generation DSS or DSS1. After the original 1994 publication, more digitizations were made using recently completed photographic surveys, and released as the Second Generation DSS or DSS2. Second Generation DSS consists of three spectra bands, blue, red, and near infrared. The red part was first to complete, and includes the F-band (red, Eastman Kodak IIIa-F) plates from the Palomar Observatory Sky Survey II, made with the Oschin Schmidt Telescope at Palomar Observatory for the northern sky. Red band sources for the southern sky include the short red (SR) plates of the SERC I/SR Survey and Atlas of the Milky Way and Magellanic Clouds (referred to as AAO-SR in DSS2), the Equatorial Red (SERC-ER), and the F-band Second Epoch Survey (referred to as AAO-SES in DSS2, AAO-R in the original literature), all made with the UK Schmidt Telescope at Anglo-Australian Observatory. Production The Digitized Sky Survey was produced by the Catalogs and Survey Branch (CASB) of the Space Telescope Science Institute (STScI). They scanned plates using one of two Perkin-Elmer PDS 2020G microdensitometers. The pixel size was 25 ("First Generation", DSS1) or 15 micrometres ("Second Generation", DSS2), corresponding to 1.7 or 1.0 arcseconds in the source material. The scanning resulted in images 14,000 x 14,000 (DSS1) or 23,040 x 23,040 pixels (DSS2) in size, or approximately 0.4 (DSS1) and 1.1 gigabytes (DSS2) each. The scanning of First Generation DSS took a little under seven hours per plate to complete. Due to the large size of the images, they were compressed using an H-transform algorithm. This algorithm is lossy, but adaptive, and preserves most of the information in the original. Most of the First Generation DSS files were shrunk by a factor of seven. Similar methods were used in the production of the "Second Generation" DSS, but the microdensitometers have since been modified for multi-channel operation, in order to keep the scan time under 12 hours per plate. The CASB has also published several companion scientific products. The most notable is a photometric calibration of part of the "First Generation" DSS.
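As a quick consistency check on the scan figures above (not a computation from the DSS documentation), the raw image sizes follow from the pixel grids if one assumes 16-bit, 2-byte pixels, and the quoted pixel scales imply the roughly 6.5-degree field typical of the Schmidt survey plates being digitized:

```python
# Back-of-the-envelope check of the scan figures quoted above, assuming
# 2 bytes per pixel (an assumption; the article gives only pixel grids,
# pixel sizes, and approximate file sizes).
for name, side, um_per_px, arcsec_per_px in [
    ("DSS1", 14_000, 25, 1.7),
    ("DSS2", 23_040, 15, 1.0),
]:
    size_gb = side * side * 2 / 1e9                   # raw image size, GB
    plate_scale = arcsec_per_px / (um_per_px / 1000)  # arcsec per mm of plate
    field_deg = side * arcsec_per_px / 3600           # scanned field width, deg
    print(f"{name}: {size_gb:.2f} GB raw, ~{plate_scale:.0f} arcsec/mm, "
          f"~{field_deg:.1f} deg field")
```

The computed sizes (about 0.39 GB and 1.06 GB) agree with the approximate 0.4 and 1.1 gigabytes quoted above, and both scanning resolutions imply essentially the same plate scale, as they must for the same survey plates.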
This calibration allows photometric measurements to be made using the digital northern POSS-E, southern SERC-J, and southern Galactic Plane SERC-V data. Publication The compressed version of the First Generation DSS was published by the STScI and the Astronomical Society of the Pacific (ASP) on 102 CD-ROMs in 1994, under the name "Digitized Sky Survey." It has also been made available online by the STScI and several other facilities in databases that can be queried over the web. The moniker "First Generation" was added later. In 1996, a more highly compressed version of the DSS was published by the STScI and ASP under the name RealSky. RealSky files were compressed by a factor of roughly 100. RealSky consequently took up less space, but the additional compression made it inappropriate for use in photometry, and fine detail in the images was degraded. The Second Generation DSS has appeared steadily over the course of several years. In 2006, the Second Generation DSS (second epoch POSS-II and SES surveys) was finished and distributed on CD-ROM to partner institutions. Generally, the data are available through WWW services at partner institutions. References External links Digitized Sky Survey A Seamless Spherical Stitch of the Digitized Sky Survey from Microsoft Research Digitized Sky Survey in Google Sky (partly covered by SDSS and other images) Digitized Sky Survey in WIKISKY.ORG Astronomical catalogues Astronomical surveys Astronomical databases
Digitized Sky Survey
Astronomy
1,131
20,362,042
https://en.wikipedia.org/wiki/Iotalamic%20acid
Iotalamic acid, sold under the brand name Conray, is an iodine-containing radiocontrast agent. It is available in form of its salts, sodium iotalamate and meglumine iotalamate. It can be given intravenously or intravesically (into the urinary bladder). A radioactive formulation is also available as sodium iothalamate I-125 injection (brand name Glofil-125). It is indicated for evaluation of glomerular filtration in the diagnosis or monitoring of people with kidney disease. References External links Benzoic acids Radiocontrast agents Acetanilides Benzamides Iodobenzene derivatives
Iotalamic acid
Chemistry
146
18,403
https://en.wikipedia.org/wiki/Logical%20positivism
Logical positivism, also known as logical empiricism or neo-positivism, was a philosophical movement, in the empiricist tradition, that sought to formulate a scientific philosophy in which philosophical discourse would be, in the perception of its proponents, as authoritative and meaningful as empirical science. Logical positivism's central thesis was the verification principle, also known as the "verifiability criterion of meaning", according to which a statement is cognitively meaningful only if it can be verified through empirical observation or if it is a tautology (true by virtue of its own meaning or its own logical form). The verifiability criterion thus rejected statements of metaphysics, theology, ethics and aesthetics as cognitively meaningless in terms of truth value or factual content. Despite its ambition to overhaul philosophy by mimicking the structure and process of empirical science, logical positivism became erroneously stereotyped as an agenda to regulate the scientific process and to place strict standards on it. The movement emerged in the late 1920s among philosophers, scientists and mathematicians congregated within the Vienna Circle and Berlin Circle and flourished in several European centres through the 1930s. By the end of World War II, many of its members had settled in the English-speaking world and the project shifted to less radical goals within the philosophy of science. By the 1950s, problems identified within logical positivism's central tenets became seen as intractable, drawing escalating criticism among leading philosophers, notably from Willard van Orman Quine and Karl Popper, and even from within the movement, from Carl Hempel. These problems would remain unresolved, precipitating the movement's eventual decline and abandonment by the 1960s. In 1967, philosopher John Passmore pronounced logical positivism "dead, or as dead as a philosophical movement ever becomes". Origins Logical positivism emerged in Germany and Austria amid a philosophical backdrop characterised by the dominance of Hegelian metaphysics and the work of Hegelian successors such as F. H. Bradley, who portrayed reality by postulating metaphysical entities without empirical basis. The late 19th century also saw the emergence of neo-Kantianism as a philosophical movement, under the rationalist tradition. The logical positivist program established its theoretical foundations in the empiricism of David Hume, Auguste Comte and Ernst Mach, along with the positivism of Comte and Mach, defining its exemplar of science in Einstein's general theory of relativity. Per Mach's phenomenalism (whereby the mind knows only actual or potential sensory experience), logical positivists took all scientific knowledge to be only sensory experience. Further influence came from Percy Bridgman's operationalism, whereby a physical theory is understood by the experimental methods performed to test its predictions, as well as Immanuel Kant's perspectives on aprioricity. Ludwig Wittgenstein's Tractatus Logico-Philosophicus established the theoretical foundations for the verifiability principle. His work introduced the view of philosophy as "critique of language", discussing theoretical distinctions between intelligible and nonsensical discourse. Tractatus adhered to a correspondence theory of truth, as opposed to a coherence theory of truth. Logical positivists were also influenced by Wittgenstein's interpretation of probability, though, according to Neurath, some objected to the metaphysics in Tractatus.
History Vienna and Berlin Circles The Vienna Circle, whose gatherings centered around the University of Vienna and at the Café Central, was led principally by Moritz Schlick. In Germany, Hans Reichenbach was pre-eminent in the Berlin Circle, whose members maintained closely cooperative ties with the Viennese. Schlick had held a neo-Kantian position, but later converted, via Carnap's 1928 book Der logische Aufbau der Welt (The Logical Structure of the World). A 1929 manifesto written by Otto Neurath, Hans Hahn and Rudolf Carnap summarised the Vienna Circle's positions. Another member among its ranks to later prove very influential was Carl Hempel. A friendly but tenacious critic of the movement was Karl Popper, whom Neurath nicknamed the "Official Opposition". Early in their history, Carnap and other members, including Hahn and Neurath, noted that the verifiability criterion was too stringent. Notably, it excluded universal statements that are vital to scientific hypotheses. A radical left wing emerged from the Vienna Circle, led by Neurath and Carnap, who began a program they referred to as the "liberalisation of empiricism", proposing revisions to weaken the criterion. A conservative right wing, led by Schlick and Waismann, sought to reclassify universal statements as analytic truths, thereby to reconcile them with the existing criterion. Among other ideas espoused by the liberal wing, Carnap emphasised fallibilism and pragmatics, which he considered integral to empiricism. Though Neurath prescribed a move from Mach's phenomenalism to physicalism, this would be rejected by Carnap. As Neurath and Carnap sought to orient science toward social reform, the split in the Vienna Circle also reflected political differences. Both Schlick and Carnap had been influenced by and sought to define logical positivism versus the neo-Kantianism of Ernst Cassirer, the contemporary leading figure of the Marburg school, and against Edmund Husserl's phenomenology. Logical positivists especially opposed Martin Heidegger's obscure metaphysics, the epitome of what they had rejected through their epistemological doctrines. In the early 1930s, Carnap debated Heidegger over "metaphysical pseudosentences". Anglosphere As the movement's first emissary to the New World, Moritz Schlick visited Stanford University in 1929, yet otherwise remained in Vienna and was murdered in 1936 at the University by a former student, Johann Nelböck, who was reportedly deranged. That year, A. J. Ayer, a British attendee at some Vienna Circle meetings since 1933, saw his Language, Truth and Logic import logical positivism to the English-speaking world. By that time, the Nazi Party's 1933 rise to power in Germany had triggered the flight of intellectuals. Upon Germany's annexation of Austria in 1938, the remaining logical positivists, many of whom were also Jewish, were targeted and continued to flee. Logical positivism thus became dominant in the English-speaking world. By the late 1930s, many in the movement had replaced phenomenalism with Neurath's physicalism, whereby science's content is not actual or potential sensations, but instead consists of entities that are publicly observable. In exile in England, Neurath died in 1945. Carnap, Reichenbach and Hempel—Carnap's protégé who had studied in Berlin with Reichenbach—settled permanently in America.
Post-war period Following the Second World War, logical positivism, now referred to by some as logical empiricism, turned to less radical objectives, led largely by Carl Hempel in America, who expounded the covering law model of scientific explanation. The movement became a major underpinning of analytic philosophy and dominated philosophy in the English-speaking world, notably in philosophy of science, while influencing the sciences, especially the social sciences, into the 1960s. Yet the movement failed to resolve its central problems, and its doctrines were increasingly criticized, most trenchantly by Willard Van Orman Quine, Norwood Hanson, Karl Popper, Thomas Kuhn, and Carl Hempel. Principles Logicism By reducing mathematics to logic, Bertrand Russell sought to convert the mathematical formulas of physics to symbolic logic. Gottlob Frege began this program of logicism but eventually lost interest. Russell then continued it with Alfred North Whitehead in their Principia Mathematica, inspiring some of the more mathematical logical positivists, such as Hans Hahn and Rudolf Carnap. Carnap's early anti-metaphysical works employed Russell's theory of types. Like Russell, Carnap envisioned a universal language that could reconstruct mathematics and thereby encode physics. Yet Kurt Gödel's incompleteness theorem showed this to be impossible, except in trivial cases, and Alfred Tarski's undefinability theorem finally undermined all hopes of reducing mathematics to logic. Thus, a universal language failed to stem from Carnap's 1934 work Logische Syntax der Sprache (Logical Syntax of Language). Still, some logical positivists, including Carl Hempel, continued support of logicism. Analytic-synthetic distinction In the theory of knowledge, a priori statements are those that are knowable without, or prior to, observation, whereas a posteriori statements are knowable only through observation. Statements may further be categorised into the analytic and synthetic. Analytic statements are true by virtue of their own meaning or their own logical form, therefore are tautologies that are true by necessity but uninformative about the world. Synthetic statements, in contrast, refer to a state of facts concerning the world, therefore are contingencies. David Hume categorised knowledge exclusively as either "relations of ideas" (which are a priori, analytic, necessary and abstract) or "matters of fact and real existence" (a posteriori, synthetic, contingent and concrete), a classification referred to as Hume's fork. Immanuel Kant identified a further category of knowledge—synthetic a priori statements—which affirm a state of facts concerning the world, but are knowable prior to experience. This was characterised in his Critique of Pure Reason per transcendental idealism, attributing the mind a constructive role in phenomena by arranging sense data into the very experience of space, time, and substance. His thesis would serve to rescue Newton's law of universal gravitation from Hume's problem of induction by finding uniformity of nature to be a priori knowledge. Though logical positivists adopted the Kantian position of defining logic and mathematics as a priori knowledge, they would re-affirm Hume's fork and reject Kant's conception of synthetic a priori knowledge due to its conflict with verificationism.
Building upon Gottlob Frege's work and Wittgenstein's Tractatus, they reformulated the analytic-synthetic distinction, reinterpreting truths of logic (and mathematics, now reduced to logic via logicism) as tautologies. This would be critical to the logical positivist program in rendering logic and mathematics—ordinarily considered synthetic truths—permissible under verificationism, as analytic truths. Observation-theory distinction Early in the movement, most logical positivists proposed that all knowledge is based on logical inference from simple "protocol sentences" grounded in observable facts. Theoretical terms would garner meaning from observational terms via correspondence rules, and thereby theoretical laws would be reduced to empirical laws. In the 1936 and 1937 papers "Testability and Meaning", Carnap referred to Russell's logical atomism, the view that individual terms, representing discrete units of meaning, replace sentences in ordinary language. Rational reconstruction would thereby convert ordinary statements into standardised equivalents composed of subunits of meaning that are assembled via a logical syntax. Furthermore, theoretical terms no longer need to acquire meaning by explicit definition from observational terms: the connection may be indirect, through a system of implicit definitions. Carnap also provided an important, pioneering discussion of disposition predicates. Verification and Confirmation Verifiability Criterion of Meaning According to the verifiability criterion of meaning, a statement is cognitively meaningful only if it is either verifiable by empirical observation or is an analytic truth (i.e., true by virtue of its own meaning or its own logical form). Cognitive meaningfulness was defined variably: possessing truth value; or corresponding to a possible state of affairs; or intelligible or understandable as are scientific statements. Other types of meaning—for instance, emotive, expressive or figurative—were dismissed from further review. Metaphysics and theology, as well as much of ethics and aesthetics, failed this criterion, and so were found cognitively meaningless and only emotively meaningful (though, notably, Moritz Schlick did not view ethical or aesthetic statements as meaningless). Ethics and aesthetics were considered subjective preferences, while theology and metaphysics contained "pseudostatements" that were neither true nor false. Thus, logical positivism indirectly asserted Hume's law, the principle that factual statements cannot justify evaluative statements, and that the two are separated by an unbridgeable gap. A. J. Ayer's Language, Truth and Logic (1936) presented an extreme version of this principle—the boo/hooray doctrine—whereby all evaluative judgments are merely emotional reactions. Revisions to the criterion Logical positivists in the Vienna Circle recognised quickly that the verifiability criterion was too restrictive. Specifically, universal statements were noted to be empirically unverifiable, rendering vital domains of science and reason, such as scientific hypotheses, cognitively meaningless under verificationism. This would pose significant problems for the logical positivist program, absent revisions to its criterion of meaning. In his 1936 and 1937 papers, Testability and Meaning, Carnap proposed confirmation in place of verification, determining that, though universal laws cannot be verified, they can be confirmed.
Carnap employed abundant logical and mathematical tools to research an inductive logic that would account for probability according to degrees of confirmation. However, he was never able to formulate a model. In Carnap's inductive logic, a universal law's degree of confirmation was always zero. The formulation of what eventually came to be called the "criterion of cognitive significance", stemming from this research, took three decades (Hempel 1950, Carnap 1956, Carnap 1961). Carl Hempel, who became a prominent critic of the logical positivist movement, elucidated the paradox of confirmation. In his 1936 book, Language, Truth and Logic, A. J. Ayer distinguished strong and weak verification. He stipulated that, "A proposition is said to be verifiable, in the strong sense of the term, if, and only if, its truth could be conclusively established by experience", but is verifiable in the weak sense "if it is possible for experience to render it probable". He would add that, "no proposition, other than a tautology, can possibly be anything more than a probable hypothesis". Thus, he concluded that all such propositions are open only to weak verification. Philosophy of science The logical positivist movement shed much of its revolutionary zeal following the defeat of Nazism and the decline of rival philosophies that sought radical reform, notably Marburg neo-Kantianism, Husserlian phenomenology and Heidegger's existential hermeneutics. Hosted in the climate of American pragmatism and commonsense empiricism, its proponents no longer crusaded to revise traditional philosophy into a radical scientific philosophy, but became respectable members of a new philosophical subdiscipline, philosophy of science. Receiving support from Ernest Nagel, they were especially influential in the social sciences. Scientific explanation Carl Hempel was prominent in the development of the deductive-nomological (DN) model, then the foremost model of scientific explanation, defended even by critics of neo-positivism such as Popper. According to the DN model, a scientific explanation is valid only if it takes the form of a deductive inference from a set of explanatory premises (explanans) to the observation or theory to be explained (explanandum). The model stipulates that the premises must refer to at least one law, which it defines as an unrestricted generalization of the conditional form: "If A, then B". Laws therefore differ from mere regularities ("George always carries only $1 bills in his wallet") which do not necessarily support counterfactual claims. Furthermore, laws must be empirically verifiable in compliance with the verification principle. The DN model ignores causal mechanisms beyond the principle of constant conjunction ("first event A and then always event B") in accordance with the Humean empiricist postulate that, though sequences of events are observable, the underpinning causal principles are not. Hempel stated that well-formulated natural laws (empirically confirmed regularities) are satisfactory in approximating causal explanation. Hempel later proposed a probabilistic model of scientific explanation: the inductive-statistical (IS) model. Derivation of statistical laws from other statistical laws would further be designated as the deductive-statistical (DS) model. The DN and IS models are collectively referred to as the "covering law model" or "subsumption theory", the latter referring to the movement's stated goals of "theory reduction".
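The deductive-nomological schema just described lends itself to a toy formalization. The sketch below only illustrates the logical form — a law as an unrestricted conditional, an antecedent condition, and an explanandum entailed by universal instantiation — and is not drawn from Hempel's own writings; the predicates and example facts are invented.

```python
# Toy rendering of the DN schema: the explanans (a law "all A are B"
# plus the condition A(x)) deductively entails the explanandum B(x).
def dn_explains(law, conditions, explanandum):
    """law: (antecedent, consequent) predicates; conditions: set of (pred, obj) facts."""
    antecedent, consequent = law
    pred, obj = explanandum
    # Deductive step: the law's consequent matches the explanandum and
    # the law's antecedent holds for the same object among the conditions.
    return pred == consequent and (antecedent, obj) in conditions

law = ("metal", "conducts_electricity")      # "All metals conduct electricity"
facts = {("metal", "this_wire")}             # antecedent condition
print(dn_explains(law, facts, ("conducts_electricity", "this_wire")))   # True
print(dn_explains(law, facts, ("conducts_electricity", "this_rubber"))) # False
```

Hempel's actual adequacy conditions — empirical content, the truth of the explanans, and the requirement that the law be genuinely lawlike rather than an accidental regularity — are of course richer than this single deductive check.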
Unity of science Logical positivists were committed to the vision of a unified science encompassing all scientific fields (including the special sciences, such as biology, anthropology, sociology and economics, and the fundamental science, or fundamental physics) which would be synthesised into a singular epistemic entity. Key to this concept was the doctrine of theory reduction, according to which the covering law model would be used to interconnect the special sciences and, thereupon, to reduce all laws in the special sciences to fundamental physics. The movement envisioned a universal scientific language that could express statements with common meaning intelligible to all scientific fields. Carnap sought to realise this goal through the systematic reduction of the linguistic terms of more specialised fields to those of more fundamental fields. Various methods of reduction were proposed, referring to the use of set theory to manipulate logically primitive concepts (as in Carnap's Logical Structure of the World, 1928) or via analytic and a priori deductive operations (as described in Testability and Meaning, 1936, 1937). A number of publications over a period of thirty years would attempt to elucidate this concept. Criticism In the post-war period, key tenets of logical positivism, including its atomistic philosophy of science, the verifiability criterion and the analytic-synthetic distinction, drew escalated criticism. This criticism became sustained from various directions by the 1950s such that, even among fractious philosophers who disagreed on the general objectives of epistemology, most would concur that the logical positivist program had become untenable. Notable critics included Popper, W. V. O. Quine, Norwood Hanson, Kuhn, Putnam, J. L. Austin, Peter Strawson, Nelson Goodman, and Richard Rorty. Hempel himself became a major critic within the logical positivism movement, criticizing the positivist thesis that empirical knowledge is restricted to basic statements, observation statements or protocol statements. Karl Popper Karl Popper, a graduate of the University of Vienna, was an outspoken critic of the logical positivist movement from its inception. In Logik der Forschung (1934, published in English in 1959 as The Logic of Scientific Discovery) he attacked verificationism directly, proposing that the problem of induction renders it impossible for scientific hypotheses and other universal statements to be verified conclusively. Any attempt to do so, he argued, would commit the fallacy of affirming the consequent, given that verification cannot in itself exclude alternative explanations for a specific phenomenon or instance of observation. He would later affirm that the very concept of the verifiability criterion cannot be empirically verified, thus is meaningless by its own proposition and ultimately self-defeating as a principle. In the same book, Popper presented falsifiability, which he distinguished, not as a criterion of cognitive meaning like verificationism (as commonly misunderstood), but as a criterion to differentiate scientific from non-scientific statements, thereby to demarcate the boundaries of science. Popper observed that, though universal statements cannot be verified, they can be falsified, and that the most productive scientific theories appeared to be those that carried the greatest predictive risks of being falsified by observation.
He would conclude that the scientific method should be a hypothetico-deductive model, wherein scientific hypotheses must be falsifiable (per his criterion), held as provisionally true until proven false by observation, and corroborated by supporting evidence rather than verified or confirmed. In rejecting neo-positivist views of cognitive meaningfulness, Popper considered metaphysics to be rich in meaning and important in the origination of scientific theories, and value systems to be integral to science's quest for truth. At the same time, he disparaged pseudoscience, referring to the confirmation biases that emboldened support for unfalsifiable conjectures (notably those in psychology and psychoanalysis) and ad hoc arguments recruited to defend predictive theories that have been proven conclusively false. Quine Although an empiricist, American logician Willard Van Orman Quine published the 1951 paper "Two Dogmas of Empiricism", which challenged conventional empiricist presumptions. Quine attacked the analytic/synthetic division, upon which the verificationist program hinged in order to entail, by consequence of Hume's fork, both necessity and aprioricity. Quine's ontological relativity explained that every term in any statement has its meaning contingent on a vast network of knowledge and belief, the speaker's conception of the entire world. Quine later proposed naturalized epistemology. Hanson In 1958, Norwood Hanson's Patterns of Discovery undermined the division of observation versus theory, as one can predict, collect, prioritize, and assess data only via some horizon of expectation set by a theory. Thus, any dataset—the direct observations, the scientific facts—is laden with theory. Kuhn With his landmark The Structure of Scientific Revolutions (1962), Thomas Kuhn critically destabilized the verificationist program, which was presumed to call for foundationalism. (But already in the 1930s, Otto Neurath had argued for nonfoundationalism via coherentism by likening science to a boat (Neurath's boat) that scientists must rebuild at sea.) Although Kuhn's thesis itself was attacked even by opponents of neopositivism, in the 1970 postscript to Structure, Kuhn asserted, at least, that there was no algorithm to science—and, on that, even most of Kuhn's critics agreed. Powerful and persuasive, Kuhn's book, unlike the vocabulary and symbols of logic's formal language, was written in natural language open to the layperson. Kuhn's book was first published in a volume of the International Encyclopedia of Unified Science—a project begun by logical positivists, among them Neurath, whose view of science was already nonfoundationalist, as mentioned above—and in some sense it unified science, but by bringing it into the realm of historical and social assessment rather than fitting it to the model of physics. Kuhn's ideas were rapidly adopted by scholars in disciplines well outside natural sciences, and, as logical empiricists were extremely influential in the social sciences, ushered academia into postpositivism or postempiricism. Putnam The "received view" operates on the correspondence rule that states, "The observational terms are taken as referring to specified phenomena or phenomenal properties, and the only interpretation given to the theoretical terms is their explicit definition provided by the correspondence rules".
According to Hilary Putnam, a former student of Reichenbach and of Carnap, the dichotomy of observational terms versus theoretical terms introduced a problem within scientific discussion that was nonexistent until this dichotomy was stated by logical positivists. Putnam's four objections: Something is referred to as "observational" if it is observable directly with our senses. Then an observational term cannot be applied to something unobservable. If this is the case, there are no observational terms. With Carnap's classification, some unobservable terms are not even theoretical and belong to neither observational terms nor theoretical terms. Some theoretical terms refer primarily to observational terms. Reports of observational terms frequently contain theoretical terms. A scientific theory may not contain any theoretical terms (an example of this is Darwin's original theory of evolution). Putnam also alleged that positivism was actually a form of metaphysical idealism because it rejected scientific theory's ability to garner knowledge about nature's unobservable aspects. With his "no miracles" argument, posed in 1974, Putnam asserted scientific realism, the stance that science achieves true—or approximately true—knowledge of the world as it exists independently of humans' sensory experience. In this, Putnam opposed not only positivism but also instrumentalism—whereby scientific theory is but a human tool to predict human observations—which had filled the void left by positivism's decline. Decline and legacy By the late 1960s, logical positivism had become exhausted. In 1976, A. J. Ayer quipped that "the most important" defect of logical positivism "was that nearly all of it was false," though he maintained "it was true in spirit." Although logical positivism tends to be recalled as a pillar of scientism, Carl Hempel was key in establishing the subdiscipline of the philosophy of science, where Thomas Kuhn and Karl Popper brought in the era of postpositivism. John Passmore found logical positivism to be "dead, or as dead as a philosophical movement ever becomes". Logical positivism's fall reopened the debate over the metaphysical merit of scientific theory, whether it can offer knowledge of the world beyond human experience (scientific realism) versus whether it is but a human tool to predict human experience (instrumentalism). Philosophers increasingly critiqued logical positivism, often misrepresenting it without thorough examination. It was generally reduced to oversimplifications and stereotypes, particularly associating it with foundationalism. The movement helped anchor analytic philosophy in the English-speaking world and reintroduce empiricism in Britain. Its influence extended beyond philosophy, particularly in psychology and social sciences. See also The Structure of Science People Notes References Bechtel, William, Philosophy of Science: An Overview for Cognitive Science (Hillsdale NJ: Lawrence Erlbaum Assoc, 1988). Friedman, Michael, Reconsidering Logical Positivism (New York: Cambridge University Press, 1999). Novick, Peter, That Noble Dream: The 'Objectivity Question' and the American Historical Profession (Cambridge UK: Cambridge University Press, 1988). Stahl, William A & Robert A Campbell, Yvonne Petry, Gary Diver, Webs of Reality: Social Perspectives on Science and Religion (Piscataway NJ: Rutgers University Press, 2002). Suppe, Frederick, ed, The Structure of Scientific Theories, 2nd edn (Urbana IL: University of Illinois Press, 1977).
Further reading Achinstein, Peter and Barker, Stephen F. The Legacy of Logical Positivism: Studies in the Philosophy of Science. Baltimore: Johns Hopkins Press, 1969. Ayer, Alfred Jules. Logical Positivism. Glencoe, Ill: Free Press, 1959. Barone, Francesco. Il neopositivismo logico. Roma Bari: Laterza, 1986. Bergmann, Gustav. The Metaphysics of Logical Positivism. New York: Longmans Green, 1954. Cirera, Ramon. Carnap and the Vienna Circle: Empiricism and Logical Syntax. Atlanta, GA: Rodopi, 1994. Edmonds, David & Eidinow, John; Wittgenstein's Poker, Friedman, Michael. Reconsidering Logical Positivism. Cambridge, UK: Cambridge University Press, 1999 Gadol, Eugene T. Rationality and Science: A Memorial Volume for Moritz Schlick in Celebration of the Centennial of his Birth. Wien: Springer, 1982. Geymonat, Ludovico. La nuova filosofia della natura in Germania. Torino, 1934. Giere, Ronald N. and Richardson, Alan W. Origins of Logical Empiricism. Minneapolis: University of Minnesota Press, 1997. Hanfling, Oswald. Logical Positivism. Oxford: B. Blackwell, 1981. Holt, Jim, "Positive Thinking" (review of Karl Sigmund, Exact Thinking in Demented Times: The Vienna Circle and the Epic Quest for the Foundations of Science, Basic Books, 449 pp.), The New York Review of Books, vol. LXIV, no. 20 (21 December 2017), pp. 74–76. Jangam, R. T. Logical Positivism and Politics. Delhi: Sterling Publishers, 1970. Janik, Allan and Toulmin, Stephen. Wittgenstein's Vienna. London: Weidenfeld and Nicolson, 1973. Kraft, Victor. The Vienna Circle: The Origin of Neo-positivism, a Chapter in the History of Recent Philosophy. New York: Greenwood Press, 1953. McGuinness, Brian. Wittgenstein and the Vienna Circle: Conversations Recorded by Friedrich Waismann. Trans. by Joachim Schulte and Brian McGuinness. New York: Barnes & Noble Books, 1979. Milkov, Nikolay (ed.). Die Berliner Gruppe. Texte zum Logischen Empirismus von Walter Dubislav, Kurt Grelling, Carl G. Hempel, Alexander Herzberg, Kurt Lewin, Paul Oppenheim und Hans Reichenbach. Hamburg: Meiner 2015. (German) Mises von, Richard. Positivism: A Study in Human Understanding. Cambridge: Harvard University Press, 1951. Parrini, Paolo. Empirismo logico e convenzionalismo: saggio di storia della filosofia della scienza. Milano: F. Angeli, 1983. Parrini, Paolo; Salmon, Wesley C.; Salmon, Merrilee H. (ed.) Logical Empiricism – Historical and Contemporary Perspectives, Pittsburgh: University of Pittsburgh Press, 2003. Reisch, George. How the Cold War Transformed Philosophy of Science : To the Icy Slopes of Logic. New York: Cambridge University Press, 2005. Rescher, Nicholas. The Heritage of Logical Positivism. Lanham, MD: University Press of America, 1985. Richardson, Alan and Thomas Uebel (eds.) The Cambridge Companion to Logical Positivism. New York: Cambridge University Press, 2007. Salmon, Wesley and Wolters, Gereon (ed.) Logic, Language, and the Structure of Scientific Theories: Proceedings of the Carnap-Reichenbach Centennial, University of Konstanz, 21–24 May 1991, Pittsburgh: University of Pittsburgh Press, 1994. Sarkar, Sahotra (ed.) The Emergence of Logical Empiricism: From 1900 to the Vienna Circle. New York: Garland Publishing, 1996. Sarkar, Sahotra (ed.) Logical Empiricism at its Peak: Schlick, Carnap, and Neurath. New York: Garland Pub., 1996. Sarkar, Sahotra (ed.) Logical Empiricism and the Special Sciences: Reichenbach, Feigl, and Nagel. New York: Garland Pub., 1996. Sarkar, Sahotra (ed.) Decline and Obsolescence of Logical Empiricism: Carnap vs. 
Quine and the Critics. New York: Garland Pub., 1996. Sarkar, Sahotra (ed.) The Legacy of the Vienna Circle: Modern Reappraisals. New York: Garland Pub., 1996. Spohn, Wolfgang (ed.) Erkenntnis Orientated: A Centennial Volume for Rudolf Carnap and Hans Reichenbach, Boston: Kluwer Academic Publishers, 1991. Stadler, Friedrich. The Vienna Circle. Studies in the Origins, Development, and Influence of Logical Empiricism. New York: Springer, 2001. – 2nd Edition: Dordrecht: Springer, 2015. Stadler, Friedrich (ed.). The Vienna Circle and Logical Empiricism. Re-evaluation and Future Perspectives. Dordrecht – Boston – London, Kluwer 2003. External links Articles by logical positivists The Scientific Conception of the World: The Vienna Circle Carnap, Rudolf. 'The Elimination of Metaphysics Through Logical Analysis of Language' Carnap, Rudolf. 'Empiricism, Semantics, and Ontology.' Excerpt from Carnap, Rudolf. Philosophy and Logical Syntax. Feigl, Herbert. 'Positivism in the Twentieth Century (Logical Empiricism)', Dictionary of the History of Ideas, 1974, Gale Group (Electronic Edition) Hempel, Carl. 'Problems and Changes in the Empiricist Criterion of Meaning.' Articles on logical positivism Kemerling, Garth. 'Logical Positivism', Philosophy Pages Murzi, Mauro. 'Logical Positivism', The New Encyclopedia of Unbelief, Tom Flynn (ed.). Prometheus Books, 2007 (PDF version) Murzi, Mauro. 'The Philosophy of Logical Positivism.' Articles on related philosophical topics Hájek, Alan. 'Interpretations of Probability', The Stanford Encyclopedia of Philosophy (Summer 2003 Edition), Edward N. Zalta (ed.) Rey, Georges. 'The Analytic/Synthetic Distinction', The Stanford Encyclopedia of Philosophy (Fall 2003 Edition), Edward N. Zalta (ed.) Ryckman, Thomas A., 'Early Philosophical Interpretations of General Relativity', The Stanford Encyclopedia of Philosophy (Winter 2001 Edition), Edward N. Zalta (ed.) Woleński, Jan. 'Lvov-Warsaw School', The Stanford Encyclopedia of Philosophy (Summer 2003 Edition), Edward N. Zalta (ed.) Woodward, James. 'Scientific Explanation', The Stanford Encyclopedia of Philosophy (Summer 2003 Edition), Edward N. Zalta (ed.) Analytic philosophy Empiricism Epistemological theories Epistemology of science History of science Linguistic turn Meaning in religious language Philosophical schools and traditions Philosophy of science Positivism Theories of language
Logical positivism
Mathematics,Technology
7,196
49,875,297
https://en.wikipedia.org/wiki/Holin%20superfamily%20V
The Holin superfamily V is a superfamily of integral membrane transport proteins, one of seven holin superfamilies in total. In general, these proteins are thought to play a role in regulated cell death, although functionality varies between families and individual members. The Holin superfamily V includes the TC families: 1.E.21 - The Listeria Phage A118 Holin (Hol118) Family 1.E.29 - The Holin Hol44 (Hol44) Family Superfamily V includes protein families classified in the Transporter Classification Database as TC# 1.E.21 and TC# 1.E.29. Both families possess members from Bacillota, Actinomycetota and Chloroflexota. Proteins of this superfamily all appear to have 3 transmembrane segments (TMSs), and the two families have average protein sizes of 97 and 101 amino acyl residues (aas), respectively. See also Holin Lysin Transporter Classification Database References Holins Protein superfamilies
Holin superfamily V
Biology
217
19,348,817
https://en.wikipedia.org/wiki/Controlled%20Demolition%2C%20Inc.
Controlled Demolition, Inc. (CDI) is a controlled demolition firm headquartered in Phoenix, Maryland. The firm was founded by Jack Loizeaux who used dynamite to remove tree stumps in the Baltimore, Maryland area, and moved on to using explosives to take down chimneys, overpasses and small buildings in the 1940s. The company has demolished several notable buildings by implosion, including the Gettysburg National Tower, the Seattle Kingdome, and the uncollapsed portion of the Champlain Towers South condominium. Records The firm has claimed world records for a series of 1998 projects: The June 23 demolition of the 1,201-foot-high Omega Radio Tower in Trelew, Argentina, "the tallest manmade structure ever felled with explosives"; The August 16 implosion of the 17-building Villa Panamericana and Las Orquideas public housing complex in San Juan, Puerto Rico, "the most buildings shot in a single implosion sequence"; and the October 24 project at the J. L. Hudson Department Store in Detroit, Michigan, which at in height became "the tallest building & the tallest structural steel building ever imploded" and its making it "the largest single building ever imploded". Selected projects Old Sunshine Skyway Bridge In 1990, the FDOT awarded a bid to Hardaway Company (owner of Controlled Demolition, Inc.) to demolish all steel and concrete sections of the old Sunshine Skyway spans. The scope of the project required that all underwater piles and piers, and surface roadway, girders, and beams, be dismantled. Special care had to be taken in removing underwater bridge elements near the channel, and the central portion of the original bridge had to be removed in one piece to minimize closure of the only approach to the busy Port of Tampa. Most of the concrete material was used to create an artificial reef near the southbound approach of the old bridge, which was converted into a long pier for newly created Skyway Fishing Pier State Park. Unused approaches to the original spans were demolished in 2008. Alfred P. Murrah Building, Oklahoma City On May 23, 1995, the firm was responsible for the demolition of the Alfred P. Murrah Federal Building after its bombing on April 19, 1995. The Seattle Kingdome On March 26, 2000, the firm used 4,450 pounds of dynamite placed in 5,905 carefully sited holes and of detonation cord inserted over a period of four months to take down the 25,000-ton concrete roof of the Kingdome in Seattle, Washington in 16.8 seconds, one day before the 24th birthday of the stadium that had been the home of the Seattle Mariners of Major League Baseball and the Seattle Seahawks of the National Football League. The total cost for the demolition project was $9 million. The firm planned the collapse of the roof to prevent its simultaneous free fall, creating a delay pattern that would break the roof into pieces and setting up 15-foot-high earth berms on the floor of the stadium to absorb the impact of the falling concrete. The demolition of the Kingdome established the record for the largest structure, by volume, ever demolished with explosives. The implosion of the 125,000-ton concrete structure did not cause a single crack in the foundation of the new stadium being built away. Gettysburg National Tower CDI demolished the Gettysburg National Tower on July 3, 2000, which was the 137th anniversary of the final day of the Battle of Gettysburg. The demolition was done for free for the National Park Service. The tower was felled by of explosives in front of a crowd of 10,000. 
World Trade Center Site On September 22, 2001, eleven days after the 9/11 attacks, a preliminary cleanup plan for the World Trade Center site was delivered by Controlled Demolition, Inc., in which Mark Loizeaux, president of CDI, emphasized the importance of protecting the slurry wall (known as "the bathtub") which kept the Hudson River from flooding the WTC's basement. Cape Canaveral Air Force Station Space Launch Complex 40 The tower was disassembled during late 2007 and early 2008. Demolition of the Mobile Service Structure (MSS), by means of a controlled explosion, occurred on April 27, 2008. National Geographic Channel: Man Made: Rocket Tower has a full episode on the demolition. Martin Tower Martin Tower, the 21-story world headquarters building of the defunct Bethlehem Steel and the tallest building in Bethlehem, Pennsylvania, was imploded by Controlled Demolition on May 19, 2019, at a reported cost of $575,000. Champlain Towers South The company was contracted to demolish the remaining portion of the 12-story condominium building near Miami Beach, Florida, after it partially collapsed on June 24, 2021; the work was expedited due to the potential threat of Hurricane Elsa. The demolition occurred on July 4, 2021, after only a day of preparation, including placement of explosives; city officials had feared that the demolition could take weeks. As the still-standing structure was unstable, it was considered unsafe to enter, and CDI had originally estimated that the demolition could not occur until the following day, since the work had to be done carefully and slowly to avoid a premature collapse. This risk of collapse and its danger to rescuers warranted the controlled demolition, which was directed away from the original collapse footprint. Other projects Manchester Bridge Pruitt–Igoe Traymore Hotel Woodmen of the World Building Marlborough-Blenheim Hotel Hotel Manger Corbett Building Hotel Charlotte Dunes Hotel and Casino Commonwealth Building Landmark Hotel and Casino Sands Hotel and Casino Hacienda (resort) Farmers Bank Building Omni Coliseum Aladdin Hotel and Casino Omega Tower Trelew J. L. Hudson Department Store and Addition Lake Michigan High-Rises St. Louis Arena Mapes Hotel El Rancho Hotel and Casino Three Rivers Stadium Naval Hospital Philadelphia Market Square Arena Capital Centre Everglades Hotel Baptist Memorial Hospital Cooling tower at the Trojan Nuclear Power Plant Stardust Resort and Casino Cooling towers at the Calder Hall nuclear power station Sands Atlantic City RCA Dome Ocean Tower Houston Main Building Fort Steuben Bridge Plaza Hotel Grand Palace Hotel Innerbelt Bridge Queen Lane Apartments Riviera Hotel and Casino Capital Plaza Office Tower 505 North Ervay The Palace of Auburn Hills Trump Plaza Hotel and Casino Ferrybridge Power Station sub-contracted by Keltbray Decommissioning 420 Main Hotel Deauville Francis Scott Key Bridge Capital One Tower Tropicana Las Vegas Chimney at Alma Station References External links Controlled Demolition, Inc. Interview with Stacey Loizeaux by Nova (TV series) Loizeaux Group, LLC YouTube channel Companies based in Baltimore County, Maryland Demolition American companies established in 1947 1947 establishments in Maryland
Controlled Demolition, Inc.
Engineering
1,366
18,739,787
https://en.wikipedia.org/wiki/Nassif%20Ghoussoub
Nassif A. Ghoussoub is a Canadian mathematician working in the fields of non-linear analysis and partial differential equations. He is a Professor of Mathematics and a Distinguished University Scholar at the University of British Columbia. Early life and education Ghoussoub was born to Lebanese parents in western Africa, in present-day Mali. He completed his doctorat 3ème cycle (PhD) in 1975, and a Doctorat d'Etat in 1979, at the Pierre and Marie Curie University, where his advisors were Gustave Choquet and Antoine Brunel. Career Ghoussoub completed his post-doctoral fellowship at the Ohio State University during 1976–77. He then joined the University of British Columbia, where he currently holds a position as Professor of Mathematics and Distinguished University Scholar. Ghoussoub is known for his work in functional analysis, non-linear analysis, and partial differential equations. He was vice-president of the Canadian Mathematical Society from 1994 to 1996, the founding director of the Pacific Institute for the Mathematical Sciences (PIMS) for the period 1996–2003, the co-editor-in-chief of the Canadian Journal of Mathematics during 1993–2002, a co-founder of the MITACS Network of Centres of Excellence, and the founder and scientific director (2001–2020) of the Banff International Research Station (BIRS). In 1994, Ghoussoub became a fellow of the Royal Society of Canada, and in 2012, a fellow of the American Mathematical Society. Ghoussoub has been awarded multiple awards and distinctions, including the Coxeter-James Prize in 1990 and the Jeffery-Williams Prize in 2007. He holds honorary doctorates from the Université Paris-Dauphine (France) and the University of Victoria (Canada). He was awarded the Queen Elizabeth II Diamond Jubilee Medal in 2012, and appointed to the Order of Canada in 2015, with the grade of officer, for contributions to mathematics, research, and education. In 2018, Ghoussoub was elected a faculty representative on the University of British Columbia's Board of Governors, serving until February 29, 2020. Ghoussoub previously served two consecutive terms in this role, from 2008 to 2014. Ghoussoub's scholarly work has been cited over 5,900 times, and he has an h-index of 40. 
Awards Coxeter-James Prize, Canadian Mathematical Society (1990) Killam Senior Research Fellowship, UBC (1992) Fellow of the Royal Society of Canada (1994) Distinguished University Scholar, UBC (2003) Doctorat Honoris Causa, Paris Dauphine University Jeffery–Williams Prize, Canadian Mathematical Society (2007) Faculty of Science Achievement Award for outstanding service and leadership, UBC (2007) David Borwein Distinguished Career Award, Canadian Mathematical Society (2010) Fellow of the American Mathematical Society (2012) Queen Elizabeth II Diamond Jubilee Medal (2012) Honorary Doctor of Science, University of Victoria (June 2015) Officer of the Order of Canada (December 2015) Inaugural fellow of the Canadian Mathematical Society (2018) Bibliography Selected Academic Publications Books See also Banff International Research Station References External links Nassif Ghoussoub's homepage Piece of Mind, Nassif's personal blog A biography Living people 20th-century Canadian mathematicians 21st-century Canadian mathematicians Canadian people of Lebanese descent Mathematical analysts Academic staff of the University of British Columbia Pierre and Marie Curie University alumni Fellows of the Royal Society of Canada Fellows of the American Mathematical Society Fellows of the Canadian Mathematical Society Functional analysts Partial differential equation theorists Officers of the Order of Canada 1953 births
Nassif Ghoussoub
Mathematics
738
36,476,741
https://en.wikipedia.org/wiki/San%20Justo%20Dam
San Justo Dam is a dam and reservoir in San Benito County, California, about southwest of Hollister. The dam provides offstream water storage for the federal Central Valley Project via the Pacheco Conduit and Hollister Conduit, fed by the San Luis Reservoir. Completed in January 1986, the dam is an earthfill structure high. Along with a companion dike structure high, it forms a reservoir with a capacity of about . San Justo Reservoir was closed in 2008, following a zebra mussel infestation, and remains closed until further notice. See also List of lakes in California References External links Dams in California United States Bureau of Reclamation dams Reservoirs in San Benito County, California Dams completed in 1986 Central Valley Project Reservoirs in California Reservoirs in Northern California
San Justo Dam
Engineering
157
14,054,023
https://en.wikipedia.org/wiki/Honeycomb%20sea%20wall
A honeycomb sea wall (also known as a "Seabee") is a coastal defense structure that protects against strong waves and tides. It is constructed as a sloped wall of ceramic or concrete blocks with hexagonal holes on the slope, which makes it look like a honeycomb, hence the name of the unit. Its role is to capture sand and to discharge wave energy. Ceramic honeycomb sea wall units usually have 6 or 7 holes and are safer to walk on. These are placed as a revetment over gravel or rock. During strong storms, surging sea water loses energy as it travels down the holes and through the underlayer. The water returns to the sea by upward flow through holes at levels below the transient phreatic surface in the underlayer, causing the downslope disturbing drag force to be reduced. Water that does not go through the holes is redirected by the concrete wall back into the path of oncoming waves, creating more turbulence. Cost comparisons between various seawalls are always site specific, but Seabees use approximately 22% of the mass of rock required for the same exposure. As the plan area of the unit is essentially independent of its height (aspect ratios in use vary from 0.4 to 2.5), the mass of the unit can be optimised for all stages of the production and construction process. Surface roughness may also be controlled by using combinations of units of different heights. Allowance for wear is easily made (e.g. Shoreham, 1989–90, and various Lincolnshire seawalls). Reductions of almost 50% in runup have been achieved, both in the laboratory and at chosen sites. See also Beach Coastal management, for creation and maintenance of beach Coastal erosion Longshore drift Coastal geography Strand plain Sand dune stabilization References External links Close up picture of a seabee wall Coastal engineering Seawalls
Honeycomb sea wall
Engineering
378
9,054,395
https://en.wikipedia.org/wiki/Wombling
In statistics, wombling is any of a number of techniques used for identifying zones of rapid change, typically in some quantity as it varies across a geographical or Euclidean space. It is named for statistician William H. Womble. The technique may be applied to gene frequency in a population of organisms, and to the evolution of language. References Womble, William H. (1951). "Differential Systematics". Science, vol. 114, no. 2961, pp. 315–322. Fitzpatrick, M.C., Preisser, E.L., Porter, A., Elkinton, J., Waller, L.A., Carlin, B.P. and Ellison, A.E. (2010). "Ecological boundary detection using Bayesian areal wombling", Ecology, 91:3448–3455. Liang, S., Banerjee, S. and Carlin, B.P. (2009). "Bayesian Wombling for Spatial Point Processes", Biometrics, 65 (11), 1243–1253. Ma, H. and Carlin, B.P. (2007). "Bayesian Multivariate Areal Wombling for Multiple Disease Boundary Analysis", Bayesian Analysis, 2 (2), 281–302. Banerjee, S. and Gelfand, A.E. (2006). "Bayesian Wombling: Curvilinear Gradient Assessment Under Spatial Process Models", Journal of the American Statistical Association, 101 (476), 1487–1501. Quick, H., Banerjee, S. and Carlin, B.P. (2015). "Bayesian Modeling and Analysis for Gradients in Spatiotemporal Processes", Biometrics, 71, 575–584. Quick, H., Banerjee, S. and Carlin, B.P. (2013). "Modeling temporal gradients in regionally aggregated California asthma hospitalization data", Annals of Applied Statistics, 7 (1), 154–176. Halder, A., Banerjee, S. and Dey, D.K. (2023). "Bayesian modeling with spatial curvature processes", Journal of the American Statistical Association, 1–13. Gao, L., Banerjee, S. and Ritz, B. (2023). "Spatial Difference Boundary Detection for Multiple Outcomes Using Bayesian Disease Mapping", Biostatistics, 922–944. Available software: Git Change detection Spatial analysis
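Editor's note: as an illustration of the gridded ("areal") flavor of wombling described above, the following minimal Python sketch estimates the gradient magnitude of a surface and flags the steepest cells as candidate boundary elements. It is a toy under stated assumptions (NumPy available, data on a regular grid, a simple top-fraction threshold); the function and parameter names are invented for this example and do not come from Womble's paper or the Bayesian wombling literature cited above.

```python
# Minimal lattice-wombling sketch: flag cells where the surface changes fastest.
import numpy as np

def womble_boundaries(surface: np.ndarray, top_fraction: float = 0.05) -> np.ndarray:
    """Return a boolean mask of candidate boundary cells.

    surface      -- 2-D array of the quantity of interest (e.g. gene frequency)
    top_fraction -- fraction of cells with the largest gradient magnitude to
                    flag as boundary elements (an arbitrary tuning choice)
    """
    gy, gx = np.gradient(surface)        # finite-difference partial derivatives
    magnitude = np.hypot(gx, gy)         # local rate of change at each cell
    cutoff = np.quantile(magnitude, 1.0 - top_fraction)
    # Guard against a flat surface, where the quantile cutoff would be zero.
    cutoff = max(cutoff, np.finfo(float).tiny)
    return magnitude >= cutoff

# Example: a surface with an abrupt step should womble along the step.
z = np.zeros((50, 50))
z[:, 25:] = 1.0                          # sharp change between columns 24 and 25
mask = womble_boundaries(z)
print(np.unique(np.where(mask)[1]))      # -> [24 25], the zone of rapid change
```

Statistical wombling methods, such as the Bayesian approaches in the references, replace this ad hoc thresholding with formal inference on the gradient field, but the underlying idea of locating zones of steep change is the same.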
Wombling
Physics
519
33,495,646
https://en.wikipedia.org/wiki/Risk%E2%80%93return%20ratio
The risk-return ratio is a measure of return in terms of risk for a specific time period. The percentage return (R) for the time period is measured in a straightforward way: R = (P_end − P_start) / P_start, where P_start and P_end simply refer to the price at the start and end of the time period. The risk is measured as the percentage maximum drawdown (MDD) for the specific period: MDD = max_t DD_t, where the running drawdown can be computed recursively as DD_t = max(DD_{t−1}, (M_t − P_t) / M_t) with running peak price M_t = max(M_{t−1}, P_t); here DD_t, DD_{t−1}, P_t and P_{t−1} refer to the drawdown (DD) and prices (P) at a specific point in time, t, or the time right before that, t−1. The risk-return ratio is then defined and measured, for a specific time period, as: RRR = R / MDD. Note that dividing a percentage numerator by a percentage denominator renders a single number. This RRR number is a measure of the return in terms of risk. It is fully comparable, i.e. it is possible to compare the RRR for one share with the RRR of another share, as long as it is for the same time period. The RRR as defined here is formally the same as the so-called MER ratio, and shares some similarities with the Calmar ratio, the Sterling ratio and the Burke ratio. However, the RRR can arguably be regarded as more general than the MER ratio, since it can be used for any time interval, even daily or intra-day prices, while the MER ratio seems to be confined to measuring only the risk and return of a fund from inception until the current date. It is also less ad hoc than the Calmar, the Sterling and the Burke ratios. The RRR was first defined and popularized by Dr. Richard CB Johnsson in his investment newsletter ('A Simple Risk-Return-Ratio', July 25, 2010). References Johnsson, Richard CB, A Simple Risk-Return-Ratio Financial ratios Investment indicators
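Editor's note: the definitions above translate directly into a few lines of code. The following Python sketch (an illustration only; the price series and function name are invented for the example, not taken from Johnsson's newsletter) computes R, MDD and the RRR for a series of prices over one period.

```python
# Risk-return ratio (RRR) per the definitions above:
# R   = (P_end - P_start) / P_start          (percentage return)
# MDD = max over t of (M_t - P_t) / M_t      (percentage maximum drawdown)
# RRR = R / MDD
def risk_return_ratio(prices):
    ret = (prices[-1] - prices[0]) / prices[0]        # R for the whole period
    peak, mdd = prices[0], 0.0
    for p in prices[1:]:
        peak = max(peak, p)                           # running peak M_t
        mdd = max(mdd, (peak - p) / peak)             # running drawdown DD_t
    return ret / mdd if mdd > 0 else float("inf")     # undefined with no drawdown

prices = [100, 112, 105, 120, 114, 130]               # illustrative prices only
print(risk_return_ratio(prices))                      # 0.30 / 0.0625 = 4.8
```

Because both numerator and denominator are percentages, the result is the same whatever currency or price scale the series uses, which is what makes the RRR comparable across shares for the same time period.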
Risk–return ratio
Mathematics
382
36,721,453
https://en.wikipedia.org/wiki/Critical%20Path%20Project
The Critical Path Project (stylized CRITICAL///PATH) is a video archive of interviews with video game designers and developers. Launched on July 23, 2012, Critical Path contains over 1,000 videos of interviews with over 100 developers, conducted between 2010 and the present. According to director David Grabias, the project's goals include: To provide a documentary-based venue for critical discussion about the art of making video games. To provide developers with a place where they can come for inspiration. To provide players with insight into their game experience. To make gamers aware of the great minds behind the great games. To document the current state of game development for future generations. Topics covered in the interviews include violence in games, methods of storytelling, game mechanics, player interaction, the psychology behind playing games, commercialism in the industry, and the future of video games, among others. All clips on the site are available for free viewing online, and there are plans to release a full-length documentary in the future. Interview Subjects The site currently features video clips with interviews from the following notable developers, among others: Ernest Adams Brian Allgeier Stig Asmussen Chris Avellone Daniel Benmergui Cliff Bleszinski Ian Bogost Nolan Bushnell David Cage John Carmack Jenova Chen and Kellee Santiago Brendon Chung Michael Condrey N'Gai Croal Don Daglow Patrice Désilets Denis Dyack Noah Falstein Josef Fares Tracy Fullerton Toby Gard Richard Garriott Steve Gaynor Ron Gilbert Auriea Harvey Trip Hawkins Chris Hecker David Helgason Richard Hilleman Clint Hocking Todd Howard Rod Humble Robin Hunicke Keiji Inafune Toru Iwatani Marcin Iwinski Daniel James David Jones Hideo Kojima Raph Koster Frank Lantz Ken Levine Laralyn McWilliams Jordan Mechner Sid Meier Peter Molyneux Ray Muzyka and Greg Zeschuk Frank O'Connor Yoshinori Ono Rob Pardo Randy Pitchford Rhianna Pratchett Zoë Quinn Amir Rao Siobhan Reddy Warren Robinett Jason Rohrer Tim Schafer Jesse Schell Glen Schofield Harvey Smith Warren Spector Joseph Staten Davey Wreden Will Wright Brianna Wu Vince Zampella Eric Zimmerman Discussion A few quotes from the site have raised discussions among critics and fans alike. For example, Metal Gear Solid director Hideo Kojima (famous for his cinematic narrative scenes) mentions that he is "not trying to tell a story." Sid Meier says he has "failed as a designer" when players use cheats, causing a stir among gamers when they discovered an all-powerful Sid Meier character in the latest Firaxis release, XCOM: Enemy Unknown. The archive presents a variety of differing opinions from developers. For example, Cliff Bleszinski speaks about creating empowerment fantasies for players, while Warren Spector condemns the practice. Sid Meier says that "micromanagement is not fun", while other developers, like Fable's Peter Molyneux, Ultima's Richard Garriott, and others, attempt to create games that give the player as much freedom and decision-making as possible. References External links History of video games
Critical Path Project
Technology
663
38,318,979
https://en.wikipedia.org/wiki/Social%20media%20intelligence
Social media intelligence (SMI or SOCMINT) comprises the collective tools and solutions that allow organizations to analyze conversations, respond to social signals, and synthesize social data points into meaningful trends and analyses, based on the user's needs. Social media intelligence allows one to utilize intelligence gathered from social media sites, using both intrusive and non-intrusive means, from open and closed social networks. This type of intelligence gathering is one element of OSINT (open-source intelligence). The term was coined in a 2012 paper written by Sir David Omand, Jamie Bartlett and Carl Miller for the Centre for the Analysis of Social Media, at the London-based think tank Demos. The authors argued that social media is now an important part of intelligence and security work, but that technological, analytical, and regulatory changes are needed before it can be considered a powerful new form of intelligence, including amendments to the United Kingdom Regulation of Investigatory Powers Act 2000. Given the dynamic evolution of social media and social media monitoring, the current understanding of how social media monitoring can help organizations create business value is inadequate. As a result, there is a need to study how organizations can (a) extract and analyze social media data related to their business (sensing), and (b) utilize external intelligence gained from social media monitoring for specific business initiatives (seizing). Governmental Use In Thailand, the Technology Crime Suppression Division not only employs a 30-person team to scrutinize social media for content deemed disrespectful to the monarchy (lèse-majesté), but also encourages citizens to report such content. Particularly targeting the youth, it runs a "Cyber Scout" program in which participants are rewarded for reporting individuals posting material perceived as detrimental to the monarchy. Instances in Israel involve the arrest of Palestinians by the police for their social media posts. An example includes a 15-year-old girl who posted a Facebook status with the words "forgive me," raising suspicions among Israeli authorities that she might be planning an attack. In Egypt, a leaked 2014 call for tender from the Ministry of Interior reveals efforts to procure a social media monitoring system to identify leading figures and prevent protests before they occur. In the United States, ZeroFOX faced criticism for sharing a report with Baltimore officials showcasing how their social media monitoring tool could track riots following Freddie Gray's funeral. The report labeled 19 individuals, including two prominent figures from the #BlackLivesMatter movement, as "threat actors." In the UK, the Association of Chief Police Officers of England, Wales, and Northern Ireland emphasized the significance of social media in intelligence gathering during anti-fracking protests in 2011. Social media analysis was used to closely monitor protests against the badger cull in 2013, with a 2013 report revealing a team of 17 officers in the National Domestic Extremism Unit scanning public tweets, YouTube videos, Facebook profiles, and other online content from UK citizens. Effects on Political Opinion After the 2016 United States presidential election, the Senate Intelligence Committee released reports containing information about Russia's use of troll farms to mislead black voters about voting. 
Also, German researchers in 2010 analyzed Twitter messages regarding the German federal election, concluding that Twitter played a role in leading users to a specific political opinion. In a broad sense, social media refers to a conversational, distributed mode of content generation, dissemination, and communication among communities. Different from broadcast-based traditional and industrial media, social media has torn down the boundaries between authorship and readership, while the information consumption and dissemination process is becoming intrinsically intertwined with the process of generating and sharing information. An example of how SOCMINT is used to affect political opinions is the Cambridge Analytica scandal. Cambridge Analytica was a company that acquired data about Facebook users without their consent or knowledge. It used this data to build a "psychological warfare tool" to persuade US voters to elect Donald Trump as president in the 2016 election. Christopher Wylie, the whistleblower, reported that personal information was taken in early 2014 and used to build a system that could target US voters with personalized political advertisements. More than 50 million individuals' data was exploited and manipulated. Law Enforcement In September 2023, the Philadelphia Police Department began using social media to track criminal activity and stay one step ahead of it, in order to stop meetups and potential robberies. This new approach has given officers another tool in the field, enabling them to find new information as quickly as possible. Law enforcement agencies worldwide are increasingly employing social media intelligence to enhance their capabilities in both crime prevention and investigation. By analyzing publicly available data from social platforms such as Facebook, Twitter, and Instagram, police can track criminal activities, identify suspects, and even prevent potential crimes before they occur. For instance, the FBI utilizes SOCMINT to monitor threats and investigate criminal activities, including analyzing posts, images, and videos that might signal illegal activities or security concerns. Marketing SOCMINT collects data from both organizations and people on an individual level. It has a variety of purposes; though its main goal is to improve national security, it offers several other benefits as well. This intelligence can identify patterns, predict trends, gather information in real time, and so on. These capabilities have enabled both improvement within businesses and help for law enforcement. Artificial Social Networking Intelligence (ASNI) refers to the application of artificial intelligence within social networking services and social media platforms. It encompasses various technologies and techniques used to automate, personalize, enhance, improve, and synchronize users' interactions and experiences within social networks. ASNI is expected to evolve rapidly, influencing how people interact online and shaping their digital experiences. Transparency, ethical considerations, media influence bias, and user control over data will be crucial to ensure responsible development and positive impact. Google provides many free services and has built an entire media brand with its vast variety of products. Along with data collection, Google also owns two advertising services, Google Ads and Google AdSense. Most of its revenue comes from advertising, not direct sales of its services or products. 
Google earns that revenue by selling advertising services to advertisers: it provides ad space on websites through Google, and targets ads to consumers of Google services and products. Google can thus market ads by using SOCMINT to collect data from its users and generate revenue. Research shows that various social media platforms on the Internet, such as Twitter and Tumblr (micro-blogging websites), Facebook (a popular social networking website), YouTube (the largest video sharing and hosting website), blogs, and discussion forums, are being misused by extremist groups for spreading their beliefs and ideologies, promoting radicalization, recruiting members, and creating online virtual communities sharing a common agenda. Popular microblogging websites such as Twitter are being used as a real-time platform for information sharing and communication during the planning and mobilization of civil unrest-related events. See also Algorithmic curation Ambient awareness Collective influence algorithm Information retrieval Media intelligence Online algorithm Open-source intelligence Sentiment analysis Social bot Social cloud computing Social data revolution Social media analytics Social media mining Social media optimization Social profiling Social software Virtual collective consciousness References External links Social media Social media management Social information processing Collective intelligence Surveillance Intelligence gathering disciplines Mass media monitoring Open-source intelligence
Social media intelligence
Technology
1,484
26,453,896
https://en.wikipedia.org/wiki/C6H7NO2
{{DISPLAYTITLE:C6H7NO2}} The molecular formula C6H7NO2 (molar mass: 125.125 g/mol, exact mass: 125.0477 u) may refer to: Ethyl cyanoacrylate (ECA) 3-Hydroxyisonicotinaldehyde (HINA) N-Ethylmaleimide (NEM)
C6H7NO2
Chemistry
88
4,421,042
https://en.wikipedia.org/wiki/Histone%20H2A
Histone H2A is one of the five main histone proteins involved in the structure of chromatin in eukaryotic cells. The other histone proteins are: H1, H2B, H3 and H4. Background Histones are proteins that package DNA into nucleosomes. Histones are responsible for maintaining the shape and structure of a nucleosome. One chromatin molecule is composed of at least one of each core histone per 100 base pairs of DNA. There are five families of histones known to date; these histones are termed H1/H5, H2A, H2B, H3, and H4. H2A is considered a core histone, along with H2B, H3 and H4. Core formation first occurs through the interaction of two H2A molecules. Then, H2A forms a dimer with H2B; the core molecule is complete when H3-H4 also attaches to form a tetramer. Sequence variants Histone H2A is composed of non-allelic variants. The term "Histone H2A" is intentionally non-specific and refers to a variety of closely related proteins that often vary by only a few amino acids. Apart from the canonical form, notable variants include H2A.1, H2A.2, H2A.X, and H2A.Z. H2A variants can be explored using the "HistoneDB with Variants" database. Changes in variant composition occur in differentiating cells. This was observed in differentiating neurons during synthesis and turnover; changes in variant composition were seen in the H2A.1 histone. The only variant that remained constant during neural differentiation was variant H2A.Z. H2A.Z is a variant that exchanges with the conventional H2A core protein; this variant is important for gene silencing. Physically, there are small changes on the surface area of the nucleosome that make the variant differ from H2A. Recent research suggests that H2A.Z is incorporated into the nucleosome by Swr1, a Swi2/Snf2-related adenosine triphosphatase. Another H2A variant that has been identified is H2A.X. This variant has a C-terminal extension that is utilized for DNA repair. The method of repair this variant employs is non-homologous end joining. Direct DNA damage can induce changes to the sequence variants. Experiments performed with ionizing radiation linked γ-phosphorylation of H2A.X to double-strand breaks. A large amount of chromatin is involved with each DNA double-strand break; a response to DNA damage is the formation of γ-H2A.X. Lastly, macroH2A is a variant that is similar to H2A; it is encoded by the H2AFY gene. This variant differs from H2A by the addition of a fold domain in its C-terminal tail. MacroH2A is expressed on the inactive X chromosome in females. Structure H2A consists of a main globular domain, an N-terminal tail and a C-terminal tail. Both tails are sites of post-translational modification. Thus far, researchers have not identified any secondary structures that arise in the tails. H2A utilizes a protein fold known as the 'histone fold'. The histone fold is a three-helix core domain that is connected by two loops. This connection forms a 'handshake arrangement', often termed the helix-turn-helix motif, which allows for dimerization with H2B. The 'histone fold' is conserved among H2A variants at the structural level; however, the genetic sequence that encodes for this structure differs between variants. The structure of the macroH2A variant was determined through X-ray crystallography. The conserved domain contains a DNA-binding structure and a peptidase fold. The function of this conserved domain remains unknown. 
Research suggests that this conserved domain may function as an anchor site for Xist RNA, or it may function as a modifying enzyme. Function DNA Folding: H2A is important for packaging DNA into chromatin. Since H2A packages DNA molecules into chromatin, the packaging process will affect gene expression. H2A has been correlated with DNA modification and epigenetics. H2A plays a major role in determining the overall structure of chromatin, and consequently has been found to regulate gene expression. DNA modification by H2A occurs in the cell nucleus. Proteins responsible for nuclear import of the H2A protein are karyopherin and importin. Recent studies also show that nucleosome assembly protein 1 is used to transport H2A into the nucleus so it can wrap DNA. Other functions of H2A have been seen in the histone variant H2A.Z. This variant is associated with gene activation, silencing and suppression of antisense RNA. In addition, when H2A.Z was studied in human and yeast cells, it was found to promote RNA polymerase II recruitment. Antimicrobial peptide: Histones are conserved eukaryotic cationic proteins present in cells and are involved in antimicrobial activities. In vertebrates and invertebrates, histone H2A variants are reported to be involved in the host immune response by acting as antimicrobial peptides (AMPs). H2A is an amphipathic, α-helical protein with hydrophobic and hydrophilic residues on opposing sides, a property that enhances its antimicrobial activity. DNA damage response Site-specific ubiquitination of histone H2A has a role in the recruitment of DNA repair proteins to DNA double-strand breaks, which may then be repaired by either homologous recombination or non-homologous end joining. In the DNA damage response, it is thought that ubiquitination of H2A by the BRCA1/BARD1 heterodimer promotes homologous recombination, and that ubiquitination of H2A by the RNF168 protein promotes non-homologous end joining. Genetics H2A is coded by many genes in the human genome, including: H2AFB1, H2AFB2, H2AFB3, H2AFJ, H2AFV, H2AFX, H2AFY, H2AFY2, and H2AFZ. Genetic patterns are mostly conserved among the different H2A variants; the variability lies in the regulatory machinery that manages H2A expression. Researchers studied eukaryotic evolutionary lineages of histone proteins and found diversification among the regulatory genes. The greatest differences were observed in core histone gene cis-regulatory sequence motifs and associated protein factors. Variability in gene sequence was seen in bacterial, fungal, plant, and mammalian genes. One variant of the H2A protein is the H2ABbd (Barr body-deficient) variant. This variant has a different genetic sequence compared to H2A, and functions with transcriptionally active domains. Other variations associated with H2ABbd are located within its C-terminus. H2ABbd has a shorter C-terminal domain compared to the longer C-terminus found on H2A. The two C-termini are about 48% identical. H2ABbd functions with active chromosomes. Thus far, it is missing from Xi chromosomes in fibroblast cells. Lastly, it is found to be associated with acetylated H4. The functions of H2A.Z that differ from those of H2A are correlated with genetic differences between H2A and the variant. Resistance to nucleosomes occurs in H2A.Z by binding to the H1 factor. The H2A.Z gene is an essential gene in yeast, where it is denoted Htz1. Comparatively, vertebrates have two H2A.Z genes. 
These genes, H2A.Z1 and H2A.Z2, encode proteins that differ from each other by three residues. At first researchers assumed that these genes were redundant; however, when a mutant H2A.Z1 was created, it resulted in lethality in mammalian tests. Therefore, H2A.Z1 is an essential gene. On the other hand, researchers have not identified the function of the H2A.Z2 variant. It is known that it is transcribed in mammals and that this gene expression is conserved among mammalian species. This conservation suggests that the gene is functional. In plant species, the H2A.Z protein differs in its residues from species to species. These differences contribute to differences in cell-cycle regulation, a phenomenon observed only in plants. Phylogenetic trees were created to show the divergence of variants from their ancestors. The divergence of the variant H2A.X from H2A occurred at multiple origins in the phylogenetic tree. Acquisition of the phosphorylation motif was consistent with the many origins of H2A that arose from an ancestral H2A.X. Finally, the presence of H2A.X and absence of H2A in fungi lead researchers to believe that H2A.X was the original ancestor of the histone protein H2A. Modification of H2A H2A modification is under current research. However, modification of H2A does occur. Serine phosphorylation sites have been identified on H2A. Threonine O-GlcNAc has also been identified on H2A. Large differences exist between the modified residues of H2A variants. For example, H2ABbd lacks modified residues that exist in H2A. The differences in modification change the function of H2ABbd compared to H2A. As previously mentioned, the variant H2A.X was found to function in DNA repair. This function is dependent upon the phosphorylation of the H2A.X C-terminus. Once H2A.X becomes phosphorylated, it can function in DNA repair. The H2A.X variant differs from H2A through modification: the C-terminus of H2A.X contains an additional motif compared to H2A, Ser-Gln-(Glu/Asp)-(hydrophobic residue). The motif becomes heavily phosphorylated at the serine residue; if this phosphorylation occurs, the variant becomes γH2A.X. Phosphorylation occurs due to dsDNA breaks. Modification on histone proteins can sometimes result in a change in function. Different H2A variants have been found to have different functions, genetic sequences, and modifications. See also Histone code Chromatin Nucleosome References External links Nextbio Proteins
Histone H2A
Chemistry
2,270
17,760,201
https://en.wikipedia.org/wiki/%28Z%29-Stilbene
(Z)-Stilbene is a diarylethene, that is, a hydrocarbon consisting of a cis ethene double bond substituted with a phenyl group on both carbon atoms of the double bond. The name stilbene was derived from the Greek word , which means shining. Isomers Stilbene exists as two possible isomers known as (E)-stilbene and (Z)-stilbene. (Z)-Stilbene is sterically hindered and less stable because the steric interactions force the aromatic rings 43° out-of-plane and prevent conjugation. (Z)-Stilbene has a melting point of , while (E)-stilbene melts around , illustrating that the two compounds are quite different. Uses Stilbene is used in manufacture of dyes and optical brighteners, and also as a phosphor and a scintillator. Stilbene is one of the gain mediums used in dye lasers. Properties Stilbene will typically have the chemistry of a diarylethene, a conjugated alkene. Stilbene can undergo photoisomerization under the influence of UV light. Stilbene can undergo stilbene photocyclization, an intramolecular reaction. (Z)-Stilbene can undergo electrocyclic reactions. Natural occurrence Many stilbene derivatives (stilbenoids) are present naturally in plants. An example is resveratrol and its cousin, pterostilbene. References Luminescence Fluorescent dyes Phosphors and scintillators Laser gain media Stilbenoids Phenyl compounds
(Z)-Stilbene
Chemistry
348
420,881
https://en.wikipedia.org/wiki/California%20Air%20Resources%20Board
The California Air Resources Board (CARB or ARB) is an agency of the government of California that aims to reduce air pollution. Established in 1967 when then-governor Ronald Reagan signed the Mulford-Carrell Act, combining the Bureau of Air Sanitation and the Motor Vehicle Pollution Control Board, CARB is a department within the cabinet-level California Environmental Protection Agency. The stated goals of CARB include attaining and maintaining healthy air quality; protecting the public from exposure to toxic air contaminants; and providing innovative approaches for complying with air pollution rules and regulations. CARB has also been instrumental in driving innovation throughout the global automotive industry through programs such as its ZEV mandate. One of CARB's responsibilities is to define vehicle emissions standards. California is the only state permitted to issue emissions standards under the federal Clean Air Act, subject to a waiver from the United States Environmental Protection Agency. Other states may choose to follow CARB or the federal vehicle emission standards but may not set their own. Governance CARB's governing board is made up of 16 members, with two non-voting members appointed for legislative oversight, one each by the California State Assembly and Senate. 12 of the 14 voting members are appointed by the governor and subject to confirmation by the Senate: five from local air districts, four air pollution subject-matter experts, two members of the public, and the Chair. The other two voting members are appointed from environmental justice committees by the Assembly and Senate. Five of the governor-appointed board members are chosen from regional air pollution control or air quality management districts, including one each from: Bay Area AQMD (San Francisco Bay Area), currently John Gioia San Diego County APCD, currently Nathan Fletcher San Joaquin Valley APCD, currently Alexander Sherriffs, M.D. South Coast AQMD, currently Judy Mitchell A Sacramento-area district: Sacramento Metropolitan AQMD, Yolo-Solano AQMD, Placer County APCD, Feather River AQMD, or El Dorado County AQMD, currently Phil Serna Four governor-appointed board members are subject matter experts in specific fields: automotive engineering, currently Dan Sperling; science, agriculture, or law, currently John Eisenhut; medicine, currently John R. Balmes, M.D.; and air pollution control. The governor is also responsible for two appointees from members of the public, and the final governor appointee is the Board's Chair. The first Chair of CARB was Dr. Arie Jan Haagen-Smit, who was previously a professor at the California Institute of Technology and started research into air pollution in 1948. Dr. Haagen-Smit is credited with discovering the source of smog in California, which led to the development of air pollution controls and standards. In honor of his legacy, CARB started the Haagen-Smit Clean Air Awards program in 2001 to recognize individuals who have had significant accomplishments in the field of air quality and climate change. The two legislature-appointed board members work directly with communities affected by air pollution. They are currently Diane Takvorian and Dean Florez, appointed by the Assembly and Senate respectively. Organizational structure CARB is a part of the California Environmental Protection Agency, an organization which reports directly to the Governor's Office in the Executive Branch of California State Government. 
CARB has 15 divisions and offices: Office of the Chair Executive Office Office of Community Air Protection Air Quality Planning and Science Division Emission Certification and Compliance Division Enforcement Division Industrial Strategies Division Mobile Source Control Division Mobile Source Laboratory Division Research Division Sustainable Transportation and Communities Division Transportation and Toxics Division Office of Information Services Administrative Services Division Air Quality Planning and Science Division The division assesses the extent of California's air quality problems and the progress being made to abate them, coordinates statewide development of clean air plans and maintains databases pertinent to air quality and emissions. The division's technical support work provides a basis for clean air plans and CARB's regulatory programs. This support includes management and interpretation of emission inventories, air quality data, meteorological data, and air quality modeling. The Air Quality Planning and Science Division has six branches: Special Assessment Branch Emission Inventory and Economic Analysis Branch Modeling & Meteorology Branch Air Quality Planning Branch Mobile Source Analysis Branch Consumer Products and Air Quality Assessment Branch Atmospheric Modeling & Support Section The Atmospheric Modeling & Support Section is one of three sections within the Modeling & Meteorology Branch. The other two sections are the Regional Air Quality Modeling Section and the Meteorology Section. The air quality and atmospheric pollution dispersion models routinely used by this section include a number of the models recommended by the U.S. Environmental Protection Agency (EPA). The section uses models which were either developed by CARB or whose development was funded by CARB, such as: CALPUFF: originally developed by the Sigma Research Company (SRC) under contract to CARB; currently maintained by the TRC Solution Company under contract to the U.S. EPA. CALGRID: developed and currently maintained by CARB. SARMAP: developed and currently maintained by CARB. Role in reducing greenhouse gases The California Air Resources Board is charged with implementing California's comprehensive suite of policies to reduce emissions of greenhouse gases. In part due to CARB, California has successfully decoupled greenhouse gas emissions from economic growth, and achieved its goal of reducing emissions to 1990 levels four years earlier than the target date of 2020. Alternative Fuel Vehicle Incentive Program The Alternative Fuel Vehicle Incentive Program (also known as Fueling Alternatives) is funded by the California Air Resources Board (CARB), offered throughout the State of California and administered by the California Center for Sustainable Energy (CCSE). Low-Emission Vehicle Program The CARB first adopted the Low-Emission Vehicle (LEV) Program standards in 1990 to address smog-forming pollutants, which covered automobiles sold in California from 1994 through 2003. An amendment to the LEV Program, known as LEV II, was adopted in 1999, and covered vehicles for the 2004 through 2014 model years. Greenhouse gas (GHG) emission regulations were adopted in 2004 starting with the 2009 model year, and are named the "Pavley" standards after Assemblymember Fran Pavley, who had written Assembly Bill 1493 in 2002 to establish them. A second amendment, LEV III, was adopted in 2012, and covers vehicles sold from 2015 onward for both smog (superseding LEV II) and GHG (superseding Pavley) emissions. 
The rules created under the LEV Program have been codified as specific sections in Title 13 of the California Code of Regulations; in general, LEV I is § 1960.1; LEV II is § 1961; Pavley is § 1961.1; LEV III is § 1961.2 (smog-forming pollutants) and 1961.3 (GHG). The ZEV regulations, which were initially part of LEV I, have been broken out separately into § 1962. For comparison, the average new car sold in 1965 would produce approximately of hydrocarbons over of driving; under the LEV I standards, the average new car sold in 1998 was projected to produce hydrocarbon emissions of over the same distance, and under LEV II, the average new car in 2010 would further reduce hydrocarbon emissions to . Required labeling In 2005, the California State Assembly passed AB 1229, which required all new vehicles manufactured after January 1, 2009 to bear an Environmental Performance Label, which scored the emissions performance of the vehicle on two scales ranging between 1 (worst) and 10 (best): one for global warming (emissions of GHG such as , , air conditioning refrigerants, and ) and one for smog-forming compounds (non-methane organic gases (NMOG), , and ). The Federal Government followed suit and required a similar "smog score" on new vehicles sold starting in 2013; the standards were realigned for labels applied to 2018 model year vehicles. Vehicle categories The LEV program has established several categories of reduced emissions vehicles. LEV I defined LEV and ULEV vehicles, and added TLEV and Tier 1 temporary classifications that would not be sold after 2003. LEV II added SULEV and PZEV vehicles, and LEV III tightened emission standards. The actual emission levels depend on the standards in use. LEV (Low Emission Vehicle): The least stringent emission standard for all new cars sold in California beyond 2004. ULEV (Ultra Low Emission Vehicle): 50% cleaner than the average new 2003 model year vehicle. SULEV (Super Ultra Low Emission Vehicle): These vehicles emit substantially lower levels of hydrocarbons, carbon monoxide, oxides of nitrogen and particulate matter than conventional vehicles. They are 90% cleaner than the average new 2003 model year vehicle. LEV I defined emission limits for several different classes of vehicle, including passenger cars (PC), light-duty trucks (LDT), and medium-duty vehicles (MDV). Heavy-duty vehicles were specifically excluded from LEV I. LEV I also defined a loaded vehicle weight (LVW) as the vehicle's Curb weight plus an allowance of . In general, the most stringent standards were applied to passenger cars and light-duty trucks with a LVW up to (these "light" LDTs were later denoted LDT1 under LEV II). LEV II increased the scope of vehicles classed as light-duty trucks to encompass a higher GVWR up to , compared to the LEV I standard of . In addition, LEV I had defined less stringent limits for heavier LDTs (denoted LDT2 with a LVW ); LEV II closed that discrepancy and defined a single emissions standard for all PCs and LDTs. Under LEV III, medium-duty passenger vehicles (MDPV) were brought under the most stringent standards alongside PCs and LDTs. Smog-forming compound emissions limits Rather than providing a single standard for vehicles based on age, purpose, and weight, the LEV I standards introduced different tiers of limits for smog-forming compound emissions starting in the 1995 model year. After 2003, LEV was the minimum standard to be met. 
Greenhouse gas emissions limits CARB adopted regulations for limits on greenhouse gas emissions in 2004, starting with the 2009 model year, to support the direction provided by AB 1493. In June 2005, Governor Arnold Schwarzenegger signed Executive Order S-03-05, which required a reduction in California GHG emissions, targeting an 80% reduction compared to 1990 levels by 2050. Assembly Bill 32, better known as the California Global Warming Solutions Act of 2006, codified these requirements. CARB filed a waiver request with the United States Environmental Protection Agency (EPA) under Section 209(b) of the Clean Air Act in December 2005 to permit it to establish limits on greenhouse gas emissions; although the waiver request was initially denied in March 2008, it was later approved on June 30, 2009 after President Barack Obama signed a Presidential Memorandum directing the EPA to reconsider the waiver. In the initial denial, EPA Administrator Stephen L. Johnson stated the Clean Air Act was not "intended to allow California to promulgate state standards for emissions from new motor vehicles designed to address global climate change problems" and further, that he did not believe "the effects of climate change in California are compelling and extraordinary compared to the effects in the rest of the country." Johnson's successor, Lisa P. Jackson, signed the waiver overturning Johnson's denial, writing that "EPA must grant California a waiver if California determines that its standards are, in the aggregate, at least as protective of the public health and welfare as applicable Federal standards." Jackson also noted that in the history of the waiver process, over 50 waivers had been granted and only one had been fully denied, namely the March 2008 denial of the GHG emissions regulation. CARB decided to adopt regulation of GHG emissions under Executive Order G-05-061, which provided phase-in targets for fleet average GHG emissions in CO2-equivalent grams per mile starting with the 2009 model year. The calculation of CO2-equivalent emissions was based on contributions from four different chemicals: CO2, N2O, CH4, and air conditioning refrigerants. The emissions in g/mi CO2-equivalent are calculated according to the formula CO2e = CO2 + 296 × N2O + 23 × CH4 − (A/C direct emissions allowance) − (A/C indirect emissions allowance), whose last two terms are the direct and indirect emissions allowances of air conditioning refrigerants, depending on the refrigerant used, such as HFC134a, and the system design. Vehicles powered by alternative fuels use a slightly modified formula in which the emissions are scaled by a fuel adjustment factor that depends on the alternative fuel used (1.03 for natural gas, 0.89 for LPG, and 0.74 for E85). ZEVs are also required to calculate GHG, as the processes used to generate the energy (or fuel) also produce GHG. For ZEVs, the g/mi CO2-equivalent value is the upstream emissions factor (130 g/mi for battery electric vehicles, 210 for hydrogen/fuel cell, and 290 for hydrogen/internal combustion). Direct emissions could be calculated in a relatively straightforward fashion based on fuel consumption. Manufacturers that do not wish to measure emissions may assume a value of 0.006 g/mi. An update was issued in 2010 which allowed manufacturers to calculate GHG emissions using CAFE data; for conventionally powered vehicles, the contribution from the nitrous oxide and methane terms could be assumed to be 1.9 g/mi. CARB voted unanimously in March 2017 to require automakers to average for new cars in 2025. 
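Editor's note: the conventional-vehicle calculation described above is simple enough to state as code. The following Python sketch applies the formula as reconstructed in the text, including the 2010 CAFE-data shortcut of 1.9 g/mi for the combined nitrous oxide and methane terms; the function name, default arguments and the sample figures are illustrative assumptions, not the regulatory definition.

```python
# g/mi CO2-equivalent for a conventionally powered vehicle, per the formula above.
def co2_equivalent_g_per_mile(co2, n2o=0.0, ch4=0.0,
                              ac_direct_allowance=0.0, ac_indirect_allowance=0.0,
                              use_2010_default=False):
    if use_2010_default:
        n2o_ch4_term = 1.9                   # 2010 update: fixed N2O + CH4 contribution
    else:
        n2o_ch4_term = 296 * n2o + 23 * ch4  # GWP-weighted N2O and CH4 terms
    return co2 + n2o_ch4_term - ac_direct_allowance - ac_indirect_allowance

# A hypothetical 300 g/mi CO2 vehicle using the CAFE-data shortcut:
print(co2_equivalent_g_per_mile(300, use_2010_default=True))   # 301.9
# The same vehicle with measured N2O/CH4 close to the quoted defaults:
print(co2_equivalent_g_per_mile(300, n2o=0.006, ch4=0.005))    # ~301.9
```

Note that 296 × 0.006 + 23 × 0.005 ≈ 1.9, so the fixed 1.9 g/mi shortcut quoted in the text is consistent with the per-gas weightings.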
Section 177 states
Because California had emissions regulations prior to the 1977 Clean Air Act, under Section 177 of that act, other states may adopt the more stringent California emissions regulations as an alternative to federal standards. Thirteen other states and the District of Columbia have chosen to do so, and ten of those have additionally adopted the California Zero-Emission Vehicle regulations. In December 2020, Minnesota announced its intention to adopt California LEV and ZEV rules; following a hearing before an administrative law judge in February 2021, the Minnesota Pollution Control Agency adopted the California regulations. In August 2022, Virginia, citing a 2021 law, announced it would follow California regulations for ZEV registrations. Arizona and New Mexico had previously adopted California LEV regulations under Section 177, but later repealed those standards, in 2012 and 2013, respectively. In Canada, the province of Quebec adopted CARB standards effective in 2010. CARB and the Government of Canada entered into a Memorandum of Understanding in June 2019 to cooperate on greenhouse gas emissions mitigation.
Zero-Emission Vehicle Program
The CARB Zero-Emission Vehicle (ZEV) program was enacted by the California government starting in 1990 to promote the use of zero-emission vehicles. The program's goal is to reduce the pervasive air pollution affecting the main metropolitan areas in the state, particularly in Los Angeles, where prolonged pollution episodes are frequent. The California ZEV rule was first adopted by CARB as part of the 1990 Low-Emission Vehicle (LEV I) Program. The focus of the 1990 rules (ZEV-90) was to meet air quality standards for ozone rather than the reduction of greenhouse gas (GHG) emissions. Under LEV II in 1999, the ZEV regulations were moved to a separate section (13 CCR § 1962) and the requirement for ZEVs as a percentage of fleet sales was formalized. Executive Order S-03-05 (2005) and Assembly Bills 1493 (2002) and 32 (2006) prompted CARB to reevaluate the ZEV program as last amended in 1996, which had been primarily concerned with reducing emissions of smog-forming pollutants. By the time AB 32 passed in 2006, vehicles complying with PZEV and AT PZEV standards had become commercially successful, and the ZEV program could then shift towards reducing both smog-forming compounds and greenhouse gases. The next set of ZEV regulations was adopted in 2012 with LEV III. CARB put both LEV and ZEV rules together as the Advanced Clean Cars Program (ACC), adopted in 2012, which included regulations for cars sold through the 2025 model year. The regulations include updates to LEV III (for smog-forming emissions), LEV III GHG (for greenhouse gas emissions), and ZEV.
Since then, in September 2020 Governor Gavin Newsom signed an executive order directing that by 2035, all new cars and passenger trucks sold in California be zero-emission vehicles. Executive Order N-79-20 directs CARB to develop regulations requiring ZEVs to be an increasing share of new vehicles sold in the state, with light-duty cars and trucks and off-road vehicles and equipment meeting the 100% ZEV goal by 2035 and medium- and heavy-duty trucks and buses meeting the same 100% ZEV goal by 2045. The order also directs Caltrans to develop near-term actions to encourage "an integrated, statewide rail and transit network" and infrastructure to support bicycles and pedestrians.
In response, CARB began development of the Advanced Clean Cars II (ACC II) Program, focusing on emissions of vehicles sold after 2025. ACC II reiterated the aim of having all new passenger cars, trucks, and SUVs sold in the state be zero-emission vehicles by 2035, and was scheduled for consideration before CARB in June 2022. The regulations of ACC II were adopted by California in August 2022.
Vehicle definitions
LEV I defined a ZEV as one that produces "zero emissions of any criteria pollutants under any and all possible operational modes and conditions." A vehicle could still qualify as a ZEV with a fuel-fired heater, as long as the heater could not be operated above a specified ambient temperature and did not have any evaporative emissions. Under LEV II (ZEV-99), the ZEV definition was updated to include precursor pollutants, but did not consider upstream emissions from power plants. The ZEV regulation has evolved and been modified several times since 1990, and several new partial or low-emission categories were created and defined, including the introduction of the PZEV and AT PZEV categories in ZEV-99.
PZEV (Partial Zero Emission Vehicle): Meets SULEV tailpipe standards, has a 15-year / 150,000-mile warranty, and zero evaporative emissions. These vehicles are 80% cleaner than the average 2002 model year car.
AT PZEV (Advanced Technology PZEV): Advanced technology vehicles that meet PZEV standards and include ZEV-enabling technology, typically hybrid electric vehicles (HEV). They are 80% cleaner than the average 2002 model year car.
ZEV (Zero Emission Vehicle): Zero tailpipe emissions, and 98% cleaner than the average new 2003 model year vehicle.
Manufacturer sales volume
Under ZEV-90, CARB classified manufacturers according to their average sales per year between 1989 and 1993: small volume manufacturers were those that sold 3,000 or fewer new vehicles per year; intermediate volume manufacturers sold between 3,001 and 35,000; and large volume manufacturers sold more than 35,000 per year. For large volume manufacturers, CARB required that 2% of 1998 to 2000 model year vehicles sold be ZEVs, ramping up to 5% ZEVs by 2001 and 10% ZEVs in 2003 and beyond. Intermediate volume manufacturers were not required to meet the goals until 2003, and small volume manufacturers were exempted. These percentages were calculated based on total production of passenger cars and light-duty trucks below a specified loaded vehicle weight (LVW).
ZEV credit system
The LEV I rules also introduced the concept of emission credits. Under LEV I, the fleet average emissions rate of non-methane organic gases (NMOG) produced by a manufacturer was required to meet increasingly stringent requirements starting in 1994. The fleet average NMOG emissions were calculated as a weighted sum of vehicle NMOG emissions, based on the number sold and type of certification (i.e., TLEV, LEV, ULEV, etc.), divided by the total number of vehicles produced, including ZEVs. Manufacturers whose fleet average NMOG emissions exceeded the goal were subject to civil penalties; those which fell below the goal received credits, which could then be marketed to other manufacturers. The 1996 amendments to the ZEV regulations in LEV I (ZEV-96) introduced credits under which a ZEV could be counted more than once based on vehicle range or battery specific energy, to encourage deployment of ZEVs prior to 2003.
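A minimal Python sketch of the fleet-average NMOG calculation just described may help. The per-category NMOG values, sales counts, and the fleet standard below are hypothetical illustration values, chosen only to show the bookkeeping; note that ZEVs contribute nothing to the numerator but still count in the denominator.

```python
# Sketch of the LEV I fleet-average NMOG calculation described above.
# The certification-level NMOG values (g/mi) and sales counts below are
# hypothetical illustration values, not the regulatory limits.
sales = {          # units sold per certification category
    "TLEV": 20_000,
    "LEV":  50_000,
    "ULEV": 10_000,
    "ZEV":   2_000,
}
nmog_g_per_mi = {  # assumed per-category certification values
    "TLEV": 0.125,
    "LEV":  0.075,
    "ULEV": 0.040,
    "ZEV":  0.0,   # ZEVs add nothing to the numerator...
}

total_vehicles = sum(sales.values())  # ...but count in the denominator
weighted_nmog = sum(sales[c] * nmog_g_per_mi[c] for c in sales)
fleet_average = weighted_nmog / total_vehicles

standard = 0.070  # hypothetical fleet-average NMOG standard for the year
print(f"fleet average NMOG: {fleet_average:.4f} g/mi")
print("credits earned" if fleet_average < standard else "civil penalties")
```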
Under LEV II/ZEV-99, the PZEV and AT PZEV categories were introduced, and the percentage of ZEVs sold by a manufacturer could be partially met by sales of PZEVs and AT PZEVs. If a vehicle met PZEV criteria, it qualified for a credit equal to 0.2 of one ZEV for the purposes of calculating that manufacturer's ZEV production. AT PZEVs capable of traveling with zero emissions for a limited range were allowed additional credit if the urban all-electric range was at least ten miles. ZEVs that were introduced prior to 2003 received a multiplier, with a value ranging up to 10× a single ZEV depending on the all-electric range and fast-charging capability.
MOA demonstration fleet
In March 1996, ZEV-96 eliminated the ZEV ramp-up planned to start in 1998, but the goal of 10% ZEVs by 2003 was retained, with credits granted for sales of partial ZEVs (PZEVs). According to comment responses, CARB determined that advanced batteries would not be ready to meet the ZEV requirements until at least 2003. In conjunction with relaxing the requirements in ZEV-96, CARB signed memoranda of agreement (MOAs) with the seven large-volume manufacturers to begin rolling out demonstration fleets of ZEVs with limited public availability in the near term. The GM EV1 was the first battery electric vehicle (BEV) offered to the public, in partial fulfillment of the agreement with CARB. The EV1 was available only through a monthly lease starting in December 1996; the initial markets were South Coast, San Diego, and Arizona, later expanding to Sacramento and the Bay Area. GM also offered an electric S-10 pickup truck to fleet operators. In 1997, Honda (EV Plus, May 1997), Toyota (RAV4 EV, October 1997), and Chrysler (EPIC, 1997) followed suit. Ford also introduced the Ranger EV for the 1998 model year, and Nissan stated it planned to offer the Altra in the 1998 model year as well to fulfill the MOA. As an acceptable alternative, Mazda stated it would purchase ZEV credits from Ford.
Advanced Clean Cars
The Low-Emission Vehicle Program was revised to define modified ZEV regulations for 2015 models. CARB estimates that ACC will result in ZEVs making up 10% of all sales by 2025; the share remained at 3% between 2014 and 2016. Battery vehicles receive 3 or 4 credits, while fuel cell cars receive 9. A credit has had a market value of $3,000–$4,000, and some automakers hold more credits than required.
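The partial-credit scheme described in this section lends itself to a small worked example. The sketch below is hypothetical bookkeeping under stated assumptions: the PZEV value of 0.2 credit is from the text, while the AT PZEV credit and the early-ZEV multiplier (which in reality varied with all-electric range and fast-charging capability, up to 10×) are assumed placeholder values.

```python
# Sketch of ZEV-99 style credit accounting. PZEV = 0.2 credit is from the
# text; the AT PZEV credit and early-ZEV multiplier below are assumed
# placeholders (the real values depended on range and fast-charging).
PZEV_CREDIT = 0.2
AT_PZEV_CREDIT = 0.5      # assumption
EARLY_ZEV_MULTIPLIER = 4  # assumption; the text says up to 10x

fleet = {"PZEV": 30_000, "AT_PZEV": 5_000, "ZEV": 500}  # hypothetical sales
total_sales = 100_000
required_fraction = 0.10  # the 10% ZEV goal for 2003

credits = (fleet["PZEV"] * PZEV_CREDIT
           + fleet["AT_PZEV"] * AT_PZEV_CREDIT
           + fleet["ZEV"] * EARLY_ZEV_MULTIPLIER)

print(f"credits: {credits:.0f}, required: {total_sales * required_fraction:.0f}")
print("compliant" if credits >= total_sales * required_fraction else "shortfall")
```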
CARB held a public workshop in September 2020 where several new consumer-friendly regulations for ZEVs were proposed to improve adoption:
Standardization of a DC fast charge inlet (proposing to use CCS Combo 1, with adapters provided by the vehicle manufacturer if applicable)
Standardization of vehicle and battery data (to assist assessment of the need for repairs and of vehicle condition)
Implementation of a standardized battery state-of-health (SOH) indicator (using SAE J1634 dynamometer testing to define battery capacity), with a defined SOH value that qualifies for warranty repair
Availability of ZEV powertrain service and repair information to independent technicians and repair shops (including standardization of communication protocols for vehicle data)
In May 2021, additional draft requirements were added:
Durability: BEVs to maintain 80% of certified range for 15 years/150,000 miles
Durability: FCEVs to maintain 90% of fuel cell system output power after 4,000 hours of operation
Battery labelling: standardized content to improve the efficiency of recycling batteries to recover materials, or of potential repurposing
To improve access to ZEVs, CARB added proposed environmental justice (EJ) credits in August 2021 for manufacturers who improve options for clean transportation in underserved communities, such as by providing a discount on a ZEV that would be used in a community-based clean mobility program. The August workshop also included additional regulations for ZEVs:
Range: starting in 2026, ZEVs must meet a minimum (2-cycle) range requirement
On-board charger: minimum 5.76 kW for AC (Level 2) charging, sufficient for a BEV to charge overnight (8 hours) from a 30 A source (see the sketch below)
The final workshop in October 2021 proposed that ZEVs be taken out of fleet calculations for vehicle emissions and provided yearly targets for ZEV sales as a percentage of total sales, including potential EJ credits. Additionally, the required warranty period and the requirements for taking credit for PHEV sales were defined:
Battery to retain ≥ 80% SOH for 8 years/100,000 miles
PHEVs to meet one of two requirements:
Transitional PHEVs (2026–28): a minimum all-electric range, with additional credit if the vehicle exceeds a range threshold on the US06 high speed/acceleration cycle; 8-year/100,000-mile 80% SOH battery warranty; 5.76 kW on-board charger
Full-credit PHEVs (2026+): a higher minimum all-electric range, with a minimum range on the US06 high speed/acceleration cycle; 8-year/100,000-mile 80% SOH battery warranty; 5.76 kW on-board charger
"Small volume" manufacturers (defined as those selling fewer than 4,500 cars per year) are required to comply with the ZEV mandate starting with the 2035 model year.
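The 5.76 kW on-board charger figure above can be sanity-checked with simple arithmetic. The sketch below assumes a North American 240 V Level 2 circuit and the common practice of limiting continuous draw to 80% of a 30 A supply; those electrical assumptions are mine, not part of the regulation text, which states only the 5.76 kW / 8-hour / 30 A relationship.

```python
# Sanity check of the 5.76 kW on-board charger requirement quoted above.
# Assumptions (mine, not from the regulation text): a 240 V Level 2
# circuit, with continuous draw limited to 80% of the 30 A supply.
volts = 240.0
supply_amps = 30.0
continuous_amps = 0.8 * supply_amps           # 24 A continuous

power_kw = volts * continuous_amps / 1000.0   # 5.76 kW
overnight_kwh = power_kw * 8.0                # energy over an 8-hour night

print(f"charging power: {power_kw:.2f} kW")          # -> 5.76 kW
print(f"overnight energy: {overnight_kwh:.1f} kWh")  # -> 46.1 kWh
```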
OHV Emission Standards
The California DMV implements the policy dictates of the California Air Resources Board (CARB) with respect to registration of off-highway motor vehicles (OHVs). Registration consists of ID plates or placards issued by the DMV. Operating a motorized vehicle off-highway in California requires either a Green Sticker or a Red Sticker ID. The Green Sticker indicates that the vehicle has passed emission requirements; the Red Sticker (issued through 2021) restricts OHV use because the vehicle does not meet emission standards established by CARB. The red sticker program began in 1994, when CARB adopted standards for emissions from two-stroke engines used primarily on dirt bikes. Between 1998 and 2003, the red sticker program was refined, allowing vehicles that did not meet peak ozone season standards to be operated only at specific times of the year. As of model year 2022, CARB no longer authorizes the issuing of red stickers.
Commercial Harbor Craft Regulation
The California Air Resources Board's Commercial Harbor Craft regulation is a regulatory framework aimed at reducing emissions from commercial vessels operating in California's harbors and ports. The rule primarily targets diesel-powered vessels such as ferries, tugboats, and other workboats that operate in and around California's ports. Since the original adoption of the regulation in 2008, and its amendments in 2010 and 2022, vessel owners in the state have been required to either replace their engines or send their boats out of the state.
Low-carbon fuel standard
The Low-Carbon Fuel Standard (LCFS) requires oil refineries and distributors to ensure that the mix of fuel they sell in the Californian market meets the established declining targets for greenhouse gas emissions, measured in CO2-equivalent grams per unit of fuel energy sold for transport purposes. The 2007 Governor's LCFS directive calls for a reduction of at least 10% in the carbon intensity of California's transportation fuels by 2020. These reductions include not only tailpipe emissions but also all other associated emissions from production, distribution, and use of transport fuels within the state. California's LCFS therefore considers the fuel's full life cycle, also known as the "well to wheels" or "seed to wheels" efficiency of transport fuels. The standard aims to reduce the state's dependence on petroleum, create a market for clean transportation technology, and stimulate the production and use of alternative, low-carbon fuels in California. On April 23, 2009, CARB approved the specific rules for the LCFS, which went into effect in January 2011. The rule proposal prepared by its technical staff was approved by a 9-1 vote, setting the 2020 maximum carbon intensity reference value to 86 grams of carbon dioxide released per megajoule of energy produced.
PHEV Research Center
The PHEV Research Center was launched with funding from the California Air Resources Board.
Innovative Clean Transit
Under the Innovative Clean Transit (formerly known as Advanced Clean Transit) regulation adopted in December 2018, public transportation agencies in California will gradually transition to a zero-emission bus fleet by 2040. Large transit agencies (defined as those operating more than 65 buses in the San Joaquin Valley Air Basin or South Coast Air Quality Management District, or those operating more than 100 buses elsewhere in areas with populations greater than 200,000) are required to make 25% of new bus purchases zero-emission buses (ZEBs) starting in 2023, 50% starting in 2026, and 100% starting in 2029. Small transit agencies are required to make 25% of new purchases ZEBs in 2026 and 100% from 2029 onward. Per the regulation, ZEBs are defined to include battery electric buses and fuel cell buses, but not electric trolleybuses, which draw power from overhead lines. The Antelope Valley Transit Authority set a goal to be the first all-electric fleet by the end of 2018, ahead of the tightened regulations.
Regulation of ozone produced by air cleaners and ionizers
The California Air Resources Board maintains a page listing air cleaners (many with ionizers) meeting its indoor ozone limit of 0.050 parts per million.
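As a worked illustration of the LCFS carbon-intensity accounting described above, the sketch below computes the energy-weighted carbon intensity of a hypothetical fuel mix and compares it against the 86 gCO2e/MJ 2020 reference value quoted in that section; the individual fuel carbon intensities and volumes are invented for the example.

```python
# Sketch of LCFS-style carbon intensity accounting. The 86 gCO2e/MJ 2020
# reference value is from the text; the per-fuel carbon intensities and
# energy volumes below are hypothetical illustration values.
target_ci = 86.0  # gCO2e per MJ, 2020 reference value

# (fuel, carbon intensity in gCO2e/MJ, energy sold in millions of MJ)
fuel_mix = [
    ("gasoline blendstock", 98.0, 800.0),  # assumed CI
    ("corn ethanol",        75.0, 100.0),  # assumed CI
    ("renewable diesel",    35.0,  50.0),  # assumed CI
]

total_energy = sum(mj for _, _, mj in fuel_mix)
mix_ci = sum(ci * mj for _, ci, mj in fuel_mix) / total_energy

print(f"energy-weighted CI of mix: {mix_ci:.1f} gCO2e/MJ")
# Deficits or credits scale with how far the mix sits from the target:
status = "credits" if mix_ci < target_ci else "deficits"
print(f"{status}: {abs(target_ci - mix_ci) * total_energy:.0f} (million gCO2e)")
```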
Southern California headquarters, Mary D. Nichols Campus
On October 27, 2017, CARB broke ground on its new state-of-the-art Southern California headquarters. CARB chose the site, near the University of California, Riverside, in March 2016 and completed environmental studies in June 2017. Construction costs of $419 million, which include $108 million for specialized laboratory and testing equipment, were approved by the Legislature in July. Of those costs, $154 million comes from fines paid by Volkswagen for air quality violations related to the diesel car cheating case. Additional funds come from the Motor Vehicle Account, the Air Pollution Control Fund, and the Vehicle Inspection Repair Fund. Over a decade of planning went into developing a replacement for CARB's aging Haagen-Smit Laboratory. Opened in 1973 in El Monte, California, the Haagen-Smit Laboratory is the site of many of CARB's groundbreaking efforts to reduce the emissions of cars and trucks, as well as efforts to introduce zero-emission and plug-in vehicles to California. In 2015, engineers and technicians based at the Haagen-Smit Laboratory were instrumental in discovering the infamous VW diesel "defeat device," leading to the largest emissions control violation settlement in national and California history. The new campus features an extended range of dedicated test cells, including heavy-duty testing. There is also workspace to accommodate new test methods for future generations of vehicles, and space for developing enhanced on-board diagnostics and portable emissions measurement systems. The facility also includes a separate advanced chemistry laboratory. The Southern California headquarters' office and administration space accommodates 460 employees and includes visitor reception and public areas, a press room, flexible conference and workshop space, and a 250-person public auditorium. Sustainability drove the architecture and the details of the campus. Designed by ZGF Architects and built by Hensel Phelps, the 402,000-square-foot headquarters is designed to be the largest zero-net-energy building in the United States, aided by solar arrays throughout the campus that generate 3.5 megawatts of electricity and a chilled-beam temperature management system that provides increased energy efficiency and occupant comfort. The facility achieves Leadership in Energy and Environmental Design (LEED) Platinum certification and California Green Building Standards Code (CALGreen) Tier 2 standards, and is designed to achieve zero-net-energy performance. On November 18, 2021, CARB dedicated the new Southern California headquarters in honor of former Chair Mary D. Nichols, whose career at CARB spanned four decades under three different California governors.
See also
California Air Resources Board
List of California Air Districts
2008 California Statewide Truck and Bus Rule
Carl Moyer Memorial Air Quality Standards Attainment Program
Other
Bioenergy Action Plan
California Center for Sustainable Energy
California Code of Regulations
California Energy Commission
California Environmental Protection Agency
California Public Utilities Commission
Carl Moyer Program
Ecology of California
Emission standards
Emissions trading
Greenhouse gas emissions by the United States
Million Solar Roofs (SB 1)
Plug-in hybrids in California
Pollution in California
Regional Greenhouse Gas Initiative
Spare the Air program
Texas Low Emission Diesel standards
Timeline of major US environmental and occupational health regulation
Upstream emission factor
US Emission standard
Vehicle acronyms and abbreviations
Who Killed the Electric Car?
Zero-emissions vehicle
References
External links
Title 13 Motor Vehicles, Division 3 regulations in the California Code of Regulations (CCR) from Westlaw
Title 17 Public Health, Division 3 regulations in the CCR from Westlaw
CARB's Low-Emission Vehicle Regulations and Test Procedures
CARB web site page on Climate Change
CARB's Diesel Emission Control Strategies Verification
News
California charts course to fight global warming: cutting California's greenhouse gas emissions by 30 percent over the next 12 years
California air board announces plan for carbon-credit trading
Air
Air pollution in California
Air pollution organizations
Air Resources Board
Environment of California
Government agencies established in 1967
1967 establishments in California
Sustainable transport
California Air Resources Board
Physics
6,939
9,365,585
https://en.wikipedia.org/wiki/NONMEM
NONMEM is a non-linear mixed-effects modeling software package developed by Stuart L. Beal and Lewis B. Sheiner in the late 1970s at the University of California, San Francisco, and expanded by Robert Bauer at Icon PLC. Its name is an acronym for NONlinear Mixed Effects Modeling, and it is especially powerful in the context of population pharmacokinetics, pharmacometrics, and PK/PD models. NONMEM models are written in NMTRAN, a dedicated model specification language that is translated into FORTRAN, compiled on the fly, and executed by a command-line script. Results are presented as text output files, including tables. There are multiple interfaces that assist modelers with housekeeping of files, tracking of model development, goodness-of-fit evaluations, and graphical output, such as PsN, Xpose, and Wings for NONMEM. The current version of NONMEM is 7.5.
Model estimation
NONMEM estimates its models according to the principles of maximum likelihood estimation. Nonlinear mixed-effects models generally do not have closed-form solutions, and therefore specific estimation methods are applied: linearization methods such as first-order (FO), first-order conditional estimation (FOCE), or the Laplacian method (LAPL), and approximation or sampling methods such as iterative two-stage (ITS), importance sampling (IMP), stochastic approximation expectation maximization (SAEM), or direct sampling.
References
External links
Product site
NONMEM UsersNet Archive
Numerical software
Pharmacodynamics
Pharmacokinetics
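To make the estimation problem described under Model estimation above concrete, the model class NONMEM fits can be written down explicitly. The following is the standard textbook formulation of a nonlinear mixed-effects model and its marginal likelihood, not a quotation from the NONMEM documentation; the FO, FOCE, and Laplacian methods listed above are different approximations of the integral below.

```latex
% Standard nonlinear mixed-effects model formulation (textbook form, not
% quoted from the NONMEM documentation).
\begin{align*}
  y_{ij} &= f(x_{ij}, \theta, \eta_i) + \varepsilon_{ij},
  & \eta_i &\sim \mathcal{N}(0, \Omega),
  & \varepsilon_{ij} &\sim \mathcal{N}(0, \Sigma),
\end{align*}
where $y_{ij}$ is observation $j$ of individual $i$, $\theta$ are fixed
effects, $\eta_i$ are individual random effects, and $\varepsilon_{ij}$
is residual error. The marginal likelihood to be maximized,
\[
  L(\theta, \Omega, \Sigma)
    = \prod_i \int p(y_i \mid \theta, \eta_i, \Sigma)\,
      p(\eta_i \mid \Omega)\, d\eta_i ,
\]
has no closed form because $f$ is nonlinear in $\eta_i$; FO linearizes
$f$ about $\eta_i = 0$, FOCE about the empirical Bayes estimate
$\hat{\eta}_i$, and the Laplacian method uses a second-order expansion.
```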
NONMEM
Chemistry,Mathematics
319
5,283,362
https://en.wikipedia.org/wiki/Reading%20motivation
Reading motivation is the motivational drive to read, an area of interest in the field of education. Studying and implementing the conditions under which students are motivated to read is important in the process of teaching and fostering learning. Reading and writing motivation are the processes that lead students to put more effort into reading and writing activities.
Different strategies can be followed to develop a student's motivation to read:
Integrate the sensory organs with text materials. For example, when reading the word "apple", read it aloud and visualize the apple's texture, taste, and odor.
Pronounce each word properly. Differentiate pronunciation for the purpose of spelling from pronunciation for the purpose of communicating ideas. In pronunciation, give emphasis to phonic discrimination, such as C-A-T, C-A-N.
Change from extrinsic to intrinsic reading motivation. Although incentives are a good motivator, further interest in reading will come from intrinsic wants and needs. Instead of rewarding reading with a gift, relate reading completion to increased reading competency and accomplishment.
Organize reading material in an attractive way.
For students who know how to read but need extra encouragement, giving a book talk is a way to inspire reading. It is an especially effective tool with reluctant readers who need a hook before they will invest the energy into reading a book.
Reading motivation for children can also be enhanced by reading with songs or music playing.
Intrinsic and extrinsic motivation
Intrinsic motivation is when one does something because of personal interest in that particular thing. Extrinsic motivation comes from outside the reader, such as earning rewards or avoiding the consequences of not doing something. The motivation to read is one of the major factors that determine student success or failure in elementary school. Therefore, it is crucial to come up with ways to motivate and include all students in reading. Reading is a task requiring interest and effort; as such, the reading skill of students has been associated with reading motivation. Students who are highly motivated to read choose to find the time to read, which in turn develops into a lifelong reading habit. Hence, motivation plays a crucial role in elementary schools in fostering reading. Multiple factors, including both intrinsic and extrinsic motivators, significantly contribute to students' academic success in higher education. Intrinsic motivators, such as personal interest and enjoyment of learning, work in conjunction with extrinsic factors like career prospects and grades to shape student achievement and engagement in academic pursuits.
Teachers' response to encourage intrinsic motivation in reading
The main goal of educators should be to move students' motivation from extrinsic to intrinsic. Students who display intrinsic motivation in reading are more likely to read, and thus to learn and grow, outside of the classroom. Teachers can improve intrinsic motivation in a variety of ways, including:
Link real-life experiences to text: students who feel they are reading things relevant to their personal interests are more likely to be involved in the reading of the story and in classroom discussions.
Select texts that connect to students' interests and backgrounds: reading articles or stories about common interests can spark conversations in classrooms as well as promote interest in reading the texts.
If some students in a classroom play baseball, reading a story about a baseball player can spark interest and enthusiasm in the classroom.
Give choices to students to allow them to take ownership of their reading: assigned readings (while sometimes necessary) do not encourage intrinsic motivation in reading. Rather, allowing students to choose what they read lets them take ownership of their own learning and select texts that are personally relevant to their culture and interests.
Provide constant positive feedback to students: because students who feel they are good readers will be more likely to want to read, consistently encouraging students that they are talented and able to read well can increase intrinsic motivation.
Encourage collaboration in the classroom that is centered around reading: students who work in groups and individually contribute towards a common cause are more likely to be interested in the subject. Organizing small "book clubs" in the classroom that allow students to socialize and talk about a book can create a relaxed atmosphere in which students feel they can freely express their views and ideas about the texts, and will increase interest in the content of the article or story.
Motivation within culture
Motivation in school is something that all members of a school community want to support in students, though few may realize that it can be influenced by culture. Because of each student's culture, levels of motivation may be quite different. As of now, the cultural practice schools tend to follow is that of the dominant U.S. culture. However, many students come from families that are much more diverse. For example, students of the Navajo and Apache cultures are less likely to answer their teacher's question in class if it seems as though they are competing with their peers. Students from lower-SES families typically display lower motivation and achievement and are at greater risk for school failure and dropout. One of the most significant reasons for this is familial socialization within lower-SES families, whose level of socialization within the family is often lower than that of middle- or higher-class families. Therefore, teachers should be aware of the cultural identities of these and all students, which shape their learning characteristics and motivation, and use that awareness to enhance their learning achievement. The cultural backgrounds of learners are significant because ethnic, racial, linguistic, social, religious, or economic differences can cause cultural disconnection, undermining the motivation to learn. Researchers like Eleuterio (1997) and Hoelscher (1999) observed that classrooms where teachers and students share their cultural identities build trust and foster stronger relationships, which leads to student engagement, higher motivation, and excitement about learning together. If a school community truly wants to promote the success of all students, it must recognize how achievement motivation varies culturally within the population it serves.
See also
Patricia Alexander
Book talk
References
Further reading
External links
Article abstracts about reading motivation
Motivation
Educational psychology
Reading motivation
Biology
1,196
45,239,565
https://en.wikipedia.org/wiki/Purple%20Crow%20Lidar
The Purple Crow Lidar is a powerful lidar (laser radar) that emits pulses of intensely bright light. The light scatters off air molecules, and the reflections are collected by a large telescope.
Telescope
The telescope is formed by rotating liquid mercury at 10 r.p.m. in a 2.65-m diameter container. This liquid mirror technology was developed at Université Laval in Québec City. The lidar allows air density, pressure, temperature, and composition to be measured, data that are useful for studying global warming and for weather prediction.
Location
The Purple Crow operates from the Echo Base Observatory at Western's Environmental Science Field Station near London, Ontario, Canada.
People in charge
The Purple Crow Lidar research project is headed by Professor Robert J. Sica of the Physics Department at the University of Western Ontario in London, Ontario. Its main support comes from the Natural Sciences and Engineering Research Council of Canada (NSERC).
See also
Large Zenith Telescope
Liquid-mirror telescope
References
Liquid mirror telescopes
Mirrors
Telescope types
Telescopes
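The rotation rate quoted in the Telescope section determines the mirror's focal length: a spinning liquid settles into a paraboloid with focal length f = g/(2ω²). The short sketch below applies that standard physics relation to the 10 r.p.m. figure; the formula is textbook material rather than anything stated in the article, so treat the computed value as illustrative.

```python
import math

# Focal length of a rotating liquid mirror: the surface of a spinning
# liquid forms a paraboloid with f = g / (2 * omega^2). This is the
# standard textbook relation, applied here to the 10 r.p.m. figure above.
g = 9.81                               # m/s^2
rpm = 10.0
omega = 2.0 * math.pi * rpm / 60.0     # rad/s, ~1.047

focal_length = g / (2.0 * omega**2)    # ~4.5 m
print(f"omega = {omega:.3f} rad/s, focal length = {focal_length:.2f} m")
```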
Purple Crow Lidar
Astronomy
237
34,402,185
https://en.wikipedia.org/wiki/Strozzi%20Institute
Strozzi Institute is an organization located in Oakland, California that offers coaching services and training in leadership, organizational development, and personal mastery. It uses a somatic approach to learning. Programs are offered primarily at the institute's Sonoma training center.
History
Strozzi Institute was founded in 1985 by Richard Strozzi-Heckler, Ph.D., as an application of his research into a "somatic philosophy of learning". In the 1970s, Strozzi-Heckler and Robert K. Hall, M.D., developed The Lomi School of body-oriented psychotherapy, influenced by the work of Fritz Perls, Ida Rolf, Randolph Stone, and Charan Singh. In addition to the usual psychiatric and psychoanalytic methods, this program adds touch, group process, breathwork, attention training, and movement to its approach, providing a framework for working with the mind unified with the physical body. In 1985, Strozzi-Heckler contributed to a pilot program for the U.S. Army Special Forces to evaluate mind-body approaches to military training. In addition to many of the measured outcomes related to increased endurance, alertness, capacity for stress management, and team cohesion, participants unexpectedly demonstrated significant increases in leadership characteristics. Reflecting on this, Strozzi-Heckler began exploring ways to adapt the program for leaders in business and government. In collaboration with Fernando Flores during the 1980s and 1990s, Flores's Ontology of Language research into speech acts was integrated and applied to the somatic leadership domain. In 2000, Strozzi Institute introduced a training program for coaches interested in its somatic approach.
Strozzi Somatics
The Strozzi Institute methodology, known as Strozzi Somatics, is used one-on-one and in groups of varying size. The methodology makes a distinction between the soma (the living body in its entirety) and the mechanistic view of the physical body as an assemblage of anatomical parts. Using the first definition, the body is regarded as the primary domain of feeling, action, language, and meaning. From this perspective, coaches observe the ways people hold their bodies and how they respond to stress situations, such as verbal or physical surprises. The body's overall organization in this way is referred to as a somatic shape. Each person's unique somatic shape is formed by responses to past experiences, positive and negative, which are established as deep, mostly unconscious patterns of muscular activity in the body. Over time these patterns produce conditioned tendencies of reaction to people, situations, and environments. Strozzi-Heckler has described the process in which, when an individual is exposed to a stressful stimulus, they revert to this conditioned tendency, limiting their available choices for action. Withdrawal, fear, attempting to dominate, rigidity, and over-accommodation are examples of different conditioned-tendency shapes. Because this is a somatic event rather than an exclusively cognitive one, new information or theoretical insight will not shift the response. As an illustration, he describes a team leader who has extensive education in management principles but, especially under stress, comports himself in a way that produces mistrust and resentment from the people he manages, eventually creating significant breakdowns within his team.
Leadership
Strozzi Institute has stressed that leadership characteristics often considered innate are teachable, and can be improved with practice.
These traits include: high self-awareness and awareness of the environment, openness to possibilities rather than limitation to past options, motivation by a connection to what one cares about, the ability to deal directly with matters that need attention, directing attention outward in a way that enables listening and connection with others, and the ability to coordinate with others and empathize with their concerns.
Aikido influence
Many of the practices taught are adapted from aikido and different forms of meditation. Aikido movements are presented in a non-martial context, and principles of the art such as centering oneself, facing an attack, extending outward into the environment, entering into shared space, and blending with the momentum of an incursion are used as physical metaphors to guide the practice of embodying leadership characteristics. An aikido dojo, Two Rock Aikido, is located on the Strozzi Institute site in Sonoma.
Strozzi Bodywork
Working from the premise that the body and the self are indistinguishable from one another, Strozzi Institute offers training in a style of bodywork developed by Strozzi-Heckler and Hall to produce change in a person's core historical limitations. Strozzi Bodywork involves addressing deeply held muscular contractions (also known as armoring) maintained in the soma using touch, breath, and directed attention. Practitioners train to develop an empathetic, compassionate presence that can build trust and enable them to work with others through a variety of emotional states. Some somatic coaches use Strozzi Bodywork in their coaching sessions.
Applications
The Strozzi Somatics methodology has been applied to a broad range of organizations and with diverse populations. These include Fortune 50 companies, U.S. Navy SEALs, U.S. Marines, law enforcement agencies, social justice groups, professional sports teams, as well as urban gang members, prisoners, Olympic athletes, and survivors of sexual trauma. Strozzi Institute has contributed to U.S. military counter-insurgency training, integrating somatic practices to enhance soldiers' abilities to connect with others across cultures rather than rely predominantly on force. Strozzi-Heckler has said, "Working with the body gives you a way to do that because it transcends words and language. It takes us to that common core of being human."
Further reading
Strozzi-Heckler, R., The Leadership Dojo, Frog Books (2007)
Strozzi-Heckler, R., The Anatomy of Change, North Atlantic Books (1997)
Haines, S., Healing Sex: A Mind-Body Approach to Healing Sexual Trauma, Cleis (2007)
Leonard, G., Mastery: The Keys to Success and Long-Term Fulfillment, Penguin (1992)
Keleman, Stanley, Somatic Reality, Center Press (1982)
References
External links
Strozzi Institute website
Leadership studies
Personal development
Mind–body interventions
Life coaches
Educational psychology organizations
Petaluma, California
Somatic psychology
Strozzi Institute
Biology
1,302
33,353,100
https://en.wikipedia.org/wiki/Vibrio%20anguillarum
Vibrio anguillarum is a species of prokaryote that belongs to the family Vibrionaceae, genus Vibrio. V. anguillarum cells are typically 0.5–1 μm in diameter and 1–3 μm in length. It is a gram-negative, comma-shaped rod bacterium that is commonly found in seawater and brackish waters. It is polarly flagellated, non-spore-forming, halophilic, and facultatively anaerobic, and has the ability to form biofilms. V. anguillarum is pathogenic to various fish species, crustaceans, and mollusks. Vibrio anguillarum can grow at temperatures as low as 5 °C but grows best at 37 °C, and favors saline and slightly basic water for growth. V. anguillarum was shown to be penicillin-resistant when tested with the Rosco Neo-Sensitabs system against the antibiotics novobiocin and penicillin. In lab cultures, colonies reach up to 1 mm after 24 hours of incubation and 4–5 mm after a week of incubation. Young colonies appear yellow and turn brown as they age. When grown in broth, growth starts in the upper part of the test tube and reaches the bottom over two days. Cultures start as lightly turbid but develop into films and deposits in later stages.
Discovery
The discovery and understanding of Vibrio anguillarum has evolved over time through the contributions of various researchers.
Canestrini's observations (1893)
In 1893, Canestrini made pioneering observations on epizootics among migrating eels (Anguilla vulgaris), noting their association with a bacterium he termed Bacillus anguillarum. Canestrini meticulously documented the clinical signs exhibited by infected eels, laying the groundwork for further investigations into the pathogenic nature of this bacterium.
Bergman's description (1909)
Expanding upon Canestrini's work, Bergman's description in 1909 provided a comprehensive account of Vibrio anguillarum as the etiological agent responsible for the "Red Pest of eels" in the Baltic Sea. Bergman's observations detailed the clinical manifestations of the disease in infected eels, explaining the pathological changes associated with V. anguillarum infection. His work not only confirmed the pathogenicity of this bacterium but also underscored its significance as a major threat to aquatic organisms in marine environments.
Disease description
Research by Gunnar Holt provided crucial insights into the emergence of Vibrio anguillarum as a pathogen in Norwegian coastal waters. Until 1964, V. anguillarum had not been associated with fish disease in Norway. However, Holt documented epizootic outbreaks of vibriosis in rainbow trout reared in seawater, causing substantial mortality in affected populations. Holt's investigations revealed a range of disease manifestations associated with vibriosis, including sudden mortality and varied pathological findings upon necropsy. These findings highlighted the severity and diversity of symptoms observed in affected fish populations, emphasizing the need for further research into disease prevention and control strategies.
Biochemistry
In addition to basic, saline water, Vibrio anguillarum can grow on MacConkey agar and TCBS agar. Larsen (1983) tested the hemolysis of V. anguillarum by measuring growth in an agar base with 5% citrated calf blood; hemolysis was observed just beneath the colonies and in a semitransparent zone surrounding them. In general, different Vibrio anguillarum strains respond similarly to various biochemical tests. Larsen (1983) tested V. anguillarum fermentation of various carbohydrates and glycosides.
Most V. anguillarum strains were found to be able to ferment glucose, fructose, galactose, mannitol, mannose, maltose, sucrose, trehalose, dextrin, glycogen, chitin, and ONPG. No fermentation reactions were found with xylose, adonitol, dulcitol, rhamnose, inositol, melezitose, raffinose, or inulin. Only a few V. anguillarum strains were found to ferment lactose, melibiose, aesculin, and salicin. In tests with amino acids, proteins, lipids, and other compounds, most or all V. anguillarum strains showed positive activity with arginine dihydrolase, indole (tryptophan deaminase), catalase, oxidase, nitrate, hemolysin, lipase, and various proteases. Fish-pathogen strains of V. anguillarum, but not environmental strains, showed positive reactions in VP, 2,3-butanediol, citrate, NH4/glucose medium, and gluconate tests.
Iron uptake systems
Vibrio anguillarum has multiple iron uptake systems, including TonB-dependent transporters and outer membrane receptors. V. anguillarum also has an iron-sequestering system that allows it to sequester iron from haem and haem-containing proteins. Vibrio anguillarum produces the siderophores anguibactin and vanchrobactin, small molecules used to scavenge and transport iron. Siderophores are important virulence factors for V. anguillarum because they enable the bacteria to obtain iron from the host and evade the host's immune system, essentially allowing the bacteria to compete with the host for iron and establish an infection. The genes involved in the biosynthesis and uptake of these siderophores are located on the virulence plasmid of V. anguillarum. After the secreted siderophore binds to iron, the chelated iron complex is transported to the cytosol: the complex first binds to FatA receptors on the outer membrane and is transported into the cell, and the FatB/FatC/FatD receptors are then involved in iron transport between the periplasm and cytosol. The iron uptake system is negatively controlled by the Fur protein, which is chromosomally encoded and represses transcription by binding to and bending the DNA. The iron uptake system is further controlled by the plasmid-encoded regulators AngR and TAFr.
Genome
Vibrio anguillarum has two circular chromosomes, and many strains carry a virulence plasmid. The number of protein-coding genes varies by strain, but on average chromosome one has 1,891 genes and chromosome two has 479 genes. A study on Vibrio anguillarum NB10Sm, a pathogenic serotype O1 strain, found 329 essential genes, 95 domain-essential genes, and 25 essential genes not found in other Vibrio species.
Serotypes
Strains are categorized by O serotype, since O-antigens were found to be the most specific surface antigens. There are 23 known serotypes of Vibrio anguillarum, O1 through O23, but only serotypes O1, O2, and O3 are known to be pathogenic.
pJM1
The pJM1 virulence plasmid and pJM1-like plasmids allow strains of Vibrio anguillarum that carry them to survive in environments with low levels of bioavailable iron, such as the inside of a fish, by releasing iron from molecules that sequester it, such as transferrin and lactoferrin. The pJM1 plasmid is approximately 65 kbp with a G+C content of 42.6%. pJM1 plasmids from different host species and geographical regions generally show little variation. One study found that almost all serotype O2 and O3 strains, as well as a serotype O1 strain without a pJM1-like plasmid, carried genes encoding the biosynthesis of the siderophore piscibactin.
Pathogenicity
Vibrio anguillarum can infect many species of freshwater and marine fish, as well as bivalves and crustaceans. In fish, V. anguillarum infection can cause a hemorrhagic septicemia called vibriosis. V. anguillarum is more virulent at cooler temperatures, potentially influenced by the fact that piscibactin production is favored at lower temperatures. Chemotactic motility via flagella is necessary for the virulence of V. anguillarum in water. The discovery of a metalloprotease with mucinase activity, and the severe reduction in virulence in its absence, suggests its use in penetrating the host fish's protective mucus layer. V. anguillarum also possesses genes for several hemolysins, which are thought to be the main contributors to hemorrhaging in fish with vibriosis. Vibrio anguillarum is capable of colonizing and growing in the gastrointestinal tract of fish, utilizing intestinal mucus as a nutrient. Clinical signs of vibriosis include skin ulcers, hemorrhages, sepsis, and systemic infections. Vibriosis outbreaks are a significant concern in global aquaculture due to their impact on fish health and the development of antibiotic resistance, which can lead to significant economic losses. Control measures for V. anguillarum in aquaculture include hygiene practices, vaccination, and, in some cases, the use of antibiotics. Inactivated whole-cell vaccines are available, but there is a need for more effective and safer subunit vaccines.
Vibrio anguillarum produces an extracellular protease, the EmpA metalloprotease, which plays a role in its pathogenesis. This protease is encoded by the empA gene. The gene is induced when cells are at high density and incubated in gastrointestinal mucus, and it is expressed during the stationary phase. EmpA expression is regulated by multiple factors, including cell density, gastrointestinal mucus, quorum sensing (QS) signals, and the alternative sigma factor RpoS. The EmpA metalloprotease is a main factor in tissue damage and destruction during infection in salmonids, similar to other proteases produced by pathogenic bacteria. Conditioned cells from an empA mutant strain were found to induce protease activity, which suggests the presence of an unidentified autoinducer. Although typically not associated with disease in humans, in 2017 an immunocompromised woman died in hospital from sepsis and multiorgan failure, and laboratory tests confirmed the presence of Vibrio anguillarum in her blood.
Ecology
Vibrio anguillarum is a ubiquitous marine bacterium found in various aquatic environments worldwide, particularly in marine coastal ecosystems. Its ecology is closely linked to its ability to infect and colonize a range of aquatic organisms, including fish, shellfish, and crustaceans.
Impact on aquaculture
The presence of Vibrio anguillarum poses a significant threat to aquaculture operations, particularly those focused on fish farming. Vibriosis outbreaks can result in substantial economic losses due to mortality and decreased productivity. The economic burden of preventing and treating vibriosis can be considerable, as it often involves the use of antibiotics, vaccines, and other management strategies. Additionally, the loss of valuable fish stocks can have long-term implications for the sustainability of aquaculture businesses.
Environmental factors
The behavior of Vibrio anguillarum is intricately linked to environmental factors, including temperature, iron availability, and water conditions, which play pivotal roles in its pathogenicity and disease management.
Temperature
Temperature is a critical environmental factor influencing the virulence and the expression of virulence factors in Vibrio anguillarum. Despite its optimal growth temperature of around 25–34 °C, Vibrio anguillarum exhibits temperature-dependent variations in virulence. This temperature-dependent expression of virulence factors underscores the significance of understanding how environmental cues shape the pathogenicity of Vibrio anguillarum, particularly in the context of aquaculture practices conducted in varied temperature regimes.
Iron availability
Iron, a nutrient crucial for both bacterial growth and virulence, plays a significant role in regulating the expression of virulence factors in Vibrio anguillarum. When iron levels are low, Vibrio anguillarum undergoes significant metabolic adjustments, leading to increased expression of genes associated with virulence. Notably, genes linked to siderophore systems like vanchrobactin and piscibactin are particularly active under conditions of iron scarcity, with piscibactin showing heightened transcription at lower temperatures. This heightened activity of siderophore systems contributes to the increased virulence of Vibrio anguillarum in colder environments, illustrating the intricate relationship between iron availability, temperature, and the expression of virulence factors in determining the severity of disease.
Water conditions
The aquatic environment significantly influences Vibrio anguillarum ecology and the control of vibriosis outbreaks in aquaculture. Factors like salinity, nutrient availability, water flow, oxygen levels, and biofilm presence affect the bacterium's survival, growth, and virulence, shaping disease spread among aquatic organisms. Effective management of water quality parameters, including salinity and nutrient levels, is crucial for regulating Vibrio anguillarum populations and mitigating vibriosis risks in aquaculture settings. Diligent monitoring and maintenance of optimal water conditions are vital aspects of disease control strategies, fostering the well-being and productivity of aquaculture operations while reducing the impact of bacterial pathogens.
See also
Vibrio fischeri
Vibrio harveyi
Vibrio ordalii
Vibrio tubiashii
Vibrio vulnificus
Serotype
Virulence
References
External links
Type strain of Vibrio anguillarum at BacDive - the Bacterial Diversity Metadatabase
Vibrionales
Bacterial diseases of fish
Bacteria described in 1854
Marine microorganisms
Vibrio anguillarum
Biology
3,004
11,157,348
https://en.wikipedia.org/wiki/Cobalt%28II%29%20naphthenate
Cobalt(II) naphthenate is a mixture of cobalt(II) derivatives of naphthenic acids. These coordination complexes are widely used as oil drying agents for the autoxidative crosslinking of drying oils. Metal naphthenates are not well defined in the conventional chemical sense, in that they are mixtures. They are widely employed as catalysts because they are soluble in nonpolar substrates, such as alkyd resins or linseed oil; the fact that naphthenates are mixtures helps to confer high solubility. A second virtue of these species is their low cost. A well-defined compound that exhibits many of the properties of cobalt naphthenate is the cobalt(II) complex of 2-ethylhexanoic acid. Often in the technical literature, naphthenates are described as salts, but they are probably also non-ionic coordination complexes with structures similar to basic zinc acetate. The catalytic properties of cobalt(II) naphthenates are similar to those of related compounds containing manganese and iron. Such species are sometimes classified as active driers: catalysts that feature redox-active metal centers, which promote redox reactions with hydroperoxide-containing intermediates.
Toxicity and safety
Cobalt naphthenate is a moderately toxic substance that can cause a range of acute and chronic conditions, and it is also a carcinogen. It is most commonly used diluted in mineral spirits or mineral oil. Safety equipment must be used to avoid eye and skin contact. The pure compound has a high vapor density of 3.9 (air = 1) and a low vapor pressure of 1 mm Hg at 25 °C (77 °F).
References
Salts of carboxylic acids
Cobalt(II) compounds
Cobalt(II) naphthenate
Chemistry
370
9,985,739
https://en.wikipedia.org/wiki/Assessment%20of%20basic%20language%20and%20learning%20skills
The assessment of basic language and learning skills (ABLLS, often pronounced "ables") is an educational tool used frequently with applied behavior analysis (ABA) to measure the basic linguistic and functional skills of an individual with developmental delays or disabilities.
Development
The revised assessment of basic language and learning skills (ABLLS-R) is an assessment tool, curriculum guide, and skills-tracking system used to help guide the instruction of language and critical learner skills for children with autism or other developmental disabilities. It provides a comprehensive review of 544 skills from 25 skill areas, including language, social interaction, self-help, academic, and motor skills, that most typically developing children acquire prior to entering kindergarten. Expressive language skills are assessed based upon the behavioral analysis of language presented by B. F. Skinner in his book Verbal Behavior (1957). The task items within each skill area are arranged from simpler to more complex tasks. This practical tool facilitates the identification of skills needed by the child to effectively communicate and learn from everyday experiences. The information obtained from the assessment allows parents and professionals to pinpoint obstacles that have been preventing a child from acquiring new skills and to develop a comprehensive language-based curriculum.
The ABLLS-R comprises two documents. The ABLLS-R Protocol is used to score the child's performance on the task items and provides 15 appendices that allow for the tracking of a variety of specific skills included in the assessment. The ABLLS-R Guide provides information about the features of the ABLLS-R, how to correctly score items, and how to develop Individualized Education Program (IEP) goals and objectives that clearly define and target the learning needs of a student.
The original version was first released in 1998 by Behavior Analysts, Inc. and was developed by James W. Partington, Ph.D., BCBA-D, and Mark L. Sundberg, Ph.D., BCBA-D. It was revised in 2006 by Partington. The revised version incorporates many new task items and provides a more specific sequence in the developmental order of items within the various skill areas. Significant changes were made in the revised version of the vocal imitation section with input from Denise Senick-Pirri, SLP-CCC. Additional improvements were made to incorporate items associated with social interaction skills, motor imitation, and other joint attention skills, and to ensure the fluent use of established skills. Dr. Mark Sundberg later went on to author his own verbal behavior assessment, the VB-MAPP, in 2008.
Another assessment tool for learning is the International Development and Early Learning Assessment (IDELA). This tool is used to measure and compare a child's behavioral development and learning capabilities, usually between the ages of three and six years, across countries. Countries that have used IDELA include Afghanistan, Bolivia, Ethiopia, Uganda, and Vietnam. The IDELA is based on a child's emergent literacy, emergent numeracy, social-emotional skills, and motor skills.
WebABLLS and normative data
The WebABLLS is an electronic version of the assessment. It allows parents, teachers, speech pathologists, behavior analysts, and others who design, coordinate, or supervise language or skill-acquisition programs to expedite the development of IEPs and progress reports, and to easily share information about a child.
The WebABLLS provides videos of many skills that are measured by the ABLLS-R and can be used to demonstrate those specific skills. Over the past four years, parents, relatives, and friends of typically developing children have been participating in an ongoing research project by entering data into the WebABLLS. The data are collected by parents or professionals who both know the children and have received training in the administration of the ABLLS-R. The data are updated at three-month intervals (i.e., 6 months, 9 months, 12 months) in order to track the specific changes in skills over the course of the children's development. These preliminary data have been collected in a systematic manner to provide information about when each skill measured by the ABLLS-R is usually acquired by typically developing children. The preliminary data from this research project are from 81 children (42 females and 39 males) ranging in age from 6 months to 60 months. The children are from a variety of geographical locations (both national and international) and of differing ethnic, socio-economic, and educational backgrounds. The average percentage of the total possible score, along with the range from the highest to the lowest scores, is reported for the sample at each three-month age interval. The data clearly indicate that typically developing children demonstrate most of the basic language and learning skills measured by the ABLLS-R by the time they are 4 to 5 years of age.
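Since the normative data above are reported as percentages of the total possible score, a small sketch of that computation may help. The protocol's actual per-item scoring rubric is not described here, so the data structure, skill-area labels, and point values below are hypothetical.

```python
# Sketch of a percent-of-total ABLLS-R style score. The skill areas are
# lettered in the real instrument, but the labels and the per-area
# (earned, possible) point values below are hypothetical; the actual
# protocol's rubric is not reproduced here.
child_scores = {        # (points earned, points possible) per skill area
    "A (cooperation)": (24, 30),   # assumed labels and values
    "F (requests)":    (40, 62),
    "G (labeling)":    (15, 48),
}

earned = sum(e for e, _ in child_scores.values())
possible = sum(p for _, p in child_scores.values())
print(f"percent of total possible score: {100 * earned / possible:.1f}%")

for area, (e, p) in child_scores.items():
    print(f"  {area}: {100 * e / p:.0f}%")
```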
Not a fully standardized assessment. IDELA is too generalized, which can bias comparisons between countries. See also Applied behavior analysis Autism therapies Educational psychology Verbal Behavior (book) The Analysis of Verbal Behavior (journal) References Further reading Aman, M. G., Novotny, S., Samango-Sprouse, C., Lecavalier, L., Leonard, E., Gadow, K. D., King, B. H., Pearson, D. A., Gernsbacher, M. A. & Chez, M. (2004). Outcome Measures for Clinical Drug Trials in Autism. CNS Spectrums, 9 (1), 36–47. National Research Council (2002). Educating Children with Autism. Committee on Educational Interventions for Children with Autism. Catherine Lord and James P. McGee, eds. Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press. Neisworth, J. T. & Wolfe, P. S. (2005). The Autism Encyclopedia. Baltimore, MD: Paul H. Brookes Publishing Co. Sallows, G. & Graupner, T. (2005). Intensive Behavioral Treatment for Children With Autism: Four-Year Outcome and Predictors. American Journal on Mental Retardation, 110 (6), 417–438. Schwartz, I. S., Boulware, G., McBride, B. J. & Sandall, S. R. (2001). Functional Assessment Strategies for Young Children with Autism. Focus on Autism and Other Developmental Disabilities, 16 (4), 222–227. Thompson, Travis. (2011). Individualized Autism Interventions for Young Children. Baltimore, MD: Paul H. Brookes Publishing Co. Thompson, Travis. (2007). Making Sense of Autism. Baltimore, MD: Paul H. Brookes Publishing Co. External links Cambridge Center for Behavioral Studies - an organization that provides research-based information regarding effective treatment for individuals with a diagnosis of autism spectrum disorders Association for Science in Autism Treatment - an organization that provides research-based information regarding effective treatment for individuals with a diagnosis of autism spectrum disorders Behavior Analysts, Inc. – the company that designed and publishes the ABLLS-R WebABLLS.com - web based implementation of ABLLS-R Child development Educational psychology Behaviorism
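The normative-data tracking described in the WebABLLS section above — totalling a child's earned points and expressing them as a percentage of the total possible score at each 3-month age interval — amounts to simple bookkeeping. The following is a minimal sketch of that idea; the 544-item figure comes from the article, but the record values, the total-possible score, and the function names are hypothetical illustrations, not part of the published instrument.

```python
from collections import defaultdict

# Hypothetical records: (age_in_months, points_earned, points_possible).
# The ABLLS-R covers 544 task items across 25 skill areas (per the text);
# the concrete point totals below are invented for illustration only.
records = [
    (6, 40, 2000), (9, 120, 2000), (12, 260, 2000),
    (48, 1780, 2000), (60, 1960, 2000),
]

def percent_of_possible(earned: int, possible: int) -> float:
    """Express a score as a percentage of the total possible score."""
    return 100.0 * earned / possible

# Group scores into 3-month age intervals, as the research project does.
by_interval = defaultdict(list)
for age, earned, possible in records:
    interval = (age // 3) * 3  # e.g. ages 6-8 months fall in the 6-month bin
    by_interval[interval].append(percent_of_possible(earned, possible))

for interval in sorted(by_interval):
    scores = by_interval[interval]
    print(f"{interval:>2} mo: mean {sum(scores)/len(scores):.1f}%, "
          f"range {min(scores):.1f}-{max(scores):.1f}%")
```

With more intervals populated, the same grouping yields the average-and-range presentation the article describes for each 3-month bin.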
Assessment of basic language and learning skills
Biology
1,846
2,680,696
https://en.wikipedia.org/wiki/List%20of%20compounds%20with%20carbon%20number%209
This is a partial list of molecules that contain 9 carbon atoms. See also Carbon number List of compounds with carbon number 8 List of compounds with carbon number 10 C09
List of compounds with carbon number 9
Chemistry
36
22,016,602
https://en.wikipedia.org/wiki/Bar%20%28river%20morphology%29
A bar in a river is an elevated region of sediment (such as sand or gravel) that has been deposited by the flow. Types of bars include mid-channel bars (also called braid bars and common in braided rivers), point bars (common in meandering rivers), and mouth bars (common in river deltas). The locations of bars are determined by the geometry of the river and the flow through it. Bars reflect sediment supply conditions, and can show where the sediment supply rate is greater than the transport capacity. Mid-channel bars A mid-channel bar is also often referred to as a braid bar, because such bars are commonly found in braided river channels. Braided river channels are broad and shallow and found in areas where sediment is easily eroded, such as at a glacial outwash or at a mountain front with high sediment loads. These types of river systems are associated with high slope, sediment supply, stream power, shear stress, and bed load transport rates. Braided rivers have complex and unpredictable channel patterns, and sediment size tends to vary among streams. It is these features that are responsible for the formation of braid bars. Braided streams are often overfed with massive amounts of sediment, which creates multiple stream channels within one dominant pair of flood plain banks. These channels are separated by mid-channel or braid bars. Anastomosing river channels also create mid-channel bars; however, these are typically vegetated bars, making them more permanent than the bars found in a braided river channel, which have high rates of change because of the large amounts of non-cohesive sediment, lack of vegetation, and high stream power found in braided river channels. Bars can also form mid-channel due to snags or logjams. For example, if a stable log is deposited mid-channel in a stream, this obstructs the flow and creates local flow convergence and divergence. This causes erosion on the upstream side of the obstruction and deposition on the downstream side. The deposition that occurs on the downstream side can create a central bar, and an arcuate bar can be formed as flow diverges upstream of the obstruction. Continuous deposition downstream can build up the central bar to form an island. Eventually the logjam can become partially buried, which protects the island from erosion, allowing vegetation to begin to grow and stabilize the area even further. Over time, the bar can eventually attach to one side of the channel bank and merge into the flood plain. Point bars A point bar is an area of deposition typically found in meandering rivers. Point bars form on the inside of meander bends in meandering rivers. As the flow moves around the inside of the bend, the water slows down; the shallow flow and low shear stresses there reduce the amount of material that can be carried. Point bars are usually crescent shaped and located on the inside curve of the river bend. The excess material falls out of transport and, over time, forms a point bar. Point bars are typically found in the slowest moving, shallowest parts of rivers and streams, often parallel to the shore, and occupy the area farthest from the thalweg, which lies toward the outside curve of the river bend in a meandering river. There, at the deepest and fastest part of the stream, is the cut bank, the area of a meandering river channel that continuously undergoes erosion. 
The faster the water in a river channel flows, the more sediment, and the larger the pieces of sediment, it can pick up, which increases the river's bed load. Over a long enough period of time, the combination of deposition along point bars and erosion along cut banks can lead to the formation of an oxbow lake. Mouth bars A mouth bar is an elevated region of sediment typically found at a river delta, at the mouth of a river where it flows out to the ocean. Sediment is transported by the river and deposited, mid-channel, at the mouth of the river. This occurs because, as the river widens at the mouth, the flow slows, and sediment settles out and is deposited. After its initial formation, a river mouth bar tends to prograde. This is caused by the pressure from the flow on the upstream face of the bar. This pressure creates erosion on that face of the bar, allowing the flow to transport this sediment over or around the bar and re-deposit it farther downstream, closer to the ocean. River mouth bars stagnate, or cease to prograde, when the water above the bar is shallow enough to create a pressure on the upstream side of the bar strong enough to force the flow around the deposit rather than over the top of the bar. This divergent channel flow around either side of the sediment deposit continuously transports sediment, which over time is deposited on either side of this original mid-channel deposit. As more and more sediment accumulates across the mouth of the river, it builds up to eventually create a sand bar that has the potential to extend across the entire width of the river mouth and block the flow. See also References Further reading Fluvial landforms Sedimentology Geomorphology Hydrology River islands Water streams
Bar (river morphology)
Chemistry,Engineering,Environmental_science
1,059
1,859,100
https://en.wikipedia.org/wiki/Tributyl%20phosphate
Tributyl phosphate, known commonly as TBP, is an organophosphorus compound with the chemical formula (CH3CH2CH2CH2O)3PO. This colourless, odorless liquid finds some applications as an extractant and a plasticizer. It is an ester of phosphoric acid with n-butanol. Production Tributyl phosphate is manufactured by reaction of phosphoryl chloride with n-butanol. POCl3 + 3 C4H9OH → PO(OC4H9)3 + 3 HCl Production is estimated at 3,000–5,000 tonnes worldwide. Use TBP is a solvent and plasticizer for cellulose esters such as nitrocellulose and cellulose acetate, similarly to tricresyl phosphate. It is also used as a flame retardant for cellulose fabrics such as cotton. It forms stable hydrophobic complexes with some metals; these complexes are soluble in organic solvents as well as supercritical CO2. The major uses of TBP in industry are as a component of aircraft hydraulic fluid, brake fluid, and as a solvent for extraction and purification of rare-earth metals from their ores. TBP finds its use as a solvent in inks, synthetic resins, gums, adhesives (namely for veneer plywood), and herbicide and fungicide concentrates. As it has no odour, it is used as an anti-foaming agent in detergent solutions, and in various emulsions, paints, and adhesives. It is also found as a de-foamer in ethylene glycol-borax antifreeze solutions. In oil-based lubricants addition of TBP increases the oil film strength. It is used also in mercerizing liquids, where it improves their wetting properties. It can be used as a heat-exchange medium. TBP is used in some consumer products such as herbicides and water-thinned paints and tinting bases. Nuclear chemistry Tributyl phosphate is used in combination with di(2-ethylhexyl)phosphoric acid for the solvent extraction of uranium, as part of the purification of natural ores. It is also used in nuclear reprocessing as part of the PUREX process. A 15–40% (usually about 30%) solution of tributyl phosphate in kerosene or dodecane is used in the liquid–liquid extraction (solvent extraction) of uranium, plutonium, and thorium from spent uranium nuclear fuel rods dissolved in nitric acid. Liquid extraction can also be used for chemical uranium enrichment. Hazards In contact with concentrated nitric acid the TBP-kerosene solution forms hazardous and explosive red oil. References Organophosphates Solvents Plasticizers Radioactive waste Chelating agents Phosphate esters Butyl esters Flame retardants
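The production reaction quoted above (POCl3 + 3 C4H9OH → PO(OC4H9)3 + 3 HCl) fixes the theoretical mass balance of TBP manufacture. Below is a minimal sketch of that stoichiometric arithmetic, assuming standard atomic weights; the rounded molar masses and function names are our own working values, not figures from the article.

```python
# Approximate molar masses in g/mol, computed from standard atomic weights.
M_POCL3 = 153.33    # phosphoryl chloride, POCl3
M_BUTANOL = 74.12   # n-butanol, C4H9OH
M_TBP = 266.32      # tributyl phosphate, (C4H9O)3PO

def theoretical_tbp_mass(mass_pocl3_g: float) -> float:
    """Mass of TBP produced per mass of POCl3 at 100% conversion (1:1 molar)."""
    return mass_pocl3_g / M_POCL3 * M_TBP

def butanol_required(mass_pocl3_g: float) -> float:
    """Mass of n-butanol consumed (3 mol per mol of POCl3)."""
    return mass_pocl3_g / M_POCL3 * 3 * M_BUTANOL

print(theoretical_tbp_mass(1000.0))  # ~1737 g of TBP per kg of POCl3
print(butanol_required(1000.0))      # ~1450 g of n-butanol per kg of POCl3
```

At the 3,000–5,000 tonne production scale mentioned above, real plant yields would of course fall below this 100%-conversion ceiling.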
Tributyl phosphate
Chemistry,Technology
616
2,891,737
https://en.wikipedia.org/wiki/Paul%20Werner%20Gast
Paul Werner Gast (September 11, 1930 – May 16, 1973) was an American geochemist and geologist. He was born in Chicago to German immigrants and attended Wheaton College, Illinois, from which he graduated in 1952. He earned a Ph.D. from Columbia University in 1957. After graduation, he taught at the University of Minnesota until 1965, when he became professor of geology at Columbia. In 1969 Paul Gast assumed leadership of the geo-science management of the Manned Spacecraft Center in preparation for Apollo mission sample returns from the Moon. He served as chief scientist of the Apollo Lunar Science Staff. He was a member of the science consultant group known unofficially as the "Four Horsemen," along with Jim Arnold, Bob Walker, and Gerry Wasserburg. He died at the age of 42 and was survived by his wife, Joyce Rinehart, two sons and a daughter. During his career he pioneered the study of rare earth elements in examining the crust, mantle, and interior of the planet. He led the development of the use of rubidium-strontium and uranium-lead radiometric dating methods for rocks, particularly for samples returned from the Moon. His examinations of trace elements resulted in new understanding of how volcanic fluids originate. Awards and honors Geochemistry Fellow of the Geochemical Society. V. M. Goldschmidt Award of the Geochemical Society, 1972. James Furman Kemp medal, 1973. Space Science Award, 1973. The Geochemical Society named its Paul W. Gast Lecture Series after him. The publication "Trace Elements in Igneous Petrology" was published in his memory. Dorsum Gast, a wrinkle ridge on the Moon, is named after him. Works Gast, P. W., "Isotopic Geochemistry", Columbia University. He was also co-author of multiple papers on geology. References 1930 births 1973 deaths American geochemists Wheaton College (Illinois) alumni Rare earth scientists Columbia University faculty Columbia Graduate School of Arts and Sciences alumni Recipients of the V. M. Goldschmidt Award 20th-century American chemists
Paul Werner Gast
Chemistry
429
2,527,122
https://en.wikipedia.org/wiki/Isotopes%20of%20beryllium
Beryllium (4Be) has 11 known isotopes and 3 known isomers, but only one of these isotopes (9Be) is stable and a primordial nuclide. As such, beryllium is considered a monoisotopic element. It is also a mononuclidic element, because its other isotopes have such short half-lives that none are primordial and their abundance is very low (the standard atomic weight is 9.0122). Beryllium is unique in being the only monoisotopic element with both an even number of protons and an odd number of neutrons. There are 25 other monoisotopic elements, but all have odd atomic numbers and even numbers of neutrons. Of the 10 radioisotopes of beryllium, the most stable are 10Be, with a half-life of about 1.39 million years, and 7Be, with a half-life of 53.3 days. All other radioisotopes have half-lives under about 14 seconds, most well under a second; the least stable isotopes have half-lives on the order of 10−21 seconds. The 1:1 neutron–proton ratio seen in stable isotopes of many light elements (up to oxygen, and in elements with even atomic number up to calcium) is prevented in beryllium by the extreme instability of 8Be toward alpha decay, which is favored due to the extremely tight binding of 4He nuclei. The half-life for the decay of 8Be is only about 8.2×10−17 seconds. Beryllium is prevented from having a stable isotope with 4 protons and 6 neutrons by the very lopsided neutron–proton ratio for such a light element. Nevertheless, this isotope, 10Be, has a half-life of about 1.39 million years, which indicates unusual stability for a light isotope with such a large neutron/proton imbalance. Other possible beryllium isotopes have even more severe mismatches in neutron and proton number, and thus are even less stable. Most beryllium in the universe is thought to have been formed by cosmic ray nucleosynthesis from cosmic ray spallation in the period between the Big Bang and the formation of the Solar System. The isotopes 7Be, with a half-life of 53.3 days, and 10Be are both cosmogenic nuclides, because they are made on a recent timescale in the Solar System by spallation, like 14C. List of isotopes [Table: the known beryllium nuclides from 5Be to 16Be, plus the isomers 8mBe, 9mBe, 11mBe, 12mBe, 13mBe and 14mBe, with their proton and neutron numbers, half-lives, decay modes (p, 2p, ε, α, β−, β−α, β−p, β−n, β−2n, β−t, n, 2n and IT), daughter nuclides, spin-parities and natural abundances. 9Be is the only stable nuclide; 7Be and 10Be occur naturally in traces.] Beryllium-7 Beryllium-7 is an isotope with a half-life of 53.3 days that is generated naturally as a cosmogenic nuclide. The rate at which the short-lived 7Be is transferred from the air to the ground is controlled in part by the weather. 7Be decay in the Sun is one of the sources of solar neutrinos, and the first type ever detected, using the Homestake experiment. The presence of 7Be in sediments is often used to establish that they are fresh, i.e. less than about 3–4 months in age, or about two half-lives of 7Be. Beryllium-10 Beryllium-10 has a half-life of about 1.39 million years and decays by beta decay to stable boron-10 with a maximum energy of 556.2 keV. It is formed in the Earth's atmosphere mainly by cosmic ray spallation of nitrogen and oxygen. 10Be and its daughter product have been used to examine soil erosion, soil formation from regolith, the development of lateritic soils and the age of ice cores. 10Be is a significant isotope used as a proxy measure for cosmogenic nuclides to characterize solar and extra-solar attributes of the past from terrestrial samples. Decay chains Most isotopes of beryllium within the proton/neutron drip lines decay via beta decay and/or a combination of beta decay and alpha decay or neutron emission. However, 7Be decays only via electron capture, a phenomenon to which its unusually long half-life may be attributed. Notably, its half-life can be artificially lowered by 0.83% via endohedral enclosure (7Be@C60). Also anomalous is 8Be, which decays via alpha decay to 4He. This alpha decay is often considered fission, which would be able to account for its extremely short half-life. Notes References Beryllium
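The sediment-freshness rule of thumb in the Beryllium-7 section above — "fresh" meaning younger than about two 7Be half-lives — follows directly from simple exponential decay. Here is a minimal sketch using the 53.3-day half-life quoted in the article; the function name and structure are illustrative, not from any particular library.

```python
# Fraction of cosmogenic 7Be remaining after t days, using the
# 53.3-day half-life quoted in the article text.
HALF_LIFE_BE7_DAYS = 53.3

def fraction_remaining(t_days: float, half_life: float = HALF_LIFE_BE7_DAYS) -> float:
    """N/N0 = 2^(-t / T_half) for simple exponential decay."""
    return 2.0 ** (-t_days / half_life)

# After two half-lives (~3.5 months) only a quarter of the original
# 7Be is left, which is the basis of the freshness criterion above.
for days in (0, 53.3, 106.6, 180):
    print(f"{days:6.1f} d: {fraction_remaining(days):.3f} of initial 7Be")
```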
Isotopes of beryllium
Chemistry
1,638
42,212,055
https://en.wikipedia.org/wiki/Adventurers%20%28land%20drainage%29
Adventurers were groups of English engineers and wealthy landowners who funded large-scale land drainage projects in the seventeenth century, in return for rights to some of the land reclaimed. Early entrepreneurs Land drainage works were expensive, and were usually undertaken in sparsely populated areas. In the seventeenth century a number of such schemes were carried out by Adventurers, who acted under parliamentary sanction, but who themselves financed the works carried out. In return, they gained rights to the land reclaimed as a result of the civil engineering works. One such scheme was the draining of the Bedford Levels. The Bedford Level Corporation was in charge of the works, which, when conceived in 1630, would create large tracts of "summer lands", which would be suitable for grazing during the summer months but would still be liable to flooding in the winter. Funds for the work were provided by the Earl of Bedford and thirteen other Adventurers. Of the land reclaimed, the fourteen men were to receive , to be shared between them, while were to be given to the king, and another were designated to provide income to maintain the works once they were completed. The Dutch engineer Cornelius Vermuyden was employed to carry out the work, and on 12 October 1637, it was judged to be complete, when the Court of Sewers met at St Ives. However, there was dissatisfaction with the decision, and the Royal Commission of Sewers overturned it in 1639, when they met at Huntingdon. An Act of Parliament passed in 1649 authorised the fifth Earl of Bedford to carry out further work, so that the land could be used throughout the year for agriculture. Another Act followed in 1660, but after initial improvements, the schemes gradually deteriorated, and the Bedford Level Corporation found it increasingly difficult to find anyone prepared to invest their money when the outcome was so full of risk. As well as the engineering challenges faced by the Adventurers, there was also opposition from those who judged that their livelihood was affected by the works. In 1631, a group of Adventurers led by Sir Anthony Thomas was authorised to drain the East Fen, the West Fen and the Wildmore Fen, to the north and west of Boston, Lincolnshire. They spent some £30,000 on the work, and received of the drained land. They subsequently spent £20,000 on improvements and buildings, and the land generated some £8,000 per year in rent. The land had previously been extra-parochial, and people from adjacent villages had held grazing rights on it. After seven years of this, the Commoners rioted in 1642, breaking down the sluices, destroying crops, and demolishing houses. The Adventurers took their case to the House of Lords, who passed a bill for the "relief and security of the drainers", but the House of Commons was less supportive, refusing to take sides. It ordered that the Sheriff and the local Justices of the Peace should act to prevent and suppress riots. The Commoners then took their grievances to court, and won. The outcome was that when the monarchy was restored in 1660, management of the Fens returned to the Court of Sewers, and the drainage remained in a poor state until the mid-eighteenth century. To the south of Boston, the Earl of Lindsey and another group of Adventurers faced similar problems. Having reached an agreement with the Court of Sewers, they worked on draining the Lindsey Level, the main feature of which was the South Forty-Foot Drain, running from Bourne to Boston. 
The land reclaimed was suitable for agriculture, and in 1636 they took possession of it, building houses and growing crops. Again, Commoners and Fenmen felt that they had been dispossessed, and attempted to get Parliament to rule in their favour. After three years, they gave up their attempts at a legal solution, and took direct action, destroying the drains, buildings and crops. In the political turmoil of the time, just prior to the start of the Civil War, the Adventurers received no compensation for their loss. See also Land reclamation Twenty, Lincolnshire Witham Navigable Drains Bedford Level Corporation References Bibliography Hydraulic engineering Land drainage in the United Kingdom
Adventurers (land drainage)
Physics,Engineering,Environmental_science
835
39,084,356
https://en.wikipedia.org/wiki/S%C3%A4teri%20roof
A säteritak ("manorial roof") is a type of roof, similar to a clerestory, that enjoyed great popularity in Sweden from the mid-seventeenth century. Structure Originally used for higher-status buildings such as manors (hence the name), it consists of a hip roof, where the uppermost part has been cut off from the bottom part by an additional strip of wall and often an additional line of roof windows. It would later spread to rural buildings of more modest social status. The model for this type of roof was the more elaborate one of Riddarhuset, a palatial building in Stockholm housing the parliamentary meetings of the nobility, which was given its final form by Simon de la Vallée. Purpose The upper part, with its additional windows, was often purely decorative, but it could contain an additional floor, as in the modest Manor of Vahlsta in Västmanland (from c. 1700). References Roofs
Säteri roof
Technology,Engineering
197
51,893,358
https://en.wikipedia.org/wiki/List%20of%20internet%20service%20providers%20in%20Canada
This is an alphabetical list of notable internet service providers in Canada. Among Canada's biggest internet service providers (ISPs) are Bell, Rogers, Telus, and Shaw, with the former two being the largest in Ontario and the latter two dominating the western provinces. List Former ISPs Craig Wireless Internex Online Look Communications Mountain Cablevision Persona Communications — acquired by Eastlink Rush Communications Ltd. See also Internet in Canada Telecommunications in Canada List of companies of Canada References Internet Internet service providers in Canada
List of internet service providers in Canada
Technology
102
41,962,812
https://en.wikipedia.org/wiki/LaunchCode
LaunchCode is a non-profit organization headquartered in St. Louis, Missouri. Founded in 2013 by Jim McKelvey, it aims to help people enter the technology field by providing free and accessible education, training, and paid apprenticeship placements. It has locations in Kansas City and Philadelphia. In 2020, 60% of LaunchCode students identified as women or non-binary, 49% identified as people of color, 19% identified as LGBTQIA+, and 46% did not have a 4-year degree. References External links LaunchCode website Computer programming Privately held companies of the United States American educational websites
LaunchCode
Technology,Engineering
124
2,620,487
https://en.wikipedia.org/wiki/French%20units%20of%20measurement
France has a unique history of units of measurement due to its radical decision to invent and adopt the metric system after the French Revolution. In the Ancien régime and until 1795, France used a system of measures that had many of the characteristics of the modern Imperial System of units but with no unified system. There was widespread abuse of the king's standards, to the extent that the lieue could vary from 3.268 km in Beauce to 5.849 km in Provence. During the revolutionary era and motivated in part by the inhomogeneity of the old system, France switched to the first version of the metric system. This system was not well received by the public, and between 1812 and 1837, the country used the mesures usuelles – traditional names were restored, but the corresponding quantities were based on metric units: for example, the livre (pound) became exactly 500 g. After 1837, the metric system was reintroduced and progressively became the only system of use, with other units now in only residual use. Ancien régime (to 1795) In the pre-revolutionary era (before 1795), France used a system of measures that had many of the characteristics of the modern Imperial System of units, but there was no unified system of measurement. Charlemagne and successive kings had tried but failed to impose a unified system of measurement in France. (In England, by contrast, the Magna Carta decreed that "there shall be one unit of measure throughout the realm.") The names and relationships of many units of measure were adopted from Roman units of measure and many more were added – it has been estimated that there were seven or eight hundred different names for the various units of measure. In addition, the quantity associated with each unit of measure differed from town to town and even from trade to trade to such an extent that the lieue (league) could vary from 3.268 km in Beauce to 5.849 km in Provence. It has been estimated that, on the eve of the Revolution, a quarter of a million different units of measure were in use in France. Although certain standards, such as the pied du roi (the king's foot) had a degree of pre-eminence and were used by savants across Europe, many traders chose to use their own measuring devices, giving scope for fraud and hindering commerce and industry. As an example, the weights and measures used at Pernes-les-Fontaines in southeastern France differ from those catalogued later in this article as having been used in Paris. In many cases, the names are different, while the livre is shown as being 403 g, as opposed to 489 g – the value of the livre du roi. (The Imperial pound is about 453.6 g.) Revolutionary France (1795–1812) The French Revolution and subsequent Napoleonic Wars marked the end of the Age of Enlightenment. The forces of change that had been brewing manifested themselves across all of France, including the way in which units of measure should be defined. The savants of the day favored the use of a system of units that were inter-related and which used a decimal basis. There was also a wish that the units of measure should be for all people and for all time and therefore not dependent on an artefact owned by any one particular nation. Talleyrand, at the prompting of the savant Condorcet, approached the British and the Americans in the early 1790s with proposals of a joint effort to define the metre. In the end, these approaches came to nothing and France decided to "go it alone". 
Decimal time was introduced in the decree of 5 October 1793, under which the day was divided into 10 "decimal hours", the "decimal hour" into 100 "decimal minutes" and the "decimal minute" into 100 "decimal seconds". The "decimal hour" corresponded to 2 hr 24 min, the "decimal minute" to 1.44 min and the "decimal second" to 0.864 s. The implementation of decimal time proved an immense task and, under article 22 of the law of 18 Germinal, Year III (7 April 1795), the use of decimal time was no longer mandatory. On 1 January 1806, France reverted to traditional timekeeping. The metric system of measure was first given a legal basis in 1795 by the French Revolutionary government. Article 5 of the law of 18 Germinal, Year III (7 April 1795) defined five units of measure. The units and their preliminary values were: The metre, for length – defined as being one ten-millionth of the distance between the North Pole and the Equator through Paris The are (100 m2) for area [of land] The stère (1 m3) for volume of firewood The litre (1 dm3) for volumes of liquid The gram, for mass – defined as being the mass of one cubic centimetre of water Decimal multiples and submultiples of these units would be defined by Greek prefixes - "myria", "kilo", "hecto" (100), "deka" - and Latin prefixes - "deci", "centi" and "milli". Using Cassini's survey of 1744, a provisional value of 443.44 lignes was assigned to the metre which, in turn, defined the other units of measure. The final value of the metre had to wait until 1799, when Delambre and Méchain presented the results of their survey between Dunkirk and Barcelona that fixed the length of the metre at 443.296 lignes. The law of 19 Frimaire An VIII (10 December 1799) defined the metre in terms of this value and the kilogram as being 18,827.15 grains. These definitions enabled the construction of reference copies of the kilogram and metre, which were to be used as standards for the next 90 years. At the same time, a new decimal-based system for angular measurement was implemented. The right angle was divided into 100 grads, each of which was in turn divided into 100 centigrads. An arc on the earth's surface formed by an angle of one centigrad was one kilometre. Mesures usuelles (1812–1839) The metric system was introduced into France in 1795 on a district-by-district basis, starting with Paris. However, the introduction was, by modern standards, poorly managed. Although thousands of pamphlets were distributed, the Agency of Weights and Measures, which oversaw the introduction, underestimated the work involved. Paris alone needed 500,000 metre sticks, yet one month after the metre became the sole legal unit of measure, there were only 25,000 in store. This, combined with other excesses of the Revolution, made the metric system unpopular. Napoleon ridiculed the metric system, but as an able administrator, he recognized the value of a sound basis for a uniform system of measurement. Under the décret impérial du 12 février 1812 (imperial decree of 12 February 1812), he introduced a revised system of measure – the mesures usuelles or "customary measures" – for use in small retail businesses. However, all government, legal and similar works still had to use the metric system, and the metric system continued to be taught at all levels of education. Many pre-metric units were reintroduced, with their old relations to each other, but were redefined in terms of metric units. 
Thus the aune was defined as 120 centimetres and the toise (fathom) as being two metres, with, as before, six pied (feet) making up one toise, twelve pouce (inches) making up one pied and twelve lignes making up one pouce. Likewise, for mass and weight, the livre (pound) was defined as being 500 g, each livre comprising sixteen once and each once eight gros. The metric system restored (1840–1875) La loi du 4 juillet 1837 (the law of 4 July 1837) of the July Monarchy effectively revoked the use of mesures usuelles by reaffirming the laws of measurement of 1795 and 1799, to be used from 1 May 1840. However, many units of measure, such as the livre, remained in colloquial use for many years, and the livre still does to some extent. When this legislation was introduced, the metric system was beginning to take hold across Europe. Switzerland and the German state of Baden had both defined their Fuß (foot) as being 300 mm, and the German state of Hessen-Darmstadt had defined its Fuß as being 250 mm. Moreover, the Netherlands, Belgium, Greece, Lombardy and Venice had all adopted the metric system, albeit with local names for the "metre", "kilogram" and so on. The metric system was given a boost when the German Zollverein (Customs Union) introduced the Zollpfund of 500 g in 1850. The Great Exhibition of 1851 in London was followed by international exhibitions in Paris in 1855 and 1867. The 1867 exhibition had a stand showing how the diverse units of measure were converging onto the metric system – a system that had been developed in France and whose standards were in the custody of the French government, but available for world use. In 1870, while France was preparing to host an international conference to discuss international cooperation in the sphere of units of measurement, the Franco-Prussian War broke out. France was humiliated by Prussia's military action, but in 1872 France seized the diplomatic initiative and re-issued the invitations for the 1870 conference. The conference met in 1875 and concluded with the signing of the Treaty of the Metre. The principal agreements under the treaty were: A three-tier organization would be put into place to provide political (CGPM), scientific (CIPM) and secretarial (BIPM) support for coordinating calibrations of national standards against an international standard. One of the eighteen seats on the CIPM would always be filled by a Frenchman. France would provide premises for the secretariat. These premises, at the Pavillon de Breteuil, near Paris, would have diplomatic status. New prototypes of the kilogram and metre would be manufactured. These were ultimately made in England and delivered in 1889. Thus the French metre and kilogram passed into international control. International era (1875 onwards) During the early part of the twentieth century, the French introduced their own unit of power – the poncelet, which was defined as being the power required to raise a mass of 100 kg against standard gravity with a velocity of 1 m/s, giving a value of 980.665 W. However, many other European countries defined their units of power (the Pferdestärke in Germany, the paardenkracht in the Netherlands and the cavallo vapore in Italy) using 75 kg rather than 100 kg, which gave a value of 735.49875 W (about 0.985 HP). Eventually, the poncelet was replaced with the cheval vapeur, which was identical to the equivalent units of measure in neighboring countries. 
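Both the decimal-time conversions given earlier in this article and the power-unit definitions just above reduce to one-line arithmetic. The following is a minimal sketch checking those figures, assuming the standard gravity of 9.80665 m/s2 implied by the text; the function and variable names are our own.

```python
G_STANDARD = 9.80665  # m/s^2, standard gravity

def lifting_power(mass_kg: float, speed_m_per_s: float = 1.0) -> float:
    """Power in watts needed to raise a mass against gravity at constant speed."""
    return mass_kg * G_STANDARD * speed_m_per_s

print(lifting_power(100))  # poncelet: 980.665 W
print(lifting_power(75))   # cheval vapeur (metric horsepower): 735.49875 W

# Decimal time of the 1793 decree: 10 hours/day, 100 minutes/hour,
# 100 seconds/minute, i.e. 100,000 decimal seconds per 86,400 SI seconds.
SI_SECONDS_PER_DAY = 24 * 60 * 60        # 86,400

decimal_hour = SI_SECONDS_PER_DAY / 10   # 8,640 s  = 2 hr 24 min
decimal_minute = decimal_hour / 100      # 86.4 s   = 1.44 min
decimal_second = decimal_minute / 100    # 0.864 s
print(decimal_hour, decimal_minute, decimal_second)
```

The printed values reproduce exactly the 2 hr 24 min, 1.44 min and 0.864 s equivalences stated in the decimal-time paragraph above.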
In 1977, these power units, along with a number of other traditional units (amongst others, the German Pferdestärke), were proscribed by EEC Directive 71/354/EEC, which required EU member states to standardize on the International System of Units (SI) and therefore to use the watt and its multiples. See also International System of Units Jean-Antoine Chaptal Mansus Mesures usuelles Réaumur scale Systems of measurement Units of measurement Units of measurement in France before the French Revolution References Systems of units Science and technology in France France Metrication in France
French units of measurement
Mathematics
2,359
6,100,068
https://en.wikipedia.org/wiki/Italian%20Amateur%20Astronomers%20Union
The Italian Amateur Astronomers Union (Unione degli Astrofili Italiani; UAI), also known as the Union of Italian Amateur Astronomers, is an Italian organization active in astronomy research and outreach that was founded in 1967. Its members are both professional and amateur astronomers. The UAI claims more than two thousand members from all over Italy and is one of the most important amateur astronomical associations in Europe. The main-belt asteroid 234026 Unioneastrofili, discovered by Luciano Tesi in 1998, was named in honor of the organization. References External links Official Site of the Unione Astrofili Italiani (in Italian) Astronomy organizations
Italian Amateur Astronomers Union
Astronomy
134
54,748,520
https://en.wikipedia.org/wiki/NGC%204477
NGC 4477 is a barred lenticular galaxy located about 55 million light-years away in the constellation of Coma Berenices. NGC 4477 is classified as a type 2 Seyfert galaxy. The galaxy was discovered by astronomer William Herschel on April 8, 1784. NGC 4477 is a member of Markarian's Chain, which forms part of the larger Virgo Cluster. Physical characteristics NGC 4477 has a very well-defined bar which is embedded within an extensive lens-like envelope. The envelope has a fairly sharp edge, is slightly enhanced near the rim, and is classified as a ring-like feature. Surrounding the ring, two broad, diffuse, incomplete arcs appear to bracket the galaxy around the bar. It has been suggested that NGC 4477 has a highly evolved double-ring morphology, although both ring features are exceedingly washed out. See also List of NGC objects (4001–5000) NGC 1291 NGC 6782 Gallery References External links Barred lenticular galaxies Seyfert galaxies Coma Berenices Virgo Cluster 4477 41260 7638 Astronomical objects discovered in 1784
NGC 4477
Astronomy
230
13,232,655
https://en.wikipedia.org/wiki/Contaflex%20SLR
The Contaflex series is a family of 35mm single-lens reflex (SLR) cameras equipped with a leaf shutter, produced by Zeiss Ikon in the 1950s and 1960s. The name was first used by Zeiss Ikon in 1935 for a 35mm twin-lens reflex camera, the Contaflex TLR; for the earlier TLR, the -flex suffix referred to the integral reflex mirror for the viewfinder. The first SLR models, the Contaflex I and II (introduced in 1953), have fixed lenses, while the later models have interchangeable lenses; eventually the Contaflexes became a camera system with a wide variety of accessories. History The Mecaflex was presented at photokina 1951 and launched two years later as one of the first SLRs, fitted with a leaf shutter behind the removable lens and a waist-level viewfinder with a reflex mirror that swings out of the way during the film exposure. Compared to twin-lens reflex cameras, the SLR offered several advantages: the photographer could view the scene exactly through the same lens that would be used to expose the film, and only a single lens was required, reducing costs. The later Hasselblad 500C, introduced in 1957, is a similar SLR design that uses leaf shutters; for the Hasselblad, each of its interchangeable lenses has a shutter. The first Contaflex SLRs were introduced in 1953, following the general design of the Mecaflex using a Compur leaf shutter and reflex mirror, but the Contaflex cameras were equipped with an integral eye-level finder and a fixed lens. The advantages of using the leaf shutter are low manufacturing costs, compactness, quieter operation, and flash synchronization at all shutter speeds. However, using a leaf shutter in an SLR requires additional mechanical complications to cock the shutter and return the mirror after the shutter is released and the film is wound; these were seen more as a challenge than a drawback at Zeiss Ikon, but no Contaflex model ever got a rapid-return mirror. However, only a very limited range of interchangeable lenses became available. For the models I and II, having a fixed lens, only three add-on converters were offered, using a slide-on adapter, but from models III and IV onwards, interchangeable lenses from 35mm to 115mm focal length were provided; at the time this was regarded as quite sufficient, as most cameras would only be used with the standard lens anyway. Three years later, during 1956, the Kodak Retina Reflex was launched, followed by the Voigtländer Bessamatic and the Ultramatic. The market soon flourished with leaf-shuttered SLR cameras. These mechanically complex cameras required precision assembly and high-quality materials. More often than not, camera makes suffered from reliability issues, while the few better ones performed well, selling in quantity. Cameras Contaflex I and II The Contaflex I, launched in 1953, was equipped with a fixed Zeiss Tessar 45 mm lens with front-cell focusing. The earliest Contaflex I cameras had a Synchro-Compur shutter with the old scale of shutter speeds (1-2-5-10-25-50-100-250-500) and no self-timer, but very soon it adopted the new scale 1-2-4-8-15-30-60-125-250-500. The Contaflex II, introduced the following year, was the same camera with an uncoupled selenium meter added to one side of the front plate. For both models, the Teleskop 1.7× supplementary lens could be attached to the front of the fixed lens using an accessory carrier bracket; as the name suggests, this extended the focal length by 70%, to approximately 75 mm. The same bracket could be used for the Steritar A attachment, which was used for stereo photography. 
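The "approximately 75 mm" figure for the Teleskop converter above is just the fixed focal length multiplied by the converter factor. A one-function sketch, under the assumption that the converter acts as a simple focal-length multiplier; the function name and the rounding interpretation are ours, not a published Zeiss specification.

```python
def effective_focal_length(base_mm: float, converter: float) -> float:
    """Focal length of a fixed lens behind a front-mounted afocal converter."""
    return base_mm * converter

# Contaflex I/II: fixed 45 mm Tessar plus the Teleskop 1.7x converter.
print(effective_focal_length(45, 1.7))  # 76.5 mm, marketed as ~75 mm
```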
Contaflex III and IV The Contaflex III, launched in 1956, was the same as the I, but equipped with a Zeiss Tessar 50mm with unit helical focusing. The Contaflex IV, introduced the same year, was the same camera with the uncoupled meter inherited from the Contaflex II. The III and IV were equipped with a convertible lens system branded Pro-Tessar, where the front element of the standard lens was removable and could be replaced by supplementary lenses, as discussed in the section Contaflex lenses, to create 35 mm and 80 mm lenses. Contaflex Alpha and Beta The Contaflex Alpha and Contaflex Beta, both introduced in 1957, were lower-cost versions of the convertible-lens Contaflex III and IV, respectively; to reduce costs, the lens was changed to a Rodenstock (Zeiss-branded) Pantar 45 mm triplet with front-element focusing, and the Compur shutter was replaced by a Prontor Reflex shutter, with a slight reduction in the fastest shutter speed. The Alpha had no meter, like the I/III, and the Beta had the selenium meter of the II/IV. The front element of the lens could be interchanged with supplemental lenses to create 30 mm and 75 mm lenses. These supplemental lenses had been introduced with, and were shared with, the earlier (1955) Contina III 35mm viewfinder camera. Contaflex Rapid and Super The Contaflex Rapid was introduced in 1958; compared to the III, which it replaced, the Rapid had a slightly longer body, a built-in accessory shoe, a winding lever and a rewind crank. It retained the 50 mm Tessar and convertible lens system from the III. The "Contaflex" name engraved on the front of the prism was changed to a script typeface instead of the sans-serif used on prior Contaflex cameras. It was the meterless version and was discontinued in 1960. The Contaflex Super, launched the following year, was based on the Rapid and had a coupled selenium exposure meter on the front side of the prism. It is easily recognized by the wheel on the front plate for setting the film speed (DIN). The meter needle was visible in the finder as well as on the top plate from the outside. It is sometimes referred to parenthetically as the Super (old style) to avoid confusion with the later Super (new). The major innovation of the Rapid/Super over the III/IV was the introduction of interchangeable film magazines, which permitted the photographer to swap emulsions mid-roll. The new body of the Rapid and Super allowed them to take magazine backs, interchangeable with a partly exposed film inside. Magazine backs, rare among 35mm cameras, were also supplied for the Contarex of Zeiss Ikon. The Rapid and Super (old style) could take the same supplementary 35 mm and 80 mm lenses as the III and IV, and newer Pro-Tessar supplementary lenses were available for the Rapid and Super to create 35 mm, 85 mm, and 115 mm lenses. Contaflex Prima The Contaflex Prima, launched in 1959 and sold until 1965, was based on the body of the Rapid, retaining the new film magazine and lever wind, but with costs reduced by fitting the Pantar triplet lens and the Prontor shutter like the Alpha and Beta. The Prima had a coupled exposure meter placed on the side of the front plate, similar to the Beta. The Prima could take the same Pantar supplementary lenses as the Alpha and Beta. Contaflex Super (new) and Super B The Contaflex Super (new) and Contaflex Super B are very similar cameras. Both have a new body design, being longer with added bulk. 
The information about which came first is a bit contradictory in some reference books, but it seems the Super (new) was launched in 1962, introducing the new body design and a new selenium exposure meter in a prominent rectangle marked Zeiss Ikon in front of the prism. The aperture wheel was replaced by a more traditional aperture command, and the meter read-out was visible both on the exterior and in the finder. The Super B was launched in 1963, and added a shutter-priority automatic aperture, and some other small changes. The Super B can be distinguished by the presence of an "A"utomatic setting for the shutter speed ring and an EV scale in the viewfinder. From the Super (new) and Super B, the Zeiss Tessar 50mm f:2.8 lens was recomputed and supposedly performed better. They could still take the same supplementary lenses, with one exception discussed in the relevant section. Contaflex Super BC and S The Contaflex Super BC was introduced in 1965, and was a Super B with the selenium meter replaced by a CdS through-the-lens exposure meter. It still had a black rectangle marked Zeiss Ikon on the front of the prism, but it was only decorative. It had a battery compartment at the bottom front. The Contaflex S was the last variant, introduced in 1968, and was simply a renamed Super BC, sold until Zeiss Ikon ceased production in 1972. It had a black rectangle marked Contaflex S on the front, and a different, newer Zeiss Ikon logo. It proudly sported the word Automatic on the front of the shutter. The Super BC and S could take the magazine backs, as well as the usual supplementary lenses. Both the Contaflex Super BC and S were, along with the 126-format Contaflex 126, available in chrome or black finish. Contaflex 126 The Contaflex 126 is related to the Contaflex SLR family primarily by its name and general appearance, as it takes a different film format (126 film) and uses a different shutter technology (focal plane shutter) than the rest of the family. Voigtländer had developed it as the Icarex 126, and it was released as a Zeiss Ikon camera after Voigtländer's operations were consolidated into its larger parent in the late 1960s. It was introduced in 1967 to accept Kodak 126 (Instamatic) cartridges. It was one of the very few SLRs taking 126 film, and one of the very few cameras using that film aimed at the premium market. Two other examples of 126 SLRs are the Rollei SL26 and Kodak Instamatic Reflex. Former Zeiss-Ikon chief designer Hubert Nerwin, who designed the famous CONTAX 2 and 3 rangefinder cameras and other cameras for Zeiss-Ikon, later invented the 126 film cassette. This was after he emigrated to the U.S. after World War 2 and was working for Kodak. The Contaflex 126 is an SLR with a focal-plane shutter and interchangeable lenses. It was available in chrome or black finish. The range of lenses was: Zeiss Distagon 25/4 Zeiss Distagon 32/2.8 Zeiss Color-Pantar 45/2.8, three-element, cheaper Zeiss Tessar 45/2.8, four-element, better Zeiss Sonnar 85/2.8 Zeiss Tele-Tessar 135/4 Zeiss Tele-Tessar 200/4 The Contaflex 126 lenses are often confused with other lenses by the sellers. They can only be used on the Contaflex 126 body, which can only accept the obsolete 126 film cartridge, so the value of these lenses is not very high, despite their famous names. Weber SL75 When Zeiss Ikon stopped making cameras in 1972, they had prototypes in various stages of development. One of them was the SL725, which would be a successor to the Contaflex line with an electronic shutter. 
The prototype ended up in the hands of a company named Weber, which presented the camera at a photokina show under the name Weber SL75, but could not afford to put it into production, and did not find a partner to do so. The lens mount was a modification of the Contarex camera lens mount. Carl Zeiss advertised a range of lenses for the Weber SL75, all with the T* multicoating: 18/4 Distagon 25/2.8 Distagon 35/2.8 Distagon 50/1.4 Planar 85/2.8 Sonnar 135/2.8 Sonnar 200/3.5 Tele-Tessar An eBay seller seems to have uncovered a small stock of the Planar lens and sold a couple of them; in 2021, several of these lenses surfaced again and were sold on eBay. No SL75 body seems to have surfaced so far, and the only known picture comes from an Italian photo magazine, which previewed the camera in its November 1974 issue. Contaflex lenses There are three classes of supplemental lenses available for Contaflex SLRs, which are not interchangeable between classes: The Contaflex I and II could only take the Teleskop 1.7x supplementary lenses, and the Alpha, Beta and Prima had their own limited range of Pantar supplementary lenses. The models III, IV, Rapid, Super, Super (new), Super B, Super BC and S all have a Zeiss Tessar 50mm f:2.8 lens (27mm screw-in or 28.5mm push-on filters); the front element can be removed and replaced by a supplemental lens: Zeiss Pro-Tessar 35/4 (49mm filters), later replaced by the Pro-Tessar 35/3.2 (60mm screw-over filters) Zeiss Pro-Tessar 85/4 (60mm screw-over filters), later replaced by the Pro-Tessar 85/3.2 (60mm filters) Zeiss Pro-Tessar 115/4 (67mm filters) Monocular 8x30B, equivalent to a 400mm lens (attaches to the 50mm f:2.8 Tessar lens). There was also a Zeiss Pro-Tessar M 1:1 supplementary lens, which kept the focal length of 50mm but allowed 1:1 reproduction. The effective speed of the M 1:1 lens is f/5.6. The 50mm standard front elements, as well as the Pro-Tessar M 1:1 elements, were different between the early models III, IV, Rapid and Super with the old model of Tessar, and the later models Super (new), Super B, Super BC and S with the recomputed Tessar. It appears that the mount was very slightly modified, and it seems physically impossible to mismatch the elements, as the journal diameter above the bayonet mount had been reduced by approximately .006". There were also stereo attachments: Steritar A for the Contaflex I and II Steritar B for the other Tessar-equipped models Near Steritar for close-up stereo pictures at 0.2–2.5 meters (normally interchangeable with the older Tessar line of Steritar B camera lenses) Steritar D for the Pantar-equipped models A complete line of these Contaflex Steritar lenses can be seen at https://www.flickr.com/photos/12670411@N02/ Zeiss Proxar for Contaflex: 1M, 0.5M, 0.3M, 0.2M and 0.1M Accessories Slip-on metal lens hood Screw-in metal lens hood Film back Zeiss Proxar lens set References Bibliography Barringer, C. and Small, M. Zeiss Compendium East and West — 1940–1972. Small Dole, UK: Hove Books, 1999 (2nd edition). External links Contaflex II and Contaflex S at La Chambre Claire Contaflex 126 at www.collection-appareils.com by Sylvain Halgand Contaflex II at www.collection-appareils.com by Sylvain Halgand User manuals, Ads about Contaflex at www.collection-appareils.com by Sylvain Halgand Single-lens reflex cameras
Contaflex SLR
Technology
3,414
10,370,626
https://en.wikipedia.org/wiki/South%20Dakota%20statistical%20areas
The U.S. state of South Dakota currently has 14 statistical areas that have been delineated by the Office of Management and Budget (OMB). On July 21, 2023, the OMB delineated two combined statistical areas, three metropolitan statistical areas, and nine micropolitan statistical areas in South Dakota. As of 2023, the largest of these is the Sioux Falls, SD-MN MSA, comprising the area around the state's largest city of Sioux Falls. Table Primary statistical areas Primary statistical areas (PSAs) include all combined statistical areas and any core-based statistical area that is not a constituent of a combined statistical area. Of the 14 statistical areas of South Dakota, 11 are PSAs, comprising two combined statistical areas, one metropolitan statistical area and eight micropolitan statistical areas. See also Geography of South Dakota Demographics of South Dakota Notes References External links Office of Management and Budget United States Census Bureau United States statistical areas Statistical Areas Of South Dakota
South Dakota statistical areas
Mathematics
198
13,341,876
https://en.wikipedia.org/wiki/Threat
A threat is a communication of intent to inflict harm or loss on another person. Intimidation is a tactic used between conflicting parties to make the other timid or psychologically insecure for coercion or control. The act of intimidation for coercion is considered a threat. Criminal threatening (or threatening behavior) is the crime of intentionally or knowingly putting another person in fear of bodily injury. Some of the more common types of threats forbidden by law are those made with an intent to obtain a monetary advantage or to compel a person to act against their will. In most U.S. states, it is an offense to threaten to (1) use a deadly weapon on another person; (2) injure another's person or property; or (3) injure another's reputation. Law Brazil In Brazil, the crime of threatening someone, defined as a threat to cause unjust and grave harm, is punishable by a fine or three months to one year in prison, as described in the Brazilian Penal Code, article 147. Brazilian law does not treat as a crime a threat proffered in a heated discussion. Germany The German Strafgesetzbuch § 241 punishes the crime of threat with a prison term of up to three years or a fine. United States In the United States, federal law criminalizes certain true threats transmitted via the U.S. mail or in interstate commerce. It also criminalizes threatening government officials of the United States. Some U.S. states criminalize cyberbullying. Threats of bodily harm are considered assault. State of Texas In the state of Texas, it is not necessary that the person threatened actually perceive a threat for a threat to exist for legal purposes. True threat A true threat is a threatening communication that can be prosecuted under the law. It is distinct from a threat that is made in jest. The U.S. Supreme Court has held that true threats are not protected under the U.S. Constitution, based on three justifications: preventing fear, preventing the disruption that follows from that fear, and diminishing the likelihood that the threatened violence will occur. See also References Harassment and bullying Speech crimes Psychological abuse
Threat
Biology
448
4,591,070
https://en.wikipedia.org/wiki/Intestinal%20gland
In histology, an intestinal gland (also crypt of Lieberkühn and intestinal crypt) is a gland found between the villi in the intestinal epithelial lining of the small intestine and large intestine (or colon). The glands and intestinal villi are covered by epithelium, which contains multiple types of cells: enterocytes (absorbing water and electrolytes), goblet cells (secreting mucus), enteroendocrine cells (secreting hormones), cup cells, myofibroblasts, tuft cells, and, at the base of the gland, Paneth cells (secreting anti-microbial peptides) and stem cells. Structure Intestinal glands are found in the epithelia of the small intestine, namely the duodenum, jejunum, and ileum, and in the large intestine (colon), where they are sometimes called colonic crypts. Intestinal glands of the small intestine contain a base of replicating stem cells, Paneth cells of the innate immune system, and goblet cells, which produce mucus. In the colon, crypts do not have Paneth cells. Function The enterocytes in the small intestinal mucosa contain digestive enzymes that digest specific foods while they are being absorbed through the epithelium. These enzymes include peptidases, sucrase, maltase, lactase and intestinal lipase. This is in contrast to the gastric glands of the stomach, where chief cells secrete pepsinogen. Also, new epithelium is formed here, which is important because the cells at this site are continuously worn away by the passing food. The basal (farther from the intestinal lumen) portion of the crypt contains multipotent stem cells. During each mitosis, one of the two daughter cells remains in the crypt as a stem cell, while the other differentiates and migrates up the side of the crypt and eventually into the villus. These stem cells can differentiate into either the absorptive (enterocyte) or the secretory (goblet cell, Paneth cell, enteroendocrine cell) lineage. Both the Wnt and Notch signaling pathways play a large role in regulating cell proliferation and in intestinal morphogenesis and homeostasis. Loss of proliferation control in the crypts is thought to lead to colorectal cancer. Intestinal juice Intestinal juice (also called succus entericus) refers to the clear to pale yellow watery secretions from the glands lining the small intestine walls. Brunner's glands secrete large amounts of alkaline mucus in response to (1) tactile or irritating stimuli on the duodenal mucosa; (2) vagal stimulation, which increases Brunner's gland secretion concurrently with an increase in stomach secretion; and (3) gastrointestinal hormones, especially secretin. Its function is to complete the process begun by pancreatic juice; the enzyme trypsin exists in pancreatic juice in the inactive form trypsinogen, and it is activated by the intestinal enterokinase in intestinal juice. Trypsin can then activate other protease enzymes and catalyze the reaction pro-colipase → colipase. Colipase is necessary, along with bile salts, to enable lipase function. Intestinal juice also contains hormones, digestive enzymes, mucus, and substances to neutralize hydrochloric acid coming from the stomach. Various exopeptidases, which further digest polypeptides into amino acids, complete the digestion of proteins. Colonic crypts The intestinal glands in the colon are often referred to as colonic crypts. The epithelial inner surface of the colon is punctuated by invaginations, the colonic crypts. The colonic crypts are shaped like microscopic thick-walled test tubes with a central hole down the length of the tube (the crypt lumen). 
Four tissue sections are shown here, two (A and B) cut across the long axes of the crypts and two (C and D) cut parallel to the long axes. In these images the cells have been stained to show a brown-orange color if the cells produce a mitochondrial protein called cytochrome c oxidase subunit I (CCOI or COX-1). The nuclei of the cells (located at the outer edges of the cells lining the walls of the crypts) are stained blue-gray with haematoxylin. As seen in panels C and D, crypts are about 75 to about 110 cells long. The average crypt circumference is 23 cells. From the images, the average is about 1,725 to 2,530 cells per colonic crypt. Another measurement gave a range of 1,500 to 4,900 cells per colonic crypt. Cells are produced at the crypt base and migrate upward along the crypt axis before being shed into the colonic lumen days later. There are 5 to 6 stem cells at the bases of the crypts. As estimated from the image in panel A, there are about 100 colonic crypts per square millimeter of the colonic epithelium. The length of the human colon is, on average, 160.5 cm (measured from the bottom of the cecum to the colorectal junction), with a range of 80 cm to 313 cm. The average inner circumference of the colon is 6.2 cm. Thus, the inner epithelial surface of the human colon has an area, on average, of about 995 cm2, which includes about 9,950,000 (close to 10 million) crypts. In the four tissue sections shown here, many of the intestinal glands have cells with a mitochondrial DNA mutation in the CCOI gene and appear mostly white, with their main color being the blue-gray staining of the nuclei. As seen in panel B, a portion of the stem cells of three crypts appear to have a mutation in CCOI, so that 40% to 50% of the cells arising from those stem cells form a white segment in the cross-cut area. Overall, the percentage of crypts deficient for CCOI is less than 1% before age 40, but then increases linearly with age. The proportion of colonic crypts deficient in CCOI reaches, on average, 18% in women and 23% in men by 80–84 years of age. Crypts of the colon can reproduce by fission, as seen in panel C, where a crypt is dividing to form two crypts, and in panel B, where at least one crypt appears to be fissioning. Most crypts deficient in CCOI occur in clusters of crypts (clones of crypts) with two or more CCOI-deficient crypts adjacent to each other (see panel D). Clinical significance Crypt inflammation is known as cryptitis and is characterized by the presence of neutrophils between the enterocytes. Severe cryptitis may lead to a crypt abscess. Pathologic processes that lead to Crohn's disease, i.e. progressive intestinal crypt destruction, are associated with branching of the crypts. Causes of crypt branching include: inflammatory bowel disease (e.g. ulcerative colitis, Crohn's disease), persistent infectious colitides, and ischemic colitis. Research Intestinal glands contain adult stem cells referred to as intestinal stem cells. These cells have been used in the field of stem cell biology to further understand stem cell niches, and to generate intestinal organoids. History The crypts of Lieberkühn are named after the eighteenth-century German anatomist Johann Nathanael Lieberkühn. References External links Illustration at trinity.edu Illustration at kumc.edu Illustration at uokhsc.edu Digestive system
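The colon-wide crypt count quoted above is straightforward arithmetic on the stated averages. The following Python fragment is purely illustrative (it assumes nothing beyond the numbers quoted in the text) and simply makes the calculation explicit:

# Crypt-count arithmetic from the measurements stated above.
colon_length_cm = 160.5        # average length, cecum to colorectal junction
inner_circumference_cm = 6.2   # average inner circumference
crypts_per_mm2 = 100           # estimated density from panel A

area_cm2 = colon_length_cm * inner_circumference_cm  # about 995 cm2
area_mm2 = area_cm2 * 100                            # 1 cm2 = 100 mm2
total_crypts = area_mm2 * crypts_per_mm2             # about 9.95 million

print(round(area_cm2), round(total_crypts))          # 995 9951000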
Intestinal gland
Biology
1,644
66,315,644
https://en.wikipedia.org/wiki/Stipitatic%20acid
Stipitatic acid is a tropolone derivative isolated from Talaromyces stipitatus (Penicillium stipitatum). References Tropolones Tropones Aromatic compounds
Stipitatic acid
Chemistry
45
67,321,416
https://en.wikipedia.org/wiki/Besicovitch%20inequality
In mathematics, the Besicovitch inequality is a geometric inequality relating the volume of a set and the distances between certain subsets of its boundary. The inequality was first formulated by Abram Besicovitch. Consider the n-dimensional unit cube [0,1]^n with a Riemannian metric g. Let d_i denote the distance between the i-th pair of opposite faces of the cube, measured with respect to g. The Besicovitch inequality asserts that \prod_{i=1}^{n} d_i \leq \mathrm{vol}([0,1]^n, g). (For the standard Euclidean metric each d_i = 1 and the volume is 1, so the inequality holds with equality.) The inequality can be generalized in the following way. Given an n-dimensional Riemannian manifold M with connected boundary and a smooth map f: M \to [0,1]^n, such that the restriction of f to the boundary of M is a degree 1 map onto \partial [0,1]^n, define d_i = \mathrm{dist}(f^{-1}(\{x_i = 0\}), f^{-1}(\{x_i = 1\})), the distance in M between the preimages of the i-th pair of opposite faces. Then \prod_{i=1}^{n} d_i \leq \mathrm{vol}(M). The Besicovitch inequality was used to prove systolic inequalities on surfaces. Notes References Burago, Dmitri; Burago, Yuri; Ivanov, Sergei (2001). A Course in Metric Geometry. Graduate Studies in Mathematics 33. Burago, Yu. & Zalgaller, V. A. Geometric inequalities. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], 285. Springer Series in Soviet Mathematics. Springer-Verlag, Berlin, 1988. Misha Gromov. Metric structures for Riemannian and non-Riemannian spaces. Based on the 1981 French original. With appendices by M. Katz, P. Pansu and S. Semmes. Translated from the French by Sean Michael Bates. Progress in Mathematics, 152. Birkhäuser Boston, Inc., Boston, MA, 1999. xx+585 pp. Burago, D., & Ivanov, S. (2002). On Asymptotic Volume of Finsler Tori, Minimal Surfaces in Normed Spaces, and Symplectic Filling Volume. Annals of Mathematics, 156(3), second series, 891-914. doi:10.2307/3597285 Geometric inequalities
Besicovitch inequality
Mathematics
396
624,681
https://en.wikipedia.org/wiki/Lyudmila%20Zhuravleva
Lyudmila Vasilyevna Zhuravleva (born 22 May 1946) is a Soviet, Russian and Ukrainian astronomer who worked at the Crimean Astrophysical Observatory in Nauchnij, where she discovered 213 minor planets. She also serves as president of the Crimean branch of the "Prince Clarissimus Aleksandr Danilovich Menshikov Foundation" (which was founded in May 1995 in Berezovo, and is not the same as the "Menshikov Foundation" children's charity founded by Anthea Eno, the wife of Brian Eno). She has discovered a number of asteroids, including the Trojan asteroid 4086 Podalirius and the asteroid 2374 Vladvysotskij. Zhuravleva is ranked 43rd in the Minor Planet Center's list of those who have discovered minor planets. She is credited with having discovered 200 minor planets, and with having co-discovered an additional 13, between 1972 and 1992. In the rating of minor planet discoveries, she is listed in 57th place out of 1,429 astronomers. The main-belt asteroid 26087 Zhuravleva, discovered by her colleague Lyudmila Karachkina at Nauchnij, was named in her honour. List of discovered minor planets References Discoverers of asteroids Living people 1946 births Soviet astronomers Ukrainian astronomers Women astronomers
Lyudmila Zhuravleva
Astronomy
275
54,117,020
https://en.wikipedia.org/wiki/Unrestricted%20algorithm
An unrestricted algorithm is an algorithm for the computation of a mathematical function that puts no restrictions on the range of the argument or on the precision that may be demanded in the result. The idea of such an algorithm was put forward by C. W. Clenshaw and F. W. J. Olver in a paper published in 1980. In the classical problem of developing algorithms for computing the values of a real-valued function g(x) of a real variable x, a "restricted" algorithm specifies in advance the error that can be tolerated in the result, and an interval on the real line is specified for the values of x at which the function is to be evaluated. Different algorithms may have to be applied for evaluating the function outside that interval. An unrestricted algorithm envisages a situation in which a user may stipulate the value of x and also the precision required in g(x) quite arbitrarily. The algorithm should then produce an acceptable result without failure. References Numerical analysis Theoretical computer science
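To make the idea concrete, here is a minimal sketch in Python of an unrestricted evaluator for the function g(x) = exp(x): the caller stipulates both the argument and the number of significant digits, and the routine widens its working precision to suit. This is only an illustration of the concept, not Clenshaw and Olver's method; the function name and the guard-digit choices are assumptions of this sketch, and Python's decimal module already provides a ready-made Decimal.exp().

from decimal import Decimal, getcontext

def unrestricted_exp(x, digits):
    # Sum the Taylor series for exp(|x|) with extra guard digits, then
    # round to the precision the caller asked for. The argument and the
    # precision are both unrestricted.
    getcontext().prec = digits + 10        # working precision (guard digits)
    x = Decimal(str(x))
    negative = x < 0
    x = abs(x)                             # all-positive series: no cancellation
    term = total = Decimal(1)              # term holds x**k / k!
    k = 0
    while term > Decimal(10) ** (-(digits + 8)) * total:
        k += 1
        term *= x / k
        total += term
    if negative:
        total = 1 / total                  # exp(-x) = 1 / exp(x)
    getcontext().prec = digits             # round the answer to `digits`
    return +total                          # unary plus applies context rounding

print(unrestricted_exp(1, 30))     # 2.71828182845904523536028747135
print(unrestricted_exp(-12.5, 8))  # any argument, any precision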
Unrestricted algorithm
Mathematics
216
1,301,093
https://en.wikipedia.org/wiki/Two-port%20network
In electronics, a two-port network (a kind of four-terminal network or quadripole) is an electrical network (i.e. a circuit) or device with two pairs of terminals to connect to external circuits. Two terminals constitute a port if the currents applied to them satisfy the essential requirement known as the port condition: the current entering one terminal must equal the current emerging from the other terminal on the same port. The ports constitute interfaces where the network connects to other networks, the points where signals are applied or outputs are taken. In a two-port network, often port 1 is considered the input port and port 2 is considered the output port. It is commonly used in mathematical circuit analysis. Application The two-port network model is used in mathematical circuit analysis techniques to isolate portions of larger circuits. A two-port network is regarded as a "black box" with its properties specified by a matrix of numbers. This allows the response of the network to signals applied to the ports to be calculated easily, without solving for all the internal voltages and currents in the network. It also allows similar circuits or devices to be compared easily. For example, transistors are often regarded as two-ports, characterized by their h-parameters (see below), which are listed by the manufacturer. Any linear circuit with four terminals can be regarded as a two-port network provided that it does not contain an independent source and satisfies the port conditions. Examples of circuits analyzed as two-ports are filters, matching networks, transmission lines, transformers, and small-signal models for transistors (such as the hybrid-pi model). The analysis of passive two-port networks is an outgrowth of reciprocity theorems first derived by Lorentz. In two-port mathematical models, the network is described by a 2 by 2 square matrix of complex numbers. The common models that are used are referred to as z-parameters, y-parameters, h-parameters, g-parameters, and ABCD-parameters, each described individually below. These are all limited to linear networks since an underlying assumption of their derivation is that any given circuit condition is a linear superposition of various short-circuit and open-circuit conditions. They are usually expressed in matrix notation, and they establish relations between the variables V1 (voltage across port 1), I1 (current into port 1), V2 (voltage across port 2) and I2 (current into port 2), which are shown in figure 1. The difference between the various models lies in which of these variables are regarded as the independent variables. These current and voltage variables are most useful at low-to-moderate frequencies. At high frequencies (e.g., microwave frequencies), the use of power and energy variables is more appropriate, and the two-port current–voltage approach is replaced by an approach based upon scattering parameters. General properties There are certain properties of two-ports that frequently occur in practical networks and can be used to greatly simplify the analysis. These include: Reciprocal networks A network is said to be reciprocal if the voltage appearing at port 2 due to a current applied at port 1 is the same as the voltage appearing at port 1 when the same current is applied to port 2. Exchanging voltage and current results in an equivalent definition of reciprocity. 
A network that consists entirely of linear passive components (that is, resistors, capacitors and inductors) is usually reciprocal, a notable exception being passive circulators and isolators that contain magnetized materials. In general, it will not be reciprocal if it contains active components such as generators or transistors. Symmetrical networks A network is symmetrical if its input impedance is equal to its output impedance. Most often, but not necessarily, symmetrical networks are also physically symmetrical. Sometimes also antimetrical networks are of interest. These are networks where the input and output impedances are the duals of each other. Lossless network A lossless network is one which contains no resistors or other dissipative elements. Impedance parameters (z-parameters) V1 = z11 I1 + z12 I2 and V2 = z21 I1 + z22 I2, where the zmn are the impedance parameters. All the z-parameters have dimensions of ohms. For reciprocal networks z12 = z21. For symmetrical networks z11 = z22. For reciprocal lossless networks all the zmn are purely imaginary. Example: bipolar current mirror with emitter degeneration Figure 3 shows a bipolar current mirror with emitter resistors to increase its output resistance. Transistor Q1 is diode connected, which is to say its collector-base voltage is zero. Figure 4 shows the small-signal circuit equivalent to Figure 3. Transistor Q1 is represented by its emitter resistance: a simplification made possible because the dependent current source in the hybrid-pi model for Q1 draws the same current as a resistor connected across its base-emitter terminals. The second transistor, Q2, is represented by its hybrid-pi model. Table 1 below shows the z-parameter expressions that make the z-equivalent circuit of Figure 2 electrically equivalent to the small-signal circuit of Figure 4. The negative feedback introduced by the emitter resistors can be seen in these parameters. For example, when used as an active load in a differential amplifier, the feedback makes the output impedance of the mirror much larger than it would be without feedback (that is, with the emitter resistors set to 0 Ω). At the same time, the impedance on the reference side of the mirror remains only a moderate value, but still larger than with no feedback. In the differential amplifier application, a large output resistance increases the difference-mode gain, a good thing, and a small mirror input resistance is desirable to avoid Miller effect. Admittance parameters (y-parameters) I1 = y11 V1 + y12 V2 and I2 = y21 V1 + y22 V2, where the ymn are the admittance parameters. All the y-parameters have dimensions of siemens. For reciprocal networks y12 = y21. For symmetrical networks y11 = y22. For reciprocal lossless networks all the ymn are purely imaginary. Hybrid parameters (h-parameters) V1 = h11 I1 + h12 V2 and I2 = h21 I1 + h22 V2, where the hmn are the hybrid parameters. This circuit is often selected when a current amplifier is desired at the output. The resistors shown in the diagram can be general impedances instead. The off-diagonal h-parameters are dimensionless, while the diagonal members have dimensions that are the reciprocal of one another. For reciprocal networks h12 = −h21. For symmetrical networks h11 h22 − h12 h21 = 1. For reciprocal lossless networks h12 and h21 are real, while h11 and h22 are purely imaginary. Example: common-base amplifier Note: Tabulated formulas in Table 2 make the h-equivalent circuit of the transistor from Figure 6 agree with its small-signal low-frequency hybrid-pi model in Figure 7. Notation: rb is the base resistance of the transistor, rO is the output resistance, and gm is the mutual transconductance. The negative sign for h21 reflects the convention that I1 and I2 are positive when directed into the two-port. A non-zero value for h12 means the output voltage affects the input voltage, that is, this amplifier is bilateral. If h12 = 0, the amplifier is unilateral. 
History The h-parameters were initially called series-parallel parameters. The term hybrid to describe these parameters was coined by D. A. Alsberg in 1953 in "Transistor metrology". In 1954 a joint committee of the IRE and the AIEE adopted the term h-parameters and recommended that these become the standard method of testing and characterising transistors because they were "peculiarly adaptable to the physical characteristics of transistors". In 1956, the recommendation became an issued standard; 56 IRE 28.S2. Following the merger of these two organisations as the IEEE, the standard became Std 218-1956 and was reaffirmed in 1980, but has now been withdrawn. Inverse hybrid parameters (g-parameters) I1 = g11 V1 + g12 I2 and V2 = g21 V1 + g22 I2, where the gmn are the inverse hybrid parameters. Often this circuit is selected when a voltage amplifier is wanted at the output. The off-diagonal g-parameters are dimensionless, while the diagonal members have dimensions that are the reciprocal of one another. The resistors shown in the diagram can be general impedances instead. Example: common-base amplifier Note: Tabulated formulas in Table 3 make the g-equivalent circuit of the transistor from Figure 8 agree with its small-signal low-frequency hybrid-pi model in Figure 9. Notation: rb is the base resistance of the transistor, rO is the output resistance, and gm is the mutual transconductance. The negative sign for g12 reflects the convention that I1 and I2 are positive when directed into the two-port. A non-zero value for g12 means the output current affects the input current, that is, this amplifier is bilateral. If g12 = 0, the amplifier is unilateral. ABCD-parameters The ABCD-parameters are known variously as chain, cascade, or transmission parameters. There are a number of definitions given for ABCD parameters; the most common is V1 = A V2 − B I2 and I1 = C V2 − D I2. Note: Some authors chose to reverse the indicated direction of I2 and suppress the negative sign on I2. For reciprocal networks AD − BC = 1. For symmetrical networks A = D. For networks which are reciprocal and lossless, A and D are purely real while B and C are purely imaginary. This representation is preferred because when the parameters are used to represent a cascade of two-ports, the matrices are written in the same order that a network diagram would be drawn, that is, left to right. However, a variant definition is also in use: V2 = A′ V1 − B′ I1 and I2 = C′ V1 − D′ I1. The negative sign on I1 arises to make the output current of one cascaded stage (as it appears in the matrix) equal to the input current of the next. Without the minus sign the two currents would have opposite senses because the positive direction of current, by convention, is taken as the current entering the port. Consequently, the input voltage/current matrix vector can be directly replaced with the matrix equation of the preceding cascaded stage to form a combined matrix. The terminology of representing the ABCD parameters as a matrix of elements designated a11 etc., as adopted by some authors, and the inverse parameters as a matrix of elements designated b11 etc., is used here both for brevity and to avoid confusion with circuit elements. Table of transmission parameters The table below lists the a and b parameters for some simple network elements. Scattering parameters (S-parameters) The previous parameters are all defined in terms of voltages and currents at ports. S-parameters are different, and are defined in terms of incident and reflected waves at ports. S-parameters are used primarily at UHF and microwave frequencies where it becomes difficult to measure voltages and currents directly. On the other hand, incident and reflected power are easy to measure using directional couplers. 
The definition is b1 = S11 a1 + S12 a2 and b2 = S21 a1 + S22 a2, where the ak are the incident waves and the bk are the reflected waves at port k. It is conventional to define the ak and bk in terms of the square root of power. Consequently, there is a relationship with the wave voltages (see main article for details). For reciprocal networks S12 = S21. For symmetrical networks S11 = S22. For antimetrical networks S11 = −S22. For lossless reciprocal networks |S11| = |S22| and |S11|2 + |S12|2 = 1. Scattering transfer parameters (T-parameters) Scattering transfer parameters, like scattering parameters, are defined in terms of incident and reflected waves. The difference is that T-parameters relate the waves at port 1 to the waves at port 2, whereas S-parameters relate the reflected waves to the incident waves. In this respect T-parameters fill the same role as ABCD parameters and allow the T-parameters of cascaded networks to be calculated by matrix multiplication of the component networks. T-parameters, like ABCD parameters, can also be called transmission parameters. In one common convention the definition is a1 = T11 b2 + T12 a2 and b1 = T21 b2 + T22 a2. T-parameters are not as easy to measure directly as S-parameters. However, S-parameters are easily converted to T-parameters; see main article for details. Combinations of two-port networks When two or more two-port networks are connected, the two-port parameters of the combined network can be found by performing matrix algebra on the matrices of parameters for the component two-ports. The matrix operation can be made particularly simple with an appropriate choice of two-port parameters to match the form of connection of the two-ports. For instance, the z-parameters are best for series connected ports. The combination rules need to be applied with care. Some connections (when dissimilar potentials are joined) result in the port condition being invalidated and the combination rule will no longer apply. A Brune test can be used to check the permissibility of the combination. This difficulty can be overcome by placing 1:1 ideal transformers on the outputs of the problem two-ports. This does not change the parameters of the two-ports, but does ensure that they will continue to meet the port condition when interconnected. An example of this problem is shown for series-series connections in figures 11 and 12 below. Series-series connection When two-ports are connected in a series-series configuration as shown in figure 10, the best choice of two-port parameter is the z-parameters. The z-parameters of the combined network are found by matrix addition of the two individual z-parameter matrices. As mentioned above, there are some networks which will not yield directly to this analysis. A simple example is a two-port consisting of a simple network of two resistors, R1 and R2. The z-parameters for this network follow directly from the defining equations. Figure 11 shows two identical such networks connected in series-series. The total z-parameters predicted by matrix addition are simply twice the z-parameters of the individual networks. However, direct analysis of the combined circuit shows a different result. The discrepancy is explained by observing that R1 of the lower two-port has been by-passed by the short-circuit between two terminals of the output ports. This results in no current flowing through one terminal in each of the input ports of the two individual networks. Consequently, the port condition is broken for both the input ports of the original networks, since current is still able to flow into the other terminal. This problem can be resolved by inserting an ideal transformer in the output port of at least one of the two-port networks. 
While this is a common text-book approach to presenting the theory of two-ports, the practicality of using transformers is a matter to be decided for each individual design. Parallel-parallel connection When two-ports are connected in a parallel-parallel configuration as shown in figure 13, the best choice of two-port parameter is the y-parameters. The y-parameters of the combined network are found by matrix addition of the two individual y-parameter matrices. Series-parallel connection When two-ports are connected in a series-parallel configuration as shown in figure 14, the best choice of two-port parameter is the h-parameters. The h-parameters of the combined network are found by matrix addition of the two individual h-parameter matrices. Parallel-series connection When two-ports are connected in a parallel-series configuration as shown in figure 15, the best choice of two-port parameter is the g-parameters. The g-parameters of the combined network are found by matrix addition of the two individual g-parameter matrices. Cascade connection When two-ports are connected with the output port of the first connected to the input port of the second (a cascade connection) as shown in figure 16, the best choice of two-port parameter is the a-parameters (ABCD). The a-parameters of the combined network are found by matrix multiplication of the two individual a-parameter matrices. A chain of n two-ports may be combined by matrix multiplication of the n matrices. To combine a cascade of b-parameter matrices, they are again multiplied, but the multiplication must be carried out in reverse order, so that [b] = [b2] · [b1]. Example Suppose we have a two-port network consisting of a series resistor R followed by a shunt capacitor C. We can model the entire network as a cascade of two simpler networks: a series impedance, with matrix [[1, R], [0, 1]], followed by a shunt admittance, with matrix [[1, 0], [sC, 1]]. The transmission matrix for the entire network is simply the matrix multiplication of the transmission matrices for the two network elements. Thus: [a] = [[1, R], [0, 1]] · [[1, 0], [sC, 1]] = [[1 + sRC, R], [sC, 1]]. Interrelation of parameters The parameter sets can be converted into one another by standard matrix manipulations; in the conversion formulas, Δ[x] denotes the determinant of the matrix [x]. Certain pairs of matrices have a particularly simple relationship. The admittance parameters are the matrix inverse of the impedance parameters, the inverse hybrid parameters are the matrix inverse of the hybrid parameters, and the b form of the ABCD-parameters is the matrix inverse of the a form. That is, [y] = [z]−1, [g] = [h]−1, [b] = [a]−1. Networks with more than two ports While two-port networks are very common (e.g., amplifiers and filters), other electrical networks such as directional couplers and circulators have more than 2 ports. The following representations are also applicable to networks with an arbitrary number of ports: Admittance (y) parameters Impedance (z) parameters Scattering (S) parameters For example, three-port impedance parameters result in the following relationship: V1 = z11 I1 + z12 I2 + z13 I3, V2 = z21 I1 + z22 I2 + z23 I3, V3 = z31 I1 + z32 I2 + z33 I3. However, the following representations are necessarily limited to two-port devices: Hybrid (h) parameters Inverse hybrid (g) parameters Transmission (a) parameters Scattering transfer (T) parameters Collapsing a two-port to a one port A two-port network has four variables, with two of them being independent. If one of the ports is terminated by a load with no independent sources, then the load enforces a relationship between the voltage and current of that port. A degree of freedom is lost. The circuit now has only one independent parameter. The two-port becomes a one-port impedance to the remaining independent variable. 
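The cascade-then-terminate procedure can also be sketched numerically before working through the z-parameter algebra next. The following Python/NumPy fragment is illustrative only: the component values, frequency and variable names are assumptions of this sketch, while the cascade rule and the input-impedance expression Zin = (A ZL + B)/(C ZL + D) are the standard ones.

import numpy as np

R, C = 50.0, 1e-9                  # series resistance (ohms), shunt capacitance (farads)
ZL = 75.0                          # load impedance terminating port 2 (ohms)
w = 2 * np.pi * 10e6               # angular frequency (10 MHz)

# ABCD (a-parameter) matrices: a series impedance Z is [[1, Z], [0, 1]];
# a shunt admittance Y is [[1, 0], [Y, 1]].
series_R = np.array([[1, R], [0, 1]], dtype=complex)
shunt_C = np.array([[1, 0], [1j * w * C, 1]], dtype=complex)

# Cascade rule: multiply the matrices in the order the network is drawn.
A, B, Cc, D = (series_R @ shunt_C).ravel()

# Terminating port 2 with ZL collapses the two-port to a one-port with
# input impedance Zin = (A*ZL + B) / (C*ZL + D).
Zin = (A * ZL + B) / (Cc * ZL + D)
print(Zin)                          # complex input impedance seen at port 1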
For example, consider impedance parameters: V1 = z11 I1 + z12 I2 and V2 = z21 I1 + z22 I2. Connecting a load ZL onto port 2 effectively adds the constraint V2 = −ZL I2. The negative sign is because the positive direction for I2 is directed into the two-port instead of into the load. The augmented equations become V1 = z11 I1 + z12 I2 and −ZL I2 = z21 I1 + z22 I2. The second equation can be easily solved for I2 as a function of I1, giving I2 = −z21 I1 / (z22 + ZL), and that expression can replace I2 in the first equation, leaving V1 (and V2 and I2) as functions of I1. So, in effect, I1 sees an input impedance Zin = z11 − z12 z21 / (z22 + ZL), and the two-port's effect on the input circuit has been effectively collapsed down to a one-port; i.e., a simple two-terminal impedance. See also Admittance parameters Impedance parameters Scattering parameters Transfer-matrix method (optics) for reflection/transmission calculation of light waves in transparent layers Ray transfer matrix for calculation of paraxial propagation of a light ray Notes References Bibliography Carlin, HJ, Civalleri, PP, Wideband circuit design, CRC Press, 1998. William F. Egan, Practical RF system design, Wiley-IEEE, 2003. Farago, PS, An Introduction to Linear Network Analysis, The English Universities Press Ltd, 1961. Ghosh, Smarajit, Network Theory: Analysis and Synthesis, Prentice Hall of India. Matthaei, Young, Jones, Microwave Filters, Impedance-Matching Networks, and Coupling Structures, McGraw-Hill, 1964. Mahmood Nahvi, Joseph Edminister, Schaum's outline of theory and problems of electric circuits, McGraw-Hill Professional, 2002. Dragica Vasileska, Stephen Marshall Goodnick, Computational electronics, Morgan & Claypool Publishers, 2006. Clayton R. Paul, Analysis of Multiconductor Transmission Lines, John Wiley & Sons, 2008, ISBN 9780470131541. h-parameters history D. A. Alsberg, "Transistor metrology", IRE Convention Record, part 9, pp. 39–44, 1953. also published as "Transistor metrology", Transactions of the IRE Professional Group on Electron Devices, vol. ED-1, iss. 3, pp. 12–17, August 1954. AIEE-IRE joint committee, "Proposed methods of testing transistors", Transactions of the American Institute of Electrical Engineers: Communications and Electronics, pp. 725–740, January 1955. "IRE Standards on solid-state devices: methods of testing transistors, 1956", Proceedings of the IRE, vol. 44, iss. 11, pp. 1542–1561, November 1956. IEEE Standard Methods of Testing Transistors, IEEE Std 218-1956. Transfer functions
Two-port network
Engineering
3,880
73,026,681
https://en.wikipedia.org/wiki/Astronomers%20Monument
The Astronomers Monument in front of Griffith Observatory in Los Angeles, California is a New Deal artwork created under the auspices of the Public Works of Art Project. The large outdoor concrete sculpture honors the work of six great astronomers and is a Griffith Park landmark in its own right. History and design The Astronomers Monument pays homage to six of the greatest astronomers of all time: Hipparchus (c. 190 – c. 120 BC), Nicolaus Copernicus (1473–1543), Galileo Galilei (1564–1642), Johannes Kepler (1571–1630), Isaac Newton (1642–1727), and William Herschel (1738–1822). In December 1933, the Los Angeles Park Commission and the Public Works of Art Project (PWAP) commissioned a sculpture project for the grounds of the under-construction Griffith Observatory. Using a design by local artist Archibald Garner and materials donated by the Women's Auxiliary of the Los Angeles Chamber of Commerce, six artists – Garner, Roger Noble Burnham (creator of USC's Tommy Trojan), Djey El Djey (1905–1980, real name Djey Owens), Gordon Newell (1905–1998), George Stanley (creator of the famous Oscar statuette presented at the Academy Awards), and Arnold Foerster (1878–1943) – sculpted and cast the concrete monument and figures. Each artist was responsible for sculpting one astronomer: Stanley did Newton, Garner sculpted Copernicus, Newell was responsible for Kepler, etc. (Burnham may have created the depiction of Herschel; the authorship of the Hipparchus and Galileo figures is unclear.) According to the Los Angeles Times art critic Arthur Millier in 1934, the "original idea" was Foerster's, and he was "responsible for the delicate engineering entailed in pouring a forty-foot concrete shaft." The monument is topped with an armillary sphere, originally concrete, replaced with a bronze piece in 1991. On November 25, 1934, almost six months prior to the opening of the Observatory on May 14, 1935, a celebration took place to mark completion of the Astronomers Monument. The only "signature" on the Astronomers Monument is "PWAP 1934," referring to the program which funded the project and the year in which it was completed. See also List of New Deal sculpture List of public art in Los Angeles Isaac Newton in popular culture Santa Monica, another large cast-concrete PWAP sculpture in Los Angeles County References 1934 sculptures 1934 establishments in California Concrete sculptures in California Outdoor sculptures in Greater Los Angeles Griffith Park Public Works of Art Project Cultural depictions of Nicolaus Copernicus Cultural depictions of Galileo Galilei Cultural depictions of Johannes Kepler Cultural depictions of Isaac Newton
Astronomers Monument
Astronomy
547
63,632,458
https://en.wikipedia.org/wiki/Praseodymium%28III%29%20fluoride
Praseodymium(III) fluoride is an inorganic compound with the formula PrF3; it is the most stable fluoride of praseodymium. Preparation The reaction between praseodymium(III) nitrate and sodium fluoride yields praseodymium(III) fluoride as a green crystalline solid: Pr(NO3)3 + 3 NaF → 3 NaNO3 + PrF3 There are also literature reports on the reaction between chlorine trifluoride and various oxides of praseodymium (Pr2O3, Pr6O11 and PrO2), in which praseodymium(III) fluoride is the only product. The reaction between bromine trifluoride and praseodymium oxide that has been left in the air for a period of time also produces praseodymium(III) fluoride, but the reaction is incomplete; the reaction between praseodymium(III) oxalate hydrate and bromine trifluoride yields praseodymium(III) fluoride, with carbon also produced in this reaction. Praseodymium(III) fluoride can also be obtained by reacting praseodymium oxide with sulfur hexafluoride at 584 °C. Properties Physical Praseodymium(III) fluoride forms pale green crystals of the trigonal (or hexagonal) crystal system, space group P3c1 (or P6/mcm), with cell parameters a = 0.7078 nm, c = 0.7239 nm, Z = 6; it is isostructural with cerium(III) fluoride (CeF3). Chemical Praseodymium(III) fluoride is a green, odourless, hygroscopic solid that is insoluble in water. Uses Praseodymium(III) fluoride is used as a doping material for laser crystals. See also Praseodymium(III) chloride Praseodymium(IV) fluoride References Fluorides Praseodymium(III) compounds Inorganic compounds Lanthanide halides
Praseodymium(III) fluoride
Chemistry
461
60,026,140
https://en.wikipedia.org/wiki/Alan%20Harris%20%28engineer%29
Sir Alan James Harris CBE (8 July 1916 – 26 December 2000) was a British civil and structural engineer. Early life and education Harris was born in 1916 in Plymouth, and started working at the age of 16, taking evening classes in engineering at Northampton Engineering College, now City, University of London. Career From 1940 to 1946 Harris served with the Royal Engineers as an officer in a Port Construction and Repair Company, landing at Port-en-Bessin in Normandy on D-Day + 1. He was officer in command of diving on Mulberry B at Arromanches, working from a small fleet of French fishing boats, as a result of which he was awarded the Croix de Guerre. He later joined the Royal Engineers in the Territorial Army, where he attained the rank of Colonel. After World War II Harris went to Paris to work for Eugène Freyssinet, the pioneer of prestressed and reinforced concrete, and in 1949 became Freyssinet's representative in England. In 1955 Harris, his brother John, and James Sutherland set up the consulting business of Harris & Sutherland. Among other things, they designed aircraft hangars for Heathrow and Gatwick airports. Later, they expanded their work to infrastructure projects and had branches in Australia, Singapore and Hong Kong. Harris & Sutherland was acquired by Babtie, Shaw and Morton in 1997, and since 2004 has been part of Jacobs Engineering. Harris was a vice president of the Institution of Civil Engineers. He was President of the Institution of Structural Engineers in 1978–79 and was awarded its Gold Medal in 1984. Harris was appointed a professor of concrete structures at Imperial College London in 1973. Awards and honours Appointed CBE in the 1968 Birthday Honours Knighted for services to civil engineering in the 1980 Birthday Honours The Gold Medal of the Institution of Structural Engineers in 1984 Honorary DSc from the University of Exeter in 1984 Ordre du Mérite of France in 1975 Selected projects Prestressed concrete hangar at Heathrow Airport for BOAC maintenance headquarters 1950–55 Spekeland Road Rail Depot References External links Institution of Structural Engineers Obituary in New Civil Engineer, 11 January 2001 Alumni of City, University of London Presidents of the Institution of Structural Engineers Structural engineers Fellows of the Royal Society Fellows of the Royal Academy of Engineering Commanders of the Order of the British Empire IStructE Gold Medal winners Knights Bachelor Engineering educators People from Plymouth, Devon 1916 births 2000 deaths
Alan Harris (engineer)
Engineering
479
27,630,728
https://en.wikipedia.org/wiki/Global%20Strategic%20Trends%20Programme
The Global Strategic Trends Programme was established in 2001 to research and forecast potential trends that shape and inform the future strategic context. It is published by the Development, Concepts and Doctrine Centre (DCDC), which is under the UK's Strategic Command based in Shrivenham, Wiltshire. One of the main findings of "Global Strategic Trends out to 2040" is that the era out to 2040 will be a time of transition, characterised by instability both in the relations between states and in the relations between groups within states. During this timeframe significant global trends will include climate change, rapid population growth, resource scarcity, a resurgence in ideology and a shift in global power from West to East. The struggle to establish an effective system of global governance is likely to be a central theme of the era. Recent reviews of Global Strategic Trends The analysis conducted in Global Strategic Trends was highlighted in the UK Public Administration Select Committee report "Who does UK National Strategy?". Comments included: Professor Peter Hennessy: "You have to have as good a system for horizon scanning as you possibly can, with all the necessary caveats. For example, we haven't talked about it yet, but the one that I find the most helpful was an institutionalisation of something that was done in the last defence review, the DCDC people at Shrivenham, the "Shrivenham Scans" as I call them, I find them absolutely fascinating... ...they produced a very good one [scan], the bulk of which was made public in time for this review and, as far as I can see, it's having no salience at all in the way the SDSR is being cut—yet another example of an own goal and being less than the sum of our parts. But I'm not defeatist in the way that you might—I suspect you're teasing me on this because you're not an opt out of the world man either, are you? It's not for me to ask you questions." Professor Hew Strachan: "Strategic trends stress those things that are likely to happen to the world, but not much of what they do really focuses on what the United Kingdom is trying to do. It's extraordinary that DCDC is at Shrivenham, at that distance, (quite apart from the other things that have happened to it), rather than in London and central to the processes that we're talking about. Professor Hennessy mentioned just now the publication last year of a document called "The Future Character of Conflict", which was designed to address precisely what its title says, but its arguments are nowhere evident in current thinking in relation to strategy, let alone in relation to the Strategic Defence and Security Review." Mr Tom McKane (Director Strategy, MOD): "As to how these documents are produced, within the department we have the benefit of the Development Concepts and Doctrine Centre, who produce long range views of the world. Their document "Global Strategic Trends" I think you are familiar with. 
That type of document feeds into the work of the staff at the centre of the department who are responsible for assisting ministers and the Defence Board to think about defence strategy." Recommendation in written evidence submitted by Professor Julian Lindley-French: "Cross-government structures under the NSC/Cabinet Office should ideally include a Strategy Group made up of both officials and non-government experts to build on the Strategic Trends work of DCDC with a specific remit to establish likely forecasts and context for Intelligence and Planning." See also US National Intelligence Council Global Trends Reports References External links Global Strategic Trends British defence policymaking Prediction Ministry of Defence (United Kingdom) Technology assessment Technology forecasting Theories
Global Strategic Trends Programme
Technology
764
2,853,291
https://en.wikipedia.org/wiki/Direct%20reduced%20iron
Direct reduced iron (DRI), also called sponge iron, is produced from the direct reduction of iron ore (in the form of lumps, pellets, or fines) into iron by a reducing gas which contains elemental carbon (produced from natural gas or coal) and/or hydrogen. When hydrogen is used as the reducing gas no carbon dioxide is produced. Many ores are suitable for direct reduction. Direct reduction refers to solid-state processes which reduce iron oxides to metallic iron at temperatures below the melting point of iron. Reduced iron derives its name from these processes, one example being heating iron ore in a furnace at a high temperature of about 800–1,200 °C in the presence of the reducing gas syngas, a mixture of hydrogen and carbon monoxide, or pure hydrogen. Process Direct reduction processes can be divided roughly into two categories: gas-based and coal-based. In both cases, the objective of the process is to remove the oxygen contained in various forms of iron ore (sized ore, concentrates, pellets, mill scale, furnace dust, etc.) in order to convert the ore to metallic iron, without melting it (below about 1,200 °C). The direct reduction process is comparatively energy-efficient. Steel made using DRI requires significantly less fuel, in that a traditional blast furnace is not needed. DRI is most commonly made into steel using electric arc furnaces to take advantage of the heat produced by the DRI product. Benefits Direct reduction processes were developed to overcome the difficulties of conventional blast furnaces. DRI plants need not be part of an integrated steel plant, as is characteristic of blast furnaces. The initial capital investment and operating costs of direct reduction plants are lower than those of integrated steel plants, and they are more suitable for developing countries where supplies of high-grade coking coal are limited but steel scrap is generally available for recycling. Many other countries use variants of the process. Factors that help make DRI economical: Direct-reduced iron has about the same iron content as pig iron, typically 90–94% total iron (depending on the quality of the raw ore), so it is an excellent feedstock for the electric furnaces used by mini mills, allowing them to use lower grades of scrap for the rest of the charge or to produce higher grades of steel. Hot-briquetted iron (HBI) is a compacted form of DRI designed for ease of shipping, handling, and storage. Hot direct reduced iron (HDRI) is DRI that is transported hot, directly from the reduction furnace, into an electric arc furnace, thereby saving energy. The direct reduction process uses pelletized iron ore or natural "lump" ore. One exception is the fluidized bed process, which requires sized iron ore particles. The direct reduction process can use natural gas contaminated with inert gases, avoiding the need to remove these gases for other use. However, any inert gas contamination of the reducing gas lowers the effect (quality) of that gas stream and the thermal efficiency of the process. The use of natural gas also produces greenhouse gases. Supplies of powdered ore and raw natural gas are both available in areas such as Northern Australia, avoiding transport costs for the gas. In most cases, the DRI plant is located near a natural gas source, as it is more cost effective to ship the ore rather than the gas. To eliminate fossil fuel use in iron and steel making, renewable hydrogen gas can be used in place of syngas to produce DRI and eliminate production of greenhouse gases. 
Problems Direct reduced iron is highly susceptible to oxidation and rusting if left unprotected, and is normally quickly processed further to steel. The bulk iron can also catch fire (it is pyrophoric). Unlike blast furnace pig iron, which is almost pure metal, DRI contains some siliceous gangue (when made directly from ore rather than from scrap), which needs to be removed in the steel-making process. History Producing sponge iron and then working it was the earliest method used to obtain iron in the Middle East, Egypt, and Europe, where it remained in use until at least the 16th century. The advantage of the bloomery technique is that iron can be obtained at a lower furnace temperature, only about 1,100 °C or so. The disadvantage, relative to a blast furnace, is that only small quantities can be made at a time. Chemistry The following reactions successively convert hematite (from iron ore) into magnetite, magnetite into ferrous oxide, and ferrous oxide into iron by reduction with carbon monoxide or hydrogen: 3 Fe2O3 + CO → 2 Fe3O4 + CO2; Fe3O4 + CO → 3 FeO + CO2; FeO + CO → Fe + CO2 (with hydrogen as the reductant, H2 takes the place of CO and water vapour takes the place of CO2 in each step). Carburizing then produces cementite (Fe3C): 3 Fe + 2 CO → Fe3C + CO2. Economy India is the world's largest producer of direct-reduced iron. Uses Sponge iron is not useful by itself, but can be processed to create wrought iron or steel. The sponge is removed from the furnace, called a bloomery, and repeatedly beaten with heavy hammers and folded over to remove the slag, oxidize any carbon or carbide, and weld the iron together. This treatment usually creates wrought iron with about three percent slag and a fraction of a percent of other impurities. Further treatment may add controlled amounts of carbon, allowing various kinds of heat treatment (e.g. "steeling"). Today, sponge iron is created by reducing iron ore without melting it. This makes for an energy-efficient feedstock for specialty steel manufacturers which used to rely upon scrap metal. Food Hydrogen-reduced iron is used as a source of food-grade iron powder, for food fortification and for oxygen scavenging. This elemental form is not absorbed as well as ferrous forms, but the oxygen-scavenging function keeps it attractive. Purity standards for this use were established in 1977. See also Krupp-Renn Process Blast furnace Pig iron Steel mill Direct reduction References Notes Bibliography Valipour, MS, and Saboohi, Y, "Numerical investigation of nonisothermal reduction of hematite using Syngas: the shaft scale study", Modelling Simul. Mater. Sci. Eng. 15(5), p. 487, 2007. Grobler, F. and Minnitt, R.C.A., "The increasing role of direct reduced iron in global steelmaking", The Australasian Institute of Mining and Metallurgy. External links Hydrogen technologies Iron Age Europe Iron Metallurgical processes
Direct reduced iron
Chemistry,Materials_science
1,303
64,020
https://en.wikipedia.org/wiki/Multiprocessing
Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. The term also refers to the ability of a system to support more than one processor or the ability to allocate tasks between them. There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined (multiple cores on one die, multiple dies in one package, multiple packages in one system unit, etc.). According to some on-line dictionaries, a multiprocessor is a computer system having two or more processing units (multiple processors), each sharing main memory and peripherals, in order to simultaneously process programs. A 2009 textbook defined a multiprocessor system similarly, but noted that the processors may share "some or all of the system's memory and I/O facilities"; it also gave tightly coupled system as a synonymous term. At the operating system level, multiprocessing is sometimes used to refer to the execution of multiple concurrent processes in a system, with each process running on a separate CPU or core, as opposed to a single process at any one instant. When used with this definition, multiprocessing is sometimes contrasted with multitasking, which may use just a single processor but switch it in time slices between tasks (i.e. a time-sharing system). Multiprocessing, however, means true parallel execution of multiple processes using more than one processor. Multiprocessing doesn't necessarily mean that a single process or task uses more than one processor simultaneously; the term parallel processing is generally used to denote that scenario. Other authors prefer to refer to the operating system techniques as multiprogramming and reserve the term multiprocessing for the hardware aspect of having more than one processor. The remainder of this article discusses multiprocessing only in this hardware sense. In Flynn's taxonomy, multiprocessors as defined above are MIMD machines. As the term "multiprocessor" normally refers to tightly coupled systems in which all processors share memory, multiprocessors are not the entire class of MIMD machines, which also contains message passing multicomputer systems. Key topics Processor symmetry In a multiprocessing system, all CPUs may be equal, or some may be reserved for special purposes. A combination of hardware and operating system software design considerations determine the symmetry (or lack thereof) in a given system. For example, hardware or software considerations may require that only one particular CPU respond to all hardware interrupts, whereas all other work in the system may be distributed equally among CPUs; or execution of kernel-mode code may be restricted to only one particular CPU, whereas user-mode code may be executed in any combination of processors. Multiprocessing systems are often easier to design if such restrictions are imposed, but they tend to be less efficient than systems in which all CPUs are utilized. Systems that treat all CPUs equally are called symmetric multiprocessing (SMP) systems. In systems where all CPUs are not equal, system resources may be divided in a number of ways, including asymmetric multiprocessing (ASMP), non-uniform memory access (NUMA) multiprocessing, and clustered multiprocessing. Master/slave multiprocessor system In a master/slave multiprocessor system, the master CPU is in control of the computer and the slave CPU(s) performs assigned tasks. 
The CPUs can be completely different in terms of speed and architecture. Some (or all) of the CPUs can share a common bus, each can also have a private bus (for private resources), or they may be isolated except for a common communications pathway. Likewise, the CPUs can share common RAM and/or have private RAM that the other processor(s) cannot access. The roles of master and slave can change from one CPU to another. Two early examples of a mainframe master/slave multiprocessor are the Bull Gamma 60 and the Burroughs B5000. An early example of a master/slave multiprocessor system of microprocessors is the Tandy/Radio Shack TRS-80 Model 16 desktop computer, which came out in February 1982 and ran the multi-user/multi-tasking Xenix operating system, Microsoft's version of UNIX (called TRS-XENIX). The Model 16 has two microprocessors: an 8-bit Zilog Z80 CPU running at 4 MHz, and a 16-bit Motorola 68000 CPU running at 6 MHz. When the system is booted, the Z-80 is the master and the Xenix boot process initializes the slave 68000, and then transfers control to the 68000, whereupon the CPUs change roles and the Z-80 becomes a slave processor responsible for all I/O operations including disk, communications, printer and network, as well as the keyboard and integrated monitor, while the operating system and applications run on the 68000 CPU. The Z-80 can be used to do other tasks. The earlier TRS-80 Model II, which was released in 1979, could also be considered a multiprocessor system as it had both a Z-80 CPU and an Intel 8021 microcontroller in the keyboard. The 8021 made the Model II the first desktop computer system with a separate detachable lightweight keyboard connected by a single thin flexible wire, and likely the first keyboard to use a dedicated microcontroller, both attributes that would be copied years later by Apple and IBM. Instruction and data streams In multiprocessing, the processors can be used to execute a single sequence of instructions in multiple contexts (single instruction, multiple data or SIMD, often used in vector processing), multiple sequences of instructions in a single context (multiple instruction, single data or MISD, used for redundancy in fail-safe systems and sometimes applied to describe pipelined processors or hyper-threading), or multiple sequences of instructions in multiple contexts (multiple instruction, multiple data or MIMD). Processor coupling Tightly coupled multiprocessor system Tightly coupled multiprocessor systems contain multiple CPUs that are connected at the bus level. These CPUs may have access to a central shared memory (SMP or UMA), or may participate in a memory hierarchy with both local and shared memory (NUMA). The IBM p690 Regatta is an example of a high-end SMP system. Intel Xeon processors dominated the multiprocessor market for business PCs and were the only major x86 option until the release of AMD's Opteron range of processors in 2004. Both ranges of processors had their own onboard cache but provided access to shared memory; the Xeon processors via a common pipe and the Opteron processors via independent pathways to the system RAM. Chip multiprocessors, also known as multi-core computing, involve more than one processor placed on a single chip and can be thought of as the most extreme form of tightly coupled multiprocessing. Mainframe systems with multiple processors are often tightly coupled. 
Loosely coupled multiprocessor system Loosely coupled multiprocessor systems (often referred to as clusters) are based on multiple standalone commodity computers with relatively low processor counts, interconnected via a high-speed communication system (Gigabit Ethernet is common). A Linux Beowulf cluster is an example of a loosely coupled system. Tightly coupled systems perform better and are physically smaller than loosely coupled systems, but have historically required greater initial investments and may depreciate rapidly; nodes in a loosely coupled system are usually inexpensive commodity computers and can be recycled as independent machines upon retirement from the cluster. Power consumption is also a consideration. Tightly coupled systems tend to be much more energy-efficient than clusters. This is because a considerable reduction in power consumption can be realized by designing components to work together from the beginning in tightly coupled systems, whereas loosely coupled systems use components that were not necessarily intended specifically for use in such systems. Loosely coupled systems have the ability to run different operating systems or OS versions on different systems. Disadvantages Merging data from multiple threads or processes may incur significant overhead due to conflict resolution, data consistency, versioning, and synchronization. See also Multiprocessor system architecture Symmetric multiprocessing Asymmetric multiprocessing Multi-core processor BMDFM – Binary Modular Dataflow Machine, a SMP MIMD runtime environment Software lockout OpenHMPP References Parallel computing Classes of computers Computing terminology
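Returning to the operating-system sense of the term discussed earlier, the contrast with time-sliced multitasking can be illustrated with a small Python sketch. It is not tied to any system described in this article, and all names in it are the sketch's own; each worker process is scheduled onto a separate CPU by the operating system, giving true parallel execution of a CPU-bound task:

from multiprocessing import Pool, cpu_count

def count_primes(limit):
    # CPU-bound work: count primes below `limit` by trial division.
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    tasks = [50_000, 60_000, 70_000, 80_000]
    # One worker process per CPU; each task can run on a separate core.
    with Pool(processes=min(cpu_count(), len(tasks))) as pool:
        results = pool.map(count_primes, tasks)
    for limit, primes in zip(tasks, results):
        print(f"primes below {limit}: {primes}")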
Multiprocessing
Technology
1,767
67,630,778
https://en.wikipedia.org/wiki/Generalized%20suffix%20array
In computer science, a generalized suffix array (GSA) is a suffix array containing all suffixes for a set of strings. Given a set of strings S = S1, S2, ..., Sk of total length N, it is a lexicographically sorted array of all suffixes of each string in S. It is primarily used in bioinformatics and string processing. Functionality The functionality of a generalized suffix array is as follows: For a collection or set of strings S, it is a lexicographically sorted array of all suffixes of each string in the set S. In the array, each suffix is represented by an integer pair (i, j) which denotes the suffix starting from position j in Si. In the case where different strings in S have identical suffixes, those suffixes will occupy consecutive positions in the generalized suffix array. However, for convenience, the exception can be made where repeats will not be listed. A generalized suffix array can be generated for a generalized suffix tree. When compared to a generalized suffix tree, while the generalized suffix array will require more time to construct, it will use less space than the tree. Construction Algorithms and Implementations Algorithms and tools for constructing a generalized suffix array include: Fei Shi's (1996) algorithm, whose worst-case time and space bounds are expressed in terms of N, the sum of the lengths of all strings in S, and the length of the longest string in S. This includes sorting, searching and finding the longest common prefixes. The external generalized enhanced suffix array (eGSA) construction algorithm, which specializes in external-memory construction and is particularly useful when the size of the input collection or data structure is larger than the amount of available internal memory. gsufsort, an open-source, fast, portable and lightweight tool for the construction of generalized suffix arrays and related data structures (like the Burrows–Wheeler transform or the LCP array). Mnemonist, a collection of data structures implemented in JavaScript, contains an implementation of a generalized suffix tree and can be found publicly on npm and GitHub. Solving the Pattern Matching Problem Generalized suffix arrays can be used to solve the pattern matching problem: Given a pattern P and a text T, find all occurrences of P in T. Using the generalized suffix array A of T, first the suffixes that have P as a prefix need to be found. Since A is a lexicographically sorted array of the suffixes of T, all such suffixes appear in consecutive positions within A. Importantly, since A is sorted, identification of these suffixes is possible and easy using binary search. Using binary search, first find the smallest index i in A such that the suffix at A[i] contains P as a prefix, or determine that no such suffix is present. In the case where the suffix is not found, P does not occur in T. Otherwise, find the largest index j whose suffix contains P as a prefix. The elements in the range A[i..j] indicate the starting positions of the occurrences of P in T. Binary search on A takes O(log N) comparisons. P is compared with a suffix to determine their lexicographic order in each comparison that is done. Thus, this requires comparing at most m = |P| characters per comparison. Note that an LCP (longest common prefix) array is not required, but will offer the benefit of a lower running time. The runtime of the algorithm is O(m log N). By comparison, solving this problem using suffix trees takes O(m) time. Note that with a generalized suffix array, the space required is smaller compared to a suffix tree, since the algorithm only requires space for the array (N words) and the space to store the strings. 
As mentioned above, by optionally keeping track of LCP information, which uses slightly more space, the running time of the algorithm can be improved to O(m + log N). Other Applications A generalized suffix array can be utilized to compute the longest common subsequence of all the strings in a set or collection more efficiently than a naive implementation. A generalized suffix array can be utilized to find the longest previous factor array, a concept central to text compression techniques and to the detection of motifs and repeats. See also Suffix Tree Suffix Array Generalized Suffix Tree Pattern matching problem Bioinformatics References External links Generalized enhanced suffix array construction in external memory Arrays Computer science suffixes Substring indices
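The pattern-matching procedure described above is short enough to sketch directly. The following Python fragment is a toy illustration under stated assumptions (naive construction, plain ASCII input, names invented for this sketch); it is not the algorithm of any of the cited tools, whose constructions are far more efficient than sorting whole suffixes.

from bisect import bisect_left, bisect_right

def build_gsa(strings):
    # Every suffix of every string, as (suffix, string_index, position)
    # triples, sorted lexicographically. Naive O(N^2 log N) construction.
    gsa = [(s[j:], i, j) for i, s in enumerate(strings) for j in range(len(s))]
    gsa.sort()
    return gsa

def find_occurrences(gsa, pattern):
    # All (string_index, position) where pattern occurs, found with two
    # binary searches for the consecutive run of suffixes that have the
    # pattern as a prefix.
    suffixes = [entry[0] for entry in gsa]
    lo = bisect_left(suffixes, pattern)
    # A sentinel character past any real input character bounds the run.
    hi = bisect_right(suffixes, pattern + "\uffff")
    return [(i, j) for _, i, j in gsa[lo:hi]]

docs = ["banana", "bandana"]
print(find_occurrences(build_gsa(docs), "ana"))
# [(0, 3), (1, 4), (0, 1)] -> occurrences in "banana" and "bandana"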
Generalized suffix array
Technology
815
14,761,887
https://en.wikipedia.org/wiki/HOXA11
Homeobox protein Hox-A11 is a protein that in humans is encoded by the HOXA11 gene. Function In vertebrates, the genes encoding the class of transcription factors called homeobox genes are found in clusters named A, B, C, and D on four separate chromosomes. Expression of these proteins is spatially and temporally regulated during embryonic development. This gene is part of the A cluster on chromosome 7 and encodes a DNA-binding transcription factor which may regulate gene expression, morphogenesis, and differentiation. This gene is involved in the regulation of uterine development and is required for female fertility. Mutations in this gene can cause radioulnar synostosis with amegakaryocytic thrombocytopenia. See also Homeobox References Further reading External links Transcription factors
HOXA11
Chemistry,Biology
169
36,235,294
https://en.wikipedia.org/wiki/3G%20adoption
3G mobile telephony was relatively slow to be adopted globally. In some instances, 3G networks do not use the same radio frequencies as 2G, so mobile operators must build entirely new networks and license entirely new frequencies, especially to achieve high data transmission rates. Other delays were due to the expenses of upgrading transmission hardware, especially for UMTS, whose deployment required the replacement of most broadcast towers. Due to these issues and difficulties with deployment, many carriers delayed acquisition of these updated capabilities. In December 2007, 190 3G networks were operating in 40 countries and 154 HSDPA networks were operating in 71 countries, according to the Global Mobile Suppliers Association (GSA). In Asia, Europe, Canada and the US, telecommunication companies use W-CDMA technology with the support of around 100 terminal designs to operate 3G mobile networks. Roll-out of 3G networks was delayed in some countries by the enormous costs of additional spectrum licensing fees. (See Telecoms crash.) The license fees in some European countries were particularly high, bolstered by government auctions of a limited number of licenses and sealed bid auctions, and initial excitement over 3G's potential. The 3G standard is perhaps well known because of a massive expansion of the mobile communications market post-2G and advances of the consumer mobile phone. An especially notable development during this time is the smartphone (for example, the iPhone and the Android family), combining the abilities of a PDA with a mobile phone, leading to widespread demand for mobile internet connectivity. 3G has also introduced the term "mobile broadband" because its speed and capability make it a viable alternative for internet browsing, and USB modems connecting to 3G networks are becoming increasingly common. Africa The first African use of 3G technology was a 3G video call made in Johannesburg on the Vodacom network in November 2004. The first commercial launch was by Emtel-ltd in Mauritius in 2004. In late March 2006, a 3G service was provided by the new company Wana in Morocco. In May 2007, Safaricom launched 3G services in Kenya, while later that year Vodacom Tanzania also started providing services. In February 2012, Bharti Airtel launched a 3.75G network in selected cities in Kenya, with a countrywide rollout planned for later in the year. In Egypt, Mobinil launched the service in 2008, and in Somaliland, Telesom started the first 3G services on 3 July 2011, to both prepaid and postpaid subscription customers. Telecommunication networks in Nigeria like Globacom, Etisalat, Airtel and MTN provide 3G services to their numerous customers. Asia 3G services are in wide use across Asia. Many companies, such as Dialog Axiata PLC Sri Lanka (the first to offer a 3G service in South Asia, in 2006), BSNL, WorldCall, PTCL, Mobilink, Zong, Ufone, Telenor PK, Maxis, Vodafone, Airtel, Idea Cellular, Aircel, Tata DoCoMo and Reliance, have released their 3G services. Sri Lanka All of Sri Lanka's mobile networks (Dialog, Mobitel, Etisalat, Hutch and Airtel) and its CDMA network providers (Lankabell, Dialog, Suntel and SLT) have launched 3G services. Dialog and Mobitel have launched 4G LTE services in Sri Lanka, as have Dialog CDMA and Lankabell CDMA. Sri Lanka Telecom offers 4G LTE and FTTx services. Afghanistan On March 19, 2012, Etisalat Afghanistan, the fastest growing telecommunications company in the country and part of Etisalat Group, announced the launch of 3G services in Afghanistan.
Between 2013 and 2014, all telecommunications companies (Afghan Wireless, Etisalat, Roshan, MTN and Salaam Network) provided 3G, 3.5G and 3.75G services, and they planned 4G services for 2016–2017. Nepal Nepal was one of the first countries in southern Asia to launch 3G services. Nepal's first 3G company was NTC (Nepal Telecom Corporation) and the second was Ncell. Ncell also covered Mount Everest with 3G. NTC provides high-speed video calling with other 3G services, as well as post-paid and pre-paid 3G SIM cards. Pakistan 3G and 4G were simultaneously launched in Pakistan on April 23, 2014, through an SMRA auction. Three of the five companies got a 3G licence, namely Ufone, Mobilink and Telenor, while China Mobile's Zong got a 3G as well as a 4G licence. The fifth company, Warid Pakistan, did not participate in the auction procedure. However, it launched 4G LTE services on its existing 2G 1800 MHz spectrum under technology-neutral terms and became the world's first telecom company to transform directly from 2G to 4G. With that, Pakistan joined the 3G and 4G world. In the non-mobile sector, Pakistan's biggest telecommunication company, PTCL, launched its 3G network, EVO, in mid-2008 and has since then established itself in this sector. It provides 3G services in 105 cities across Pakistan. Omantel's WorldCall also provides 3G services in 50 cities Pakistan-wide. They provide mobile broadband service via dongles and modems. On 14 August 2010, Pakistan became the first country in the world to experience EVDO's RevB 3G technology, which offers maximum speeds of 9.3 Mbit/s. At present the services of EVO Nitro (brand name) are available in Islamabad, Rawalpindi, Lahore and Karachi. The RevA network, with speeds of up to 3.1 Mbit/s, is available in over 100 cities of the country. Bangladesh State-run mobile operator Teletalk Bangladesh Limited and the other GSM operators GrameenPhone, Banglalink, Robi and Airtel have already started high-speed 3G+ and 3.5G services using UMTS with HSDPA facilities. Grameenphone has a plan to launch 4G LTE services for the first time in Bangladesh using TD-LTE technology; Grameenphone currently owns 10 MHz of spectrum acquired at the 3G auction held by the BTRC. Robi and Airtel recently merged, and the newly merged company has a plan to introduce 4G operation soon. Two other data operators, Qubee and Banglalion, currently offer 4G WiMAX services in Bangladesh. CityCell has since switched off its operation by government order. 4G LTE services have already begun in Bangladesh through all mobile operators except Teletalk, the state-run mobile operator. Bangladesh has a plan to introduce super-speed 5G service soon. A test run was to be conducted in the country in mid-July 2018. China China announced in May 2008 that the telecoms sector was re-organized and three 3G networks would be allocated so that the largest mobile operator, China Mobile, would retain its GSM customer base. China Unicom would retain its GSM customer base but relinquish its CDMA2000 customer base, and launch 3G on the globally leading W-CDMA (UMTS) standard. The CDMA2000 customers of China Unicom would go to China Telecom, which would then launch 3G on the CDMA2000 1x EV-DO standard. This meant that China would have all three main cellular technology 3G standards in commercial use. Finally, in January 2009, the Ministry of Industry and Information Technology of China awarded licenses for all three standards: TD-SCDMA to China Mobile, W-CDMA to China Unicom and CDMA2000 to China Telecom.
The launch of 3G occurred on 1 October 2009, to coincide with the 60th Anniversary of the Founding of the People's Republic of China. By August 2011, China Telecom's 3G subscribers had exceeded 23 million. India On 11 December 2008, India entered the 3G arena with the launch of 3G-enabled mobile and data services by the government-owned Mahanagar Telephone Nigam Ltd (MTNL) in Delhi and later in Mumbai. MTNL became the first 3G mobile service provider in India. After MTNL, another state operator, Bharat Sanchar Nigam Ltd. (BSNL), launched 3G services on 22 February 2009 in Chennai and Kolkata, and later launched 3G nationwide. The auction of 3G wireless spectrum was announced in April 2010, and 3G spectrum was allocated to all private operators on 1 September 2010. While 3G was embraced by several countries at the beginning of the millennium, it was introduced to India somewhat later. 3G was the first network to make mobile internet browsing hassle-free. Even though it was sluggish by later standards, it was still sufficient for watching films and streaming music. However, telecommunications companies like Airtel have begun to shut down 3G networks in a number of Indian regions. Maximum speed: 384 kilobits/second. North Korea North Korea has had a 3G network since 2008, called Koryolink, a joint venture between the Egyptian company Orascom Telecom Holding and the state-owned Korea Post and Telecommunications Corporation (KPTC). It is North Korea's only 3G mobile operator, and one of only two mobile companies in the country. According to Orascom quoted in BusinessWeek, the company had 125,661 subscribers in May 2010. The Egyptian company owns 75 percent of Koryolink, and is known to invest in infrastructure for mobile technology in developing nations. It covers Pyongyang, five additional cities, and eight highways and railways. Its only competitor, SunNet, uses GSM technology and suffers from poor call quality and disconnections. Phone numbers on the network are prefixed with +850 (0)192. Philippines 3G services were made available in the Philippines in December 2008. Singapore 3G services were made available in Singapore in October 2007. Widespread adoption of 3G began in January 2009, with the upgrading of phones to the iPhone 3G and Android devices. Europe In Europe, mass-market commercial 3G services were introduced starting in March 2003 by O2 in the UK and Italy. The European Union Council suggested that the 3G operators should cover 80% of the European national populations by the end of 2005. Canada In Canada, Bell Mobility, SaskTel and Telus launched a 3G EVDO network in 2005. Rogers Wireless was the first to implement UMTS technology, with HSDPA services in eastern Canada in late 2006. Realizing they would miss out on roaming revenue from the 2010 Winter Olympics, Bell and Telus formed a joint venture and rolled out a shared HSDPA network using Nokia Siemens technology. After the AWS spectrum auction in 2008, new entrants to the Canadian wireless market, including but not limited to Mobilicity, Wind Mobile and Vidéotron, deployed their own UMTS networks in Canada using the AWS spectrum. Middle East In Iran, Rightel won the bid for the third operator license and is the first 3G operator in Iran; it launched commercially in the last months of 2011. In Jordan, Orange is the first mobile 3G operator. Mobitel Iraq is the first mobile 3G operator in Iraq; it was launched commercially in February 2007. MTN Syria is the first mobile 3G operator in Syria; it was launched commercially in May 2010.
In Lebanon, the Ministry of Telecoms launched a test period on September 20, 2011, in which 4,000 smartphone users were selected to use 3G for one month and provide feedback. The test period is now over, and MTC Touch and Alfa have begun rolling out the new 3G services. Saudi Arabia has 4G as well as 3G/HSPA with Zain KSA, Saudi Telecom, and Mobily KSA. Trinidad and Tobago In Trinidad and Tobago, Digicel was the first to implement UMTS services with the introduction of HSPA+ in May 2012. bmobile launched its 3G UMTS network in November 2012 with the implementation of HSPA+. Turkey Turkcell, Avea and Vodafone launched their 3G networks commercially on 30 July 2009 at the same time. Turkcell and Vodafone launched their 3G service in all provincial centres; Avea launched in 16 provincial centres. After Turkey's monopoly mobile operator Turkcell accepted number portability, the mobile operators took part in a frequency band auction in which frequencies for 3G use were distributed among them: Turkcell got the A band, Vodafone the B band and Avea the C band. Currently Turkcell and Vodafone have 3G networks in most of the crowded cities and towns. Turkey has 3.9G networks now. New Zealand In late 2005, Vodafone NZ launched their 3G network, followed by Spark NZ's XT network in 2008, and newcomer 2degrees using a combination of Vodafone's 3G towers and their own in 2009. 2degrees has since built more towers, and is now self-sufficient in the major cities (Auckland, Hamilton, Wellington, Christchurch and Dunedin) but relies on a roaming agreement with Vodafone to cover the rest of the country. This gives it essentially the same footprint as Vodafone. References Mobile telecommunications Software-defined radio Technological change Videotelephony
3G adoption
Technology,Engineering
2,751
3,169,278
https://en.wikipedia.org/wiki/Phase%20angle%20%28astronomy%29
In observational astronomy, phase angle is the angle between the light incident onto an observed object and the light reflected from the object. In the context of astronomical observations, this is usually the angle Sun–object–observer. For terrestrial observations, "Sun–object–Earth" is often nearly the same thing as "Sun–object–observer", since the difference depends on the parallax, which in the case of observations of the Moon can be as much as 1°, or two full Moon diameters. With the development of space travel, as well as in hypothetical observations from other points in space, the notion of phase angle became independent of Sun and Earth. The etymology of the term is related to the notion of planetary phases, since the brightness of an object and its appearance as a "phase" are functions of the phase angle. The phase angle varies from 0° to 180°. The value of 0° corresponds to the position where the illuminator, the observer, and the object are collinear (all lying along the same line), with the illuminator and the observer on the same side of the object. The value of 180° is the position where the object is between the illuminator and the observer, known as inferior conjunction. Values less than 90° represent backscattering; values greater than 90° represent forward scattering. For some objects, such as the Moon (see lunar phases), Venus and Mercury, the phase angle (as seen from the Earth) covers the full 0–180° range. The superior planets cover shorter ranges. For example, for Mars the maximum phase angle is about 45°. For Jupiter, the maximum is 11.1°, and for Saturn it is 6°. The brightness of an object as a function of the phase angle is generally smooth, except for the so-called opposition spike near 0°, which does not affect gas giants or bodies with pronounced atmospheres, and except near 180°, where the object becomes fainter as the angle increases. This relationship is referred to as the phase curve. See also Illumination angle Incidence angle (optics) References External links Oxford dictionary definition Angle Observational astronomy Radiometry Scattering, absorption and radiative transfer (optics)
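Because the phase angle is simply the vertex angle at the object in the Sun–object–observer triangle, it can be computed from the three mutual distances with the law of cosines. The Python sketch below is illustrative only; the distances in the example are made-up numbers, not real ephemerides.

import math

def phase_angle(r, d, s):
    # Sun-object-observer angle, in degrees, from the law of cosines.
    # r: Sun-object distance, d: observer-object distance,
    # s: Sun-observer distance (any consistent units, e.g. au).
    cos_alpha = (r * r + d * d - s * s) / (2.0 * r * d)
    # Clamp to [-1, 1] to guard against rounding near 0 and 180 degrees.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_alpha))))

# Hypothetical geometry: object 1.5 au from the Sun, 0.7 au from the
# observer, observer 1.0 au from the Sun.
print(round(phase_angle(1.5, 0.7, 1.0), 1))  # about 34.0 degrees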
Phase angle (astronomy)
Physics,Chemistry,Astronomy,Engineering
450
31,642,265
https://en.wikipedia.org/wiki/Solar%20Building
The Solar Building, located in Albuquerque, New Mexico, was the world's first commercial building to be heated primarily by solar energy. It was built in 1956 to house the engineering firm of Bridgers & Paxton, who were responsible for the heating system design. The novel building received widespread attention, with articles in national publications like Life and Popular Mechanics, and was the subject of a National Science Foundation-funded research project in the 1970s. It was added to the New Mexico State Register of Cultural Properties in 1985 and the National Register of Historic Places in 1989, only 33 years after it was built. History The firm of Bridgers & Paxton Consulting Engineers was founded in 1951 by Frank Bridgers (1922–2005) and Donald Paxton (1912–2007), both of whom were interested in the potential applications of solar energy. Initially operating out of a garage behind Bridgers' house, the two men conceived a new office building for their firm which would include an experimental solar heating system. They believed such a system would not only save money, but would also allow them to collect valuable data for future projects. In 1954, they were able to put some of their ideas into practice with an innovative heating and cooling system for the Simms Building, which took advantage of the building's south-facing glass curtain wall to provide solar heating in winter. However, additional heating or cooling was still required under most conditions. Bridgers and Paxton began serious design work on the Solar Building in early 1954, and it was constructed between March and August 1956. Stanley & Wright were the architects for the building. Its total cost was $58,500, of which the heating and cooling system made up about $15,000—roughly twice the cost of a conventional system. However, Bridgers and Paxton believed the reduced operating costs would save money in the long run. The novel building attracted considerable attention, receiving write-ups in a number of national publications including Architectural Forum, Life, Architectural Record, Progressive Architecture, and Popular Mechanics, and directly inspired a number of subsequent active solar heating systems. Despite some minor problems, the building's heating system operated successfully for six years, even during the particularly cold and cloudy month of January 1957, which recorded only three sunny days. However, it was not as economical as Bridgers and Paxton had hoped, mainly due to the extremely low cost of fuel at the time. When the building was expanded in 1962, the solar collector was abandoned in favor of a conventional boiler system, though the equipment was left intact for possible future use. This decision paid off just a few years later, when the 1973 oil crisis caused a renewed interest in solar energy and brought fresh attention to the Solar Building. In early 1974, Penn State researcher Stanley Gilman received a National Science Foundation grant to restore the building's solar heating system and operate it as part of a multi-year field study intended to identify optimal design criteria for such systems. Following the conclusion of the project, the solar heating system remained in use. Bridgers & Paxton eventually outgrew the building, moving to a new location in 1985. The Solar Building was added to the New Mexico State Register of Cultural Properties in 1985 and the National Register of Historic Places in 1989. 
The building was considered "exceptionally significant", justifying its inclusion in the National Register even though it was only 33 years old at the time. Architecture The Solar Building is a one-story, International Style building consisting of two main sections. The north wing, containing the main drafting room as well as the solar heating equipment, made up the main portion of the original building. It has an irregular quadrilateral cross-section with the roof and south wall both angled (at 20 and 30 degrees, respectively) in order to provide a high southern exposure for the solar collectors. The wing is framed by seven structural steel bents, spaced apart and filled in with wooden ceiling joists and masonry. The north wall has a narrow, continuous band of windows running just below the roofline which light the drafting room, while the street-facing eastern elevation is windowless brick. The south wing is a low, flat-roofed structure containing office space. It is partially faced with brick, marking the original extent of the building; it was later extended with an addition in 1962. The main entrance is positioned at the intersection of the two wings. Heating system The building's active solar heating system employed an array of 56 solar thermal collectors. The array was positioned on a south-facing exterior wall which was angled at 30 degrees to the vertical in order to catch the maximum amount of winter sunlight. The collectors were custom-fabricated aluminum panels with built-in flow channels for water to pass through. The surface of each collector was coated with low-reflectivity black paint and a layer of glass to capture the maximum amount of thermal energy. In sunny weather, water passing through the collectors would reach its maximum temperature before being deposited in a 6,000-gallon insulated underground tank which provided a hot water reserve for up to three days of cloudy weather. Under normal conditions (about 90% of an average heating season), the water in the tank would be warm enough to directly heat the building by circulating it through radiant panels in the floor and ceiling. If the temperature in the tank dropped due to prolonged cloudy weather, a heat pump could be employed to maintain the hot water supply to the panels. The heat pump was a standard commercial water chiller unit, but with heating rather than cooling as its intended purpose—chilling the water in the tank and delivering the "waste" heat to the hot water stream. The heat pump could continue to function as long as the tank temperature remained above a minimum usable level. In summer, the system could also provide cooling by circulating cold water through the building rather than hot water. In this mode, the storage tank became a reservoir for cold water, which allowed the system to save energy in milder weather by storing heat during the day and releasing it at night when the outside temperatures were lower. Most of the time, the water in the tank could be kept cool using only an evaporative cooler. If the water in the tank got too warm, the heat pump would go back into operation in order to continue transferring heat from the cold water stream into the tank. It was also possible to operate in cooling mode during the day while storing hot water from the solar collectors and heat pump to heat the building at night. Minor changes were made to the system during its operational life.
One of the first problems that arose was corrosion of the collector panels, which originally had integral flow channels formed from two bonded sheets of aluminum. After leaks started to develop, the flow channels were replaced with copper tubing attached to the back of the panels. Gilman made additional modifications to the system in the 1970s, including changing the working fluid in the collector loop to ethylene glycol (in order to prevent freezing) and re-soldering the collector panels for better thermal contact. Gilman also installed an automated control system and upgraded the air handling equipment to allow individual temperature control for each office. Despite the modifications, the system remains mostly intact as originally designed. See also List of pioneering solar buildings References External links Office buildings completed in 1956 Office buildings in Albuquerque, New Mexico Commercial buildings on the National Register of Historic Places in New Mexico Solar design New Mexico State Register of Cultural Properties National Register of Historic Places in Albuquerque, New Mexico Modernist architecture in New Mexico
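The article's sizing figures were lost in extraction, but the system's logic (collectors charge a storage tank, and a heat pump extends the tank's useful range) can be illustrated with a toy energy balance. Every number and the flat-efficiency model below are hypothetical illustration values, not taken from the Bridgers & Paxton design.

# Toy once-a-day energy balance for a collector-plus-storage-tank system.
COLLECTOR_AREA_M2 = 80.0      # assumed collector area
COLLECTOR_EFFICIENCY = 0.45   # assumed average collection efficiency
TANK_MASS_KG = 22700.0        # roughly 6,000 US gallons of water
CP_WATER = 4186.0             # specific heat of water, J/(kg*K)

def tank_temp_rise(insolation_kwh_per_m2):
    # Temperature rise (K) of the storage tank from one day's insolation.
    q_joules = (insolation_kwh_per_m2 * 3.6e6
                * COLLECTOR_AREA_M2 * COLLECTOR_EFFICIENCY)
    return q_joules / (TANK_MASS_KG * CP_WATER)

# A sunny winter day (say 4 kWh/m2) versus an overcast one (say 1 kWh/m2):
for g in (4.0, 1.0):
    print(f"{g} kWh/m2 -> tank warms by {tank_temp_rise(g):.1f} K")
# On a run of overcast days the tank instead cools toward the point
# where the heat pump takes over, as described above.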
Solar Building
Engineering
1,506
26,355,609
https://en.wikipedia.org/wiki/Niall%20McCrudden
Niall McCrudden was a music manager, promoter, celebrity optician and socialite. He was co-founder of Insight, one of Ireland's foremost optician chains. He became known as the "optician to the stars" after selling a pair of sunglasses to Jim Corr. Career McCrudden initially worked for McNally Opticians for two years. Insight was founded by McCrudden and partner Graham Smithers in 1992, with its first practice located above a doctor's surgery on the Swords Road. He was considered the "unofficial optician to Ireland's trendy eyewear-sporting celebrities". They later located to Clane, Inchicore and Rathcoole for a time and tested the eyes of students in universities. In 2006 the company was involved in a dispute with Specsavers, which used the word "insight" during an advertising campaign. McNally Opticians acquired Insight later that year. McCrudden had an exhibition titled Stars in their Specs and an optical museum on Talbot Street, Dublin, which featured on Mooney in May 2007 when reporter Brenda Donohue paid a visit. He next turned his eye to Sunglasses.ie, a new website he set up in August 2009. McCrudden was involved in other businesses. He also managed boybands. In 2005 he attended an international polo tournament. He was also reported as having attended other social events, such as important birthday parties. In September 2009 he was one of a team of Irish celebrities who spent a week climbing Machu Picchu in Peru in aid of Autism Action. Health He endured depression and had spent time receiving treatment in hospital. He discharged himself and was found dead in his Drumcondra home at the age of 45 on the evening of 20 February 2010. He was survived by his son, his parents, his twin brother and his sister. Many of Ireland's celebrities, including models, former Miss World Rosanna Davison, musicians, Eurovision Song Contest winners, television personalities, snooker players and the Lord Mayor of Dublin, attended McCrudden's funeral at the Corpus Christi Church in Griffith Avenue, Dublin on 24 February 2010. Boyzone member Keith Duffy was one of those who helped carry the coffin. Fellow Boyzone member and friend Mikey Graham tweeted his dismay at the news — "Very sad today. Lost a very close pal on Saturday and just found out before I went on ice. So damn tough, another friend gone. RIP pal." — he was unable to attend the funeral as he was in London rehearsing for a television show. Boyzone had known McCrudden since 1993. References 2010 deaths Irish socialites Music promoters Opticians Year of birth missing People from Drumcondra, Dublin
Niall McCrudden
Astronomy
554
31,278,066
https://en.wikipedia.org/wiki/Kemper%20Project
The Kemper Project, also called the Kemper County energy facility or Plant Ratcliffe, is a natural gas-fired electrical generating station currently under construction in Kemper County, Mississippi. Mississippi Power, a subsidiary of Southern Company, began construction of the plant in 2010. The initial, coal-fired project was central to President Obama's Climate Plan, as it was to be based on "clean coal" and was being considered for more support from the Congress and the incoming Trump Administration in late 2016. If it had become operational with coal, the Kemper Project would have been a first-of-its-kind electricity plant to employ gasification and carbon capture technologies at this scale. Project management problems had been noted at the Kemper Project. The plant was supposed to be in service by May 2014, at a cost of $2.4 billion. As of June 2017, the project was still not in service, and the cost had increased to $7.5 billion. According to a Sierra Club analysis, Kemper is the most expensive power plant ever built, based on its generating capacity. In June 2017, Southern Company and Mississippi Power announced that the Kemper project would switch to burning only natural gas in an effort to manage costs. Background Kemper County is a small county in eastern Mississippi, roughly 30 miles north of Meridian. Kemper County was chosen as the site for the plant to take advantage of local brown coal (lignite), an untapped natural resource, while providing geographic diversity to help balance the electric demand and strengthen electric reliability in Mississippi. Mississippi Power is a large energy company based in Gulfport, providing energy for Gulfport, Biloxi, Hattiesburg, Meridian, Pascagoula, Columbia, Laurel, Waveland, Lucedale and Picayune. Mississippi Power intended the Kemper Project to produce cleaner energy through the use of integrated gasification combined cycle (IGCC) and carbon capture technologies, eliminating the majority of emissions normally emitted by a traditional coal plant. A study conducted by Southern Company (parent of Mississippi Power) stated that the Kemper Project would have been "a large undertaking with high visibility and ... help set the stage for future coal-based power generation. On June 3, 2010, the Mississippi Public Service Commission certified the project and the ground-breaking ceremony took place. Governor Haley Barbour was present. Timeline 2008: Conceptual design initiated 2010: Mississippi Public Service Commission approves project 2010: Construction begins 2011: Building foundation begins 2012: Above ground construction started 2013 August: Connection of the site's 230 kilovolt transmission lines September: First firing of the plant's combustion turbine (CTs) achieved October: Combined cycle unit originally synchronized to the grid December: Final transmission line that will carry electricity was energized 2014 July: Pneumatic tests on gasifiers used to convert lignite to synthetic gas successfully tested July: Combined cycle unit responsible for generating electricity successfully tested August: Combined cycle unit in commercial operation and available to serve customers. Mississippi Power identified this milestone as the most significant to date. October: Delays postpone in-service date to first half of 2016, and increase estimated cost to $6.1 billion. December: 48 "steam blows" successfully completed. Steam blow is the process of blowing steam through pipes to ensure that they are clean, tight, and leak free. 
2015 March: First fire of gasifiers successfully completed. The gasifiers, the centerpiece of the project, are designed to convert lignite coal to synthetic gas, or syngas, for use in power generation. May: The South Mississippi Electric Power Association decided not to purchase a 15 percent interest in the Kemper Project. September: Mississippi Power adjusted scheduled completion to a date after April 19, 2016. Because of this delay, the company will be required to pay back $234 million in investment tax credits to the Internal Revenue Service. 2016 March: Southern Co. reported to the U.S. Securities and Exchange Commission that the cost of the Kemper Project had increased due to "repairs and modifications". The updated cost of the project was $6.6 billion. July: First of two gasifiers produces syngas. September: Second of two gasifiers produces syngas. October: Plant produces electricity using syngas in first of two gasifiers. 2017 March: Southern Co. discovered leaks that will cause it to miss scheduled mid-March completion of the project. June: Kemper power plant suspends coal gasification. 2021 October: The gasification structure was demolished. Lignite Lignite is a soft, brownish-black coal that has the lowest energy content of any type of coal. It is also very dirty when burned. According to the Lignite Energy Council, about 79 percent of lignite coal is used to generate electricity, 13.5 percent to generate synthetic natural gas, and 7.5 percent to produce fertilizer products. Mississippi has an estimated five billion tons of coal reserves, consisting almost entirely of Eocene lignite. The typical lignite beds that can be economically mined range from two to nine feet thick. Mississippi's lignite resources equal about 13 percent of total U.S. lignite reserves. The Kemper plant was expected to use about 375,000 tons of locally mined lignite per month or almost 185 million tons over the plant's expected 40-year life. The ability of TRIG technology to utilize lignite was also a driving factor in the choice of technology. Technology Mississippi Power's Kemper plant was intended to be an integrated gasification combined cycle (IGCC) facility, utilizing a technology known as "transport integrated gasification" (TRIG) to convert lignite coal—mined on the Kemper site—into syngas. The syngas would then have been used to power turbines to generate electricity. Mississippi Power stated that, by adding coal to its sources of power, it wished to add balance to its fuel-source choices, and be less reliant on any one form of energy. There is an estimated four billion tons of lignite available to be used. If successful, the Kemper Project would have been the second TRIG facility in the United States. Producing electricity from coal in this way produces tremendous amounts of carbon dioxide, and Mississippi Power hoped that 65 percent of the carbon dioxide would be captured and utilized in Enhanced Oil Recovery at neighboring oil fields. Transport integrated gasification technology TRIG was developed by the Department of Energy, Southern Company and KBR at the Power Systems Development Facility in Wilsonville, Alabama. Southern Company stated that TRIG is a superior coal-gasification method with low impacts to the environment. TRIG technology can utilize lignite, which accounts for more than half of the world's coal reserves and drove global interest in the plant. Power Magazine posted an article in April 2013, walking through the technology in technical detail.
They say, "Commercial TRIG units can be designed to achieve high environmental standards for , NOx, dust emissions, mercury, and . Cost analysis based on extensive design has shown that the economic benefits offered by the air-blown transport gasifier relative to other systems are preserved even when capture and sequestration are incorporated into the design." Clean coal If the carbon, capture and sequestration technology used at the Kemper Project had been successful, it would have been the United States’ first clean coal plant. The need for this type of technology has come from decades of debate among energy leaders on how to minimize carbon dioxide emissions into the Earth's atmosphere. In 2013, the United States' coal use was 40%, dominating all other energy sources. Realizing the demand for coal was not decreasing, Mississippi Power, Southern Company, KBR, and the Department of Energy invested in technology to capture emissions from burning fossil fuels. The investing bodies argued the type of clean coal technology they claim are found at the Kemper Project will be adopted worldwide; bringing profits back to Mississippi customers. Environmentalists state that clean coal is not a possibility, as some emissions will still be emitted into the atmosphere. Carbon capture and sequestration Carbon capture and sequestration, also referred to as carbon capture and storage (CCS), is a technology that can capture up to 90% of the carbon dioxide () emissions. CCS uses a combination of technologies to capture the released in the combustion process, transport it to a suitable storage location and finally store it (typically deep underground) where it cannot enter the atmosphere and thus contribute to climate change. sequestration options include saline formations and oil wells, where captured can be utilized in enhanced oil recovery. Due to rising global demand for energy, the consumption of fossil fuels is expected to rise until 2035, leading to greater emissions. Carbon dioxide enhanced oil recovery Carbon dioxide enhanced oil recovery or -EOR increases the amount of oil recovered from an underground oil reservoir. By pumping into an oil reservoir, previously unrecoverable oil is pushed up to where the oil can be reached. The US Department of Energy states that this can produce an additional 30 to 60 percent of the original amount of recoverable oil. Once all of the recoverable oil has been reached, the depleted reservoir can act as a storage site for the . The Kemper Plant was planned to have 60 miles of pipeline to carry its captured to neighboring oil reserves for enhanced oil recovery. Each year, the plant will capture 3 million tons of . In March 2014, The Guardian published that the diverted will be pumped into two Mississippi companies for use in enhanced oil recovery. Research and development The Department of Energy, the Southern Company, and construction management firm KBR (Kellogg, Brown & Root) joined at the Power Systems Development Facility (PSDF) in Wilsonville, Alabama to develop a process known as Transport Integrated Gasification (TRIG). This development started in 1996, and the gasifier design of Southern Company's Kemper Coal Plant is based on this specific research and development. The technology is most cost-effective when using low-heat content, high moisture, or high-ash content coals, including lignite. According to the U.S. 
Department of Energy, coal gasification offers one of the most versatile and clean ways to convert coal into electricity, hydrogen, and other valuable energy products. Rather than burning coal directly, gasification (a thermo-chemical process) breaks down coal into its basic chemical constituents. The technology of processing coal to gas on a commercial scale has been in development since the 1970s, and it has been in use since the mid-1980s. The TRIG technology, derived from fluidized catalytic cracking units used in the petrochemical industry, uses a pressurized, circulating fluidized bed unit. The transport gasification system features higher efficiencies and is capable of processing low-rank coals, such as lignite. Additionally, commercial TRIG units can be designed to achieve high environmental standards for sulfur dioxide, nitrogen dioxide, dust emissions, mercury, and carbon dioxide. Cost analysis based on the Kemper Coal Plant's design has shown that the economic benefits offered by the air-blown transport gasifier, relative to other systems, are preserved even when carbon dioxide capture and sequestration methodologies are incorporated into the design. The largest transport gasifier built to date commenced operation in 1996 at Southern Company's PSDF. The gasifier and auxiliary equipment at the site were sized to provide reliable data for confident scale-up to commercial scale. The demonstration unit proved easy to operate and control, achieving more than 15,600 hours of gasification. The demonstration-scale gasifier successfully gasified high-moisture lignite from the Red Hills Mine in Mississippi in four separate test campaigns for more than 2,300 hours of operations. On lignite, the transport gasifier operated smoothly over a range of conditions, confirming the gasifier design for Kemper County. Legal issues In February 2015, the Mississippi Supreme Court ruled Mississippi Power must refund 186,000 South Mississippi ratepayers for rate increases related to the Kemper Project. These fees are derived from Mississippi's Baseload Act, allowing Mississippi Power to charge ratepayers for power plants under construction. In May 2016, Southern Company and its subsidiary Mississippi Power announced they were being investigated by the Securities and Exchange Commission related to overruns at the Kemper Project. The project had been repeatedly delayed and costs increased from $2.88 billion to $6.7 billion. In June 2016, Mississippi Power was sued by Treetop Midstream Services over the cancellation of a contract to receive carbon dioxide from the Kemper Project as part of the carbon capture and storage design. Treetop had contracted to buy carbon dioxide from the Kemper plant and had built a pipeline in preparation to receive the gas. Treetop alleged Mississippi Power had acted fraudulently, "intentionally misrepresenting and concealing the start date" for the Kemper Project, though Mississippi Power stated the suit was without merit. The company was also found to have unlawfully fired a whistle-blower who had criticized alleged false statements by company management. Environmental controversies Environmental groups argue that the project is an expensive undertaking that offers only limited benefits. In 2011, the Sierra Club and Bridge the Gulf organizations spearheaded the effort to lobby the U.S. Army Corps of Engineers to deny the wetland permits Mississippi Power required in order to fill wetlands and build the plant's facilities.
The Mississippi Chapter of the Sierra Club is arguing that the location where the facilities are planned to be built needs to be left alone. They argue that the position of the facilities on a wetland will pollute the environment with tainted water runoff. Also, they believe that the extraction of the lignite will erode the environment and force the relocation of many Mississippians. Mitigation construction activities included the enhancement of 31 acres of wetlands, 105 acres of riparian buffer, and approximately 3,000 linear feet of stream channel. In an agreement with the city of Meridian, the plant is using city wastewater as its only water source. Additionally, the Kemper Project site is a "zero" liquid discharge facility. Therefore, no processed water from the plant is discharged into rivers, creeks or streams. Political controversies Mississippi Governor Haley Barbour praised the planned project's potential to place Mississippi in national prominence, mostly because it would be the first U.S. commercial-scale power plant to capture carbon. Additionally, former Speaker of the House Newt Gingrich expressed his support for the Kemper Project, stating that in his opinion it had the potential to be the single most important experiment in developing electricity in the world today. Gingrich's closing words of encouragement for the Kemper Project and the state of Mississippi: "You have a chance to be a remarkable leader in the country in the next 10 to 20 years." The Kemper Project received an estimated $270 million in Department of Energy funds after Southern Company's plan for the proposed Orlando Gasification Project fell through when Florida decided the state was not interested in more coal plants. These funds were moved from Florida to Mississippi in December 2008, after Haley Barbour's Washington D.C. lobbying firm, the BGR Group, pushed for the reallocation. Southern Company has been a BGR client since 1999, having spent a total of $2.6 million with the firm, according to federal lobbying disclosure documents. Southern Company alleges that Governor Barbour did not help them receive any additional funding at all. The BGR Group has deleted all mentions of Southern Company from its website. Mississippi state law was changed to permit charging ratepayers for construction of the facility. In 2017 the Mississippi Public Service Commission recommended the facility burn natural gas rather than syngas from coal to avoid the risk of further consumer rate increases. The plant missed all its targets and plans for "clean coal" generation were abandoned in July 2017. The plant is expected to go ahead burning natural gas only. See also Petra Nova, a CCS project for the WA Parish Generating Station in Texas References Carbon capture and storage Buildings and structures in Kemper County, Mississippi Natural gas-fired power stations in Mississippi Coal-fired power stations in Mississippi Former coal gas-fired power stations Former coal-fired power stations in the United States Energy infrastructure completed in 2014 Southern Company
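The tonnage figures quoted in the article are easy to sanity-check with a few lines of arithmetic. The sketch below simply recomputes the lifetime lignite consumption and lifetime captured CO2 from the per-month and per-year figures given above; nothing here is new data.

# Cross-check the article's lignite and CO2 figures.
lignite_tons_per_month = 375_000   # stated monthly lignite use
plant_life_years = 40              # stated design life
lifetime_lignite = lignite_tons_per_month * 12 * plant_life_years
print(f"lifetime lignite: {lifetime_lignite / 1e6:.0f} million tons")
# -> 180 million tons, consistent with "almost 185 million tons" above

co2_captured_per_year = 3_000_000  # stated annual CO2 capture
lifetime_co2 = co2_captured_per_year * plant_life_years
print(f"lifetime captured CO2: {lifetime_co2 / 1e6:.0f} million tons")
# -> 120 million tons over a 40-year design life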
Kemper Project
Engineering
3,318
4,683,709
https://en.wikipedia.org/wiki/Tak%20%28function%29
In computer science, the Tak function is a recursive function, named after Ikuo Takeuchi. It is defined as follows:

def tak(x, y, z):
    if y < x:
        return tak(
            tak(x - 1, y, z),
            tak(y - 1, z, x),
            tak(z - 1, x, y)
        )
    else:
        return z

This function is often used as a benchmark for languages with optimization for recursion. tak() vs. tarai() The original definition by Takeuchi was as follows:

def tarai(x, y, z):
    if y < x:
        return tarai(
            tarai(x - 1, y, z),
            tarai(y - 1, z, x),
            tarai(z - 1, x, y)
        )
    else:
        return y  # not z!

tarai is short for tarai mawashi ("to pass around") in Japanese. John McCarthy named this function tak() after Takeuchi. However, in certain later references, the y somehow got turned into the z. This is a small but significant difference, because the original version benefits significantly from lazy evaluation. Though written in exactly the same manner as the others, the Haskell code below runs much faster.

tarai :: Int -> Int -> Int -> Int
tarai x y z
    | x <= y    = y
    | otherwise = tarai (tarai (x-1) y z) (tarai (y-1) z x) (tarai (z-1) x y)

One can easily accelerate this function via memoization, yet lazy evaluation still wins. The best known way to optimize tarai is to use a mutually recursive helper function as follows.

def laziest_tarai(x, y, zx, zy, zz):
    # (zx, zy, zz) stands in for the not-yet-evaluated third argument
    # tarai(zx, zy, zz), which is only computed when it is actually needed.
    if not y < x:
        return y
    z = tarai(zx, zy, zz)
    return laziest_tarai(tarai(x - 1, y, z),
                         tarai(y - 1, z, x),
                         z - 1, x, y)

def tarai(x, y, z):
    if not y < x:
        return y
    return laziest_tarai(tarai(x - 1, y, z),
                         tarai(y - 1, z, x),
                         z - 1, x, y)

Here is an efficient implementation of tarai() in C:

int tarai(int x, int y, int z) {
    while (x > y) {
        int oldx = x, oldy = y;
        x = tarai(x - 1, y, z);
        y = tarai(y - 1, z, oldx);
        if (x <= y) break;
        z = tarai(z - 1, oldx, oldy);
    }
    return y;
}

Note the additional check for (x <= y) before z (the third argument) is evaluated, avoiding unnecessary recursive evaluation. References External links TAK Function Functions and mappings Special functions
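As a concrete illustration of the memoization remark above, Python's functools.lru_cache can be bolted onto the plain recursive tarai with a single decorator. This is a standard-library sketch, not taken from the original sources.

from functools import lru_cache

@lru_cache(maxsize=None)
def tarai(x, y, z):
    # Memoized version of Takeuchi's original function (returns y).
    if y < x:
        return tarai(tarai(x - 1, y, z),
                     tarai(y - 1, z, x),
                     tarai(z - 1, x, y))
    return y

# Without the cache this call does an enormous amount of recomputation;
# with it, repeated (x, y, z) triples are answered from the cache.
print(tarai(18, 6, 0))  # 18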
Tak (function)
Mathematics
644
1,762,360
https://en.wikipedia.org/wiki/Parabolic%20cylinder%20function
In mathematics, the parabolic cylinder functions are special functions defined as solutions to the differential equation d²f/dz² + (ã z² + b̃ z + c̃) f = 0. This equation is found when the technique of separation of variables is used on Laplace's equation when expressed in parabolic cylindrical coordinates. The above equation may be brought into two distinct forms (A) and (B) by completing the square and rescaling z, called H. F. Weber's equations: (A) d²f/dz² − (z²/4 + a) f = 0 and (B) d²f/dz² + (z²/4 − a) f = 0. If f(a, z) is a solution, then so are f(a, −z), f(−a, iz) and f(−a, −iz). If f(a, z) is a solution of equation (A), then f(−ia, z e^(iπ/4)) is a solution of (B), and, by symmetry, f(ia, z e^(−iπ/4)), f(−ia, −z e^(iπ/4)) and f(ia, −z e^(−iπ/4)) are also solutions of (B). Solutions There are independent even and odd solutions of the form (A). These are given by (following the notation of Abramowitz and Stegun (1965)) y₁(a; z) = e^(−z²/4) M(a/2 + 1/4; 1/2; z²/2) (even) and y₂(a; z) = z e^(−z²/4) M(a/2 + 3/4; 3/2; z²/2) (odd), where M(α; β; x) is the confluent hypergeometric function. Other pairs of independent solutions may be formed from linear combinations of the above solutions. One such pair, U(a, z) and V(a, z), is based upon their behavior at infinity: U(a, z) approaches zero for large z with |arg z| < π/2, behaving like e^(−z²/4) z^(−a−1/2), while V(a, z) diverges for large values of positive real z, behaving like (2/π)^(1/2) e^(z²/4) z^(a−1/2). For half-integer values of a, these (that is, U and V) can be re-expressed in terms of Hermite polynomials; alternatively, they can also be expressed in terms of Bessel functions. The functions U and V can also be related to the functions D_ν(z) (a notation dating back to Whittaker (1902)) that are themselves sometimes called parabolic cylinder functions: U(a, z) = D_(−a−1/2)(z). The function D_ν(z) was introduced by Whittaker and Watson as the solution of d²f/dz² + (ν + 1/2 − z²/4) f = 0 that is bounded as z → +∞. It can be expressed in terms of confluent hypergeometric functions. Power series for this function have been obtained by Abadir (1993). Parabolic Cylinder U(a,z) function Integral representation Integrals along the real line: for Re(a) > −1/2, U(a, z) = e^(−z²/4)/Γ(a + 1/2) ∫₀^∞ t^(a−1/2) e^(−t²/2 − zt) dt. The fact that such integrals are solutions to equation (A) can be easily checked by direct substitution. Derivative Differentiating the integral representations with respect to z gives two expressions for the derivative U′(a, z); adding the two gives another expression for the derivative. Recurrence relation Subtracting the first two expressions for the derivative gives a recurrence relation connecting U at neighboring values of a. Asymptotic expansion Expanding e^(−t²/2) in the integrand in powers of t gives the asymptotic expansion of U(a, z), whose leading term is U(a, z) ≈ e^(−z²/4) z^(−a−1/2). Power series Expanding the integral representation in powers of z gives the power series of U(a, z). Values at z=0 From the power series one immediately gets U(a, 0) = √π / (2^(a/2+1/4) Γ(3/4 + a/2)) and U′(a, 0) = −√π / (2^(a/2−1/4) Γ(1/4 + a/2)). Parabolic cylinder Dν(z) function The parabolic cylinder function D_ν(z) is the solution to the Weber differential equation u″ + (ν + 1/2 − z²/4) u = 0 that is regular as Re(z) → +∞, with the asymptotics D_ν(z) ≈ z^ν e^(−z²/4). It is thus given as D_ν(z) = U(−ν − 1/2, z), and its properties, including its integral representation and asymptotic expansion, then directly follow from those of the U-function. If ν is a non-negative integer n, the asymptotic series terminates and turns into a polynomial, namely the Hermite polynomial D_n(z) = 2^(−n/2) e^(−z²/4) H_n(z/√2). Connection with quantum harmonic oscillator The parabolic cylinder function appears naturally in the Schrödinger equation for the one-dimensional quantum harmonic oscillator (a quantum particle in the oscillator potential), −(ħ²/2m) d²ψ/dx² + (1/2) m ω² x² ψ = E ψ, where ħ is the reduced Planck constant, m is the mass of the particle, x is the coordinate of the particle, ω is the frequency of the oscillator, E is the energy, and ψ(x) is the particle's wave-function. Indeed, introducing the new quantities z = x √(2mω/ħ) and ν = E/(ħω) − 1/2 turns the above equation into the Weber equation for the function u(z) = ψ(x(z)). References Special hypergeometric functions Special functions
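The Hermite-polynomial reduction for integer order stated above is easy to verify numerically. The sketch below uses SciPy's scipy.special.pbdv (which evaluates D_ν(z) and its derivative) together with NumPy's probabilists' Hermite module, relying on the equivalent identity D_n(z) = e^(−z²/4) He_n(z), which follows from He_n(z) = 2^(−n/2) H_n(z/√2); the sample points are arbitrary.

import numpy as np
from numpy.polynomial import hermite_e
from scipy.special import pbdv

def d_via_hermite(n, z):
    # D_n(z) for non-negative integer n via the probabilists'
    # Hermite polynomial: D_n(z) = exp(-z^2/4) * He_n(z).
    coeffs = [0] * n + [1]   # coefficient vector selecting He_n
    return np.exp(-z**2 / 4) * hermite_e.hermeval(z, coeffs)

z = np.linspace(-3.0, 3.0, 7)
for n in (0, 1, 4):
    direct = pbdv(n, z)[0]   # D_n(z) evaluated directly by SciPy
    assert np.allclose(direct, d_via_hermite(n, z))
print("D_n(z) matches exp(-z^2/4) * He_n(z) for n = 0, 1, 4")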
Parabolic cylinder function
Mathematics
680
15,811,818
https://en.wikipedia.org/wiki/Catatumbo%20lightning
Catatumbo lightning (Spanish: Relámpago del Catatumbo) is an atmospheric phenomenon that occurs over the mouth of the Catatumbo River where it empties into Lake Maracaibo in Venezuela. Catatumbo means "House of Thunder" in the language of the Bari people. It originates from a mass of storm clouds at high altitude, and occurs for 140 to 160 nights a year, nine hours per day, and with lightning flashes from 16 to 40 times per minute. It occurs over and around Lake Maracaibo, typically over a bog area formed where the Catatumbo River flows into the lake. The phenomenon sees the highest density of lightning in the world, at around 250 flashes per square kilometre per year. In summer, the phenomenon may even occur as dry lightning without rainfall. The lightning changes its flash frequency throughout the year, and it is different from year to year. For example, it ceased from January to March 2010, apparently due to drought, leading to speculation that it might have been extinguished permanently. Location and mechanism Catatumbo lightning usually develops in the area toward the west of Lake Maracaibo. The storms are thought to be the result of winds blowing across the lake and the surrounding swampy plains. These air masses meet the high mountain ridges of the Andes, the Perijá Mountains and the Mérida Cordillera, enclosing the plain from three sides. The heat and moisture collected across the plains create electrical charges and, as the air masses are destabilized by the mountain ridges, result in thunderstorm activity. The phenomenon is characterized by almost continuous lightning, mostly within the clouds. The lightning produces a great quantity of ozone, though whether or not this contributes to the ozonosphere is a topic of disagreement, given the instability of the storm. Cause Russian researcher Andrei Zavrotsky investigated the area several times. He concluded that the lightning has several epicenters in the marshes of Juan Manuel de Aguas National Park, Claras Aguas Negras, and western Lake Maracaibo. In 1991, he suggested that the phenomenon occurred due to cold and warm air currents meeting around the area. The study also speculated that an isolated cause for the lightning might be the presence of uranium in the bedrock. Between 1997 and 2000, a series of four studies proposed that the methane produced by the swamps and the massive oil deposits in the area were a major cause of the phenomenon. The methane model is based on the symmetry properties of methane. Other studies have indicated that this model is contradicted by the observed behavior of the lightning, as it would predict that there would be more lightning in the dry season (January–February), and less in the wet season (April–May and September–October). A team from the Universidad del Zulia has investigated the impact of different atmospheric variables on Catatumbo lightning's daily, seasonal and year-to-year variability, finding relationships with the Inter-Tropical Convergence Zone (ITCZ), El Niño–Southern Oscillation (ENSO), the Caribbean Low-Level Jet, and the local winds and convective available potential energy (CAPE). Using satellite data, NASA counts around 250 lightning flashes per square kilometre per year there. Predictability A 2016 study showed that it is possible to forecast lightning in the Lake Maracaibo basin up to a few months in advance, based in the variability of the Lake Maracaibo Low-Level Jet and its interactions with predictable climate modes like the ENSO and the Caribbean Low-Level Jet.
The study also showed that the forecast accuracy is significantly higher when an index based on a combination of winds and convective available potential energy (CAPE) is used. The index seems to capture well the compound effect of multiple climate drivers. Historical references There are several references in colonial Portuguese and Spanish sources that name this phenomenon the "Lanterns of Saint Anthony" or the "Lighthouse of Maracaibo", as also noted by Alexander Walker in 1822. Based on M. Palacios' book "Viage de Varinas", Prussian naturalist and explorer Alexander von Humboldt described the lightning in 1826. Italian geographer Agustin Codazzi described it in 1841 as "like a continuous lightning, and its position such that, located almost on the meridian of the mouth of the lake, it directs the navigators as a lighthouse." Cultural impact The phenomenon is depicted on the flag and coat of arms of the state of Zulia, which also contains Lake Maracaibo, and is mentioned in the state's anthem. The phenomenon has been known for centuries as the "Lighthouse of Maracaibo", since it is visible for miles around Lake Maracaibo. Some authors have misinterpreted a reference to a glow in the night sky in Lope de Vega's description, in his epic "La Dragontea", of the attack against San Juan de Puerto Rico by Sir Francis Drake as an early literary allusion to the lightning (since in another verse the poet does mention Maracaibo), but it was actually a reference to the glow produced by burning ships during the battle. See also Hector (cloud) References External links World's first seasonal lightning forecast Storm Chaser George Kourounis Investigates the Catatumbo Lightning Phenomenon An Everlasting Lightning Storm, article at Slate.com WWLLN World Wide Lightning Location Network Geography of Zulia Lightning Anomalous weather Regional climate effects Climate of Venezuela
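The frequency figures quoted in the lead imply a striking annual flash count, which a couple of lines of arithmetic make explicit. The ranges below are exactly those stated in the article; the multiplication is the only thing added.

# Rough annual flash count implied by the article's figures:
# 140-160 storm nights/year, ~9 hours/night, 16-40 flashes/minute.
low = 140 * 9 * 60 * 16
high = 160 * 9 * 60 * 40
print(f"roughly {low:,} to {high:,} flashes per year")
# -> roughly 1,209,600 to 3,456,000 flashes per year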
Catatumbo lightning
Physics
1,113
29,061,881
https://en.wikipedia.org/wiki/Monotube%20steam%20generator
A monotube steam generator is a type of steam generator consisting of a single tube, usually in a multi-layer spiral, that forms a once-through steam generator (OTSG). The first of these was the Herreshoff steam generator of 1873. Principles For the sake of efficiency, it is desirable to minimise the steam content of the generator. Heat can then be transferred efficiently into liquid water, rather than into low-density steam. Monotube steam generators may either boil gradually along their length, usually in pumped-circulation systems where this boiling does not disrupt the circulation, or they can use the Benson supercritical system, where the pressure is sufficient to prevent boiling (within the heated volume) altogether. Examples Examples of monotube steam generators include: Industrial steam generators The water-tube boilers of the monotube type used in steam cars, such as: AMC Clayton Steam Generator Doble steam car Gardner-Serpollet Locomobile Company of America White Motor Company, US patent 659,837 of 1900 Flash boilers A flash boiler is a particular type of low-water-content monotube boiler. Modern use is confined to model steam boats but, historically, flash boilers were used in Gardner-Serpollet steam cars. See also List of boiler types, by manufacturer Steam generator (boiler) Steam generator (railroad) References Steam generators
Monotube steam generator
Engineering
278
76,767,723
https://en.wikipedia.org/wiki/Humidesulfovibrio
Humidesulfovibrio is a bacterium genus in the family Desulfovibrionaceae. Humidesulfovibrio arcticus Humidesulfovibrio idahonensis Humidesulfovibrio mexicanus References Bacteria described in 2020 Desulfovibrionales Bacteria genera
Humidesulfovibrio
Biology
60
3,800,142
https://en.wikipedia.org/wiki/Blotto%20%28biology%29
In biology, BLOTTO is a blocking reagent made from nonfat dry milk, phosphate buffered saline, and sodium azide. Its name is an almost-acronym of bovine lacto transfer technique optimizer. It constitutes an inexpensive source of nonspecific protein (milk casein) which blocks protein binding sites in a variety of experimental paradigms, notably Southern blots, Western blots, and ELISA. Its use was first reported in 1984 by Johnson and Elder's lab at Scripps. Prior to 1984, partially purified proteins such as bovine serum albumin, ovalbumin, or gelatin from various species had been used as blocking reagents but had the disadvantage of being expensive. References Immunology
Blotto (biology)
Chemistry,Biology
157
28,239,489
https://en.wikipedia.org/wiki/Lime%20softening
Lime softening (also known as lime buttering, lime-soda treatment, or Clark's process) is a type of water treatment used for water softening, which uses the addition of limewater (calcium hydroxide) to remove hardness (deposits of calcium and magnesium salts) by precipitation. The process is also effective at removing a variety of microorganisms and dissolved organic matter by flocculation. History Lime softening was first used in 1841 to treat Thames River water. The process expanded in use as the other benefits of the process were discovered. Lime softening greatly expanded in use during the early 1900s as industrial water use expanded. Lime softening provides soft water that can, in some cases, be used more effectively for heat transfer and various other industrial uses. Chemistry As lime in the form of limewater is added to raw water, the pH is raised and the equilibrium of carbonate species in the water is shifted. Dissolved carbon dioxide (CO2) is changed into bicarbonate (HCO3−) and then carbonate (CO3 2−). This action causes calcium carbonate to precipitate due to exceeding the solubility product. Additionally, magnesium can be precipitated as magnesium hydroxide in a double displacement reaction. In the process both the calcium (and to an extent magnesium) in the raw water as well as the calcium added with the lime are precipitated. This is in contrast to ion exchange softening, where sodium is exchanged for calcium and magnesium ions. In lime softening, there is a substantial reduction in total dissolved solids (TDS), whereas in ion exchange softening (sometimes referred to as zeolite softening), there is no significant change in the level of TDS. Lime softening can also be used to remove iron, manganese, radium and arsenic from water. Future uses Lime softening is now often combined with newer membrane processes to reduce waste streams. Lime softening can be applied to the concentrate (or reject stream) of membrane processes, thereby providing a stream of substantially reduced hardness (and thus TDS), that may be used in the finished stream. Also, in cases with very hard source water (often the case in Midwestern USA ethanol production plants), lime softening can be used to pre-treat the membrane feed water. Waste products Lime softening produces large volumes of a mixture of calcium carbonate and magnesium hydroxide in a very finely divided white precipitate which may also contain some organic matter flocculated out of the raw water. Processing or disposal of this sludge material may be an additional cost to the process. Drying and re-calcining the waste allows the lime to be almost fully re-cycled, but drying and re-calcining is more expensive than producing new lime from limestone. References Water treatment
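The precipitation chemistry above lends itself to a simple stoichiometric estimate. The sketch below computes a first-pass hydrated-lime dose from dissolved CO2 and bicarbonate, using the textbook reactions CO2 + Ca(OH)2 -> CaCO3 + H2O and Ca(HCO3)2 + Ca(OH)2 -> 2 CaCO3 + 2 H2O (one mole of lime per mole of CO2 and per mole of calcium bicarbonate). The input water-quality values are hypothetical, and a real dose calculation would also account for magnesium removal and excess lime.

# First-pass hydrated lime (Ca(OH)2) dose for lime softening.
# Stoichiometry: 1 mol lime per mol dissolved CO2, and 1 mol lime per
# mol Ca(HCO3)2, i.e. per 2 mol of bicarbonate (HCO3-).
MW_LIME = 74.09   # g/mol Ca(OH)2
MW_CO2 = 44.01    # g/mol CO2
MW_HCO3 = 61.02   # g/mol HCO3-

def lime_dose_mg_per_l(co2_mg_l, bicarbonate_mg_l):
    mmol_co2 = co2_mg_l / MW_CO2          # millimoles per litre
    mmol_hco3 = bicarbonate_mg_l / MW_HCO3
    mmol_lime = mmol_co2 + mmol_hco3 / 2
    return mmol_lime * MW_LIME

# Hypothetical raw water: 8 mg/L dissolved CO2, 200 mg/L bicarbonate.
print(f"{lime_dose_mg_per_l(8.0, 200.0):.0f} mg/L Ca(OH)2")  # ~135 mg/L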
Lime softening
Chemistry,Engineering,Environmental_science
573