Health Challenges and Opportunities across Africa
To conduct effective scientific research, and hence advance science for improving medical care and public health, the context, disease burden and problems hindering good medical research must be clearly identified before any such research is designed and conducted in an African community. Demographics and cultural factors are among the critical factors that must be taken into account for research data to be valid and hence helpful in advancing medical science in Africa. Disease burden in Africa has been well characterized by epidemiological and public health research over the past seventy years. In addition to infectious diseases, non-communicable conditions such as Burkitt’s lymphoma, endomyocardial fibrosis and Buruli ulcer, as well as the biomedical science of heart transplantation, have been well characterized in previous studies in Africa. Despite this stellar work in disease characterization, and despite considerable funding, groundbreaking scientific advances have been few and far between. No science and research project seems to have had an enduring impact or multiplier effect in alleviating disease burden and improving public health.
According to the United Nations World Population Prospects report, Africa’s population in 2018 was estimated at 1.3 billion, compared with 1.4 billion for India and 1.5 billion for China. In 2050 the continent is predicted to be home to 2.5 billion people, while 1.7 billion people are expected to call India home, compared with 1.3 billion in China. Evidently, Africa’s population is expected to double by 2050, while China will experience a population shortfall and India will have made only a modest increase. These demographic shifts will have a profound impact on health, economic and social conditions on the African continent.
Spectrum of Health Problems
While infectious disease is still the preponderant public health problem deserving of heightened scientific effort, non-communicable diseases such as hypertension, heart disease, diabetes and malignancy are in the ascendancy and demand sustained interest from the African scientific community. Even genetic and metabolic disorders hitherto considered esoteric on the continent are increasingly attracting considerable attention from the scientific community within and outside Africa. Further, maternal, neonatal and infant mortality remain major public health problems that exert considerable strain on African health systems, and injuries remain among the top causes of death on the continent. This constellation of medical and public health problems requires a comprehensive, concerted and coordinated pan-African effort by science, government and private enterprise to better understand the problems and to develop better strategies for decreasing the disease burden on society. When executed correctly, such an effort will lessen the disease burden and the socioeconomic strain on individuals, families, communities, government and society as a whole.
Infectious Disease
HIV/AIDS continues to be a major public health problem on the African continent, which has 11% of the world’s population but reportedly 60% of the people living with HIV/AIDS. The WHO states that HIV/AIDS remains the leading cause of death for adults, but progress in medical research and the development of cheaper medications has made it possible for more and more people to receive life-saving treatment. The number of HIV-positive people on antiretroviral medicines is reported to have increased more than eight-fold, from 100 000 in December 2003 to over 1 000 000 in December 2018.
Malaria afflicts 250-450 million Africans yearly, mainly children under five years of age. Malaria is endemic in 42 African countries. Of these, 33 have adopted artemisinin-based combination therapy, the most effective antimalarial medicines available today, as first-line treatment.
River blindness has been eliminated as a public health problem in most African countries, and guinea worm control efforts have resulted in a 97% reduction in cases since 1986. Leprosy is close to elimination, meaning there is less than one case per 10 000 people in the African region.
According to the WHO, most countries are making good progress on preventable childhood illness. Polio is close to eradication, and 37 countries are reaching 60% or more of their children with measles immunization. Overall measles deaths have declined by more than 50% since 1999. In 2005 alone 75 million children received measles vaccines.
Maternal and Child Health
Maternal, newborn and infant mortality remains high overall in Africa. The WHO reports that of the 20 countries with the highest maternal mortality ratios worldwide, 19 are in Africa, and the region has the highest neonatal death rate in the world.
Noncommunicable Diseases
Noncommunicable diseases such as hypertension, heart disease, diabetes and malignancies are on the rise. Even genetic and metabolic disorders once considered esoteric are now a major public health problem in certain areas. A recent report of an “epidemic” of sickle cell disease (SCD) in newborn babies in several counties of the Acholi region of Northern Uganda illustrates a unique public health problem. SCD is an autosomal recessive genetic disease, which means both parents must be carriers of the gene to conceive an affected child; it was the first genetic disease to be completely characterized at the molecular level. Being a carrier of the SCD gene is protective against deadly falciparum malaria, and the prevalence of the sickle cell trait (determined in the late 1950s and early 1960s) is high among the Acholi and Madi peoples of Northern Uganda. The most plausible explanation for the SCD “epidemic” is that, over a generation or two, poor and inadequate health services in the region, including inadequate malaria control measures, led to gradual depletion of the non-carrier population through early death from malaria. Most of the survivors, the currently fecund 16 to 26 year olds, are therefore carriers of the sickle cell gene and hence likely to have an affected child: the vast majority of the reproductive segment of the population are carriers. This unique epidemiological phenomenon is a clear illustration of the impact of poor public health services on community health.
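A back-of-the-envelope calculation (my illustration with assumed figures, not data from the report) shows how a high carrier frequency translates into affected births. If a fraction $c$ of reproducing adults carry the sickle gene, matings are random, and affected adults and new mutations are ignored, the expected fraction of affected newborns is $c^2/4$, since a child of two carriers has a one-in-four chance of inheriting both genes:

$$P(\text{affected birth}) \approx \underbrace{c \cdot c}_{\text{both parents carriers}} \times \underbrace{\tfrac{1}{4}}_{\text{autosomal recessive}}, \qquad c = 0.9 \;\Rightarrow\; P \approx 0.20,$$

that is, roughly one newborn in five affected when nine in ten reproducing adults are carriers, the kind of magnitude an “epidemic” report would reflect.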
There is also the strain on African health systems imposed by the high burden of life-threatening communicable diseases coupled with the rising rates of noncommunicable diseases. Basic sanitation needs remain unmet for many: only 58% of people living in sub-Saharan Africa have access to safe water supplies.
Medical Research in Africa: Challenges, Hurdles and Pitfalls
The challenges of doing research in Africa, in any discipline, are captured in the article below, which appeared in the Bulletin of the Royal Society of Tropical Medicine and Hygiene, UK, and is reproduced with the kind permission of the authors.
Although the comments focus on medical research, the gist of the authors’ views can be applied to research in agriculture, metallurgy and mining, basic and applied research, biomedical science, biotechnology, engineering, fisheries, the environment, etc.
It is close to 30 years since most countries in sub-Saharan Africa gained independence. During this period, some of these countries have put considerable financial investment into education and health, particularly the training of health workers and research scientists. Sadly, the impact of this investment on research productivity and overall improvement of health standards in these countries has been negligible. In terms of publications, for example, most of the significant contributions from sub-Saharan Africa come from either collaborative work with scientists from the ‘West’, or institutions with a large presence of scientists from the “West”.
While this in itself is not a bad trend, it is a worrying situation from the perspective of African scientists working in their own countries who are able to attract independent funding for research. When stripped of collaborations, Africa’s scientific ‘drought’ is evident. Clearly, something is wrong and needs to be addressed.
The problems of medical research in Africa can be broadly categorized as follows:
• Infrastructural (laboratories, equipment, etc.).
• Institutional, i.e., career structure for trained scientists or those wishing to go into medical research.
• Financial, i.e., research funds and personal remuneration.
• Educational, i.e., curricula for medical and allied health professions.
Some of these problems are closely related, although they are considered separately in this article.
Infrastructural problems such as lack of proper laboratories and equipment for research, and poor communication facilities are major factors hampering medical research in Africa, and are largely related to lack of available funds.
In many African countries, there are no proper career structures within medical schools or biomedical research institutions. Highly trained biomedical scientists find themselves doing routine administrative jobs, which have little or no bearing on their training. These scientists are unlikely to be productive in their research and this is a contributing factor to the never-ending brain-drain from Africa. At the same time, partly because of problems of infrastructure, the curricula for biomedical science courses in many African universities do not reflect recent advances in the field of medicine – not the best way to inspire students to consider a career in medical research.
Medical research scientists, like many other professionals in sub-Saharan Africa, are often poorly remunerated. After spending so many years in training, most are unlikely to be happy to spend the rest of their working life earning a salary hardly large enough to make ends meet. This is another contributing factor to the brain-drain from Africa.
The end result of these problems is that, despite years of investment in biomedical education and training in some African countries, the conditions on the ground have not changed. We suggest the following as some potential solutions.
Despite economic hardship, governments in Africa need to recognise the important role of medical research in the overall economic and social development of their respective countries, and thereby give special attention to increased allocation of funding, particularly for key basic research programmes. Governments can do so through part sponsorship and by sourcing funds from bilateral donors, targeted specifically towards medical research and opened to competition according to priority areas of research.
Although structures for funding exist on a limited basis, it would be helpful to strengthen and expand funding in various categories to target scientists at different levels of career development. For example, there could be schemes targeting the training of scientists at Masters degree level, at PhD level, at postdoctoral level, and for more experienced scientists. Funding for research proposals should go hand in hand with the strengthening of government departments that deal with the interpretation and implementation of research findings, so that further funding may be justified.
Training curricula in colleges and universities need to re-emphasize the place of medical research in the career development of students who may be thinking about joining research later in life. Some students graduating from these institutions remain ignorant about careers in research. It would be a good idea, for example on university open days, to invite prominent research scientists to talk about career opportunities in medical research.
There is a lack of African role models, and the apparent disillusionment among those who exist may have discouraged some prospective candidates from taking up research as a career. Again, this is tied in with the lack of funding for most of those already in research, which gives the impression that a career in research is not worthwhile.
It is vital that funding of research in Africa be tied to improving the remuneration of scientists to a level equivalent to that of scientists from developed countries, if they are expected to develop and compete for funding at the international level. This would release more valuable time for dedicated research work, better research outcomes and better prospects at the international level.
Donors funding research in Africa should make resources available in research materials, including access to the internet in African universities and research institutions, so that scientists do not lag behind developments in scientific research and funding opportunities. This would also include major scientific conference organizers setting aside funds for support and scholarships to enable scientists from Africa to attend and present their research findings.
Active institutional collaboration between scientists from resource-rich countries and African scientists should be further strengthened in order to draw more research funding to the continent.
In summary, we believe that there is a great potential for the development and growth of scientific research in Africa by Africans and it is our sincere hope that all stakeholders will play their role in making this a reality.
Authors: Gilbert Kokwaro (Gkokwaro@wtnairobi.mimcom.net) and Samuel Kariuki (Skariuki@wtrl.or.ke).
Experiments, Mathematics and Principles of Natural Philosophy in the Epistemology of Giovanni Battista Baliani
The epistemology of Giovanni Battista Baliani, a leading 17th century Italian scientist, is the focus of this article. In his treatise on motion, De motu naturali gravium solidorum (1638), Baliani grounded his epistemology on empirical principles (essentially the law of the pendulum), in the footsteps of the old mixed mathematics. In a new edition, De motu naturali gravium solidorum et liquidorum (1646), Baliani changed his approach and grounded his epistemology on principles of natural philosophy with metaphysical rather than empirical evidence. An analysis of Baliani’s writings reveals a tension between his empiricist philosophy, which he maintained throughout his life, and his mathematical approach, which was still based on experience, but on an experience derived from measurements, intrinsically affected by errors and hence uncertain. A comparison with Galileo’s epistemology is also made to better understand trends in the mathematical physics of the 1600s.
Capecchi, D. (2017) Experiments, Mathematics and Principles of Natural Philosophy in the Epistemology of Giovanni Battista Baliani. Advances in Historical Studies, 6, 78-94. doi: 10.4236/ahs.2017.62006.
1. Introduction
Giovanni Battista Baliani (1582-1666), the son of a senator, was trained in the law and spent most of his adult life in public service. His scientific interests date to 1611, when he was prefect of the fortress at Savona. There he noted how cannon balls of different weights fall at the same speed, or so he says. In 1613 Filippo Salviati (1582-1614) met Baliani and wrote about him to Galileo (Galilei, 1890: vol. 11, p. 610); thus began a correspondence between the two concerning the experimental determination of the weight of air. In 1615 Baliani visited Galileo in Florence and also met Benedetto Castelli (1578?-1643). The intermittent correspondence with Galileo, which lasted until 1639, shows Baliani to have been a talented experimentalist and an ingenious speculator. In 1630 he wrote to Galileo describing the failure of a siphon to lift water more than ten meters. Baliani blamed atmospheric pressure for this, but expressed uncertainty as to whether the total weight of a column of air many kilometers high was less than that of a ten meter column of water, the height at which Galileo had already noted the failure of sucking pumps.
In astronomy, although Baliani preferred Tycho Brahe’s system to that of Copernicus, he speculated on the possibility of tides being caused by terrestrial motion. In 1632 in Genoa, Baliani met the Jesuit philosopher Niccolò Cabeo (1586-1650), forming a lasting friendship. Baliani returned to Savona in 1647 as governor, a post he held until 1649. He was then elevated to membership of the principal governing body of Genoa, where he remained until his death.
In 1638 Baliani published the short treatise De motu naturali gravium solidorum (Baliani, 1638), which preceded the publication of Galileo’s Discorsi e dimostrazioni matematiche of the same year. Both the content and conclusions were similar, but no one at the time accused Baliani of plagiarism, though Galileo believed that he had not been cited adequately. Baliani’s treatise was well-received and circulated widely even outside Italy (Moscovici, 1967: pp. 18-21). The 1638 edition was followed by a second, enlarged edition in 1646, the De motu naturali gravium solidorum et liquidorum (Baliani, 1646). In 1647 Baliani published a treatise on natural philosophy, entitled Trattato di Gio. Battista Baliano della pestilenza (Baliani, 1653)1. In this work he also stated the principle that population increase, being related to the availability of arable land and food production, would necessarily result in famine were it not for the occurrence of war and pestilence. The quantitative nature of his argument entitles him to be regarded as a predecessor of Malthus (Drake, 1967: pp. 401-402). Baliani’s Trattato appeared shortly after the publication of an important text on a pseudo-Aristotelian philosophy of nature by his friend Cabeo (Cabeo, 1646).
Baliani’s influence on other contemporary scientists was less than it should have been. This is mainly because he was an amateur in the field; moreover, Genoa and Liguria were not very receptive to matters of culture (Lavaggi, 2004: pp. 93-115), and his opposition to Galileo did not make him popular with contemporary scientists. For reasons not covered here, his stance against Galileo most certainly, and unjustly, compromised his cultural (and moral) credibility (Baroncelli, 1998). This apart, in the middle of the 17th century he actively participated in the international debate on various issues regarding mechanics (in the modern meaning of the word). Baliani probably did not arrive at the laws of falling bodies independently of Galileo, but he was the first to introduce the substantially modern concept of inertial mass. His correct conception of atmospheric pressure remained unpublished, although Torricelli may have been aware of it from Galileo. Baliani’s most important contribution, the discussion of elastic shock, seems to have gone unnoticed until quite recently (Moscovici, 1967; Costantini, 1969; Baroncelli, 1998; Savelli, 1953; Drake, 1967: pp. 401-402; Maffioli, 2011: pp. 73-104; Moscovici, 1960; Moscovici, 1965; Nonnoi, 1988; Zouckermann, 1982; Capecchi, 2013; Capecchi, 2014; Capecchi, 2015; Somaglia, 1983).
Serge Moscovici in 1967 was the first to analyse in depth the work of Baliani and restore some credibility to the scientist (Moscovici, 1967). Claudio Costantini in 1969 analysed Baliani’s philosophical conceptions and the reactions of Jesuit scholars to the publication of Baliani’s treatises (Costantini, 1969). Giovanna Baroncelli in 1998 translated the De motu naturali gravium solidorum et liquidorum into Italian, with a brief but quite accurate commentary (Baroncelli, 1998).
Baliani, however, in my opinion, deserves greater attention, and the scope of this study is to further our knowledge of his contribution to science. Attention is centred on Baliani’s empirical epistemology, which has received very little attention in the literature. His position on this point is very interesting, as it casts light on the transition from the old approach of the mixed mathematics of the Renaissance, largely biased by Aristotelian epistemology and involving only a few aspects of natural philosophy, to modern mathematical physics, which attempts to embrace the whole natural philosophy of inanimate beings. It may also further an understanding of Jesuit epistemology, recently the focus of renewed interest (Dear, 1995).
The following aspects will be examined:
1) Relations among Baliani’s natural philosophy, mathematics and empirical observations.
2) Trends from 1638 to 1646.
3) A comparison between Baliani’s and Galileo’s approach to mixed mathematics.
2. The De Motu Naturali Gravium Solidorum, 1638
The De motu naturali gravium solidorum of 1638 opened with a preface where Baliani talked about his experience with falling bodies since 1611. In particular, he stated that he had verified that the speed of a falling body, or rather the time taken to fall from a fixed height, is independent of the weight of the body, thus claiming his priority over Galileo. He also made reference to pendulums of constant length, whose periods were found to be independent of the weights.
Baliani adopted a mixed mathematics approach, in which the merging of information acquired through experience with mathematical deduction was typical. In essence, the scholar of mixed mathematics waives the search for ultimate causes, especially efficient causes, settling for more proximate causes and offering a simpler explanation of complex phenomena.
I premised some of the principles of nature, because I cannot see how otherwise to deduce the conclusions. I decided to call hypotheses (suppositiones) those [propositions] that derive from the mentioned experiments and to separate them from the other postulates (petitiones). I considered it appropriate to ignore geometric postulates as they are easy to understand and thus superfluous (Baliani, 1638: pp. 5-6)2.
The hypotheses assumed by Baliani are the observations confirmed by repeated experiences (Baliani, 1638: p. 5), reported below:
1) Equal vibrations of equi-pendulums of any weight are equi-periodic.
2) Even if unequal, the vibrations of equi-pendulums are equi-periodic.
3) The lengths of unequal pendulums are in duplicate proportion with the periods of their vibrations, that is, as their squares (see the modern gloss after this list).
4) Moment for a heavy body over an inclined plane is to its heaviness as the vertical is to inclined lines (Baliani, 1638: p. 8) .
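In modern notation (my gloss, not Baliani’s wording), hypothesis 3 reads

$$\frac{L_1}{L_2} = \frac{T_1^2}{T_2^2}, \qquad \text{i.e.} \qquad T \propto \sqrt{L},$$

the same proportionality later embodied in the familiar small-oscillation formula $T = 2\pi\sqrt{L/g}$, which of course postdates Baliani.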
The first three hypotheses appear in the form announced by Baliani; they could plausibly be deduced from experimental observation of the pendulum3. The statement of the first hypothesis, according to which the period of oscillation is independent of the weight of the bob, is justified here on an experimental basis. However, Baliani could also justify it on a metaphysical basis. In the preface he wrote:
I resolved to assign the role of the agent to gravity, and to matter, or if you prefer the material body, that of the patient, and therefore estimate that heavy bodies move according to the proportion between gravity and matter, consequently, as long as they move naturally along the vertical line without any impediment, they move equally, given that greater matter or material quantity corresponds to greater gravity (Baliani, 1638: p. 4).
Thus, because of the balance of the activity of gravity and the passivity of matter, both proportional to the quantity of matter, all bodies fall at the same speed, independently of weight.
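Rendered anachronistically in modern symbols (a sketch of mine, not Baliani’s formalism), the argument is that the motive action and the resistance both scale with the quantity of matter $m$, so their ratio, the acceleration, is the same for every body:

$$a = \frac{\text{action of gravity}}{\text{passivity of matter}} = \frac{k\,m}{m} = k, \qquad k \text{ a constant independent of the weight.}$$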
The fourth hypothesis is quite different from the others, and the status Baliani attributed to it seems quite unjustified. It does not express an empirical observation, and it introduces a concept crucial to Baliani, that of moment, the inclination to motion of bodies due to active actions. Moment is assumed here to be proportional to the gravity of the body and to the slope (ratio between height and length) of the plane4.
Other assumptions referred to by Baliani simply as postulates (petitiones) are as follows:
1) Similar portions of vibration of pendulums are to each other, regarding the period, as the whole vibrations.
2) Moments of heavy bodies are to each other as their speeds.
3) The minimal portions of a circle are similar to straight lines.
4) Given a straight line segment, we can conceive a circle so great that its arc, which in the opinion of the senses is equal to the assigned segment, could be assimilated to a straight line.
5) In free fall vertical motion, solids move with equal speed, and according to the proportion observed by pendulums that describe the first portion of the vibrations.
6) In natural motion along an inclined plane, solids move with equal speed and as if they were pendulums that describe that portion of vibrations that according to the judgment of the senses is equal and parallel to the line of the plane on which the said solids move (Baliani, 1638: pp. 10-12) .
It is hard to see any difference in status between some of the postulates and the hypotheses, apart from the third and fourth, that are geometrical in nature. For the other postulates the difference is only due to their lower evidence.
Baliani derived twenty-seven theorems and problems (propositiones) from his hypotheses and postulates. The most interesting of these is propositio 3, from which the law of square times is derived (Baliani, 1638: p. 14). The third hypothesis is crucial here. Using it, the following law can easily be deduced: the space covered by a heavy body appended to a pendulum is in duplicate proportion with the time taken. Indeed, the space covered is proportional to the length of the pendulum, which in turn is in duplicate proportion with its period, that is, the time taken for a full oscillation. The same ratio is maintained for any partial small oscillation (postulate 1), and thus also for a small oscillation starting from the horizon (pendulum at 90˚). This small oscillation, nonetheless, can be confused with a small vertical motion (postulates 4 and 5); hence the law of square times for free fall. In propositio 6, the law of odd numbers is immediately proved (Baliani, 1638: p. 17). Some historians underline the weakness of some of Baliani’s proofs. This aspect is not relevant to this study and the interested reader can refer to the literature5.
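The chain of proportions behind propositio 3 can be compressed as follows (modern symbols, mine not Baliani’s): for two pendulums the spaces covered scale with the lengths, the lengths with the squares of the periods (hypothesis 3), partial oscillations keep the time ratios of whole ones (postulate 1), and the initial arc from the horizontal is identified with a vertical fall (postulates 4 and 5), so that

$$\frac{s_1}{s_2} = \frac{L_1}{L_2} = \frac{T_1^2}{T_2^2} = \frac{t_1^2}{t_2^2}, \qquad \text{i.e.} \quad s \propto t^2.$$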
Galileo’s Reception
Galileo, in his correspondence, even though he agreed in principle with Baliani’s approach, denied the possibility of using principles such as those chosen, which seem too complex to enjoy the seal of certainty. In particular, Galileo did not accept the postulate whereby the initial part of the motion of a pendulum, moving from the horizontal position, is the same as that occurring in a free vertical fall:
It is our intention to investigate and prove geometrically accidents and passions, which occur to heavy movables, naturally and freely descendants over rectilinear spaces, different or by length or by tilt, or by both together. Then in the choice of principles, on which science must be founded, you take as fair notice, some accidents, which have no connection with motions made above lines that are not straight, nor of assignable inclination, nor that in these the different lengths operate as they do in straight lines, but in all respects, are very different things, what looks a serious mistake to me, even more so as it drags along another no lesser [mistake] (Caverni, 1891: vol. 4, p. 313) 6.
Galileo considered “very hard” the assumption of confusing the vertical motions of a free body with the beginnings of the motion of a pendulum:
Very hard is the assumption, as we shall say hereinafter, that the motions for minimal parts of the arcs are as if they were for straight lines, assumption as I say very hard, thus the reader may ask with reason that the amount of arc, that V. S. called minimum, is such even to him, so that, for example, you intend the arc to be minimal if it is under one half of a degree. Furthermore, it would have been necessary to declare which of the straight lines should be taken for the minimum arcs, that is, that, which, departed from the same point of the arc touches the circumference, or as a cord of the minimal arc, or one of many others, that can be drawn from the same first point (Caverni, 1891: vol. 4, pp. 313-314) .
and proposed an alternative proof that, in my opinion, was not very convincing (Capecchi, 2014: pp. 190-191). Galileo somewhat provocatively declared the empirical verification to be irrelevant.
But coming back to my treatise of motion, I argue ex suppositione on motion, defined in that way, so although the consequences do not respond to the accidents of natural motion of falling heavy bodies, it is of little matter to me, since nothing deviates from Archimedes’ demonstration, not being in nature any heavy body that moves into spiral lines. But in this I have been, so to speak, lucky, since the motion of heavy bodies and its accidents promptly respond to the accident I demonstrated about the motion I defined (Galilei, 1890: vol. 18, pp. 12-13) .
Baliani did not agree and replied7:
I thank V. S., also for the patience you had in reading my writings and the considerations that you made. I actually judged that experiences are to be assumed as the principles of sciences, when they are sure, and that from the things known to the senses science leads us towards a knowledge of the unknown […] and that the search for the cause is the object of another source, namely wisdom, as I mentioned in the preface of the book on Motions [De motu naturali gravium solidorum], and as the principles of science should be definitions, axioms, and petitions, which in these natural things are mostly experience, and astronomy [emphasis added], music, mechanics, perspective, and all the rest are based on this (Galilei, 1890: vol. 18, p. 69) .
Galileo’s objections are emphasised and uncritically subscribed to by some historians of science8. While I can accept Galileo’s claim that some of Baliani’s postulates were all but evident at the time, I cannot accept other historians’ criticisms of Baliani, whose approach they believe was less interesting than that of Galileo, especially when this judgement pretends to be based on the achievements of modern (Newtonian) mechanics. A modern mathematician or physicist will find Baliani’s approach quite clever and simple. The law of square times is transferred from an ambit where it could easily be verified to an ambit where it was nearly impossible to carry out reliable measurements. Any modern mathematician will accept the postulate for which the motion on straight lines can be assimilated to that on curved ones9. Even the fact that the oscillations of the pendulum are not isochronous (which some historians anachronistically assume as a limitation on Baliani’s part) does not falsify the law of square times10. The doubt could thus be raised that Galileo’s objections to Baliani were partly due to his self-seeking desire to defend the priority and superiority of his own approach.
A fundamental difference can be noted between the empirical approaches of Baliani and Galileo. Baliani followed the epistemology shared by Aristotelian philosophers for mixed mathematics. They assumed as principles of a science certain empirical statements, which were judged sure, though not logically necessary; the mixed mathematics (sciences) of reference could be optics and the science of weights. Differently from these sciences, where the empirical principles were derived from everyday experience, such as the fall of heavy bodies, in Baliani’s science the principles were empirical laws derived from repeated experiments carried out in laboratories using measuring instruments such as clocks and graduated rules. Galileo instead assumed as a hypothesis a conjectural principle, the proportionality of speed and time, which is suggested neither by everyday experience nor by the laboratory. The mixed science of reference here is astronomy, in the footsteps of Ptolemy, who assumed the hypotheses of eccentrics and epicycles, which in no way could be gathered directly from observational astronomical data11. According to this approach, the hypothesis is accepted (considered true) if it fits the observations.
The particular role Baliani attributed to the epistemology of astronomy is clearly exemplified in the Dialogo secondo of his Opere diverse (Baliani, 1666). Here astronomy is considered as the example of conjectural knowledge, to be contrasted with certain knowledge. In the Dialogo, Baliani asserted that there are three routes towards knowledge:
The first when, once the effects are known, it is recognized that there are causes for them, though unknown. The second when a thing is imagined and it is recognized that if it were true, effects would necessarily occur. The third when from known and not imagined things it is recognized that effects necessarily occur (Baliani 1666: p. 53) .
The first route towards knowledge is the most simple, but it is incomplete. Regarding the second route Baliani referred to Democritus’ atomism, Aristotle’s hylomorphism and astronomy:
I suppose it [the second route towards knowledge] is when philosophers, imagining either atoms, or matter and form, in the world, believe that from these necessarily derives all that their sense suggests […]. I estimate this is a form of knowledge of causes of natural effects, similar to how astronomers explain heavenly motions by imagining epicycles in the heavens, of which they have no sensible knowledge, and maintain that, if they existed, heavenly motions would occur such as those the sense represents to us (Baliani, 1666: p. 54).
The third route towards knowledge, which starts directly from empirical experience and of which there are thus no doubts, is that used by Baliani in the De motu naturali gravium solidorum.
The first “modern” writer to stress the difference between Galileo and Baliani, to my knowledge, is the Jesuit Vincenzo Riccati in his letter to Salvatore Corticellio:
Galileo’s method, if I well understood, follows this approach […] he puts before the congruous hypothesis according to which for equal small times equal degrees of speed are acquired […] this hypothesis cannot be judged at the moment as true, but is considered as a proposed cause that can be the object of analysis and subjected to experience. […] I pass now to Baliani’s method, whose reasoning starts from experience, which in mathematical physical investigations should be considered both as a unique and universal principle (Riccati, 1757: p. 139) .
Riccati, substantially faithful to the Aristotelian epistemology of the Jesuits, expresses his preference for Baliani’s approach:
Galileo’s prudence in submitting his hypothesis to experiment is to be praised, but Baliani’s methods are more exact and apt, in my opinion (Riccati, 1757: p. 140).
Since 1632, in a letter dated 23rd April, Baliani had begun to question Galileo about the possibility of experimental measurements of the law of falling bodies (Galilei, 1890: vol. 14, pp. 343-344). The problem was not so much whether falling bodies follow the law of odd numbers, or whether they move with space in duplicate proportion to time, but rather to make the law quantitatively precise (Koyré, 1953); for instance, to answer the question: how many seconds does it take to pass through a given space? The experiment that Galileo is said to have performed was not convincing12. Baliani realized that the main difficulty was to find an accurate clock, and he identified this clock in a simple pendulum that beats the second13. Moscovici summarises the events that eventually led to the definition of this pendulum.
3. The De Motu Naturali Gravium Solidorum et Liquidorum of 1646
In 1646 Baliani published a new, enlarged edition of his book, entitled De motu naturali gravium solidorum et liquidorum. At first glance the main difference between the two editions is that the later edition is much longer; five additional books were added. The first book was nearly the same as the 1638 volume, with a few substantial corrections (Baroncelli, 1998). The second book contained what, in my opinion, is the most important technical achievement of the treatise: in propositio 3 Baliani showed that impetus (speed) increases proportionally with time. He thus inverted Galileo’s approach. Galileo, from the proportionality of speed and time, had derived the law of square times (in modern terms he carried out an integration); Baliani, from the law of square times, derived the proportionality of speed and time (in modern terms he carried out a differentiation). The third book of Baliani’s treatise dealt with the motion of bodies over inclined planes; the remaining books concerned fluids, a subject which Baliani had long doubted could be treated with the same laws as falling bodies, as testified by his correspondence with Castelli (Cardinali, 1823).
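In modern notation (again my gloss, using a calculus neither author possessed in its later form), the two directions of reasoning are inverse operations:

$$\text{Galileo:}\quad v \propto t \;\Rightarrow\; s = \int_0^t v\,d\tau \propto t^2; \qquad \text{Baliani:}\quad s \propto t^2 \;\Rightarrow\; v = \frac{ds}{dt} \propto t.$$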
The epistemological approach was also overturned. According to Baliani, the science of motion could no longer be based on empirical principles but rather on principles of natural philosophy. Based on these principles, he arrived at a law of motion that contradicted that of 1638: there falling bodies followed the law of odd numbers, here that of natural numbers. Baliani noted that these two laws, if well interpreted, differ only slightly and are indistinguishable on the basis of experiments, but that it was the former, empirical law that was merely approximate. Hence the conclusion: one cannot decide the correctness of a law only on the basis of experience, which can be deceiving. A physical law must thus be deduced from general principles of natural philosophy not grounded on experience.
Of course, one wonders what produced this change and why Baliani, who until then had behaved essentially as a mathematician, edged nearer to the philosophy of nature. Biographical information on Baliani and his correspondence is little known, so one can only guess.
Any hypothesis, however, should take into account the following two points:
1) Baliani, as illustrated in the next section, already showed an interest in natural philosophy before 1638 and maintained throughout his life the same empiricist position: for proven and not conjectural knowledge, philosophy should be based on principles that are immediate consequences of empirical observations.
2) Baliani knew the impetus theory through his contact with Fabri, who in 1646, the same year as the publication of the De motu naturali gravium solidorum et liquidorum, published the Tractatus physicus de motu locali, very close to Baliani’s treatise in its use of the concept of impetus. Baliani’s contact with Fabri is documented only after 164614, apparently too late to justify any influence. Considering, however, that notwithstanding the official date of publication Baliani’s treatise actually appeared only in 1647 (Galluzzi, 2001: p. 267), there would have been time to make all the necessary changes to the text.
On the basis of these considerations, I believe that there are substantially only two hypotheses to justify the apparently irrational stance on Baliani’s part.
1) Baliani was sincerely attracted by the impetus theory and adopted it despite the conflict with his empiricist theory of knowledge. Perhaps he hoped to resolve the conflict later, although there is no documentation in this regard.
2) Baliani strongly wished to establish his originality and priority over Galileo. His implementation of the impetus theory, besides expressing his own originality, showed that Galileo’s law of falling bodies was wrong and challenged his (Galileo’s) epistemological approach. This thesis is an adjustment of that proposed by Costantini (1969) and Galluzzi (2001), who also assumed socio-political reasons connected to Baliani’s contact with Jesuit scientists (besides Cabeo and Fabri, at least Orazio Grassi). Baliani would have been convinced by the Jesuits to oppose Galileo, and the change in his epistemological position would have been simply a pretext.
Baliani’s New Epistemology
Baliani’s epistemological positions are reported as a premise in each of the six books of the second edition. First he declared that he was following in the footsteps of the mixed mathematics of music, mechanics and optics:
So far I have dealt according to my ability with the science of natural motion of solids, arguing and making manifest many of their unknown characteristics from some properties known to the senses. In only this, moreover, any science consists, at least according to Aristotle and the practice by which it can be deduced from the work of Euclid and those involved in pure science, according to which it is not the responsibility of the Geometer to investigate the nature of the quantity, nor the musician that of sound, nor the student of perspective the nature of the light, nor the mechanic the essence of weight (Baliani, 1646: p. 97) .
Note that he did not mention astronomy, and this cannot be by chance if one recalls the previous section of the present paper15. He was, however, not satisfied by the empirical evidence and wanted to deal with the first causes, for which he could equally refer to Aristotle:
However, my mind is not satisfied, I do not say if it does not understand completely, but even if it does not investigate the first causes from which these effects derive, looking for the nature of mobile, or the bodies as movable, even though this examination does not concern the science of motion, but a higher level of wisdom, through which we arrive not so much at the effects but at the essence and the principles of things (Baliani, 1646: pp. 97-98) .
Baliani maintained that he could find a physical foundation for the law of falling bodies, that is, the first causes of motion, in two principles of natural philosophy which he considered incontestable. The first principle concerned the passivity of matter and the activity of gravity (considered as an action or virtue which acts continuously and regularly), the two balancing each other. Thus, if one imagines time divided into small constant intervals, gravity always causes the same downward displacement in each of these intervals, as the same cause must produce the same effect.
The second principle was that of the preservation and accumulation of the impetus, a word and a concept not used in the 1638 edition. Impetus is generated by gravity in each interval Δt, and its function is to maintain constant the acquired motion. This second principle is nothing but a revival of the medieval theory of impetus from the school of Jean Buridan.
For many years now, I think I have penetrated the cause of the acceleration of motion, in the case in which the movable is constantly pushed by an engine; while in motion, in fact, an impetus is impressed in the movable which in turn causes the subsequent motion, for which in the second interval of time there are two engines, which make the motion faster and the impetus greater, in a third interval there are still both engines, but as the impetus is different, and stronger, the motion is even faster, and so on within the following intervals. […]. These considerations suggested to me the idea that the essence of the mobiles is indifferent either to rest or movement, so that, whenever it is given a movement, and whatever cause it comes from, natural or violent, a similar movement follows, or the same movement of before perseveres, at the same speed that it had assumed at any instant, until it is not constrained (Baliani, 1646: pp. 99-101) .
With these two principles it is easy to explain the motion of falling bodies. The first principle allows a proof that all bodies fall with the same speed (in the void). The second principle furnishes the temporal law of the fall.
To obtain the temporal law Baliani considered a sequence of time intervals Δt. In the first time interval there is a displacement Δs due to gravity, which generates a certain speed and a certain impetus. In the second time interval there is still a displacement Δs due to gravity, and another displacement, which Baliani assumed equal to Δs, due to the impetus associated with the speed acquired at the end of the first interval. In total there is therefore a displacement of 2Δs. In the third interval there is a displacement Δs due to gravity and a displacement 2Δs due to the impetus acquired at the end of the second time interval, equal to the sum of the impetus acquired in the first interval and that acquired in the second, thus arriving at a displacement of 3Δs. Going forward with the other intervals, a progression of spaces is generated which follows that of the natural numbers.
At this point Baliani realized that the law he had derived from ‘indubitable’ physical principles was not in agreement with the law he had derived from the ‘indubitable’ empirical principles assumed in the 1638 edition, according to which the fall of bodies followed the square of the times and hence the progression of odd numbers. He did not seem embarrassed by this fact and showed, proving himself a capable mathematician, that the law of natural numbers, for very small time intervals, or equivalently a large number of intervals, approximates that of odd numbers. So, of course, there was a discrepancy between experience and theory, but this discrepancy, although not eliminable, could be made as small as desired. And the defect, for Baliani, lay in the experiment, which cannot yield results with the due precision:
Though not being completely exact, indeed such a [experimental] law is so close to the true one to be indistinguishable from it to the judgement of sense and the scrutiny of accurate and targeted experimental observations, and so whoever retained it correct would be justified (Baliani, 1646: p. 113) .
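Baliani’s claim that the two progressions become indistinguishable can be checked directly (a modern restatement of his argument). Under the natural-number law, the space covered after $n$ intervals of length $\Delta t$ is

$$s_n = \Delta s\,(1 + 2 + \cdots + n) = \Delta s\,\frac{n(n+1)}{2} = \frac{\Delta s}{2}\,n^2\!\left(1 + \frac{1}{n}\right),$$

so for large $n$, that is for time intervals small compared with the duration of the fall, $s_n$ grows as $n^2$, exactly the square-of-times behaviour encoded in the odd-number progression $1 + 3 + \cdots + (2n-1) = n^2$, with a relative deviation of order $1/n$ that no experiment of the time could detect.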
Paolo Galluzzi maintained that Baliani was not particularly original in his approach, which could be completely derived from that of Fabri (Galluzzi, 2001: pp. 265-270). Actually, on this matter it is very difficult to pose a question of priority. Fabri was not the first to propose a mathematical law based on impetus; he was preceded at least by Descartes and Beeckman in 1618 (Koyré, 1966: p. 111; Damerow et al., 1991: pp. 28-29). So the differences between the Tractatus physicus de motu locali and the De motu naturali gravium solidorum et liquidorum may be slight enough to exonerate Baliani from the accusation of plagiarism.
The modern reader should also query the originality of Fabri and Baliani with respect to the medieval theory of impetus, supposing they were acquainted with it. The answer is not easy when physical aspects are in question. From a technical point of view, however, the difference is quite clear. Medieval theories were not explicit as to whether impetus should accumulate with time or with space, and the difference between the two cases was not clear. Fabri and Baliani, coming after Galileo, clearly assumed that impetus accumulates with time. Moreover, the development of mathematics, with the emerging concepts of indivisibles and infinitesimals, furnished a powerful formal apparatus.
4. Writings on the Philosophy of Nature
Not many writings of Baliani remain. From his correspondence with the Jesuit mathematician Gio Luigi Confalonieri (c. 1600-1653) we know, however, that Baliani expressed his interest in this area much earlier than the publication of the De motu naturali gravium solidorum of 1638. In January 1639 he wrote a letter to Confalonieri stating that he had prepared a note on the nature of light “many years before” (Costantini, 1969: p. 54). Again in September 1639, writing to Bonaventura Cavalieri (1598-1647), Baliani said:
Though I made some study in mathematics, my interest is rather in finding effects and causes of natural things, of which I always thought that we know little if we do not have the support of mathematics, which guarantees the truth. Thus I tried to use it. Anyway I never estimated that philosophical matters do not depend on philosophical principles (Moscovici, 1967: p. 204).
Baliani did not find time to publish a full treatise of natural philosophy. He limited himself, in 1647, to writing the less demanding Trattato di Gio. Battista Baliano della pestilenza: ove si adducono pensieri nuovi in più materie (hereinafter Trattato della pestilenza), which, although officially devoted to the plague, dealt with arguments of natural philosophy and, given its nature, permitted a less rigorous treatment of philosophical matters.
The treatise was divided into two books; the first is titled Of the nature of the plague (pp. 1-153), the second It is likely that contagion alone cannot cause the plague (pp. 155-198). From the general title and those of the two books, it would appear to be a medical textbook. Actually this was not the case, and could not be so, because Baliani was not a physician. Certainly the topic of the plague was quite central, particularly in the second book, and it was also what ensured the success of the text, because the subject of the plague was approached for the first time with a scientific method and with reference to modern conceptions of natural philosophy. However, most of the text, and almost all of the first book, covered topics of natural philosophy: in particular, topics which were then classified as meteorological, in line with the Aristotelian tradition of the Meteorologica, along with elements of biology and botany. The treatise was equipped with a good index.
The most general aspects of Baliani’s epistemology expressed in the Trattato della pestilenza are the superiority of the method more geometrico and the refusal of authority. In the preface of his treatise Baliani talked at length about the superiority of applying the method more geometrico to any subject:
Indeed, in my youth, spending most of my time in my study, I read many books, in any field, but without remaining completely satisfied in most cases. Devoting myself more carefully to mathematics, I felt I began to understand and know what knowledge actually is and how the intellect is delighted less with the opinion and more with the science (Baliani, 1653: Preface, not numbered pages) .
The adoption of the method more geometrico for Baliani did not however imply applying mathematics to philosophy. Baliani here followed Aristotelian epistemology, by adapting it to his purpose, according to which any science has its own principles:
Based on this truth I strived to distinguish, to the extent that I could, in any branch of learning, certain things from uncertain ones […] By reducing the discourse to syllogisms, in any of every [discourse] the major premise will be one of such propositions that are naturally known by everybody, the medium premise depends on postulates in mathematics, revelation in theology and experience in philosophy (Baliani, 1653: Preface, not numbered pages) .
where “philosophy” should be understood as natural philosophy; in this field the philosopher should rely only on experience and deduction.
The rejection of the appeal to authority is expressed effectively in the first book:
It may seem that I was not told that to prove my argument I did not refer to the authority of great men, who stated this before me; and whether I knew [the authority], I would have not expressed it, estimating as an abuse, to prove [something] with other than by reason, which in natural things is based on experience; and to whom has some authority on his side, should suffice to have someone who furnishes further arguments (Baliani, 1653: p. 97) .
Baliani at no time made any reference to the approach of the mixed sciences. This appears surprising considering that in his books on motion and in other writings, on matters such as atmospheric pressure, the mechanism of tides and astronomical hypotheses, he sought to follow in Galileo’s footsteps in reading the mathematical characters of the book of nature.
Even a superficial reading suggests analogies between Baliani’s treatise on the plague and Cabeo’s commentary on the Meteorologica. The subject matter is similar, though the treatment of particular aspects is different. There is the same disdain toward authority, the same corpuscular conception of matter, the same search for efficient causes of all phenomena, the same lack of any use of mathematics.
Baliani further specified his ideas on natural philosophy in the Dialogo secondo of his Opere diverse (Baliani, 1666: pp. 39-57). Here he maintained that the approach of mathematics could be of some help to philosophy not only for its deductive arguing but also because it requires a rigorous analysis of its principles. These, he stated, should be absolutely sure and derived from empirical evidence. In this way, any controversy among philosophers could be avoided and philosophy would cease to be conjecture and become as exact as mathematics.
To this purpose, he criticized the Aristotelian theory of elements, as it is not directly derivable from experience, and endorsed the approach followed by the chemists (chimici), who performed an “autopsy” on matter. He also specified that the first principles of matter are water, earth and light, and declared that he could base on them a philosophy having the same certitude as mathematics, where consequences are deduced by means of syllogisms from certain principles.
Various are the casts of mind, so that one considers as true for a reason, another [considers] as false for another [reason]. […] if, instead, there were greater moderation and people accustomed to know with more moderation and to distinguish what is known from what is unknown, it is certain that wise men would be in agreement with things of which there would be exact knowledge (Baliani, 1666: pp. 43-44) .
It is thus clear that Baliani has maintained throughout his life the same basic epistemological assumption: the principles of philosophy should be based on experience and experiments, which are infallible in themselves.
5. Conclusions
Baliani had a significant role in the theory of mechanics during the second half of the 17th century. The scope of this article is to highlight his epistemology, which should be placed in the transition period between the old mixed mathematics, which for philosophers harked back to Aristotle’s foundational epistemology and involved confined areas of the philosophy of nature, and modern mathematical physics, which tries to comprehend the philosophy of nature as a whole. Any study of Baliani is interesting in that, as a fairly good philosopher, he explicitly expressed his epistemological conceptions, though not without some contradictions. The 1638 text is in my opinion Baliani’s most interesting writing, positioned within the framework of classical mixed mathematics and containing among its principles propositions belonging to natural philosophy that should be seen as undoubtedly resulting from sense observations. Baliani introduced an important variant: the undeniable truths of the propositions of the philosophy of nature should not be derived from daily observations, as suggested by Aristotelian epistemology and partly attested by the practice of mixed mathematics (astronomy excluded). The propositions considered indubitable should stem from careful observations conducted in the laboratory with the aid of measuring instruments, clocks and graduated rulers.
The results reported in the De motu naturali gravium solidorum of 1638 were well received in the international arena, in a period when the Galilean theory of falling bodies was not universally shared. Certainly, the simultaneous issue of Galileo’s Discorsi e dimostrazioni matematiche sopra due nuove scienze overshadowed Baliani’s merit. He discussed his epistemological approach and his results with Galileo, defending the originality and merits of the former and the priority of the latter.
As often happens in the history of science, in order to generalize and improve his exposition, in 1646 Baliani published an expanded edition of the 1638 text, the De motu naturali gravium solidorum et liquidorum, in which he reported a very interesting discussion of the motion of fluids that, to my knowledge, remains largely unexamined by historians of science. In this text Baliani overturned his empiricist approach, asserting that observations based on experimental measurements could not be taken as principles of science, affected as they are by measurement errors and therefore not indubitable. Other principles should thus be sought, belonging to natural philosophy but having a metaphysical rather than an empirical derivation. Baliani’s principles were quite simple: the constancy of the action of gravity and the persistence of acquired motion. Perhaps these too could be derived from the senses, but Baliani did not do so, leaving the modern reader to face the contradictions. On the one hand, he supported an empiricist philosophy that required the assumption of certain principles derived from experience; he expressed this epistemological position both in his approach to mixed mathematics in 1638 and in his writings on the philosophy of nature found in the treatise on the plague and in his philosophical dialogues. On the other hand, though limited to the laws of motion, he declared the groundlessness, at least from a theoretical point of view, of a purely empirical approach and wanted to assume, as the principles of the philosophy of nature, propositions not directly derivable from empirical observations.
1. I had no access to the 1647 edition, so reference is to Giovanni Battista Baliani, Trattato di Gio. Battista Baliano della pestilenza: ove si adducono pensieri nuovi in più materie (Genova, 1653).
2. This quotation and all those that follow are translated into English by the author.
3. For a modern reader it is easy to criticize Baliani's hypotheses, as it is now known that the oscillations of pendulums are not isochronous. At the time, however, the situation was different; in particular, Galileo himself was convinced of the isochronism. Among the supporters of the opposite thesis we find Mersenne and Descartes. See for instance (Matthews, 2000: p. 177).
4. Baliani attributed hypothesis 4 to both Galileo and Stevin (Baliani, 1638: p. 5).
5. See for instance (Moscovici, 1967: Chapter 1); (Baroncelli, 1998: Introduction); (Capecchi, 2014: Chapter 4).
6. Here Caverni refers to a lost letter from Galileo to Vincenzo Renieri (1606-1647), reported by Vincenzo Viviani.
7. Baliani's reply is not to Galileo's letter of 7th January 1639, but to a lost letter of Galileo's of 20th June 1639; (Moscovici, 1967: p. 141).
8. See for instance (Moscovici, 1967: pp. 32-36).
9. Cavalieri seemed to appreciate Baliani's mathematical approach (Baliani, 1792: pp. 34-35).
10. It can be shown that for a fixed angular amplitude the period of oscillation is still proportional to the square root of the length of the pendulum. The constant of proportionality, however, depends on the angular amplitude.
11. The similarity of the approaches of Galileo and Ptolemy is suggested for instance in (Drake, 1978); (De Pace, 1993: pp. 318-336).
12. For instance, Galileo in his Dialogo sopra i due massimi sistemi declared that a heavy body in free fall will pass 100 braccia in 5 seconds (Galileo, Opere, vol. 7, p. 250). Assuming that 1 braccio is about 0.55 m (this is a current estimate), the acceleration of gravity (in the modern term) would be about 4.5 m/s², much lower than the true value (9.8 m/s²) and than the value found by other experimentalists such as Mersenne (Koyré, 1953).
13. Letter to Galileo, 19th August 1639 (Galilei, 1890: vol. 18, p. 87).
14. Contacts between Baliani and Fabri are documented by Baliani's correspondence with Grassi (Moscovici, 1967: pp. 256-261), with Mersenne, and by Mersenne's correspondence with Fabri's pupil, Pierre Mousnier (Galluzzi, 2001: p. 267).
15. Note that in the already quoted letter to Galileo of 1 July 1639, Baliani listed the components of mixed mathematics, including astronomy.
Conflicts of Interest
The authors declare no conflicts of interest.
[1] Baliani, G. B. (1638). De motu gravium solidorum. Genoa: Farroni et al.
[2] Baliani, G. B. (1646). De motu gravium solidorum et liquidorum. Genoa: Farroni et al.
[3] Baliani, G. B. (1653). Trattato di Gio. Battista Baliano della pestilenza: Ove si adducono pensieri nuovi in più materie. Genoa: Guasco.
[4] Baliani, G. B. (1666). Di Gio. Battista Baliani opere diverse. Genoa: Calenzani.
[5] Baroncelli, G. (1998). De motu naturali gravium solidorum et liquidorum. Florence: Giunti.
[6] Cabeo, N. (1646). In quatuor libros meteorologicorum Aristotelis commentaria. Rome: Corbelletti.
[7] Capecchi, D. (2013). Experiments and Principles of Natural Philosophy in Baliani’s Epistemology. Proceedings of 33rd SISFA. Acireale, in Press.
[8] Capecchi, D. (2014). A Treatise on the Plague by Giovanni Battista Baliani (pp. 47-55). Proceedings of 34th SISFA. Florence.
[9] Capecchi, D. (2015). The Problem of Motion of Bodies. Dordrecht: Springer.
[10] Cardinali, F. (1823). Nuova raccolta d’autori italiani che trattano del moto dell’acque. In G. Galilei (1890) (Ed.). Le opere di Galileo Galilei (Vol. 13, pp. 348-349).
[11] Caverni, R. (1891). Storia del metodo sperimentale in Italia (6 vols). Florence: Civelli.
[12] Costantini, C. (1969). Baliani e i Gesuiti. Florence: Giunti.
[13] Damerow, P., Freudenthal, G., Mclaughlin, P., & Renn, J. (1991). Exploring the Limits of Pre-Classical Mechanics. New York, NY: Springer.
[14] De Pace, A. (1993). Le matematiche e il mondo. Milan: Francoangeli.
[15] Dear, P. (1995). Discipline & Experience. Chicago, IL: The University of Chicago Press.
[16] Drake, S. (1967). A Seventeenth-Century Malthusian. Isis, 58, 401-402.
[17] Galilei, G. (1890). Le opere di Galileo Galilei (National ed., 20 vols). Florence: Barbera.
[18] Galluzzi, P. (2001). Gassendi and l’Affaire Galilée of the laws of motion. Science in Context, 14, 239-275.
[19] Koyré, A. (1953). An Experiment in Measurement. Proceedings of the American Philosophical Society, 97, 222-237.
[20] Koyré, A. (1966). Études galiléennes. Paris: Hermann.
[21] Lavaggi, A. (2004). Attività e propensioni scientifiche in Liguria nei secoli XVI e XVII. Balbisei. Ricerche Storiche Genovesi, 1, 93-115.
[22] Maffioli, C. (2011). La ragione del vacuo: Why and how Galileo measured the resistance of vacuum. Galileana, 8, 73-100.
[23] Matthews, M. R. (2000). Time for Science Education. New York, NY: Kluwer.
[24] Moscovici, S. (1960). Sur l’incertitude des rapports entre experience et theorie au 17e siècle: La loi de Baliani. Physis, 2, 14-43.
[25] Moscovici, S. (1965). Les développements historiques de la théorie galiléenne des marées. Revue d’Histoire des Sciences et de leurs Applications, 18, 129-240.
[26] Moscovici, S. (1967). L’expérience du mouvement. Jean-Baptiste Baliani disciple et critique de Galilée. Paris: Hermann.
[27] Nonnoi, G. (1988). Il pelago d’aria. Rome: Bulzoni.
[28] Riccati, V. (1757). Opusculorum ad res physicas, & mathematicas pertinentium. Bologna: Volpe.
[29] Savelli, R. (1953). Giovan Battista Baliani e la natura della luce. Bologna: Tipografia Compositori.
[30] Somaglia, A. (1983). Il lume di G.B. Baliani. In Storia delle matematiche in Italia. Istituti di matematica delle facoltà di scienze e ingegneria (pp. 395-403). Cagliari: Università di Cagliari.
[31] Zouckermann, R. (1982). Poids de l’air et pression atmosphérique. Physis, 24, 133-156.
|
I have heard of the term "short-circuiting" being used in C, C++, C#, Java, and many others. What does this mean and in what scenario would it be used?
• wirrbel (Jun 18 '13): There is a Wikipedia article about the concept: en.wikipedia.org/wiki/Short-circuit_evaluation It is an optimization in the evaluation of the && operator.
• RaduMurzea (Jun 18 '13): @wirrbel I believe it applies to || as well... at least it should.
• wirrbel (Jun 18 '13): @RaduMurzea Indeed. Contrast || and && to & and | to see the subtle difference. Have a simple program evaluate 1 || printf("yay"); vs 0 || printf("yay"); and 1 | printf("yay"); vs 0 | printf("yay"); to see the differences.
Short circuiting in C is when a logical operator doesn't evaluate all of its operands.
Take for example &&: it's pretty obvious that 0 && WhoCares is going to be false no matter what WhoCares is, so C just skips evaluating WhoCares. The same goes for 1 || WhoCares; it'll always be true. Because of this, we can write code like
CanFireMissiles && FireMissiles()
This way we avoid doing some potentially impossible operation. If we can't fire the missiles we certainly don't want to try to. This is commonly used with pointers, especially file pointers.
bool isN(int* ptr, int n){
    return ptr && *ptr == n;
}
This plays out in lots of other useful ways to avoid unnecessary computation:
isFileReady() || getFileReady()
This avoids doing extra work if we don't need to.
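Putting the pieces together, here is a complete, compilable sketch of the pointer-guard idiom above (the main function and its test values are illustrative additions, not part of the original answer):

#include <stdio.h>
#include <stdbool.h>

/* True only if ptr is non-NULL AND points to the value n.
 * When ptr is NULL, && short-circuits: *ptr is never evaluated,
 * so the NULL dereference never happens. */
bool isN(int *ptr, int n) {
    return ptr && *ptr == n;
}

int main(void) {
    int x = 42;
    printf("%d\n", isN(&x, 42));   /* 1: both operands evaluated        */
    printf("%d\n", isN(NULL, 42)); /* 0: right operand skipped, no crash */
    return 0;
}

Passing NULL is perfectly safe here precisely because the right operand is never reached.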
• Brian (Jun 18 '13): I don't love CanFireMissiles && FireMissiles(), as it makes me suspect you're abusing short-circuiting to trigger side effects. I feel like you're hiding actions in a conditional. Such code is better written as if(CanFireMissiles){FireMissiles();} or if(CanFireMissiles){didFireMissiles = TryFireMissiles(); if(didFireMissiles){...}}.
• jozefg (Jun 18 '13): I'd argue that the only use is to hide side effects. Usually not the "blowing up a city" sort, but things like dereferencing a pointer or using system resources are also done in this manner in C quite often. See the Wikipedia page: the whole section under Usage is "Hiding side effects".
• mgw854 (Nov 12 '15): @jozefg You can also use it to prevent doing expensive operations, like IsInCache(value) || IsInDatabase(value), where IsInDatabase might take time (especially if using a mobile device and network latency is an issue).
"Short Circuiting" typically refers to "Short Circuit Evaluation" which is a general concept, not just C specific.
Boolean operators evaluation left to right, so any terms that will render the other terms unnecessary are useful. So you might check for a condition that excludes other conditions later on, thus allowing a partial evaluation of the logical operations rather than evaluating the whole thing.
while((x && y) == 1) {
    /* This bit will not execute if x is 0 or y is 0, but if x is 0
     * then y won't even be evaluated, due to short circuit evaluation. */
}
A more complex example:
if((a || b || c || d || e || f || g || h || i || j || k) == 1) {
    /* If any of these is equal to 1, the whole expression is equal to 1,
     * so doesn't it make sense to short circuit evaluate this?
     * It saves a bunch of time. */
}
• Pieter B (Jun 18 '13): Short circuiting is less about saving time and more about not being evaluated. A function not being evaluated will also not have the side effects it would have had were it evaluated.
• (Nov 13 '15): You know, the == 0 is not only unnecessary, it might actually confuse some people.
Short circuit evaluation can lead to some parts of a condition not being evaluated.
For example:
if (true || f()) { ... }
will not execute f.
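To make the effect visible, here is a small self-contained C program; the printf body of f is an illustrative addition so we can tell exactly when an operand function is and is not called:

#include <stdio.h>

/* f has a visible side effect, so we can see whether it ran. */
int f(void) {
    puts("f() was evaluated");
    return 1;
}

int main(void) {
    if (1 || f()) { }  /* left side already true: f() is skipped  */
    if (0 && f()) { }  /* left side already false: f() is skipped */
    if (0 || f()) { }  /* left side undecided: f() IS evaluated   */
    return 0;          /* output: one line, from the third test   */
}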
|
Where Do Files Go When Deleted? NOWHERE.
Everyone deletes files constantly. Manually or automatically, the action has almost become machine-like. But where do files go when deleted? Do they still exist? Can they be recovered? Well, in fact, even after you delete files, they are still on your hard drive…
What if you learned that your files are actually never deleted? Then what really happens to them? Where do files go when deleted? Well, just because you move files to the recycle bin doesn't mean they get deleted. Rather, a file waits patiently in a kind of purgatory for you to decide what to do with it. And even if you empty your recycle bin, the file doesn't get deleted completely; its location is simply marked as empty by the operating system.
Where Do Files Go When Deleted?
As everybody knows, when you delete files on your hard drive, they first go to the recycle bin of the operating system while disappearing from their previous location. You can still access and see your files in the recycle bin; you can even restore them. The recycle bin is actually just a folder like any other folder on your computer. The only difference is that it's hidden: there is a hidden folder named $RECYCLE.BIN on every partition of your Windows drive.
Where Do Files Go When Deleted From the Recycle Bin?
Windows 10 tracks files using either a File Allocation Table or a Master File Table. Those are databases that tell the computer where a file begins and ends. When you remove a file from your system, the location of the file becomes "free", but the file itself hasn't moved; only its pointer is removed. The OS removes the pointer to the file, which then becomes inaccessible. The computer doesn't actually delete the data of the file, which remains on the hard drive until it is overwritten by different data.
But what is a pointer? It is a reference that indicates a specific location in your computer’s memory where your referenced files can be found. It’s a kind of summary of your computer files.
Deleting a file marks a location as free and ready to store new data. But that doesn’t change the fact that the data in your old file is… still there!
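The same idea can be sketched in C. This is only an analogy (a file system stores its "pointers" as table entries on disk, not as C pointers in RAM), but it shows how dropping a reference leaves the data itself untouched:

#include <stdio.h>

int main(void) {
    int data = 1234;      /* the "file contents"                */
    int *entry = &data;   /* the "table entry" referencing them */

    entry = NULL;         /* "delete": drop the reference only  */
    (void)entry;          /* the reference is gone...           */

    printf("%d\n", data); /* ...but the bytes remain: prints 1234 */
    return 0;
}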
This is where the secret of file recovery software resides. These programs, which allow you to restore your deleted pictures, work by finding the data that is still on the disk and restoring a pointer to it, allowing the user to access it again.
How Do I REALLY Delete a File?
For a deletion to be effective, you need to overwrite your files rather than just delete them: you must make sure the data is no longer recognizable by overwriting it with new data.
Then recovery software has a much harder job recovering such files, since they are often corrupted and mixed with other data.
However, the process doesn't always work, as you don't really know "where" your file is in the hard drive's memory. A solution would be to fill ALL the remaining space on your hard drive or flash drive to make sure that the data of the deleted file has been overwritten, but copying hundreds of gigabytes of files onto the drive just to overwrite some deleted files can be a real hassle…
Some free programs, like Eraser, offer a solution to completely erase your files' data from the hard drive when deleting them. They work by simply overwriting the file's data BEFORE removing it, to prevent any possible recovery of the original file's data.
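In outline, the overwrite-then-remove approach can be sketched in portable C as follows. This is a simplified illustration, not how Eraser itself is implemented: real tools also flush writes to the physical disk, may make multiple passes, and still cannot defeat SSD wear-leveling or file-system journaling. The file name is a placeholder.

#include <stdio.h>
#include <string.h>

/* Overwrite every byte of a file with zeros, then remove it. */
int erase_file(const char *path) {
    FILE *fp = fopen(path, "r+b");
    if (!fp) return -1;

    fseek(fp, 0, SEEK_END);   /* measure the file...          */
    long size = ftell(fp);
    rewind(fp);               /* ...then go back to the start */

    char zeros[4096];
    memset(zeros, 0, sizeof zeros);
    while (size > 0) {        /* overwrite in 4 KB chunks     */
        size_t chunk = size > (long)sizeof zeros ? sizeof zeros : (size_t)size;
        fwrite(zeros, 1, chunk, fp);
        size -= (long)chunk;
    }
    fflush(fp);
    fclose(fp);

    return remove(path);      /* finally drop the directory entry */
}

int main(void) {
    /* "secret.txt" is a placeholder name for this sketch. */
    return erase_file("secret.txt") == 0 ? 0 : 1;
}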
So, concretely, no file is actually deleted. And the best way to ensure the safety of your data is to keep your hard drive safe before you throw away, donate or recycle your computer. That way, you know your files won't end up anywhere. If you really want to get rid of them, use software that overwrites everything on the hard drive before discarding it. If you still want to be sure that your files are inaccessible, a hammer or a barrel of gas and a match will do the trick.
See also: How To Recover Deleted Apps On Android
|
Question: How Does Government Regulation Affect Advertising?
Who can I report false advertising to?
If you wish to make a complaint about an advertisement you have seen or heard in NSW, you can contact Ad Standards by phone (02) 6173 1500 or make a complaint online..
What are the issues in advertising?
Top advertising problems today: budget limits (unless you're a multinational company or a global brand, almost every business out there gets by on a limited budget, especially for advertising), compelling content, choosing the right method, measuring effectiveness, and rising above the competition.
How does government regulation affect business?
Governments issue regulations related to environmental practices, employee practices, advertising practices, and much more. Furthermore, government regulations affect how companies structure their businesses, where companies decide to locate, how they classify their employees, and thousands of other things.
When advertising, you should keep legal and ethical considerations in mind: ads must be truthful and not misleading (this is the basic legal standard for advertising), there must be evidence for claims, ethical considerations apply, and advertising is subject to regulation.
Why is government regulation bad?
Poorly designed regulations may cause more harm than good; stifle innovation, growth, and job creation; waste limited resources; undermine sustainable development; inadvertently harm the people they are supposed to protect; and erode the public’s confidence in our government.
Why do we need government regulation?
Regulations are indispensable to the proper function of economies and societies. They create the “rules of the game” for citizens, business, government and civil society. They underpin markets, protect the rights and safety of citizens and ensure the delivery of public goods and services.
How does advertising standards authority affect a business?
Its role is to “regulate the content of advertisements, sales promotions and direct marketing in the UK” by investigating “complaints made about ads, sales promotions or direct marketing”, and deciding whether such advertising complies with its advertising standards codes.
What is the CAP Code in advertising?
UK Code of Non-broadcast Advertising and Direct & Promotional Marketing (CAP Code) is the rule book for non-broadcast advertisements, sales promotions and direct marketing communications (marketing communications). This Code must be followed by all advertisers, agencies and media.
Who investigates false advertising?
The FTC has primary responsibility for determining whether specific advertising is false or misleading, and for taking action against the sponsors of such material. You can file a complaint with the FTC online or call toll-free 1-877-FTC-HELP (1-877-382-4357).
What are some ethical issues in advertising?
Advertising is unethical when it gives false or misleading information about the product, when it fails to give relevant information, or when it is immoral. Misleading claims arise when a company makes false claims regarding the quality, style or history of a product or service.
Who is responsible for regulating advertising?
The Federal Trade Commission (FTC) was established in 1914 to promote “consumer protection” and to monitor “anticompetitive” business practices. Within the FTC, the Bureau of Consumer Protection works to protect against abuses in advertising as well as other areas such as telemarketing fraud and identity theft.
What are the advertising codes of practice?
The Code of Advertising Practice (Botanicals) is a comprehensive Code that sets out to help business operators, whether small or large, to understand and ensure that the appropriate quality standards are applied throughout the supply chain to the products for which they are responsible.
Why does the government regulate advertising?
Advertising control is used by federal and state governments to regulate the use of advertising around cities and roadways. Advertising control prevents businesses from presenting false information, placing billboards in illegal locations and taking other prohibited actions.
How does the federal government regulate advertising?
The Federal Communications Commission (FCC) is another authorized body, which regulates advertising in the mass media. The FCC controls TV and radio broadcast advertising by resolving consumer claims about the content and timing of advertisements. … There are some other government agencies which regulate the advertising market.
How are ads regulated?
Advertising in the UK is governed by a mixture of legislation, common law, regulatory control, and self-regulation. … Generally speaking, legal restrictions are enforced through court action whereas regulatory restrictions are enforced by the regulatory bodies.
What is the purpose of government regulation?
The purpose of much federal regulation is to provide protection, either to individuals, or to the environment. Whether the topic is environmental protection, safety and health in the home or workplace, or consumption of goods and services, regulations can have far reaching effects.
How do I advertise in the UK?
4 popular ways of advertising your business in the UK: Yellow Pages & Thomson Local (the large yellow book that used to be hurled onto your doorstep); the Internet (social media, SEO and online directories are proving to be among the fastest growing advertising placements for businesses); flyers & posters; and promotional products.
What are the unethical practices in advertising?
Forms of unethical advertising: the use of sex, especially the use of women as sex objects; alcohol advertising; tobacco advertising; false claims; exaggerated claims; and unverified claims.
What is the advertising law?
Advertising and Marketing Law refers to the body of laws related to the means and methods of communicating information about a product or service to the public. … Some of the principal concepts of advertising and marketing law include truth in advertising and unfair trade practices.
What can the FTC do to stop false advertising?
These legally-binding orders require companies to stop running the deceptive ad or engaging in the deceptive practice, to have substantiation for claims in future ads, to report periodically to FTC staff about the substantiation they have for claims in new ads, and to pay a fine of $43,280 per day per ad if the company …
|
Series post: the Decline of Britain’s Working Class- Deindustrialization
So this is a series post about the demoralization of Britain's working class, relating to deindustrialization, class politics and social mobility, which all play significant roles in the decline of Britain's working class. But before we get to grips with how this has happened, let's take a quick look at how the working class became almost romanticized in the 20th century.
During the industrial revolution, the working class began to develop a strong culture and identity, as they began to be regarded with respect for the work they carried out. This was in stark opposition to being treated as "peasants" in previous times. A lot of this culture was taken on from the "localized folk culture". This led on to TV programs which were (and some still are) centered around working class life, such as The Simpsons, Family Guy and Only Fools and Horses. This happened around the same time as the emergence of the Labour Party, which at the time strongly represented the working class. This came about partly due to the widening of the franchise, meaning that those working class people who had previously been unable to vote needed a party to represent their interests. This party was also closely affiliated with trade unions, and was in fact created by these unions, as well as by socialist societies, which were there to speak up for the rights of workers. Therefore the working class were becoming more prominent within both culture and politics. Subsequently it was harder for the middle and upper classes to dismiss them, and in fact working class culture was almost romanticized within the middle and upper classes. Until around the 1960s, this respect for the working class remained strong; however, from the mid 60s to early 70s, the journey began towards the new essence in how the working class see themselves, and how they are portrayed by others. So between the industrial revolution, when the working class earned their dignity, and now, when the attitude toward the working class is at best dismissal and at worst dispute, what exactly went wrong, and why?
The cast of Only Fools and Horses, a TV comedy popular around the 1980s, centered around the lives of working class people
The Decline of Industry
It was from the beginning of the 1960s that industry began going into decline. This was partly a result of rising globalization, meaning it was cheaper for Britain to import goods like iron and steel from abroad, due to lower costs, than to continue producing these goods at home. This was not just the case within Britain: many European countries, as well as America, saw a decline in industry for this very reason. All countries which have experienced this have suffered to varying degrees, and I won't go into detail comparing them with the UK. However, it is clear that deindustrialization has had a devastating effect on many areas within the UK, which is very visible even to this day.
Now, whether you are for or against globalization, it is clear that the policies of the government during this time (old Maggie) didn't at all help in mitigating the issues which would arise from industry going abroad. Rather, she almost intensified the suffering. By privatizing and neglecting many industries, such as the coal mines, the government gave up the means to soften the effects of deindustrialization, so the hit on certain areas, and on the workers within them, was far more drastic than it needed to be. Another hit was the so-called "right to buy" scheme put in place during her era, under which millions of formerly council homes were sold off, meaning that most of those who would previously have been eligible for a council property now most likely have to put aside their scarce money in order to save for a "partly owned property". Unsurprisingly, she also put extra emphasis on the myth that, if left to the free market, the level of work put in by an individual equates to what they get out of it for themselves. Clearly a myth, because how on earth are people in areas with few jobs, little money and poor schooling meant to "work themselves up", while it is fine for someone of a higher class to basically inherit wealth and have all of their educational and occupational opportunities handed to them on a plate? It's bullshit! But this might explain why subsequent governments, including the most recent Labour one, tended to "brush off" the struggles now facing the working class, because now opportunities are clearly so equal that there is no need to worry too much… But we need to keep in mind that unemployment within the UK rose at a faster rate than in any European country, therefore I would guess that the government did actually make a difference…
So now we know what happened to jobs for the working class at the time, but why, 40 years later, are the effects of this still so apparent? And what is it about the previous jobs held by the working class that makes them so difficult to replace? Well, back in 1950 the workplace was dominated by jobs in manufacturing, including a thriving coal and steel industry, and many of these jobs were based in areas now experiencing some of the highest rates of deprivation, such as the North East and North West. However, as we probably know, these jobs have now been replaced by those in the service sector, including "public admin, education and health", which accounts for 30.2% of the workforce, and "distribution, hotels and restaurants", which accounts for 18.2% of the workforce. Meanwhile a mere 9% of the workforce are in manufacturing. This clearly emphasises the shift in the types of occupations available. But we know too that the UK is now far better known for its finance industry than for its hospitality industry or its public services. While not many people actually work in highly paid jobs in the financial industry, the financial sector contributed £132 billion to the UK economy, which accounts for 6.9% of economic output; and although 7% of total jobs within the UK are within the financial sector, many of these are low-paid admin roles, such as working in call centers, while a small number of people working in finance earn immense amounts of money. This basically suggests that, unlike before, when the UK was partly known for its good number of industrial jobs dominated by the workers, that role has been taken by a small number of people on very high salaries in the affluent capital, with 49% of the sector's output generated there (source: House of Commons Library). Thus the economy has reverted to revolving around those on the very highest salaries, while the working class are no longer deemed significant at all.
This may seem rather far-fetched, but income inequality, as illustrated by the Gini coefficient, has significantly risen, from 0.26 in 1961 to 0.34 at the end of 2014. Not only this, but it looks like the very rich in Britain are getting far, far richer: a report conducted by the Economic Policy Institute last year shows that the CEO-to-average-worker earnings ratio stood at 20:1 in 1969, while last year it was 271:1. I would argue this is partly due to the decline of trade unions, who stand up for workers' rights, and whose influence peaked back at the start of the 20th century.
How the UK Gini coefficient has changed over time (the higher it is, the worse the inequality)
Contrary to this notion, some may believe that the "working class" are better off now compared with pre-deindustrialization times. Because, let's face it, coal mines were smelly, many ex-coal workers now suffer from persistent lung, heart and goodness knows what other problems, and most of these manufacturing jobs were very physically demanding. Nowadays you are unlikely to see many 13-year-olds working at a till, let alone down a mine, though it does happen… However, while these jobs were hard-going, tedious and rather dangerous, we cannot deny that there was a strong sense of identity and community attached to this kind of work, whereas within the current job sectors aimed at the working class there just isn't the same feeling of belonging. But what is it about the "working class jobs" out there now which has led to a loss of pride? Or are there other factors going alongside this issue?
Looking at the recent rate of unemployment, at just 3.9% as of 2019, it would be difficult to understand why the working class are viewed in such a bad way, because this suggests that the majority of the workforce are secure in their jobs, so why would they or others perceive their jobs so badly? However, we need to recognize that it is far harder to claim unemployment benefit nowadays. Previously, claiming this benefit was based upon the amount of National Insurance paid while working, without the need to "prove" that you were looking for work. Now, to claim "jobseekers allowance" you need to be taking "reasonable steps to look for work", to have worked as an employee, and to have paid "class 1 National Insurance contributions, usually within the last 2 to 3 years." This means the unemployment rate is likely to be hugely underestimated in areas of high structural unemployment, due to the sheer difficulty of finding work. There is therefore more of a problem with unemployment than there might seem. And because our politicians like to boast about the really low unemployment rate, it may lead some people to question the morality of those who are unemployed, and, as we can guess, the right-wing mainstream media is going to parrot these views.
The harsh reality of Australian youth unemployment - MacroBusiness
Just an example of the many newspapers echoing hostile fears toward those who are unemployed
If this profound difficulty in claiming unemployment benefit is not enough, nowadays a lot of these jobs are temporary, part time or based on a zero hour contract. Someone on a zero hour contract basically doesn't know whether they will get any work at all, or 7 long days of it, in any given week, and meanwhile they are not entitled to some workers' rights, like sick pay. Firms as big as Sports Direct and McDonalds are notorious for having their workers on zero hour contracts. But this hasn't always been the case: until very recent years, the number of workers estimated to be on zero hour contracts stayed well below 300,000, while now it is estimated that as many as 1 in 40 workers are on them, and many more are at risk. Furthermore, there are many roles now which only offer temporary contracts, as many local areas are more heavily reliant on the tourism sector than they would have been, say, 50 years ago, due to the decline of industry. This has been seen in large parts of South Wales, once home to coal mines which accounted for "a third of the total world exports of coal"; now, if you live in an area with high levels of tourism, you are rather likely to be hoping to find work within that industry. However, the tourism industry in all parts of the UK (and the world, come to think of it) is heavily dependent on external factors such as the weather, the strength of the economy, and now the coronavirus. And while tourism supports so much of the workforce, 10% within Wales, people employed in it are not necessarily guaranteed alternative work during the winter months, and face high insecurity over whether they will still have their jobs come the following summer… Therefore, due to the high insecurity and the most likely higher turnover rate, it is going to be difficult to replicate the same sense of pride and community, and sadly this trickles down to perceptions. Now it seems that it isn't just workers who have lost this sense of pride: the jobs for "the new working class" are not regarded with the same essence either. But how about work in retail, especially within supermarkets? Surely this is just as essential, if not more so?
For instance, as of 2019 there were "2.9 million UK workers in retail". However, as mentioned in a previous blog post, we know that these retail workers are not very well treated, given the levels of abuse that they face, and they are not well paid either. Tesco, the UK's biggest supermarket, pays all workers a mere £9 an hour. This is just above the minimum wage of £8.72 for over-25s. Now, £9 an hour might therefore seem a reasonable level of pay; nevertheless, work in retail, especially in supermarkets, can be exhausting. Many workers at Lidl have acknowledged that they receive good pay, but many also state that "hours and pay are not constant and reliable", where it is not uncommon to go well over your shift. There have also been reports of "poor management" and "safety concerns." And if you think this standard is poor, there are many well known companies which massively exploit especially young workers, who receive undeniably low pay in line with the various lower minimum wages for under-25s. For under-18s this is only £4.50, while if you are under 21 it is just £6.45.
It is most likely that there are going to be more of these jobs available in the more well-off and highly populated areas, simply because where there are more people of all classes, there is more of this work which needs to be done. Therefore, if you happen to be working class and living somewhere with wealthy people, then you are lucky in the sense that if you hunt for a job for long enough you are probably going to be able to find one, even if it is horrendously paid and you don't know whether you are coming or going with the management.
Redcar, in the North East of England, is a perfect example of a region where this is not the case, with there being really nothing for those living within the area. Here, though tourism plays a small role in the economy, the main industry was steel. Now, due to its decline, it is one of the country's most deprived regions, where as many as four of the wards have over 40.8% of children living in workless households. This decline has evidently affected some of the other jobs on the go in the area, which saw the loss of its Regent Cinema due to structural defects, showing that this decline has been a spiral, whereby the fall of the steel industry has led to the loss of other jobs. Furthermore, those steel workers who have found work since are most likely to have experienced "big wage cuts", so their standard of living has declined.
We can see that Redcar is a nice place; however, if you live here now you are very likely to be reliant on that fact, as opposed to the steel works which can be seen in the background.
This is just one example of the many regions within the UK which have been victims of such sudden decline, and now neglect. And if you happen to live in a region like this, it can genuinely be extremely difficult to find work, let alone work which would be regarded as "dignified".
So while the old "working class" might have encountered significant difficulties, even during the era when the British working class was romanticized, in many ways life is just as hard for the working class as it was, if not harder. With short-lived occupations, where workers often don't know what days they will be needed, it is almost impossible for these people to find a sense of pride in what they do, while the areas once home to the UK's heavyweight industry are now nothing more than pockets of decline. Working class jobs today tend to be behind-the-scenes work, work which actually fulfills everyone's needs yet which most still don't regard with much respect, or work serving people's desires, instead of work within an industry at the "heart of a nation"; so it is rather clear why others no longer perceive the working class in this strong-willed manner. Because, let's face it, there is nothing very romantic about the smell of KFC's chicken and chips making its way out of the shop and onto the streets. Meanwhile, because unemployment statistics like to make the country look good, they make those who are not able to find work look especially bad, as if it were the fault of those people rather than down to the failings of the state… Of course we know that this is far from the truth, if we choose to be aware of the reasons why some areas have particularly high numbers of people with seemingly nothing to do. Or do these areas just happen to be filled with those too lazy to work? Anyway, rant over for now, but you can see how this all contributes to the decline of the working class today…
Of course, I have not managed to explain all the factors which have affected the loss of dignity within jobs for the working class. And while you see people of all ages working in establishments like supermarkets, restaurants and retail outlets, you have probably noticed that a good number of these workers tend to be young, and probably won't be working there all their lives. Therefore it looks like something happens with some young people, meaning that although they do have to work, they are able to make further progress throughout life; in other words, they have become socially mobile. Though as good as it sounds, like anything, the concept of social mobility is incredibly flawed as it stands. The next post in this series will therefore explore the positives and the negatives of this concept, how class constructs are not as rigid as they used to be, and how this slight change has caused much stigma to arise in the mainstream media.
|
How To 2012 Apush Dbq
CHAPTER 16 SOLUTIONS TO MULTIPLE CHOICE QUESTIONS, EXERCISES AND PROBLEMS

MULTIPLE CHOICE QUESTIONS: 1. d; 2. c; 3. b; 4. b; 5. d; 6. b; 7. a; 8. c; 9. a; 10. d

E16.1 Multiple Choice—Securities Laws and SEC Functions: a. 2; b. 1; c. 4; d. 3
E16.2 Multiple Choice—SEC Reporting Requirements: a. 1; b. 5; c. 3; d. 3
E16.3 Multiple Choice—SEC Reporting Requirements: a. 4; b. 5; c. 5
E16.4 Multiple Choice—SEC Reporting Requirements: a. 4; b. 1; c. 5; d. 4
E16.5 Multiple Choice—Corporate Governance: a. 3; b. 5; c. 2
E16.6 Multiple Choice—SEC and Accounting Standards: a. 3; b. 4
E16.7 Multiple Choice—Registration of Securities: a. 5; b. 1; c. 4; d. 3

Duties which may be assigned to the audit committee by the board of directors, other than those associated with the annual audit, may include:
• monitoring the activities of the internal audit staff;
• seeing that any recommendations made by the external auditor are acted upon by the internal auditors;
• reviewing the design of the company's control systems.

c. The audit committee should act as an overseer of the company's internal audit staff. The audit committee would be concerned with such matters as the scope of internal audits, the completion of assignments, and discussion of the results of reviews conducted by the internal audit staff.

d. Members appointed to serve on the audit committee should be outside board members (independent of management) because the NYSE specifies that members be independent of management, outside members would be free from bias or conflicts of interest, and outside members would be more objective in settling disputes between management and the external auditor.

P16.6 Proxy Statements
a. The purpose of proxy statements is to provide full and fair disclosure of significant events in order to allow shareholders to exercise a more informed judgment before voting on corporate
|
Overview: Arterial ischemic stroke (AIS) occurs when the blood flow to the brain is blocked. Anticoagulant therapy (ACT), which helps prevent blood clots, is often used to treat adults with AIS to prevent recurrent strokes.
|
Although fearsome looking, Cape water buffaloes are primarily grazers and dependent on water. They are gregarious, living in herds of 20 to 40 animals on average. Their eyesight is rather poor but their sense of smell is acute. Both males and females have heavy, ridged horns that grow straight out from the head or curve downward and then up. The horns are formidable weapons against predators and are used when jostling for space within the herd; males use the horns in fights for dominance.
|
In this short video, learn about actions that humans can take to mitigate climate change and adapt to its impacts. Use this resource to stimulate thinking and questions about climate change and to provide opportunities for students to design solutions and communicate information.
This resource is a high quality video with an engaging narrative, discussing the need to cut carbon dioxide emissions in order to reduce their concentration in the atmosphere.
This interactive module allows students and educators to build models that explain how the Earth system works. The Click and Learn application can be used to show how Earth is affected by human activities and natural phenomena.
|
Storm whips up sea foam on Catalan coast
Ocean foam on Tuesday covered parts of the Spanish seaside town Tossa de Mar.
It comes after Storm Gloria caused high waves and seawater agitation, which can cause the creation of foam.
Footage showed waves crashing in over Tossa de Mar's beach, with floodwater moving past shoreline restaurants and businesses on its way to the old town centre.
Sea foam is normally formed when the ocean churns up high concentrations of organic matter, such as algal blooms, which act as foaming agents.
Although often naturally occurring, pollution from contaminants found in fossil fuel, sewage and detergents can contribute to the creation of foam.
Authorities in Tossa de Mar described the situation as complicated; however, they said that the foam did not pose an immediate threat.
|
The Impact of Market Valuation on Returns
A market commentary or analysis generally summarizes the quarter's news events that are perceived to be the cause of the market's ups or downs during that period. It can be difficult to write a market analysis, since the direction the market takes as the result of a news item (positive or negative) is actually not as important as the difference relative to what the market had anticipated. Meaning, the market does not react to news, but rather to surprises in the news that it had not already accurately anticipated. If you think it through, this creates quite the challenge, as an effective market analysis should guide us to what will happen in the future. By definition this is unknowable, so why do we write market analyses?
Rather than focus on predicting the future, let's look at what the market is telling us today about its valuation. From there we can better speculate about what unknown news is already priced into the market's valuation. Better yet, let's just look at what is known and see whether it looks like we will make money in the future on a real, risk-adjusted basis. Not as exciting, but really what we need to know.
Valuation Table
The above valuation-oriented data points are extremely well behaved. Separately, and in total, they tell a story that the market is slightly overvalued (relative to its 25-year averages) unless we adjust for the fact that yields are low, in which case the markets are still relatively inexpensive. A deeper look at each component may help us reach a more refined conclusion.
FORWARD P/E = Current market index value (P) divided by the next 12 months' estimated earnings per share (E)
If we are in the middle of an economic earnings cycle and we have a forward P/E similar to the average of the last 25 years, then we might assume the market is properly valued (if we can accept the returns of the last 25 years). We could probably assume that we are well beyond the midpoint, based on the record-breaking duration of this market cycle (11 years and counting). If we look, though, at the strength of the expansion in terms of real GDP growth, we lag the real growth of prior, shorter expansions (see chart below). If a recession is to make up for the excess of the prior recovery, we may be in the early stages, given the paltry real growth over such an extended period of time.
Strength of economic chart
SHILLER P/E = Inflation-adjusted average of the last 10 years' earnings divided into the current index price
At half a standard deviation above its average, the Shiller P/E is a bit high. If we look, though, at the trough created in 2008, we can see from the above chart that it was not until almost a year and a half into the recovery that we actually achieved any GDP growth. This means that the denominator is still being pulled down by one of the worst earnings declines in history. As we get into 2020 and beyond, the averaging window covers constantly improving earnings throughout most of the subsequent 10 years. This will bring the Shiller P/E down (a lower P/E) without any corresponding increase in current earnings.
DIVIDEND YIELD = Collective dividends paid out by stocks held in the index, divided by the market price of the index
The dividend yield is just slightly below the long-term average. What is not in this number is the other form of repayment to shareholders: net share repurchases. If we analyze net shares outstanding, we find that corporations are using more of their free cash flow to buy back shares than to increase dividends. Cash returned to shareholders through buybacks in combination with dividends is going to be higher than the 25-year average.
PRICE-TO-BOOK = Current index market price divided by the aggregate book value of the underlying holdings
Price-to-book is just slightly above the 25-year average. What is not covered is the more aggressive modern practice of writing down and depreciating book value. This was particularly apparent during the Great Recession. Accounting standards have changed over time, and corporations oblige with more rapid and severe write-downs when earnings are impaired, possibly inflating the current ratio when compared to historical averages.
PRICE-TO-CASH FLOW = Current index market price divided by the aggregate operating cash flow of the underlying holdings
A high price-to-cash-flow would mean valuations are high if we are again at cycle midpoints for profits and capital investments. This is the strongest of the indicators that valuations may be running high. If we couple this with our tight labor market, we could conclude that improvements in price-to-cash-flow will be hard to achieve. A more accommodative Fed may reduce borrowing costs but typically cannot stop late-cycle erosion in corporate cash flows.
EARNINGS YIELD SPREAD = Earnings yield is the inverse of the P/E ratio, calculated by dividing earnings per share by the current price. From this yield we then subtract the yield on a Baa-rated bond.
Based on this measure the market is inexpensive. If we look below, we see that the price level of the market has not doubled in over 19 years. This means less than a 4% annual price return. At the same time, your compensation from government fixed income has declined from a 6.2% yield to 2.0%.
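As a quick numeric sketch of the earnings yield spread definition above (the index level, earnings and bond yield here are illustrative placeholders, not the table's actual figures):

#include <stdio.h>

/* Earnings yield spread: invert the P/E, then subtract the Baa bond yield. */
double earnings_yield_spread(double index_price, double eps, double baa_yield) {
    double earnings_yield = eps / index_price;  /* inverse of the P/E ratio */
    return earnings_yield - baa_yield;
}

int main(void) {
    /* Placeholders: index at 3000, earnings per share of 170,
     * Baa corporate bond yielding 3.5%. */
    double spread = earnings_yield_spread(3000.0, 170.0, 0.035);
    /* 170/3000 = 5.67% earnings yield; 5.67% - 3.50% = 2.17% spread */
    printf("earnings yield spread: %.2f%%\n", spread * 100.0);
    return 0;
}

A positive spread, as in this sketch, is what the measure reads as the market being inexpensive relative to bonds.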
Characteristics Table
We have shown that the market may be more fairly valued than would be assumed from a simple weighting of valuation measures. The one true measure of value is discounting future cash flows to determine a present value. In this equation the discount rate will move valuation up or down with the voraciousness of a hungry lion. In other words, a 1% change in the discount rate will drive valuations up or down on the order of 20%. If we are going to stay with low rates for a long time, then it is hard to see how the market can stay low for any length of time absent some outside force driving earnings down. If rates returned to the levels of 2000, we would undoubtedly have severe downside pressure; since rates do not typically move that quickly, it is hard to see that occurring. As we might have guessed, the market is telling us it is fairly valued for what it knows. As for what it does not know, we have to admit that it is impossible to say. For as Denny Green once said, "They are who we thought they were". Denny was talking about blowing a 20-to-nothing halftime lead; we are implying that it is rare for the market not to be properly valued for what we know about it. Other than March of 2000, which you can see from above was a screaming sell situation and ended badly, markets can correct from almost any level at any time. If that were not the case, we would not earn the extra return the market has historically delivered.
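The roughly 20% sensitivity can be illustrated with a simple perpetuity (Gordon growth) model; the cash flow, growth and discount numbers below are illustrative assumptions, not the article's own model:

P = CF / (r - g)

With CF = 100 and g = 2%: at r = 7%, P = 100 / 0.05 = 2000. Cutting r to 6% gives P = 100 / 0.04 = 2500, a 25% rise, while raising r to 8% gives P = 100 / 0.06 ≈ 1667, a 17% fall. A one-point move in the discount rate therefore swings the present value by roughly 20% in either direction, which is the sensitivity described above.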
|
8 years ago
Jammu to be a Separate State?
IndiatimesUpdated on Aug 03, 2013, 06:00 IST
Would the creation of India's 29th state open a Pandora's box for the country's only Muslim-majority state? Jammu and Kashmir Chief Minister Omar Abdullah believes it would.
Omar Abdullah
Reacting to the Congress' decision to carve out Telangana from Andhra Pradesh, Abdullah said doing so without a States Reorganization Commission would mean creation of a new state in the country by succumbing to agitational pressure.
"This is definitely going to encourage those seeking statehood for the Jammu region. The decision has proved that an agitation for eight or nine years is all you need to make the centre bend", Abdullah had told mediapersons in summer capital Srinagar immediately after the Congress decision on Telangana was announced.
The chief minister had also clarified that his government's administrative concessions to the far-flung and cold Ladakh desert region did not mean steps were being taken to grant it union territory status.
The demand for such a status has been gaining ground in the Buddhist-majority region for over four decades.
It was, in fact, to address the regional aspirations of the people of Ladakh, which comprises the Leh and Kargil districts, that autonomous hill councils were formed there to deal with the growing demand for union territory status. The demand for a separate Jammu state is as old as Kashmir's accession to India in 1947.
The last autocratic Dogra maharajas belonged to Jammu. Ever since political power shifted to the Muslim-majority Valley after 1947, voices for a separate Vishal Dogra state started becoming stronger in Jammu.
A day after the Telangana decision, Bhim Singh of the Jammu-based National Panthers Party issued a statement seeking statehood for the Jammu region.
While many argue that Abdullah's statement was a genuine expression of his concern, some maintain that inadvertently he has echoed aspirations of those demanding statehood for Jammu.
There are many constitutional and legal implications that cannot be brushed aside while agitating either for statehood for the Jammu region or union territory status for Ladakh. A debate is also doing the rounds in the state over whether the bifurcation of Jammu and Kashmir would stand the constitutional test, because the state's accession is governed by Article 370 and one entity can't simply be removed.
Its legality apart, the demands for statehood for Jammu or union territory status for the Ladakh region would make the task of the separatist Hurriyat leaders easier.
Although the separatists seek the secession of the entire state from India, what they are actually aiming at is the Muslim-majority Valley where they wield political influence.
"Giving in to the demand of a separate state or union territory status for any region of the state would focus and concentrate the separatist campaign for secession in the Valley," said a senior ruling National Conference leader.
If Abdullah made his anti-Telangana statement with political hindsight, few should grudge him this.
|
7 years ago
Wirelessly-Charged Pacemakers A Reality
IndiatimesUpdated on Jun 12, 2014, 16:45 IST
Washington: Scientists have developed a way to wirelessly transfer power to medical devices deep inside the body - a breakthrough that could lead to novel forms of pacemakers, nerve stimulators and other life-altering gadgets.
Pacemakers That Could be Wirelessly Charged
The wireless system developed by Stanford University Assistant Professor Ada Poon uses the same power as a cell phone to safely transmit energy to chips the size of a grain of rice. The technology paves the way for new "electroceutical" devices to treat illness or alleviate pain.
The discoveries culminate years of efforts by Poon, assistant professor of electrical engineering, to eliminate the bulky batteries and clumsy recharging systems that prevent medical devices from being more widely used.
The technology could provide a path towards a new type of medicine that allows physicians to treat diseases with electronics rather than drugs, researchers said. "We need to make these devices as small as possible to more easily implant them deep in the body and create new ways to treat illness and alleviate pain," said Poon.
Poon's team built an electronic device smaller than a grain of rice that acts as a pacemaker. It can be powered or recharged wirelessly by holding a power source about the size of a credit card above the device, outside the body.
The central discovery is an engineering breakthrough that creates a new type of wireless power transfer - using roughly the same power as a cell phone - that can safely penetrate deep inside the body, said researchers.
An independent laboratory that tests cell phones found that her system fell well below the danger exposure levels for human safety. Her lab has tested this wireless charging system in a pig and used it to power a tiny pacemaker in a rabbit. She is currently preparing the system for testing in humans.
Poon believes this discovery will spawn a new generation of programmable microimplants - sensors to monitor vital functions deep inside the body; electrostimulators to change neural signals in the brain; and drug delivery systems to apply medicines directly to affected areas.
|
Punishments and their Purposes in US
This article on ‘Punishments and their Purposes in US’ written by Nilanjana Banerjee focuses on the purpose of the punishments, the philosophy behind it and the consequences.
Crimes (as defined by the Oxford Dictionary) are "an action or omission which constitutes an offence and is punishable by law"[1]. The definition itself includes the role of punishment, and thus signifies that punishment plays a very vital role in the criminal justice system. There are various ways to classify crimes: violent, non-violent, crimes against humans, etc.
The most basic and common classification is on the basis of severity. In the US, crimes are divided into four different categories, and their respective punishments are decided accordingly. It is pertinent to note that punishments are decided based on the purpose they serve: for a minor crime like jaywalking, no severe punishment is given.
All these will be discussed in detail in this article. Moreover, it will elaborate on the purpose of the punishments, the philosophy behind it and the consequences.
The topic of crimes under US law is very wide, as the states have their own systems alongside the federal system. But what is common everywhere is the division of crimes into four categories and the purposes of punishment, as elaborated hereinafter. This article deals with the various divisions of crimes and associates them with their seriousness. Further, it aims at finding out the purposes of the different punishments.
There are two primary sources of crime statistics in the US: the FBI (Federal Bureau of Investigation) and the BJS (Bureau of Justice Statistics). Together, they paint a not entirely clear picture of crime: the FBI collects data on reported crimes only, while the BJS aims to capture reported as well as unreported crimes. But before all that, it is pertinent to know about crimes and their punishments.
What is Punishment?
The Oxford Dictionary defines "punish" as "to make somebody suffer because they have broken the law or done something wrong", and "punishment" as "an act of punishing somebody".[2]
While Black’s law dictionary defines punishment as ‘a pain, penalty, suffering or confinement inflicted upon a person by the authority of law and the judgment and sentence of a court for some crime or offence committed by him or for his omission of any duty enjoyed by law.’[3]
From these aforementioned definitions, it becomes clear that punishment is given in return for either ‘commission of any act which is forbidden by law’ or ‘omission of any act which was a duty. Unlike the goal of civil litigation which aims at compensating the victims, in a criminal prosecution the goal is to punish the wrongdoer (the person who committed the crime).
Determining the Categories of Punishment
There is no abstract rule of punishment for every crime. Under the US criminal justice system, the severity of the crime determines the intensity of the punishment. For that purpose, crimes under US law are categorized into four types:
1. Felonies
2. Misdemeanor
3. Felony-misdemeanor
4. Infractions
(Top to bottom- highest to least severity)
This system of categorization is known in the US as grading. The substantive classification further depends on the 'intent' element of the crime: 'malum in se' crimes are evil and more dangerous in nature than 'malum prohibitum' crimes. Malum in se conduct is judged evil by ordinary citizens as well, whereas malum prohibitum conduct is wrong only because the statute regulating it makes it so.
Examples of malum in se crimes are murder, rape and arson, while malum prohibitum crimes include tax evasion.
Felonies are the most serious crimes, accompanied by heinous intent, and their consequences are usually grave, such as loss of life or grievous injury; they can also involve destruction of property. Punishment options for felonies include execution, prison time and fines, and can extend to alternative sentences such as rehabilitation or probation.
The second grade of serious crime is the misdemeanor. The intent required is comparatively less heinous than for felonies and, consequently, the result is less grave. Such crimes are punishable with jail time instead of prison time, rehabilitation, or community service. (Prisons are operated by state or federal governments and are more restrictive in nature than jails.)
The third is the felony-misdemeanor, which can be prosecuted either as a felony or as a misdemeanor, depending entirely on the circumstances of the case; the discretion lies with the judge hearing the case, and the punishment follows from how the offense is prosecuted.
The least serious category of crimes is infractions, otherwise known as violations. It includes minor offenses like jaywalking and minor motor vehicle violations. As these offenses are the least serious, the punishments are also minor, mostly fines.
Purpose of Punishment
The punishment attached to an offense is not randomly decided; there is an underlying purpose behind awarding punishments of various types.
In short, punishments are determined by the severity of the crime and by whether the aim is to correct the wrongdoer or to make him undergo comparable pain.
There are several theories of punishment that describe punishments, their underlying aims, the philosophy behind them and their consequences, but the US criminal justice system specifically recognizes five purposes:
1. Deterrence (general and specific)
2. Incapacitation
3. Rehabilitation
4. Retribution
5. Restitution
Each of these five purposes is described in detail hereinafter.
In layman's terms, deterrence is the discouragement of an action by instilling fear of the consequences, in the mind of the wrongdoer specifically or of the public at large generally. The deterrent theory of punishment aims to act on the motive of the offender and create fear in his or her mind. It does so through exemplary punishment that keeps people away from criminal acts.
It is based on the belief that if offenders are not punished, the number of crimes will surge drastically. The deterrent function of punishment has less relevance for crimes that are the conduct of one or a few deviant individuals rather than extraordinary conflicts; it aims at eliminating, or at least reducing the gravity of, the origin of the crime.[4]
Deterrence can be aimed at a single individual, to make that particular individual less likely to commit such crimes in the future; this is specific deterrence. At other times it is directed at the public at large, creating fear in the minds of others that committing such a crime will have draconian consequences; the situation where a wrongdoer is used as an example for others is called general deterrence.
The general meaning of incapacitation is 'to make an individual incapable of committing a crime'.[5] Historically, this was done by execution or banishment. It applies to recidivists, i.e. offenders who have committed repeated offences, also called habitual offenders. In 1996, the US state of California used a form of incapacitation to punish sex offenders.
It was called 'chemical castration': offenders were given hormonal drugs to eliminate or reduce their sex drive. It proved effective only when the drugs were taken voluntarily, not under compulsion, and alongside psychological treatment; the hormonal drugs alone could not make an offender incapable of committing sex crimes.
Basically, these punishments are used for offenders who are dangerous and likely to commit grave crimes unless restrained.
The principle of incapacitation is controversial, as it is often difficult to identify such violent and habitual offenders in advance.
As the term suggests, rehabilitation means restoring an offender to normalcy through training and similar measures after a deviant act. It prevents future crime by altering the wrongdoer's behavior, through therapy, counseling, or vocational or educational training. The philosophy behind rehabilitation is to make the offender capable of returning to society and living there as a law-abiding member of the community.
It is a humane alternative to the harsher forms of punishment under deterrence and incapacitation. In some rehabilitation programs the offender is released under conditions, while in others they serve a longer time as part of the training. Rehabilitation was widely criticized in the US in the 1970s but regained acceptance in the 1980s and 1990s.
Indeed, until the final decades of the 20th century, rehabilitation was the primary goal of sentencing, and researchers found it effective in reducing recidivism. One of its most widely used instruments was the indeterminate sentence, governed by the degree of reform the offender exhibited.
Critics still object that this gives significant discretion to prison administrators.
Retribution aims at preventing future crime by eliminating the victim's desire for personal vengeance against the wrongdoer. It is based on the concept of 'lex talionis', the law of retaliation, expressed in Exodus as 'an eye for an eye'. In general, the severity of the punishment is proportionate to the seriousness of the crime.
It was in the 1970s and 1980s that the US criminal justice system shifted toward retribution. Under retribution it is critically important that offenders actually be guilty of the crime for which the punishment is given, and it is considered improper to allow offenders to go unpunished.
Punishing offenders restores the balance in society and satisfies the victims' sense of vengeance. Retribution also holds that offenders have misused society's benefits and should not be allowed to gain any unethical advantage by doing so; they must be restrained, and the person who suffered the loss must be satisfied.
It has been criticised with the remark that 'satisfaction cannot be a scale to determine punishment', while others counter that doing to offenders what they have done to us is prima facie fair.
The final purpose, restitution, prevents further crime by punishing the defendant financially. The court orders the wrongdoer to pay for the harm done to the victims, which resembles compensation in civil litigation. The Mandatory Victims Restitution Act of 1996 governs how the value to be paid in restitution is determined.
If the defendant is found guilty of the charges at trial, the United States Attorney provides information concerning the loss and the amount to be compensated. Generally, the court orders restitution when it finds it necessary to make the victim whole after a financial loss.
Direct victims are entitled to restitution, and indirect victims are eligible in certain cases too, such as the surviving members of a murdered person's family. The amount covered depends on the loss suffered, the severity of the crime, and the defendant's ability to pay.
Restitution is not the same as a fine: fines are predetermined, specific penalties paid to the court as part of the punishment, whereas restitution is determined case by case and aimed at compensating the victim.
In this way different punishments serve different purposes and they are not any random assortments of penalties.
Crimes are deviant activities that do no good to society; it is therefore vital to curb them and prevent offenders from committing them.
For this purpose, punishments are framed keeping in mind the gravity of the crime, the loss caused, the nature of the offender, and the example set for society. The United States has four categories of crimes classified by seriousness, the highest being felonies and the least infractions, and the punishments vary accordingly.
Basically, there are five purposes acknowledged by US law. Deterrence aims at frightening individuals or society at large, while incapacitation removes the offender from society.
Rehabilitation aims at altering the defendant's behavior, retribution prevents offences by giving the victim's desire for vengeance a sense of satisfaction, and restitution punishes the defendant financially.
[1] Oxford Dictionary
[2] Oxford Dictionary
[3] Black’s Law Dictionary
[4] Elena Maculan & Alicia Gill, The rationale and purpose of criminal law and punishment in transitional context, 40 OXFORD JLS 132 (2020).
[5] Britannica
|
Criminal Offenses are defined as outlined by the U.S. Department of Justice, FBI National Incident-Based Reporting System. For the purposes of complying with the requirements of 34 CFR 668.41, an incident meeting these definitions is considered a crime for the purpose of Clery Act reporting.
1. Murder & Non-Negligent Manslaughter: The willful (non-negligent) killing of one human being by another. Any death caused by injuries received in a fight, argument, quarrel, assault, or commission of a crime is classified as murder and non-negligent manslaughter.
2. Manslaughter by Negligence: Is defined as the killing of another person through gross negligence. Deaths of persons due to their own negligence, accidental deaths not resulting from gross negligence, and traffic fatalities, are not included in the category Manslaughter by Negligence.
3. Sexual Assault: An offense that meets the definition of rape, fondling, incest, or statutory rape as used in the FBI's Uniform Crime Reporting (UCR) program. Per the National Incident-Based Reporting System User Manual from the FBI UCR Program, a sex offense is "any sexual act directed against another person, without the consent of the victim, including instances where the victim is incapable of giving consent."
a. Fondling: The touching of the private body parts of another person for the purpose of sexual gratification, without the consent of the victim, including instances where the victim is incapable of giving consent because of his/her age or because of his/her temporary or permanent mental or physical incapacity. (Because there is no penetration in fondling, this offense will not convert to the SRS as Rape.)
b. Incest: Sexual intercourse between persons who are related to each other within the degrees wherein marriage is prohibited by law.
c. Statutory Rape: Sexual intercourse with a person who is under the statutory age of consent.
4. Robbery: The taking, or attempted taking, of anything of value from one person by another, in which the offender uses force or the threat of violence.
5. Aggravated Assault: An unlawful attack by one person upon another for the purpose of inflicting severe or aggravated bodily injury, usually accompanied by the use of a weapon or by means likely to produce death or great bodily harm.
6. Burglary: The unlawful entry of a structure to commit a felony or a theft.
7. Motor-Vehicle Theft: The theft or attempted theft of a motor vehicle, including automobiles, trucks, motorcycles, and mopeds.
8. Arson: The willful or malicious burning or attempt to burn, with or without intent to defraud, a dwelling house, public building, motor vehicle, or aircraft, personal property of another, etc.
9. Domestic Violence: Includes felony or misdemeanor crimes of violence committed by a current or former spouse of the victim, by a person with whom the victim shares a child in common, by a person who is cohabitating with or has cohabitated with the victim as a spouse or intimate partner, by a person similarly situated to a spouse of the victim under the domestic or family violence laws of the jurisdiction in which the crime of violence occurred, or by any other person against an adult or youth victim who is protected from that person's acts under the domestic or family violence laws of the jurisdiction in which the crime of violence occurred.
10. Dating Violence: Violence committed by a person who is or has been in a social relationship of a romantic or intimate nature with the victim; and, where the existence of such a relationship shall be determined by the victim with consideration of the following factors: (1) The length of the relationship, (2) The type of relationship, (3) The frequency of the interaction between the persons involved in the relationship.
11. Stalking: Engaging in a course of conduct directed at a specific person that would cause a reasonable person to fear for the person's safety or the safety of others, or to suffer substantial emotional distress. Course of conduct means two or more acts, including, but not limited to, acts in which the stalker directly, indirectly, or through third parties, by any action, method, device, or means, follows, monitors, observes, surveils, threatens, or communicates to or about a person, or interferes with a person's property.
12. Liquor-Law Violations: The violation of laws or ordinances prohibiting: the manufacture, sale, transporting, furnishing, possessing of intoxicating liquor; maintaining unlawful drinking places; bootlegging; operating a still; furnishing liquor to a minor or intemperate person; underage possession; using a vehicle for illegal transportation of liquor; drinking on a train or public conveyance; and all attempts to commit any of the aforementioned offenses.
• Drunkenness and driving under the influence are not included in this definition.
13. Drug-Law Violations: Violations of State and local laws relating to the unlawful possession, sale, use, growing, manufacturing, and making of narcotic drugs. The relevant substances include: opium or cocaine and their derivatives (morphine, heroin, codeine); marijuana; synthetic narcotics (Demerol, methadone); and dangerous non-narcotic drugs (barbiturates, Benzedrine).
14. Weapons-Law Violations: The violation of laws or ordinances dealing with weapon offenses, regulatory in nature, such as: manufacture, sale, or possession of deadly weapons; carrying deadly weapons, concealed or openly; furnishing deadly weapons to minors; aliens possessing deadly weapons; and all attempts to commit any of the aforementioned offenses.
15. Larceny: The unlawful taking, carrying, leading, or riding away of property from the possession or constructive possession of another.
16. Vandalism: To willfully or maliciously destroy, injure, disfigure, or deface any public or private property, real or personal, without the consent of the owner or person having custody or control, by cutting, tearing, breaking, marking, painting, drawing, covering with filth, or any other such means as may be specified by local law.
17. Intimidation: To unlawfully place another person in reasonable fear of bodily harm through the use of threatening words and/or other conduct, but without displaying a weapon or subjecting the victim to actual physical attack.
Categories of Prejudice
Hate Crime is defined as a criminal offense committed against a person or property that is motivated, in whole or in part, by the offender’s bias. Bias is a preformed negative opinion or attitude toward a group of persons based on their race, gender, religion, national origin, sexual orientation, gender identity, ethnicity or disability.
For Clery Act reporting purposes, hate crimes include any offense in the following list that is motivated by bias:
• Murder and Non-negligent manslaughter
• Sex Offense
• Robbery
• Aggravated Assault
• Burglary
• Motor Vehicle Theft
• Arson
• Destruction/Damage/Vandalism to Property
• Intimidation
• Larceny/Theft
• Simple Assault
Hate Crime Bias:
• Race
• Gender
• Religion
• National Origin
• Sexual Orientation
• Gender Identity
• Ethnicity
• Disability
NOTE: Additions from 2014 VAWA Negotiated Rulemaking Final Consensus Language
|
How Different Countries are Treating Cryptocurrency
Cryptocurrency has become one of the most popular investments in the world. However, different countries have different rules in place when it comes to cryptocurrency. Some countries are more welcoming than others, and some have not made a clear decision yet. The following article explains how different countries are treating cryptocurrencies. If you are interested in crypto trading, it is imperative to know the position of your country.
Let us take a look at some of the countries and their rules in regard to cryptocurrency.
Russia
The Russian government has been working since 2011 to create a new legal framework for cryptocurrency. So far, the main regulation in place concerns ICOs rather than Bitcoin: there are rules on how much money can be invested in an ICO, with limits designed to prevent scams. Russia also has its own cryptocurrency, the CryptoRuble, slated to launch on March 1st of this year and to be used in future state projects such as Ethereum-based smart contracts and registers, voting, tax collection and public services. The currency will be held by the Central Bank of Russia and accepted by legal entities, individuals and organizations.
Russia is also supposed to have its own exchange where cryptocurrency can be converted into regular money. The Russian authorities see cryptocurrencies as a way for businesses to pay lower taxes, and Russia's extensive mining facilities produce around 5% of the world's Bitcoin supply on average, making it one of the biggest Bitcoin producers in the world.
The United States
America started paying more attention to cryptocurrencies in 2014, after seeing how fast the market was moving overseas. That year the US passed anti-money-laundering rules requiring all exchanges to obtain proper government identification for accounts. This is one of the regions where cryptocurrency has not been fully accepted, owing to the fact that it is a decentralized financial system.
In 2017 the American Securities and Exchange Commission rejected a Bitcoin ETF, before allowing one to proceed in March 2018. The US government is gradually warming up to cryptocurrency, but it wants to make sure its citizens are safeguarded, which is prudent given the several large cryptocurrency hacks earlier this year.
The landscape resembles Mars, which once had rivers before its water supply dried up: countries are at very different stages with the technology. Yet the fact that some companies have been hacked and people's money lost does not mean that proper security cannot be set up for all exchanges using blockchain technology.
China
China, on the other hand, made the news when it banned all ICOs and asked its citizens not to participate in anything of the kind, a shrewd move given that several cryptocurrencies have been created purely to make a quick buck. China implemented the ban well before other countries considered doing so, and anyone still looking at ICOs there will be dealing with Chinese companies only.
China has also made major moves toward regulating cryptocurrency exchanges and restricting how people can invest through them, and the government has decided to forbid its citizens from trading on foreign exchanges. Given these restrictions, crypto seems unlikely to thrive in China as it once did.
It goes without saying that cryptocurrency is gaining traction all over the world. Most countries are working on policies and legal frameworks for regulating the industry.
|
Recent content by nizama
1. Last geometry challenge (very difficult)
Nah.. I think the answer is R/sqrt(2), because you have x^2 = (R/2)^2 + (R/2)^2, which is x^2 = 2R^2/4, so x^2 = R^2/2 and x = R/sqrt(2) :)
2. Please help pascal
I've got two assignments to do in Pascal and I have no idea how to do them :( I don't even have anything similar, and I don't have time to search the web for answers now. I really need help, and I need it as fast as possible, so please, if anyone can help me with this I would really reallllyyy...
3. Fourier series
Not really :frown: On my exam they strictly want it with Fourier, because we also do Taylor, and the task states which one they want.
4. Fourier series
Yes, I just got it.. it is when you use integration in the series, so then it would be ((-1)^n + 1). Thanks :smile:
5. Fourier series
Well, hmm... not really :rolleyes: I got an almost correct answer BUT... you see, b_n is zero but a_n is not, and this is where I am stuck... I can't get the correct a_n. By the way, can you tell me if this is true: cos(n*pi) + cos(0*pi) = (-1)^(n+1)? Thanks again :smile:
6. Fourier series
Hi there!! Can anyone please help me with this one? Find the Fourier sine series of f(x) = sin(x/2) on the interval (0, pi). Thanks a lot :)
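(A worked sketch of that problem, added here for reference — my own computation, not part of the thread. Expanding f(x) = sin(x/2) in sines on (0, π):)

\[
b_n = \frac{2}{\pi}\int_0^{\pi}\sin\frac{x}{2}\,\sin(nx)\,dx
    = \frac{1}{\pi}\int_0^{\pi}\left[\cos\!\left(\left(n-\tfrac{1}{2}\right)x\right)-\cos\!\left(\left(n+\tfrac{1}{2}\right)x\right)\right]dx
    = \frac{(-1)^{n+1}}{\pi}\left(\frac{1}{n-\frac{1}{2}}+\frac{1}{n+\frac{1}{2}}\right)
    = \frac{(-1)^{n+1}\,8n}{\pi\left(4n^2-1\right)},
\]

using \(\sin\!\left(\left(n-\tfrac{1}{2}\right)\pi\right)=(-1)^{n+1}\) and \(\sin\!\left(\left(n+\tfrac{1}{2}\right)\pi\right)=(-1)^{n}\). Hence

\[
\sin\frac{x}{2}=\frac{8}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}\,n}{4n^2-1}\,\sin(nx),\qquad 0<x<\pi.
\]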
7. Please help - integration
The first thing I always do with a function like this is rationalize it, i.e. multiply the whole thing by (x + sqrt(x+2))/(x + sqrt(x+2)). You then get the square root on top: (x + sqrt(x+2)) / (x^2 - x - 2), which is (x + sqrt(x+2)) / ((x-2)(x+1)). Then you can simply separate...
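(The thread never shows the original integrand; assuming it was \(1/\big(x-\sqrt{x+2}\big)\) — an assumption on my part, though consistent with every step quoted — the rationalization reads:)

\[
\frac{1}{x-\sqrt{x+2}}\cdot\frac{x+\sqrt{x+2}}{x+\sqrt{x+2}}
=\frac{x+\sqrt{x+2}}{x^{2}-(x+2)}
=\frac{x+\sqrt{x+2}}{(x-2)(x+1)},
\]

after which the integrand splits into \(\frac{x}{(x-2)(x+1)}\), handled by partial fractions, and \(\frac{\sqrt{x+2}}{(x-2)(x+1)}\), handled with the substitution \(u=\sqrt{x+2}\).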
8. Find the line integral help
Thank you so much, I will try to work from there :)
9. Find the line integral help
Hi!! I had an exam today and got one task that I am not sure how I should have done, so I hope you can help me with this one. It goes: Find the line integral (I'm not good with using symbols, so I'll do my best here) over the line C of (y-z)dx + (z-x)dy + (x-y)dz, i.e. int_C (...
|
"Receive the children in reverence, educate them in love, send them forth in freedom."
- RUDOLF STEINER, founder of Waldorf Education
Strong and Confident Leaders:
Your eighth grader is ready to expand beyond what is comfortable and known and find their own unique path forward. This significant year is used to refine the strong sense of self already emerging and to recognize the students as important leaders in our school community. As the students transition out of middle school, high school readiness is exhibited in the students' confidence when presented with new challenges, ability to take initiative, and willingness to take intellectual risks while pursuing their passions.
Our curriculum responds to the student's desire to improve the world, and to leave their mark upon it as a way of defining who they are. The personalities of the great industrial, scientific and political revolutions from the 1700s to the present day speak to the students and show them how individuals can effect change in the world. The fight for human freedom echoes the growing feeling of independence within the students.
Even as the students stand more strongly as individuals, it remains vital that they continue to work together and resolve any conflict with compassion. As a class, they continue to refine communication skills and to speak and act without polarizing others. They learn to honor and celebrate the uniqueness of themselves and others. By being a Buddy to a first grader, they serve as a positive and loving role model for the younger students.
Your eighth grader will learn:
Mathematics: Algebra, geometry, including platonic solids
Language Arts: Short story writing, creative writing, composition, grammar, speech, presentation
Literature: Shakespeare
Science: Physics, including hydraulics and aerodynamics, chemistry, meteorology, human reproduction
Geography: Antarctica, Australia, Asia, global geography
History: Industrial Revolution, American history, human rights movements
Technology: Research skills, typing, programs, digital presentations
Electives: Students study various topics in art and humanities not already covered in the curriculum
Handwork: Machine sewing pillowcases, pajama pants, shirts and potholders
Woodwork: Stools, benches, wooden boxes
Art: Acrylic painting, clay modeling, charcoal and form drawing
Theater: Class play complementing eighth grade curriculum
Music: Chorus, string ensemble, community-wide concerts
Additional information:
Field trips, including a culminating trip, are used to enrich classroom learning. Prior Waldorf experience is required to attend eighth grade. Classes are filled on a first-come, first-served basis.
Schedule: (M, T, W, F) 8:00 a.m. to 3:15 p.m. & (Thursdays) 8:00 a.m. to 2:45 p.m.
Drop-off opens at 7:30 a.m. & After Care Available
For more information please contact our Admissions Coordinator at 262-646-7497 or [email protected]
|
Your microbiome—the diverse population of microbes (bacteria) that live in your gastrointestinal (GI) tract—plays an important role in the health of your gut, and in other aspects of your physical health, from inflammatory skin disorders to obesity.1 Researchers now say that this role of promoting good health may extend to include the health of your brain and neurological systems.
What’s the Connection?
The thousands of different types of both “good” and “bad” bacteria that populate the microbiome normally exist in a balance that favors beneficial bacteria, which help prevent overgrowth of bad bacteria that can harm your health. Studies have shown there is potential harm associated with an imbalance in the microbiome due to inflammation, intestinal permeability or lack of bacterial diversity, any of which may be associated with an overgrowth of unhealthy bacteria. In some cases, researchers are confronted with the classic “chicken or egg” question with respect to the association between gut bacteria and poor health: does an overgrowth cause the disorder, or does the disorder cause an overgrowth of bad bacteria?
Bacteria on the Brain
Current thinking in the field of neuropsychology and the study of mental health problems includes strong speculation that bipolar disorder, schizophrenia, and other psychological or neurological problems may also be associated with alterations in the microbiome. Researchers speculate that any disruption to the normal, healthful balance of bacteria in the microbiome can cause the immune system to overreact and contribute to inflammation of the GI tract, in turn leading to the development of symptoms of disease that occur not only throughout your body, but also in your brain.2,3,4
This system of connections and communication between the gastrointestinal tract and the brain is referred to as the “gut-brain axis.” Some researchers speculate that infections occurring in early life could negatively affect the mucosal membrane in the GI tract, disrupting the gut-brain axis, and interfering with normal brain development. The mucosal membrane can also be altered in other ways, such as through poor diet choices, radiation treatment, antibiotic use, and chemotherapy.3,4
What You Can Do
To maintain or restore the health of your microbiome and support good overall health, it is important to maintain a strong balance in favor of beneficial bacteria in your digestive tract. The first step is to eat a well-balanced diet that includes foods with probiotic or prebiotic ingredients that support microbial health by helping to restore balance to the gut microbiome.3,4 These are foods that contain live beneficial (probiotic) bacteria and, in the case of prebiotics, contain substances like specific types of fiber that nurture the growth of probiotic bacteria.
Probiotic Foods
Until more is known, look to a variety of readily available probiotic foods that supply varying amounts of beneficial live bacteria grown during carefully controlled fermentation processes. Some of these are common foods you may already be including in your diet, while others may seem a bit more exotic, though they are readily available in supermarkets. Probiotic foods and beverages include plain yogurt, kefir, cottage cheese, fresh sauerkraut, kimchi, kombucha, apple cider vinegar, and miso. Keep in mind that the probiotic effects of these foods are destroyed by cooking, processing, or preserving at high temperatures.
Prebiotic Foods
Unlike probiotic foods, prebiotic foods do not contain living organisms. They contribute to the health of the microbiome because they contain indigestible fibers that ferment in the GI tract, where they are consumed by probiotic bacteria and converted into other healthful substances. Prebiotic foods include artichokes, leeks, onions, garlic, chicory, cabbage, asparagus, legumes, and oats.
Commercial Supplements
While probiotic supplements have been shown to improve symptoms of depression, anxiety, obsessive-compulsive disorder and other psychological and neurological conditions, their use should be discussed with a physician or mental health care provider. Currently, there are no standardized recommendations because researchers have yet to determine which bacterial species or combination of species, doses and delivery systems can best help treat specific symptoms and maintain overall health. It is still unclear whether single strains of probiotic bacteria are as effective as mixtures of different strains, and if or how any combination of bacteria in a supplement can interfere with other medications or other aspects of health.5
Microbial Transplant
Food and supplements represent the most common ways probiotics can be delivered to the gastrointestinal tract, but they are not the only way. Another form of treatment currently under investigation is known as fecal microbial transplant, which is pretty much what it sounds like. In short, fecal matter (stool) from a healthy individual is transplanted to the bowel of someone with a chronic condition, with the goal of repopulating their microbiome with more diverse species of bacteria and reducing symptoms. This technique has been shown to be effective in treating gastrointestinal disorders but studies into its value for psychiatric symptoms are in very early stages.6
Looking Ahead
The majority of studies looking at the gut-brain axis and the use of probiotics to reduce symptoms and occurrence of mental health disorders such as bipolar disorder and schizophrenia are preliminary, preclinical studies that support the theory but have yet to demonstrate a definite effect in humans with mental health issues. Although early research points to positive outcomes, larger population-based and human clinical studies are necessary to determine which patients can truly benefit from probiotic or “psychobiotic” treatment of mental health disorders, and how these treatments can best be applied.7
Last Updated: Nov 18, 2018
|
Cooling High Power LEDs
Most LEDs are designed in SMT (surface mount technology) or COB (chip-on-board) packages. In the new 1~8 W range of surface mount power LED packages, the heat flux at the device's thermal interface can range from 5 to 20 W/cm2. These AlInGaP and InGaN semiconductors have physical properties and limits similar to other transistors or ASICs (application-specific integrated circuits). While the heat of filament lights can be removed by infrared radiation, LEDs rely on conductive heat transfer for effective cooling.
As higher powers are dissipated from LED leads and central thermal slugs, boards have changed to move this heat appropriately. Standard FR-4 technology boards can still be used for LEDs with up to 0.5 W of dissipation, but metallic substrates are required for higher levels. A metal core printed circuit board (MCPCB), also known as an insulated metal substrate (IMS) board, is often used underneath 1W and larger devices. These boards typically have a 1.6 mm (1/16 inch) base layer of aluminum with a dielectric layer attached. Copper traces and solder masks are added subsequently. The aluminum base allows the heat to move efficiently away from the LED to the system.
Increasing power density, a higher demand for light output, and space constraints are leading to more advanced cooling solutions. High-efficiency heat sinks, optimized for convection and radiation within a specific application, will become more and more important.
As with any semiconductor package, thermal resistance plays a significant role in the thermal management of LEDs. The highest thermal resistance in the heat transfer path is the junction-to-board thermal resistance (Rj-b) of the package [2]. Spreading resistance is also an important issue. Thermally enhanced spreader materials, such as metal core PCBs, cold plates, and vapor chambers for very high heat flux applications are viable systems to reduce spreading resistance. [3]
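To make the role of these series resistances concrete, the junction temperature of a power LED can be estimated from a simple resistance chain. The following is a minimal sketch; every number in it is an assumed, illustrative value, not data from the cited studies or any particular datasheet:

# Sketch: LED junction-temperature estimate from a series
# thermal-resistance chain (all values are illustrative assumptions).
P_diss = 3.0             # W, heat dissipated by the LED package (assumed)
T_ambient = 35.0         # deg C, ambient air inside the fixture (assumed)

R_junction_board = 8.0   # K/W, package junction-to-board, Rj-b (assumed)
R_board_sink = 2.0       # K/W, MCPCB dielectric plus interface material (assumed)
R_sink_ambient = 10.0    # K/W, heat sink to ambient via convection/radiation (assumed)

# Resistances in series add; each watt of heat raises the junction
# R_total degrees above ambient.
R_total = R_junction_board + R_board_sink + R_sink_ambient
T_junction = T_ambient + P_diss * R_total

print(f"Total thermal resistance: {R_total:.1f} K/W")          # 20.0 K/W
print(f"Estimated junction temperature: {T_junction:.1f} C")   # 95.0 C
# The result would then be checked against the LED's rated maximum
# junction temperature; a larger or better-placed heat sink lowers
# R_sink_ambient and pulls the junction temperature down.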
Linear heat sinks are available specifically for LED strips, such as OSRAM SYLVANIA's DRAGONstick® linear LED strips, which are widely used in architectural lighting. For example, the maxiFLOW linear heat sink from Advanced Thermal Solutions, Inc. has a patented spread fin array that maximizes surface area for more effective convection (air) cooling, particularly when air flow is limited, such as inside display cases.
Round heat sinks are available specifically for round LED boards, which are used to replace halogen light bulbs, in applications such as spotlights and down lighting. A typical LED spotlight is shown in Figure 2 [5]. Here, a round QooLED© heat sink from Advanced Thermal Solutions is used for cooling three LEDs. The round heat sink has a special star-shaped profile fin design that maximizes surface area for more effective convection (air) and radiation cooling in the vertical mounting orientation, e.g., inside ceilings.
Active thermal management systems can be used for high-flux power LED applications. These include water cooling, two-phase cooling, and fans. Although active cooling methods may not be energy-justifiable for LEDs, reasons for using them include ensuring lumen output or maintenance-free operation, or to meet specific wavelength requirements.
|
A History Rhyme About The Second Coming
By Jack Kinsella
Why all the attention on politics at a website devoted to the discussion of Bible prophecy? It’s a fair question. The basic political difference between a Democrat and a Republican is rooted in where each side believes their source of the authority to govern originates.
Democrats believe in majority rule, whereas Republicans believe the will of the people is limited by ‘natural’ or ‘Divine’ law.
Sir William Blackstone was an 18th century British jurist whose commentaries set forth two main categories of common law; the law of nature and the law of revelation.
James Wilson, one of the signers of the Constitution and one of the first five Supreme Court justices, looked to Blackstone’s ‘Commentaries’ to form his decisions both in Congress and on the bench. Blackstone’s “Commentaries on the Laws of England” has served as a kind of Common Law ‘Bible’ for the United States since the times of the Founding Fathers.
Sir William argued that the law of nature establishes a rule of moral conduct based on God’s law, which recognizes man as created in the image of God. This rule of moral conduct imposes a rule of action upon man that includes duties to God, self, and neighbor.
According to Blackstone, the authority of a Republican government is limited to passing laws setting forth rules of civil conduct only with such laws conforming to the “law of nature.” Under this principle, certain conduct would always be “malum in se” meaning, “bad in and of itself.”
Blackstone argues that the role of government is not to enumerate rights, but to protect those rights already imparted to every individual by God.
His common law model establishes that the duty of government is to commend what is right and prohibit what is wrong.
Blackstone states, “The principal aim of society is to protect individuals in the enjoyment of those absolute rights which were vested in them by the immutable laws of nature.”
Blackstone defined the word ‘law’ as it applies to government in his Commentaries, calling it, “A rule of civil conduct prescribed by the Supreme power in a state, commanding what is right, and prohibiting what is wrong.”
Are you with me so far? Blackstone’s Commentaries outlined the duties and responsibilities of government in a Constitutional Republic.
The difference, Blackstone explains, is that the US Constitution creates the powers that exist according to Divine Revelation, whereas in other countries, the existing powers determine the nature of the constitution.
In the American republic, then, there were “principles which did not change” and which were “certain and universal in their operation upon all the members of the community”, which were the principles of Biblical natural law.
For example, Blackstone’s Commentaries explained:
“To instance in the case of murder: this is expressly forbidden by the Divine…If any human law should allow or enjoin us to commit it we are bound to transgress that human law….But, with regard to matters that are…not commanded or forbidden by those superior laws such, for instance, as exporting of wool into foreign countries; here the…legislature has scope and opportunity to interpose.”
In other words, the laws of nature (or Divine Law) are beyond the power of the majority to overturn.
The Democrats prefer a ‘pure’ American democracy, similar to that of France, where secular humanism is the state religion and a simple majority makes the laws without Divine oversight. Rights are extended or withdrawn by the majority.
That is why, in the pure democracy of the Democratic Party, abortion is about a 'woman's right to choose' and homosexual marriage is a human rights issue. But in that worldview, religious 'rights' exist only to the extent that they are shared by the majority government.
Of pure democracy, President James Madison observed that democracies 'have ever been spectacles of turbulence and contention; have ever been found incompatible with personal security or the rights of property; and have in general been as short in their lives as they have been violent in their deaths.'
John Adams warned the Founders: 'Remember, democracy never lasts long. It soon wastes, exhausts, and murders itself. There never was a democracy yet that did not commit suicide.'
Noah Webster uttered this unintended prophecy regarding pure democracy;
If ever America had a historical equal, it would have had to be the Roman Empire at its peak, which history places at just about the same point as the birth of Christianity.
“The exact transition of when the Roman Republic became the Roman Empire is a subject of disagreement among historians and others. Some believe the change took place in 44 B.C. when Julius Caesar was made perpetual dictator. Other views are that Rome went from Republic to Empire when Mark Antony was defeated at the Battle of Actium in 31 B.C. or when in 27 B.C. the Roman Senate granted extraordinary powers to Octavian (Augustus).”
“The first true Roman Emperor is believed to have been Augustus Caesar, who ruled the empire from 27 B.C. to 14 A.D. His rule was followed by that of Emperor Tiberius (14 to 37 A.D.), Caligula (37 to 41 A.D.), Claudius (41 A.D. to 54 A.D.) and Nero (54 to 68 A.D.).”
“After Nero’s death began a short period known as the ‘Year of 4 Emperors’ when Galba, then Otho, then Vitellius, reigned. Galba and Vitellius were murdered while in office while Otho committed suicide after losing a battle. Vespasian, the fourth of the four emperors, began his rule in the middle of 69 A.D. After Vespasian died of natural causes in 79 A.D. he was followed by Titus, who ruled the fast growing empire until 81 A.D. Domitian, the son of Vespasian and the person known for exiling the apostle John to the island of Patmos in 95 A.D., is dictator of Rome’s world empire until 96 A.D. This brings us to the period when Rome was at its peak of power and wealth.”
In the first three chapters of the Book of the Revelation, Jesus outlines a letter to each of the seven churches then existing in Asia Minor.
Looking back over time, it is apparent that each of the Churches corresponds to an 'epoch' in the life of the Church over the past two thousand years. These seven periods of time are traversed in chronological order, beginning with Ephesus and ending, in our times, with Laodicea.
The Church Epoch that preceded Laodicea was the Church of Philadelphia, or Church of Brotherly Love. It shared a distinction with the Church of Smyrna, in that it received no words of condemnation from the Lord.
The Church of Sardis corresponded with the Reformation Period from 1500-1750, during which time, the Word of God was redistributed to the common man, ending the Roman Church’s monopoly on the Bible.
“Thou hast a few names even in Sardis which have not defiled their garments; and they shall walk with Me in white: for they are worthy.” (Revelation 3:4)
The Church of Philadelphia was the ‘missionary church’ (1750-1900) during which time, the Word of God was carried by missionaries into the far corners of the world.
“Because thou hast kept the Word of My patience, I also will keep thee from the hour of temptation, which shall come upon all the world, to try them that dwell upon the earth.” (Revelation 3:10)
Then we come to the final Epoch of the Church Age before the Return of Christ. What does the world look like from that perspective? Recall that when the Lord was speaking to John, Domitian was on the throne and Rome was at the peak of its power.
The word ‘Laodicea’ is a compound Greek word meaning, ‘justice of the people’ or, literally, the ‘Church of the People’s Rights’ — a letter-perfect historical rhyme for the modern Church.
And so we find yet another set of history “rhymes”. Modern America “rhymes” with Imperial Rome. It remains at the zenith of its power, although it appears poised to slide either into dictatorship or economic and political irrelevance.
The modern Church “rhymes” with ancient Laodicea. Christians that view the Bible as the unchangeable Word of God which opposes such liberal causes as abortion, same sex marriage, euthanasia, etc. and believe that Bible prophecy is coming to pass in this generation are viewed by mainstream Christianity as “extremists” — and maybe even a little bit dangerous.
“And while they looked stedfastly toward heaven as He went up, behold, two men stood by them in white apparel; Which also said, Ye men of Galilee, why stand ye gazing up into heaven? this same Jesus, which is taken up from you into heaven, shall so come in like manner as ye have seen Him go into heaven.” (Acts 1:10-11)
Christianity opens on Rome and closes on America. And the Second Advent “rhymes” with the First.
This Letter was written by Jack Kinsella on April 16, 2012.
|
Incredibly Dense White Dwarf Star Packs the Mass of the Sun Into the Size of the Moon
Researchers say if the star was any more massive it would likely collapse under its own weight and explode
A newly discovered white dwarf star (right) is only slightly larger than the moon (left). Giuseppe Parisi
Astronomers have discovered the smallest white dwarf star ever documented, around 130 light-years from Earth, reports Leah Crane for New Scientist. The star, officially given the catchy designation ZTF J190132.9+145808.7, is roughly the same size as our moon, but what this white dwarf lacks in diameter it makes up for in density, with a mass about 1.3 times that of the sun.
The white dwarf was first spotted by Kevin Burdge, a postdoctoral scholar at Caltech, who was looking over all-sky images captured by the Zwicky Transient Facility at Caltech's Palomar Observatory, according to a statement.
The little star is so dense that researchers think it is the progeny of a merger between two formerly separate white dwarfs, they report in a study published this week in the journal Nature.
A white dwarf emerges when certain stars begin to “peter out,” writes Emily Conover for Science News. More commonly, these pint-sized stars are about the size of Earth, which has a radius of 3,958 miles; this white dwarf, by contrast, tacks just 248 miles onto the moon’s roughly 1,000-mile radius.
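As a rough sanity check on those figures (my own arithmetic, using only the numbers quoted above), packing 1.3 solar masses into a radius of about 1,248 miles gives a mean density near 10^11 kg per cubic meter:

import math

# Back-of-the-envelope mean density from the figures in this article.
M_SUN = 1.989e30        # kg, solar mass
MILE_TO_M = 1609.34     # meters per mile

mass = 1.3 * M_SUN                  # ~1.3 solar masses, per the study
radius = (1000 + 248) * MILE_TO_M   # moon-like radius plus 248 miles, in m

volume = (4.0 / 3.0) * math.pi * radius**3
density = mass / volume             # kg per cubic meter

print(f"Mean density: {density:.2e} kg/m^3")  # roughly 7.6e10 kg/m^3
# Millions of times denser than lead (~1.13e4 kg/m^3).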
In the statement, study author Ilaria Caiazzo, a Caltech astrophysicist, explains that the star’s huge mass paired with its petite size isn’t so strange in the world of white dwarfs.
“It may seem counterintuitive, but smaller white dwarfs happen to be more massive," Caiazzo says. "This is due to the fact that white dwarfs lack the nuclear burning that keep up normal stars against their own self gravity, and their size is instead regulated by quantum mechanics."
Apart from being one of the most massive white dwarfs on record, the star has two other unique characteristics: it’s spinning very fast and it has an extremely powerful magnetic field. Per the study, the star does a full rotation about every seven minutes, and the strength of its magnetic field ranges between 600 and 900 megagauss, which makes it nearly one billion times stronger than the sun’s magnetic field.
According to Science News, this particular star is right on the edge of the possible parameters for a white dwarf. If the star were any more massive it would collapse under its own weight and explode in a Type Ia supernova.
"We caught this very interesting object that wasn't quite massive enough to explode," says Caiazzo. "We are truly probing how massive a white dwarf can be."
In the statement, Caiazzo further speculates that “it's possible that the white dwarf is massive enough to further collapse into a neutron star." According to Caiazzo, the white dwarf is so dense that “in its core, electrons are being captured by protons in nuclei to form neutrons. Because the pressure from electrons pushes against the force of gravity, keeping the star intact, the core collapses when a large enough number of electrons are removed."
If this theory is proven, it would give astronomers an intriguing window into what may be a common path for the formation of neutron stars.
|
Sept. 21, 2011.
Lowland Gorilla
Gorilla gorilla
Order: Primates
Family: Pongidae
1) General Zoological Data
Lowland gorillas are an African species. There are local differences in phenotype and genetic makeup that subdivide Eastern and Western gorillas into subspecies, with perhaps additional subspecies. The placenta of the larger mountain gorilla has not been studied, and no members of this endangered subspecies are in captivity. Numerous gorillas have bred in captivity in various zoos (Cousins, 1976); the largest colony exists at Aspinall's Howletts and Port Lympne Parks, Kent, UK (Aspinall, 1982). In Africa, the principal predator of gorillas is the leopard; osteomyelitis from an apparent leopard attack has been reported by Tutin & Benirschke (1991).
Typical term gorilla placenta, maternal surface at left.
Note the long umbilical cord with few twists. Note also the circummarginate insertion of the membranes on the right.
2) General Gestational Data
Adult females weigh from 70-140 kg (Nowak & Paradiso, 1983). Newborns weigh around 2 kg and have closed skull sutures with thick skull bones. The estrous cycle is 31 days; usually one young is born after 251-295 days of gestation. Twins are uncommon and are usually stillborn or aborted (Rosen, 1972). The newborn weight is about 1.6 to 2 kg at term. Sexual maturity of females is reached at 8 years, and at 11 years in males. In captivity, the lifespan is over 50 years. The placental weight, at term (excluding membranes and cord), is around 350 g. The average placental disk measures approximately 15 x 13 x 2 cm. The membranes (chorion laeve) attach marginally or in a circummarginate fashion. The organ bears great similarity to human placentas, except for its smaller size and the longer umbilical cord. The most recently studied placenta from the "Safari Center" at San Diego Zoo Global was 15 cm in diameter and weighed 225 g. It had the cord knot to be described below.
Wislocki (1932) has described the anatomy and histology of the female reproductive tract, including placentation, in great detail.
Gorilla group at the Wild Animal Park of the Zoological Society of San Diego.
Mother and neonate gorilla at San Diego's Wild Animal Park.
3) Implantation
As is true of the human placenta, the gorilla placenta implants "interstitially", with decidua surrounding the entire implanted blastocyst. Once the membranes "herniate" into the uterine cavity, the peripheral villi, those on the membrane surface, atrophy. Implantation of the blastocyst onto the other endometrial surface does not occur. Thus, a single discoid organ forms with membranes like those in humans. The placenta is anterior or posterior in its location. A wide margin of membranes separates it from the endocervical canal.
4) General Characterization of the Placenta
Other than for their smaller size, the discoid shape and appearance of the gorilla placenta are very similar to those of human placentas. It is cotyledonary, with few, relatively indistinct cotyledonary subdivisions seen on the maternal surface. There is much basal calcification and a considerable amount of fibrinoid material. The barrier is identical to that of humans: it is hemochorial, and there is moderately extensive trophoblastic invasion of the endometrium and probably also of the superficial myometrium. Vascular trophoblastic invasion occurs in the decidua basalis. There are large deposits of extravillous trophoblast ("X-cells"), some with cystic centers; the same cells infiltrate the decidua basalis.
Cyst of extravillous trophoblast ("X-cells")
Term gorilla placental villus. It is much like the human.
5) Details of fetal/maternal barrier
The surface area of villous structures was determined by Baur (1970). It is similar to that of other ape placentas and humans. The surface is composed of syncytiotrophoblast with underlying cytotrophoblast. The syncytium has brush (microvillous) borders.
Placental floor, villi below, much fibrinoid and mild calcification
More term villi of gorilla placenta.
6) Umbilical cord
The umbilical cord of the gorilla placenta has two arteries and one vein. There are normally no ducts, but remnants of the former allantoic duct occasionally occur in between the two arteries. Compared to human umbilical cords (55 cm), that of the gorilla is unusually long, often as much as 100 cm. The reason for this extraordinary length is unknown. This aspect has been further discussed by Spatz (1968) who related the length of cords to fetal length. The most recently studied placenta had a 185 cm cord length with a true umbilical knot and the neonate did well.
The cord shown in our photograph was 70 cm long and there was an estimated 20 cm still attached to the newborn. The umbilical cord had apparently ruptured spontaneously or it was bitten through by the mother. This is not often observed in apes, nor is placentophagy, as was discussed by Naaktgeboren & Wagtendonk (1966). In our experience, the cord is most frequently marginally inserted in gorillas, but this was not so in the placentas shown here. Ludwig (1961a,b) also found a marginal insertion of a 65 cm long cord and in other umbilical cords that he observed. He considered the specimen depicted to be a "usual" gorilla cord, even though it had a true knot. The female offspring of this gestation did well.
Last placenta seen with true knot and 185 cm long cord.
This is the true knot in the cord.
This is a section through the knot in the umbilical cord.
Another section through the cord knot.
Section through the cord away from the knot.
The mother and the offspring from the gestation with the long cord and knot.
Remnant of allantoic duct in cord.
7) Uteroplacental circulation
This is essentially the same as in human gestations, except that the maternal decidual arterioles usually have thicker walls.
Membranes with chorion at left and decidua at right. Note the thick, hyalinized arterioles.
8) Extraplacental membranes
Wislocki (1932) and others have shown the presence of a decidua capsularis with remnants of villi in the chorion laeve. The amnion is identical to the human placental amnion, and there is no allantoic membrane.
Membranes with atrophied villus in between decidua (right) and chorion (left), within trophoblast layer.
9) Trophoblast external to barrier
There are extensive islands of "X-cells" and there is invasion by extravillous trophoblast into the decidua basalis and into the maternal vessels. Whether it also extends into the myometrium, as is true in humans, is unknown.
10) Endometrium
There is decidualization, much like that of human placentation with wandering trophoblast, which invades the decidua basalis.
11) Various features
In all respects this placenta is morphologically like human placentas.
12) Endocrinology
A complete review of hormones and their actions in primate pregnancies has been made by Pepe & Albrecht (1995). Much of this applies to gorilla pregnancies, for which only a few specific studies have been reported, e.g. Czekala et al. (1983), Watson (1984), Mitchell et al. (1982), and Martin et al. (1977).
Successful birth after embryo transfer was reported by Pope et al. (1997). These authors have provided much endocrine information as well and a comprehensive bibliography.
Chorionic gonadotropin is present during pregnancy, but the total urinary estrogen of gorilla pregnancies is significantly lower than that found in human or chimpanzee pregnancies. The syncytiotrophoblast of the ape placenta produces corticotropin-releasing hormone (CRH) and the serum of pregnant gorillas contains its binding protein (CRH-BP) (Bowman et al., 2001).
13) Genetics
Gorillas, as all large apes, possess 48 chromosomes with extensive homology to human chromosomes. Hybrids have not been described.
We have observed a growth-retarded, phenotypically normal 5 year-old gorilla that has a large deletion of chromosome 3 (at q30). The parents have normal chromosomes (Lear et al., 2001).
Mitochondrial DNA was used to differentiate among subspecies (Garner & Ryder, 1996); and Arnason et al. (1996) used mtDNA to survey patterns of hominid evolution.
DNA from hair samples has enabled paternity diagnosis (Field et al., 1998). Using FISH methodology for 24 specimens of gorilla, Schempp et al. (1998) localized human ribosomal DNA to chromosomes 22 and 23 (corresponding to 21 and 22 in humans).
The numerous publications on gorilla chromosomes and genetics are partially cited in the references below.
14) Immunology
No truly immunological studies of pregnancy are known to us.
15) Pathological features
Numerous reproductive diseases have been described, and a review of gorilla mortalities has been published (Benirschke & Adams, 1980). Examples include placental abruption with stillbirth and presumed preeclampsia equivalent to that of human gestations, fatal acute amebic meningoencephalomyelitis, tularemia, coccidioidomycosis, nephritis, ruptured aortic aneurysm, and a multitude of ailments similar to those found in humans. Scott (1992) and Griner (1983) have compiled some of this information in books. Other aspects of pathology are summarized in a book that evolved from a primate meeting in 1985 (Benirschke, ed., 1986). Trophoblastic tumors and hydatidiform moles have not been described. Spontaneous abortions do occur, but their chromosomes have not been studied. Too few placentas have been studied to rule out ascending placental infections as a possible disease, as occurs in chimpanzees.
In the placenta of the abruptio case and a few others we have observed, typical infarcts occur, very similar to those seen in human placentas. Intervillous thromboses and so-called "Tenney-Parker changes" (excessive syncytial knotting) also occur. Eclampsia, with a live-born neonate, was reported by Baird (1981); in my opinion, however, that diagnosis is somewhat dubious.
Abruptio placentae with green and yellow discoloration at left; at right is the underlying infarct (from Benirschke, 1980).
Term gorilla placental villi with focal necrosis and hemosiderin deposition in a villus. The cause is unknown; in human placentas this would be suspicious of prior hemorrhage or CMV infection.
16) Physiological data
Whereas alkaline phosphatase (the most abundant enzyme of human placentas) is present in placentas of chimpanzees and orangutans, it is extremely low or absent in gorilla placentas (Doellgast et al., 1979, 1981). Information on blood groups and many other physiological data can be found in Benirschke (1986).
Sonographic estimation of gestational age has been advanced by Yeager et al. (1981).
17) Other resources
Cell strains of numerous gorillas and DNA are available from CRES at the Zoological Society of San Diego.
18) Other features of interest
Here follow a few examples of pathology with similarity to human reproductive pathology. Surely many more such conditions exist, but that information is unpublished. One may ask: why are the umbilical cords of gorillas and chimpanzees twice as long as those of humans? What is the frequency of chromosomal errors at conception? Many comparative questions of this kind remain to be answered.
Spontaneous abortus with hydatid villi lacking vessels and degenerating decidua. This is essentially similar to a human spontaneous abortus due to chromosomal error.
This is another view of the aborted pregnancy. The villi are significantly enlarged (swollen, "hydatid"), but there is still a floor of decidua basalis. The villi contain no vessels and are comparable to what in human pregnancies might be a "partial mole" due to triploidy. There is no trophoblastic proliferation.
Another, similar spontaneous abortus.
A very young spontaneous abortus with defective embryo in the center of a blood-filled chorion.
Infarct of mature placenta with live infant. Decidual necrosis is on top.
Histologic appearance of the infarcted placental tissue.
Gorilla placenta with fetal vascular latex injection; note the marginal cord and the one-to-one correspondence of fetal blood vessels.
Knot in umbilical cord of gorilla placenta, at right.
Five-year-old male gorilla with growth retardation, presumably due to a large deletion of a portion of one chromosome 3.
Stillborn gorilla male. Fetal death because of abruptio placentae.
Anderson, M.P., Oosterhuis, J.E., Kennedy, S. and Benirschke, K.: Pneumonia and meningoencephalitis due to amoeba in a lowland gorilla. J. Zoo Anim. Med. 17:87-91, 1986.
Antonius, J.I., Ferrier, S.A. and Dillingham, L.A.: Pulmonary embolus and testicular atrophy in a gorilla. Folia Primatol. 15:277-292, 1971.
Arnason, U., Gullberg, A., Janke, A. and Xu, X.: Pattern and timing of evolutionary divergences among hominoids based on analysis of complete mtDNAs. J. Molec. Evol. 43:650-661, 1996.
Arnheim, N., Krystal, M., Schmickel, R., Wilson, G., Ryder, O. and Zimmer, E.: Molecular evidence for genetic exchanges among ribosomal genes on nonhomologous chromosomes in man and apes. Proc. Natl. Acad. Sci. 77:7323-7327, 1980.
Aspinall, J.: The husbandry of gorillas in captivity. Help (in-house magazine). Spring 1982, pp. 12-17.
Baird, J.N.: Eclampsia in a lowland gorilla. Amer. J. Obstet. Gynecol. 141:345-346, 1981.
Baur, R.: Über die Relation zwischen Zottenoberfläche der Geburtsplacenta und Gewicht des Neugeborenen bei verschiedenen Säugetieren. Z. Anat. Entw.-Gesch. 131:31-38, 1970.
Benirschke, K. ed.: Primates. The Road to self-sustaining Populations. Springer-Verlag, NY 1986.
Benirschke, K. and Adams, F.D.: Gorilla diseases and causes of death. J. Reprod. Fertil. Suppl. 28:139-148, 1980.
Bernstine, J.J.: An epizootic of hydatid disease in captive apes. J. Zoo Anim. Med. 3:16-20, 1973.
Carter, A.M.: J.P. Hill on placentation in primates. Placenta 20:513-517, 1999.
Cousins, D.: The breeding of gorillas, Gorilla gorilla, in Zoological Collections. Zool. Garten 46:215-236, 1976.
Doellgast, G.J. and Benirschke, K.: Placental alkaline phosphatase in Hominidae. Nature 280:601-602, 1979.
Doellgast, J., Wei, S.C., Kennedy, M., Stills, H. and Benirschke, K.: Primate placental alkaline phosphatase. FEBS Letters 135:61-64, 1981.
Dutrillaux, B., Rethoré, M.-O., Prieur, M. and Lejeune, J.: Analyse de la structure fine des chromosomes du gorille (Gorilla gorilla). Comparaison avec Homo sapiens et Pan troglodytes. Humangenetik 20:343-354, 1973.
Dutrillaux, B., Rethoré, M.-O. and Lejeune, J.: Comparaison du caryotype de l'orang-outang (Pongo pygmaeus) a celui de l'homme, du chimpanzé et du gorille. Ann. Génét.: 18:153-161, 1975.
Egozcue, J. and Chiarelli, B.: The idiogram of the lowland gorilla (Gorilla gorilla gorilla). Folia Primatol. 5:237-240, 1967.
Field, D., Chemnick, L., Robbins, M., Garner, K. and Ryder, O.A.: Paternity determination in captive lowland gorillas and orangutans and wild mountain gorillas by microsatellite analysis. Primates 39:199-209, 1998.
Garner, K.J. and Ryder, O.A.: Mitochondrial DNA diversity in gorillas. Molec. Phylogenet. Evol. 6:39-48, 1996.
Griner, L.A.: Pathology of Zoo Animals. Zool. Soc. San Diego, 1983.
Haaf, T. and Schmid, M.: Chromosome heteromorphisms in the gorilla karyotype. Analyses with distamycin A/DAPI, quinacrine and 5-azacytidine. J. Hered. 78:287-292, 1987.
Hamerton, J.L., Fraccaro, M., de Carli, L., Nuzzo, F., Klinger, H.P., Hulliger, L., Taylor, A. and Lang, E.M.: Somatic chromosomes of the gorilla. Nature 192:225-228, 1961.
Henderson, A.S., Atwood, K.C. and Warburton, D.: Chromosomal distribution of rDNA in Pan paniscus, Gorilla gorilla beringei and Symphalangus syndactylus: Comparison to related primates. Chromosoma 59:147-155, 1976.
Lear, T.L., Houck, M.L., Zhang, Y.W., Debnar, L.A., Sutherland-Smith, M.R., Young, L., Jones, K.L., and Benirschke, K.: Trisomy 17 in a bonobo (Pan paniscus) and deletion of 3q in a lowland gorilla (Gorilla gorilla gorilla): comparison with human trisomy 18 and human deletion 4q syndrome. Cytogenet. Cell Genet. 95:228-233, 2001.
Lucas, M. and Wallace, I.: Chromosomes of Gorilla gorilla gorilla. J. Zool. 169:403-407, 1973.
Ludwig, K.S.: Beitrag zum Bau der Gorilla-Placenta. Acta Anat. 45:110-123, 1961a.
Ludwig, K.S.: Ein weiterer Beitrag zum Bau der Gorilla-Placenta. Acta Anat. 46:304-310, 1961b.
Martin, R.D., Kingsley, S.R. and Stavy, M.: Prospects for coordinated research into breeding of great apes in zoological collections. Dodo #14, pp. 45-55, 1977.
Miller, D.A.: Evolution of primate chromosomes. Man's closest relative may be the gorilla, not the chimpanzee. Science 198:1116-1124, 1977.
Miller, D.A., Firschein, I.L., Dev, V.G., Tantravahi, R. and Miller, O.J.: The gorilla karyotype: chromosome lengths and polymorphisms. Cytogenet. Cell Genet. 13:536-550, 1974.
Mitchell, W.R., Loskutoff, N.M., Czekala, N.M. and Lasley, B.L.: Abnormal menstrual cycles in the female gorilla (Gorilla gorilla). J. Zoo Anim. Med. 13:143-147, 1982.
Morgan, D.G.: Dissecting aneurysm of the aorta in a gorilla. Vet. Rec. 86:502-505, 1970.
Mrasek, K., Heller, A., Rubtsov, N., Trifonov, V., Starke, H., Rocchi, M., Claussen, U. and Liehr, T.: Reconstruction of the female Gorilla gorilla karyotype using 25-color FISH and multicolor banding (MCB). Cytogenet. Cell Genet. 93:242-248, 2001.
Naaktgeboren, C. and Wagtendonk, A.M. van: Wahre Knoten in der Nabelschnur nebst Bemerkungen über Plazentophagie bei Menschenaffen. Z. Säugetierk. 31:143-147, 1982.
Nowak, R.M. and Paradiso, J.L.: Walker's Mammals of the World. Vol. I. 4th ed. Johns Hopkins University Press, Baltimore, 1983.
Pope, C.E., Dresser, B.L., Chin, N.W., Liu, J.H., Loskutoff, N.M., Behnke, E.J., Brown, C., McRae, M.A., Sinoway, C.E., Campbell, M.K., Cameron, K.N., Owens, O.M., Johnson, C.A., Evans, R.R. and Cedars, M.I.: Birth of a western lowland gorilla (Gorilla gorilla gorilla) following in vitro fertilization and embryo transfer. Amer. J. Primatol. 41:247-260, 1997.
Rideout, B.A., Gardiner, C.H., Stalis, I.H., Zuba, J.R., Hadfield, T. and Visvesvara, G.S.: Fatal infections with Balamuthia mandrillaris (a free-living amoeba) in gorillas and other old world primates. Vet. Path. 34:15-22, 1997.
Robinson, P.T. and Benirschke, K.: Congestive heart failure and nephritis in an adult gorilla. J. Amer. Vet. Assoc. 117:937-938, 1980.
Rosen, S.I.: Twin gorilla fetuses. Folia Primatol. 17:132-141, 1972.
Schempp, W., Zeitler, S. and Rietschel, W.: Chromosomal localization of rDNA in the gorilla. Cytogenet. Cell Genet. 80:185-187, 1998.
Scott, G.B.D.: Comparative Primate Pathology. Oxford University Press, 1992.
Tutin, C.E.G. and Benirschke, K.: Possible osteomyelitis of skull causes death of a wild lowland gorilla in the Lopé reserve, Gabon. J. Med. Primatol. 20:357-360, 1991.
Watson, L.M.: Hormone levels and overt social behaviors, including signed output, in a captive lowland gorilla. Zoo Biol. 3:285-306, 1984.
Wislocki, G.B.: On the female reproductive tract of the gorilla, with a comparison of that of other primates. Carnegie Inst. Contrib. to Embryol. # 135, pp. 163-204, 1932.
Yeager, C.H., O'Grady, J.P., Esra, G., Thomas, W., Kramer, L. and Gardner, H.: Ultrasonic estimation of gestational age in the lowland gorilla: A biparietal diameter growth curve. J.A.V.M.A. 179:1309-1310, 1981.
Margaret Thatcher
Studio portrait, c. 1995–96
Margaret Hilda Thatcher, Baroness Thatcher (13 October 1925 – 8 April 2013) was a British stateswoman who served as Prime Minister of the United Kingdom from 1979 to 1990 and Leader of the Conservative Party from 1975 to 1990. She was the longest-serving British prime minister of the 20th century and the first woman to hold that office. A Soviet journalist dubbed her the "Iron Lady", a nickname that became associated with her uncompromising politics and leadership style. As prime minister, she implemented policies that became known as Thatcherism.
Thatcher studied chemistry at Somerville College, Oxford, and worked briefly as a research chemist, before becoming a barrister. She was elected Member of Parliament for Finchley in 1959. Edward Heath appointed her Secretary of State for Education and Science in his 1970–1974 government. In 1975, she defeated Heath in the Conservative Party leadership election to become Leader of the Opposition, the first woman to lead a major political party in the United Kingdom. On becoming prime minister after winning the 1979 general election, Thatcher introduced a series of economic policies intended to reverse high inflation and Britain's struggles in the wake of the Winter of Discontent and an oncoming recession. Her political philosophy and economic policies emphasised deregulation (particularly of the financial sector), the privatisation of state-owned companies, and reducing the power and influence of trade unions. Her popularity in her first years in office waned amid recession and rising unemployment, until victory in the 1982 Falklands War and the recovering economy brought a resurgence of support, resulting in her landslide re-election in 1983. She survived an assassination attempt by the Provisional IRA in the 1984 Brighton hotel bombing and achieved a political victory against the National Union of Mineworkers in the 1984–85 miners' strike.
Thatcher was re-elected for a third term with another landslide in 1987, but her subsequent support for the Community Charge ("poll tax") was widely unpopular, and her increasingly Eurosceptic views on the European Community were not shared by others in her cabinet. She resigned as prime minister and party leader in 1990, after a challenge was launched to her leadership. After retiring from the Commons in 1992, she was given a life peerage as Baroness Thatcher (of Kesteven in the County of Lincolnshire) which entitled her to sit in the House of Lords. In 2013, she died of a stroke at the Ritz Hotel, London, at the age of 87.
A polarising figure in British politics, Thatcher is nonetheless viewed favourably in historical rankings of British prime ministers. Her tenure constituted a realignment towards neoliberal policies in the United Kingdom, with debate over the complicated legacy attributed to Thatcherism persisting into the 21st century.
Plastic molding
Plastic molding is the process of converting plastics in their various initial forms (powders, pellets, solutions and dispersions) into products or blanks of the desired shape. There are as many as thirty molding methods. The choice of method is determined mainly by the type of plastic (thermoplastic or thermosetting), its initial form, and the shape and size of the product. Thermoplastics are commonly processed by extrusion, injection molding, calendering, blow molding and thermoforming; thermosetting plastics are generally processed by compression molding, transfer molding and injection molding. Laminating, molding and thermoforming shape plastics on a flat surface. All of these processing methods can also be used for rubber. In addition, castings can be made using liquid monomers or polymers as raw materials. Among these methods, extrusion and injection molding are the most widely used, and they are also the most basic molding methods.
Process characteristics
After a plastic part is taken out of the mold and cooled to room temperature, its dimensions shrink; this is called shrinkage. Because shrinkage reflects not only the thermal expansion and contraction of the resin itself but also various forming factors, the shrinkage of a part after forming is properly called forming shrinkage.
1. Forms of forming shrinkage. Forming shrinkage is mainly manifested in the following aspects:
(1) Linear size shrinkage. Owing to thermal expansion and contraction, elastic recovery and plastic deformation during demolding, the dimensions of the plastic part shrink after it is demolded and cooled to room temperature. The cavity design must compensate for this.
(2) Directional shrinkage. During forming the molecules become oriented, which makes the plastic part anisotropic. Along the material flow direction (the parallel direction), shrinkage is large and strength is high; perpendicular to the flow (the vertical direction), shrinkage is small and strength is low. In addition, the density and the filler distribution vary across the part during molding, so shrinkage is uneven. Differences in shrinkage make plastic parts prone to warpage, deformation and cracking, and the directionality is especially pronounced in extrusion and injection molding. The shrinkage direction should therefore be considered during mold design, and the shrinkage rate selected according to the shape of the part and the direction of material flow.
(3) Post-shrinkage. When parts are formed, factors such as forming pressure, shear stress, anisotropy, uneven density, uneven filler distribution, uneven mold temperature, uneven hardening and plastic deformation produce a series of stresses that cannot all relax in the viscous flow state, so residual stress remains in the part as formed. After demolding, as the stresses rebalance and under the influence of storage conditions, the residual stress changes and the part shrinks again; this is called post-shrinkage. Parts generally change most within the first 10 hours after demolding and are essentially stable after 24 hours, but final stabilization takes 30-60 days. Post-shrinkage of thermoplastics is generally larger than that of thermosets, and larger for extrusion and injection molding than for compression molding.
(4) Post-treatment shrinkage. Plastic parts sometimes require heat treatment after forming, depending on performance and process requirements, and the dimensions of the part change again after treatment. For high-precision parts, post-shrinkage and post-treatment shrinkage errors should therefore be considered and compensated in the mold design.
2. Calculation of shrinkage. The forming shrinkage of plastic parts can be expressed as a shrinkage rate, as in formulas (1-1) and (1-2):

Q_actual = (a - b)/b × 100    (1-1)

Q_calc = (c - b)/b × 100    (1-2)

where:
Q_actual = actual shrinkage rate (%)
Q_calc = calculated shrinkage rate (%)
a = one-way size of the plastic part at the forming temperature (mm)
b = one-way size of the plastic part at room temperature (mm)
c = one-way size of the mold cavity at room temperature (mm)

The actual shrinkage rate represents the real shrinkage of the plastic part. Because its value differs little from the calculated shrinkage rate, mold design uses the calculated shrinkage rate Q_calc as the design parameter for sizing the cavity and core.
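To make the arithmetic concrete, here is a minimal Python sketch of formulas (1-1) and (1-2), together with the inverse calculation used when sizing a cavity from a target part dimension. The sample dimensions and the size_cavity helper are hypothetical, chosen only to illustrate the calculation, not values from any material specification.

```python
def actual_shrinkage(a, b):
    """Formula (1-1): actual shrinkage rate Q_actual in %.
    a = part size at forming temperature (mm); b = part size at room temperature (mm)."""
    return (a - b) / b * 100

def calculated_shrinkage(c, b):
    """Formula (1-2): calculated shrinkage rate Q_calc in %.
    c = mold cavity size at room temperature (mm); b = part size at room temperature (mm)."""
    return (c - b) / b * 100

def size_cavity(part_size, q_calc):
    """Invert formula (1-2): cavity size (mm) that yields a room-temperature
    part of part_size (mm) for a plastic with calculated shrinkage q_calc (%)."""
    return part_size * (1 + q_calc / 100)

# Hypothetical dimensions (mm) for one direction of a molded part
a, b, c = 101.8, 100.0, 100.6
print(f"Q_actual = {actual_shrinkage(a, b):.2f} %")              # 1.80 %
print(f"Q_calc   = {calculated_shrinkage(c, b):.2f} %")          # 0.60 %
print(f"Cavity for a 50.00 mm part: {size_cavity(50.0, 0.6):.2f} mm")  # 50.30 mm
```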
3. Factors affecting variation in the shrinkage rate. In actual forming, not only do different types of plastics shrink differently, but different batches of the same plastic, and even different parts of the same molded part, show different shrinkage values. The main factors affecting variation in the shrinkage rate are as follows.
(1) Plastic variety. Each plastic has its own shrinkage range, and the shrinkage and anisotropy of the same type of plastic also differ with filler, molecular weight and formulation.
(2) Characteristics of the part. The shape, size and wall thickness of the plastic part, and the presence, number and layout of inserts, also strongly influence the shrinkage rate.
(3) Mold structure. The parting surface and pressing direction of the mold, and the form, layout and size of the gating system, also have a considerable effect on shrinkage and directionality, especially in extrusion and injection molding.
(4) Molding process. Extrusion and injection molding generally produce larger shrinkage and more obvious directionality. Preheating conditions, forming temperature, forming pressure, holding time, the form of the filling material and the uniformity of hardening all affect the shrinkage rate and its directionality.
As noted above, mold design should start from the shrinkage range given in the specification of each plastic and select a shrinkage value after comprehensively considering the shape, size and wall thickness of the part, the presence of inserts, the parting surface and pressure-forming direction, the mold structure, the form, size and position of the feed gate, and the forming process. For extrusion or injection molding, it is often necessary to select different shrinkage rates for different regions of the part according to their shape, size, wall thickness and other characteristics.
In addition, forming shrinkage is affected by the various forming factors, though it is determined mainly by the type of plastic and the shape and size of the part. Adjusting the forming conditions can therefore also change the shrinkage of the part within limits.
Fluidity

The ability of a plastic to fill the cavity at a given temperature and pressure is called fluidity. It is an important process parameter in mold design. Excessive fluidity causes heavy flash, incomplete packing of the cavity, a loose part structure, separate accumulation of resin and filler, mold sticking, difficult demolding and cleaning, and premature hardening. Insufficient fluidity causes short shots, makes forming difficult and raises the required forming pressure. The fluidity of the selected plastic must therefore match the requirements of the part, the forming process and the forming conditions, and the gating system, parting surface and feed direction should be designed with the flow behavior in mind.

The fluidity of thermosetting plastics is usually expressed as Raschig fluidity (in millimeters); larger values mean better fluidity. Each type of plastic is usually supplied in three fluidity grades for different parts and forming processes. Parts with a large area, many inserts, thin cores and inserts, or complex shapes with narrow deep grooves and thin walls are difficult to fill and call for plastics with better fluidity: plastics with a Raschig fluidity of 150 mm or more should be used for extrusion molding, and plastics with a Raschig fluidity of 200 mm or more for injection molding. To ensure that every batch of plastic has the same fluidity, blending is commonly used in practice: batches of the same variety but different fluidity are mixed so that their fluidities compensate one another and part quality is maintained. The Raschig fluidity values of commonly used plastics are given in Table 1-1.

It must be pointed out, however, that beyond the plastic variety, the fluidity actually achieved when filling the cavity is affected by many factors. A fine, uniform particle size (especially round pellets), higher humidity, higher moisture and volatile content, proper preheating and forming conditions, a good mold surface finish and a suitable mold structure all improve fluidity. Conversely, poor preheating or molding conditions, a poor mold structure with large flow resistance, or plastic that has passed its storage period or been stored at too high a temperature (especially amino plastics) reduce the actual flow of the plastic when filling the cavity and cause poor filling.
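As a small illustration of how these guidelines might be encoded, the following Python sketch applies the two Raschig thresholds quoted above (150 mm for extrusion molding, 200 mm for injection molding) and models the batch-blending practice as a mass-weighted average. Both function names are hypothetical, and the linear blending rule is an illustrative simplification; real fluidities do not necessarily combine linearly.

```python
def processes_for_fluidity(raschig_mm):
    """Molding processes whose minimum Raschig fluidity guideline
    (>= 150 mm extrusion, >= 200 mm injection molding) is met."""
    suitable = []
    if raschig_mm >= 150:
        suitable.append("extrusion molding")
    if raschig_mm >= 200:
        suitable.append("injection molding")
    return suitable

def blended_fluidity(batches):
    """Mass-weighted average fluidity of blended batches of the same plastic.
    batches is a list of (mass_kg, raschig_mm) tuples."""
    total_mass = sum(mass for mass, _ in batches)
    return sum(mass * fluidity for mass, fluidity in batches) / total_mass

print(processes_for_fluidity(180))               # ['extrusion molding']
print(blended_fluidity([(40, 140), (60, 220)]))  # 188.0
```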
Specific volume and compression ratio
Specific volume is the volume occupied by each gram of plastic (in cm3/g). The compression ratio is the ratio of the volume, or specific volume, of the plastic powder to that of the molded part; its value is always greater than 1. Both can be used to determine the size of the mold's loading chamber. Large values require a large loading chamber and also indicate that the powder holds a lot of air, which makes venting difficult, lengthens the forming cycle and lowers productivity. A small specific volume has the opposite effect and favors pressing. The specific volumes of various plastics are given in Table 1-1; note that measured values often vary with the particle size and uniformity of the plastic.
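The following minimal Python sketch shows how specific volume and compression ratio might be used to rough-size a loading chamber. The molding-powder values and the 20% headroom allowance are hypothetical illustrations, not figures from Table 1-1.

```python
def compression_ratio(v_powder, v_part):
    """Ratio of the specific volume of the plastic powder to that of the
    molded part; always greater than 1 because the powder contains air."""
    return v_powder / v_part

def loading_chamber_volume(charge_mass_g, v_powder_cm3_per_g, headroom=1.2):
    """Approximate loading-chamber volume (cm3) for a powder charge,
    with an illustrative 20% headroom allowance for handling."""
    return charge_mass_g * v_powder_cm3_per_g * headroom

# Hypothetical molding powder: 1.9 cm3/g as powder, 0.72 cm3/g molded
print(f"Compression ratio: {compression_ratio(1.9, 0.72):.2f}")                 # 2.64
print(f"Chamber for a 35 g charge: {loading_chamber_volume(35, 1.9):.1f} cm3")  # 79.8
```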
Hardening characteristics
During the forming process, a thermosetting plastic first softens into a viscous plastic state under heat and pressure, and its fluidity increases so that it fills the cavity; at the same time a condensation reaction proceeds, the crosslink density rises steadily, the fluidity drops rapidly, and the melt gradually solidifies. When designing molds for materials that harden quickly and stay fluid only briefly, attention should be paid to easy loading and to easy placement and removal of inserts, and reasonable forming conditions and operations should be selected to avoid premature or insufficient hardening, which results in poorly molded parts.
The hardening speed can generally be judged from the holding time, which is related to the type of plastic, the wall thickness and shape of the part, and the mold temperature. It is also subject to other factors, especially the preheating state. Proper preheating should maximize the plastic's fluidity while increasing its hardening speed as much as possible. In general, a higher preheating temperature and a longer preheating time (within the allowed range) accelerate hardening, and the effect is especially marked when a pre-compacted ingot is preheated by high frequency. A high molding temperature and a long press time also increase the hardening speed. The hardening speed can therefore be controlled by adjusting the preheating or the forming conditions.
The hardening speed should also suit the forming method. Injection and extrusion molding, for example, require a slow chemical reaction and slow hardening during plasticization and filling, so that the material remains fluid for a long time; but once the cavity is filled, the material should harden quickly under the prevailing high temperature and pressure.
Moisture and volatile content
Different plastics contain different levels of moisture and volatiles. When there is too much, fluidity increases, flash forms, the holding time lengthens, shrinkage increases, and defects such as ripples and warping appear easily, all of which degrade the electrical and mechanical properties of the part. When the plastic is too dry, however, fluidity is poor and forming becomes difficult. Different plastics should therefore be preheated and dried as required, and strongly hygroscopic materials, especially in the humid season, should be protected from re-absorbing moisture even after preheating.
Because plastics contain varying amounts of water and volatiles, and because further volatiles are released during the condensation reaction, these components must escape as gases from the mold during forming. Some of these gases corrode the mold, and some irritate the human body. Mold design should therefore take the characteristics of each plastic into account and adopt corresponding measures, such as preheating the material, chrome-plating the mold, cutting vent grooves, or venting during the forming cycle.
Molding methods
Plastic products are made from mixtures of synthetic resin and various additives by injection, extrusion, pressing, casting and other methods. Because a plastic product acquires its final properties as it is molded, molding is the key process in production.
Figure 1 Injection molding
Injection molding, also called injection, is a method in which an injection machine rapidly injects molten plastic into a mold, where it solidifies to give the product. Almost all thermoplastics (except fluoroplastics) can be processed this way, and the method can also be used to form some thermosetting plastics. Injection molding accounts for about 30% of plastic part production. Its advantages are the ability to form complex shapes in one shot, precise dimensions and high productivity; however, the cost of equipment and molds is relatively high, so it is mainly used for producing plastic parts in large quantities.
There are two common types of injection molding machine: plunger and screw. Figure 1 is a schematic diagram of screw injection molding. The principle of injection molding is as follows: powdered or granular raw material is fed from the hopper into the barrel; as the plunger (or screw) advances, the material is pushed into the heating zone and past the torpedo (spreader), and the molten plastic is injected into the mold cavity through the nozzle; after cooling, the mold is opened and the plastic product removed. After the injection-molded part is taken out of the cavity, it usually receives post-treatment to relieve the stresses generated during forming and to stabilize its dimensions and properties; other finishing steps include cutting off flash and gates, polishing and surface coating.
Extrusion molding
Extrusion is a process in which plasticized material is continuously forced through a die by the rotation and pressure of a screw, emerging as a profile whose cross-section matches the die opening. Extrusion accounts for about 30% of plastic products and is mainly used for long profiles of constant cross-section, such as plastic pipes, plates, rods, sheets, strips and special-shaped profiles with complex cross-sections. It is characterized by continuous forming, high productivity, a simple and inexpensive mold structure, and a compact product structure. Except for fluoroplastics, almost all thermoplastics can be extruded, as can some thermosetting plastics.
Figure 2 is a schematic diagram of screw extrusion. Granular plastic is fed from the hopper into the screw channel and conveyed by the rotating screw into the heating zone, where it melts and is compressed; under the force of the screw it is then forced through a die of a given shape, emerging as a profile matching the cross-section of the die; after it falls onto the conveyor belt, it is cooled and hardened by jets of air or water to give the solidified plastic part.
Figure 3 Press molding
Compression molding, also known as press molding or compression forming, is a method in which solid pellets or preforms are placed in a mold, softened and melted by heat and pressure, made to fill the cavity under pressure, and cured to give the plastic part. It is mainly used for thermosetting plastics such as phenolic, epoxy and silicone resins; it can also be used to press thermoplastic polytetrafluoroethylene products and polyvinyl chloride (PVC) records. Compared with injection molding, the equipment and molds for press molding are simple and large products can be made; but the production cycle is long, efficiency is low, automation is difficult, and thick-walled products and products with complex shapes are hard to produce.
Figure 3 is a schematic diagram of press forming. The general press-forming process can be divided into several stages: feeding, mold closing, venting, curing and demolding. After demolding, the part is post-treated in the same way as an injection-molded part.
Blow molding
Blow molding (a secondary plastics-processing operation) is a method in which a hollow plastic parison is inflated and deformed by compressed air, and the part is obtained after cooling and setting. The main variants are hollow blow molding and film blowing.
Figure 4 is a schematic diagram of the extrusion blow molding of a hollow part. An extruded or injection-molded tubular parison at a suitable temperature is placed in a split blow mold; the mold is closed and compressed air is blown in through a blow pin, inflating the parison against the mold wall; after pressure holding, cooling and setting, the mold is opened and the hollow part is taken out.
Casting
The casting of plastic is similar to the casting of metal: polymer or monomer material in a fluid state is poured into a mold and made to react and cure under given conditions, forming a plastic part that matches the mold cavity. This forming method uses simple equipment, requires no or only slight pressure, places low demands on mold strength and needs little production investment, and it can be applied to thermoplastic and thermosetting parts of various sizes. However, cast parts have low precision, productivity is low, and the forming cycle is long.
Gas-assisted injection molding
Gas-assisted injection molding (gas-assisted molding for short) is a newer method in the field of plastics processing. The gas-assisted forming process can be divided roughly into three variants. A) Hollow forming: the plastic melt is injected into the mold cavity, and when 60%-70% of the cavity volume is filled, injection stops and gas is injected; pressure is held until the part cools and sets. This process is mainly suitable for thick-walled products such as handles and grips. B) Short shot: gas injection begins when the melt has filled 90%-98% of the cavity volume. This method is mainly used for thick-walled, or locally thick-walled, products with larger flat areas. C) Full shot: gas is injected once the melt has completely filled the cavity, and the space created by the volume shrinkage of the melt is filled by the gas; gas pressure and melt pressure act together to greatly reduce warpage and deformation. It is used for molding thin-walled products with larger flat areas, and its process control is more complicated. The first two methods are also called short-shot gas-assisted injection, and the last full-shot gas-assisted injection.
The gas-assisted process comprises four stages. In the first stage, plastic injection, the melt enters the cavity and forms a thin solidified layer where it meets the cooler mold wall. In the second stage, gas injection, inert gas enters the molten plastic and pushes the unsolidified material in the core toward the unfilled regions of the cavity. In the third stage, continued gas penetration, the gas keeps pushing the plastic melt along until the melt fills the entire cavity. In the fourth stage, gas pressure holding, the gas in the gas channel compresses the melt and packs it, ensuring the surface quality of the part.
Gas-assisted forming has the following advantages: it eliminates sink marks and improves surface quality; reduces warpage, deformation and flow streaks; reduces internal stress and increases part strength; saves raw material and reduces part weight (generally by 20%-40%); improves the distribution of material over the part cross-section and increases rigidity; shortens the molding time and increases production efficiency; and extends the service life of the mold.
Islam and Homosexuality
Before the emergence of Islam in the Arabian Peninsula, all kinds of sexuality were practiced. Although few documents from the age survive, it is clear from references in the Kur'an that sexuality was not a taboo. Islam itself did not lay down many strict sexual rules either; if the Kur'an alone is taken as the reference, it might even be called a sex-positive religion. The only reference to homosexuality is in the sections about Sodom and Gomorrah, and even there homosexuality is not clearly condemned. The people are punished for having done everything to excess: they do not only sleep with men, they sleep with women too, they drink too much, they indulge too much in pleasure. "Too much" is the keyword here.
The punishments for almost all crimes are specified in the Kur'an, yet there is no specific punishment for homosexuality. Even so, these sections of the Kur'an have always been the fallback of homophobic Islamic people. What brings condemnation on homosexuals is not the Kur'an but Islamic societies; cultures shape the religion just as they shape society itself.
During the first years of Islam, homosexuality was never treated as a crime. There are even rumors that Ali, a member of Mohammed's family, had an affair with Mohammed. And the famous 1001 Arabian Nights contains stories that are openly about homosexual relationships. But this did not mean that there was a conscious homosexual community; had those people known the word "homosexual", they would not have called themselves by it. It was simply sleeping with men as well as with women.
In Islamic cultures homosexual relationships were not very open, but neither were they in complete secrecy: everything was all right as long as it happened behind closed gates and was not talked about. With modernization, however, this changed in Islamic societies. As it is easy to interpret the Kur'an in the negative way, homosexuality came to be regarded as a crime and punished.
Homosexuality is a legal crime and is forbidden in most Islamic countries, such as Saudi Arabia and Iran. Whatever is forbidden becomes more attractive, and homosexuality is not a practice that can be banned, so in all of these countries people engage in homosexual activity; because the act is punished, everything is done in secret.
Salaam Canada is a Canadian organization dedicated to lesbian, gay, bisexual and trans (LGBT) Muslims, including those questioning their sexual orientation or gender identity, and their friends. Salaam Canada's goal is to provide a safe space and a forum for LGBTQ Muslims to communicate issues of common concern and to share individual experiences and institutional resources. By using our knowledge, our faith in Islam and our belief in Allah (God), the mission of Salaam Canada is to help LGBTQ Muslims reconcile their sexual orientation or gender identity with the religion of Islam.
Ozone Generator
Arab Engineers Water Technology
Polyglass pressure vessels
Georg Fischer instruments
Sodium Hypochlorite
Sodium Hypochlorite is a chlorine compound often used as a disinfectant. The chemical formula is NaClO.
Sodium Metabisulfite
An inorganic compound with the chemical formula Na2S2O5. It is used as a disinfectant, antioxidant, and preservative agent.
Hydrochloric acid
Hydrochloric acid is used to control the pH of process water streams. The chemical formula of hydrochloric acid is HCl.
Formalin
Formalin is used as an antiseptic and disinfectant, and especially today as a fixative for histology. The chemical formula is HCHO.
Tuesday, June 2, 2015
Fig. 1 East Coast Ground Zero
I. Some Background
Regular readers know that recent posts have covered the coming invasion of the East Coast of the United States by sea level rise (SLR).
We don't know if that is happening because the U.S. military has taken Dredd Blog's advice to invade ourselves, which is probably the only way to get rotting U.S. infrastructure repaired (every nation we invade gets billions for repairs, so we might as well invade ourselves and get it over with: War - Great Stimulant - Let's Invade us, 2).
Anyway, to read more about SLR areas already covered by those Dredd Blog posts, click here ("Series Posts" tab link), then arrow down to the "SEA LEVEL RISE" link area.
II. Even More Ice Streams
In those previous posts we covered various sources of SLR in ice streams, but we need to continue to keep a general focus on them, because there are a lot of ice streams in Antarctica (List of Antarctic ice streams).
So, today we will cover a different ice stream source in an area where ice loss has doubled, the Antarctic (BBC).
Fig. 2 USGS
Specifically, we will look closely at only a few ice streams there, as we have already closely looked at ice streams elsewhere (Greenland & Antarctica Invade The United States).
Then, we will zoom in to focus on two ice streams in a specific area, the Amundsen Sea Embayment, which engenders the Thwaites Ice Stream and the Pine Island Ice Stream (a.k.a. glaciers).
The reason I have chosen those two ice stream glaciers is because they are accelerating substantially, and they have enough SLR potential to qualify as an area of interest (Antarctic glacial melt rate triples in Amundsen Sea embayment).
III. Our Area of Interest Is 1 meter / 3 feet of SLR
Fig. 3 Amundsen Sea Embayment
The entire Antarctic Peninsula, if all ice melted and entered the sea, would only generate 1.51 feet of SLR (see Fig. 2).
However, if the West Antarctic Ice Sheet were to do the same, it would generate 26.44 feet of SLR (again, see Fig. 2).
Thus, only about 11% (3 ÷ 26.44) of that potential is needed to cause SLR of 3 ft / 1m, the threshold level I am using as a point to focus on (that figure is based on the statement by scientists, in the video below, that a 1m / 3ft SLR would be catastrophic to coastal areas).
Including the East Coast area that already has 1.5 feet of SLR (see Fig. 1).
IV. The Nitty Gritty
Over a year has passed since the paper referred to in the following quote was published:
"A massive glacier system in West Antarctica has started collapsing because of global warming and will contribute to significant worldwide sea-level rise, two teams of scientists warn in a pair of major studies released Monday.
Scientists had previously thought the two-mile-thick (3.2 kilometers) glacier system would remain stable for thousands of years, but new research suggests a faster time frame for melting.
A rapidly melting section of the West Antarctic Ice Sheet appears to be in irreversible decline and will sink into the sea, scientists at the University of California, Irvine and NASA reported Monday.
"This retreat will have major implications for sea-level rise worldwide," said Eric Rignot, a UC-Irvine Earth science professor and lead author of a study to be published in a journal of the American Geophysical Union.
The glaciers contain enough ice to raise global sea level by 4 feet (1.2 meters) and are melting faster than most scientists had expected, which will require adjusting estimates of sea-level rise, said Rignot, who is also a glaciologist at NASA's Jet Propulsion Laboratory in Pasadena, California."
(National Geographic, emphasis added). Those two glaciers only need to melt / calve 75% of their potential in order to engender 1m / 3ft SLR (3 ÷ 4 = 0.75).
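For readers who want the arithmetic spelled out, here is a minimal Python sketch reproducing the two fractions used in this post. The inputs are the figures quoted above: 26.44 ft of potential SLR for the West Antarctic Ice Sheet (Fig. 2), 4 ft for the two glaciers (the National Geographic quote), and the 3 ft / 1 m catastrophe threshold used throughout.

```python
# Fraction of each ice reservoir that must melt to reach the 3 ft threshold
THRESHOLD_FT = 3.0          # ~1 m, the catastrophe level used in this post
WAIS_POTENTIAL_FT = 26.44   # West Antarctic Ice Sheet potential SLR (Fig. 2)
TWO_GLACIERS_FT = 4.0       # Thwaites + Pine Island potential SLR (quote above)

print(f"WAIS fraction needed: {THRESHOLD_FT / WAIS_POTENTIAL_FT:.1%}")       # ~11.3%
print(f"Two-glacier fraction needed: {THRESHOLD_FT / TWO_GLACIERS_FT:.0%}")  # 75%
```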
V. The Odds Are Against Us
These two glaciers alone can contribute more than 1m / 3 ft of SLR necessary to cause a global catastrophe, but they are definitely not the only SLR game in town:
2:50 - "Yes, absolutely."
(Greenland & Antarctica Invade The United States). There are many more ice streams to consider, but you get the point (they are all melting in some degree at the same time NOW).
VI. Don't Wait For The Models
The scientists keep saying (and I keep quoting them) that the models are seriously underestimating SLR.
That means catastrophic SLR is closer than we think, and far more certain than we have been told.
SLR is on the march, and there are no defenses other than leaving the fossil fuels in the ground, going code red for renewable, non-polluting energy sources, and waiting to see if it works or not (i.e. are we already too late?).
It would not hurt at all in the long run to consider shutting down civilization as we know it for a while (if we don't, SLR will).
VII. Conclusion
Now you know why the "pessimists" (a.k.a. realists) are winning most of the bets on this issue, and why I am repeating some of this information ad nauseam.
We really need to get this particular point, because it is happening, and it is even a matter of law.
Law is one of the slowest areas to catch up with anything current (Global Warming Induced Climate Change Is A Matter of Law, Public Trust Litigation).
HBO Vice: "Our Rising Oceans", with Dr. Eric Rignot:
2:50 - "Yes, absolutely."
1 comment:
1. Remember that the destruction of civilization is not the same as the destruction of the human species.
One comes before the other.
The quality of your sleep affects your day in profound ways—it can inspire or sap creativity, flexibility, energy and mood. It can also make us feel as if we can take on the world or are unable to perform the simplest tasks. If you’re having a “down” day, there’s a good chance you didn’t sleep well the night before. Some things can’t be controlled, which is even more reason to control the things you can. These 10 tips are considered basic sleep hygiene, an odd term defined as “habits and practices that are conducive to sleeping well on a regular basis.” Are you practicing these sleep basics?
1. Be consistent. Try to go to bed and wake up at about the same time every day, including weekends and vacations. Aim for seven-plus hours. If you’re still lying there after 20 minutes, get up. Worrying about not sleeping can trigger a vicious cycle.
2. Control your environment. Quiet and darkness are critical for sound sleep. Use blackout curtains, eye shades, ear plugs, fans, white noise or whatever works to control external stimuli. Keep the room at a comfortable 65°F (about 18°C). Sound chilly? Read about the science of sleep temperature.
3. Power down. Turn off electronic devices a half-hour before bedtime. Artificial blue light affects your circadian rhythms and suppresses the release of melatonin, a sleep-inducing hormone. Likewise, avoid exposure to any bright light before bedtime.
4. Watch what you consume. Food, caffeine and alcohol late at night all can wreak havoc on sleep. Avoid heavy meals at night or eating within two hours of bedtime. When your stomach is rumbling, though, a light, healthy snack may help you fall asleep.
5. Develop a bedtime routine. A warm bath, gentle yoga, soft music, a calming book or a cup of herbal tea are all practices that tell your body it’s time to shut down for the night.
6. Exercise. Moderate daily physical activity has been shown to increase deep sleep, which promotes the rejuvenation of brains and bodies. Timing is everything: Exercising late at night elevates body temperature and raises endorphin levels, which may be counterproductive.
7. Sleep on a good mattress. Old (more than 8–10 years) or poor-quality mattresses that are not adequately supportive or large enough (especially if you share a bed) will interfere with ideal sleep. If it is time to replace your mattress, consider this purchase an investment in good sleep. If the price of the mattress seems too good to be true, it probably is. Avoid chemicals of concern and improve indoor air quality by choosing a mattress made with CertiPUR-US® certified foam, which meets strict standards for content and emissions.
8. Limit Naps. Daytime naps close to bedtime or longer than 30 minutes can interfere with nighttime sleep.
9. Don’t bring anxieties to bed. If worries plague you at bedtime, set a time to write down concerns and work out possible solutions. Make to-do lists throughout the day and put them away at night. Meditation or mind games that distract you from troublesome thoughts can really help. Try reciting the alphabet backwards, thinking of historical figures, dog breeds (or anything else) that begin with the letters A, B, C. etc. successively, or counting backwards from 300 by sevens.
10. Know when to see a doctor. Chronic insomnia, frequent nighttime awakening, snoring and falling asleep during the day can be signs of a serious sleep disorder. Seek the help of a medical professional who specializes in sleep.
Private investigators (PI’s) are people who can be employed by individuals, institutions or NGOs to undertake investigative services. Private investigators are also called private detectives or private investigators. Their role is as an objective third party who assists the police or law enforcement authorities in gathering and assessing information that is relevant to their investigations. They are employed by large corporations, intelligence agencies and media organizations for similar purposes. Private investigators can work independently or as part of a larger team.
There are many private investigators who offer various types of investigation services, most of them criminal investigations. Many have been hired by lawyers to conduct private investigations. It has become a lucrative business, and many private investigators are starting their own firms. Most begin with a few cases they feel they can perform well and grow their clientele accordingly. They are normally paid on a per-case basis, depending on what the client and the investigator can achieve.
Sometimes, private investigators may work in conjunction with other officials, such as police officers, to carry out specific surveillance operations. Such activities may need the co-operation of other officials and they might need to use video equipment, hidden cameras or any number of gadgets that help them gather evidence. In this case, surveillance equipment and methods must be legally approved. Other types of spying may also be lawful according to local laws. For example, if a private investigator wants to monitor a public place for safety concerns, he needs to secure the proper authorization from the government.
Private investigators may also use covert or secret methods of investigation. Covert surveillance can involve planting hidden cameras in business settings or in public places such as churches and other places of worship. The reason a private investigator would want to install hidden cameras in these places is that they may help the investigator gather proof against someone who is committing a crime. Another type of covert surveillance involves using social media to gather information about an individual. If a private investigator uses a social media site to search for information about an individual, the user's permission may be required in order to obtain such information.
When working with criminal law firms, private investigators have to follow certain guidelines in order to comply with state and federal law. These guidelines vary from state to state. Some states allow private investigators to open an account with banks. However, banks usually do not provide banking services to prospective private investigators. Thus, most banks prohibit the former employees or job applicants from using their bank accounts to conduct personal business. A good alternative to banks is using online money transfer services like PayPal, Moneybookers or others.
In addition, there are a number of ways to find information about prospective employees and clients. The internet is one of the best ways to conduct research about potential employers. There are various background check websites that allow private investigators to perform nationwide background checks. If a private detective conducts a nationwide search, he or she will need to submit a request to every government department and agency in the country in order to get relevant information. The whole process may take months if not years.
There are different types of investigation available to private investigators. For instance, they can perform search engine optimization (SEO) investigation, which means that they will have to conduct keyword research in order to uncover relevant information about a specific person. One popular example of this method is when a website owner wants to increase his or her ranking in the major search engines. In this type of investigation, private investigators look for links on websites that are related to the owner’s business.
Private investigation is growing in popularity because private investigators can perform background checks as well as criminal justice checks on individuals. Some private investigators offer reverse cellphone number look-ups as well. If you’re looking to hire private investigators to conduct a background check, you should look for a reliable and reputable company. A good and effective company will be able to conduct an unlimited number of background checks for a minimal fee.
Quick Answer: How Do You Identify A Research Type?
What are the 4 types of research methods?
The type of research data you collect may affect the way you manage that data.
What are the types of research with definition?
What are the types of research?
Basic research: A basic research definition is data collected to enhance knowledge. …
Applied research: Applied research focuses on analyzing and solving real-life problems.
More items…
What are the basic research methods?
What are the 3 research methods?
What is basic research and example?
What are the major types of research?
Methodology
Types of research.
Correlational research.
Descriptive research.
Ethnographic research.
Cross-sectional studies.
Longitudinal studies.
Case studies.
What are the different types of research models?
Model Classification
Formal versus Informal Models. …
Physical Models versus Abstract Models. …
Descriptive Models. …
Analytical Models. …
Hybrid Descriptive and Analytical Models. …
Domain-Specific Models. …
System Models. …
Simulation versus Model.
More items…
What are the 6 characteristics of research?
What are the 7 types of research?
What are the three types of research methods?
How many types of research methods are there?
What are the 8 characteristics of research?
Eight characteristics of research are: 1) Research originates with a question or problem. 2) Research requires clear articulation of a goal. 3) Research usually divides the principal problem into more manageable sub-problems. 4) Research is guided by the specific research problem, question, or hypothesis.
What are the two types of data?
What are the 5 types of research methods?
What are the two major types of research?
Varying Sentences Jazzes Up Persuasive Writing
Varying sentences improves style. When kids feel finished with their writing, it’s time to shake things up. First, begin every sentence in a paragraph differently. Second, vary sentence types. For more advanced writing, experiment with sentence length.
Varying Sentences
Do you want your students’ writing to shine? Then try these sentence shake-ups. When the work is done, you’ll be amazed at the difference.
Varying Sentences – Beginnings
Ask young writers to begin every sentence in a paragraph differently. This little habit improves style instantaneously!
Varying Sentences – Types
Throwing in a question or exclamation jazzes up a paragraph. In a persuasive paragraph, kids can also use commands. Often, changing just one sentence does the trick. Use this strategy sparingly.
Varying Sentences – Length
What happens when a paragraph is filled with standard-length sentences? Yep, it’s choppy. Boring, even. Kids can avoid this with long and short sentences.
When kids explain, they should use longer sentences. How can this be achieved?
• Combine related sentences.
• Add a phrase or clause.
• Get descriptive – add more to the sentence.
To punctuate, try short sentences. These little gems make the audience stop dead in their tracks. They’ll really notice what you’re saying. To achieve this, use brief commands, interjections, or stripped-down sentences. Where does this strategy work best? In persuasive writing, end with a short, pointed call to action.
Persuasive Writing Series
Yes, you can help kids improve their writing! Learn more about teaching persuasive writing in these blog posts:
Elaboration Strengthens Persuasive Writing
Transitions Make Persuasive Writing Flow
Looking for related instructional materials? Try these circus-themed videos, anchor charts, modeling activities, and student sheets. You'll find them in my Teachers pay Teachers store.
Circus clip art was created by Kate Hadfield.
History Of The Andalusian Horse
The Andalusian horse originates from the rugged, hilly areas of the Iberian Peninsula and is one of the most ancient horse breeds. The breed was developed in Southern Spain. Although Andalusians were originally highly regarded as cavalry horses for their agility and courage, they fell out of favour as war horses when heavily armoured knights required heavier mounts to carry them; they regained popularity with cavalries after the introduction of firearms, when a fast, agile horse was needed once more.
During the Renaissance, grand riding academies were formed across Europe, where dressage and high school riding evolved, and Andalusian horses were popular for their agility, impulsion and natural balance. In Spain, Andalusian horses were also the mounts of bullfighters.
In Spain Andalusian horses are known as the Pure Spanish Horse or PRE (Pura Raza Española) and PRE horses are registered in the Spanish Stud Book maintained by the Military and Government of Spain.
The Andalusian horse has been used in the development of many other horse breeds around the world including the Lippizaner.
Height Of The Andalusian Horse
The Andalusian horse stands 15.2 to 16.2 hh.
Colour Of The Andalusian Horse
Andalusian horses are most often grey, but other colours such as bay, chestnut, black, palomino, and dun also occur.
Breed Characteristics Of The Andalusian Horse
The Andalusian horse has a long head with broad forehead and convex profile, long arched neck, abundant mane, short body with powerful hindquarters and strong fine legs. They have a high knee action and are short striding with great presence.
Temperament Of The Andalusian Horse
The Andalusian horse is intelligent, docile and calm.
Uses Of The Andalusian Horse
The Andalusian horse is used as a general riding horse, for bull fighting, dressage, classical riding and high school work.
|
Examples of Ethos
If you've ever worked on crafting an argument, you've probably heard the word ethos. But what exactly is ethos? It is the ethical perspective and authority you put into your work. Explore ethos examples in literature, movies and speeches to better understand the concept.
What Is Ethos?
Do you have a strong set of ethics or a strict moral code? At some point in your life, you’ll likely come across a piece of writing or a speech that will appeal to your ethical perspective. When that happens, you’ll know the writer or speaker has employed the rhetorical device of ethos.
Ethos is an ethical appeal and appeals to your sense of right and wrong. It works to build authority with an audience. For example:
This cream has been backed by dermatologists.
This works to build ethos in advertising by showing the product's authority and quality. Let’s talk a little bit more about this mode of persuasion, then dive into several examples of ethos in action.
Elements of Ethos
Is ethos a rhetorical device? Yes, it is. But before you can make an ethical appeal in your writing or speech, you need to build a few key things with an audience. Once these virtues are established, ethos can be very effective within your writing:
• wisdom, intelligence or ability
• moral virtue
• trust
Ethos Examples in Literature
Let’s move from the intangible (ethics and emotions) to the tangible. We’ll take things to a more concrete level and enjoy some examples of ethos, starting with the written word.
To Kill a Mockingbird by Harper Lee
In To Kill a Mockingbird, Scout’s father, Atticus, uses ethos so blatantly, he might as well say, “Hey, jurors, find your ethics and make the right decision.” He’s calling each juror out on the carpet, reminding them that no one man is better than any other man in the courtroom or in society as a whole. This is an ethical ideal that Atticus preached and demonstrated in both his public and private life.
East of Eden by John Steinbeck
John Steinbeck loved to write in the first person. He often became the narrator of his own novels. In East of Eden, he's appealing to people's sense of ethics regarding freedom and building his credibility as a narrator. He's standing firm in his resolve to fight against any ideal that limits the individual and is clearly making a case for others to follow.
The Catcher in the Rye by J.D. Salinger
Ethos isn't always an in-your-face point within literature. Sometimes, it's the steps that a writer takes to make a character believable that carry the ethos. For example, the slang that J.D. Salinger uses in his creation of Holden Caulfield creates his credibility as a rebellious 16-year-old. Holden's slang, like "damn," helps make him a believable narrator.
Examples of Ethos in Movies
It’s hard to have a leading character if the entire audience isn’t captivated by him or her. As such, the leading man or woman in a movie must follow a strict code of ethics. Part of the suspense usually comes into play when those ethics are called into question in a pivotal moment. What will they choose? Let’s take a look at some examples of ethos in movies.
Albus Dumbledore in the Goblet of Fire
When you talk about ethos and trustworthiness in a character, you need look no further than Albus Dumbledore in the Harry Potter series. Not only is he the headmaster of Hogwarts, but the respect he commands in the wizarding world is unmatched. Albus's speech after Cedric's death exemplifies this, since he goes against the Ministry of Magic to share the truth with his students.
Albus Dumbledore: "I think therefore you have the right to know exactly how he died. You see Cedric Diggory was murdered by Lord Voldemort. The Ministry of Magic does not wish me to tell you this. But I think to do so would be an insult to his memory."
Black Panther
The 2018 film Black Panther is the highest-grossing superhero movie of all time in the United States. When establishing ethos and authority, the main character embodies both. Not only is he a king, but he cares about his people. He's a good person and being the right kind of king is important to him. Get a sense of his ethos through this beautiful line by T'Chaka to T'Challa.
Eye in the Sky
It turns out Helen Mirren can play the Queen of England and a British army colonel. In Eye in the Sky, Mirren's character orders a drone strike to take out a group of terrorists in Kenya. However, moral judgments come into play when a woman enters the kill zone. Is one person's life worth the payoff that comes with the extinction of a terrorist cell? Plenty of characters in the film lean on their sense of ethos to make their case for and against the drone strike. The ethos of the characters Watts and Powell is hard to miss.
Colonel Katherine Powell: "You are cleared to engage, lieutenant."
Steve Watts: "She stopped."
Colonel Katherine Powell: "Lieutenant, we have this one opportunity. Let's not lose it."
Steve Watts: "Ma'am, um, she's selling bread."
Carrie Gershon: "Jesus"
Colonel Katherine Powell: "Those men are about to disperse. Engage now!"
Steve Watts: "Ma'am, I understand we have clearance. I have to fire if I see the HVI's moving or when this girl's out of the frag radius but I want to give her a chance to get out of the way."
Colonel Katherine Powell: "Lieutenant, you have clearance! There is a lot more at stake than you see here in this image."
Ethos Appeal Examples in Speeches
It’s no secret a politician must be a master persuader. They have to earn the confidence of legions of people across a city or town, or even across a nation. One of the best ways to do that is to align their ethical code with the people they’re trying to elicit votes from. Let’s take a look at powerful orators who apply ethos to their speeches.
Barack Obama
In his speech in 2012, Barack Obama couldn’t be any more transparent in his use of ethos. He begins his speech by setting down his authority as a president and clarifying what he's learned. You can see this through the excerpt.
Ronald Reagan
In Ronald Reagan's farewell address to the nation in 1989, he made an appeal using ethos. It's one thing to make a statement that knocks on morality's door. It's another thing to make a sound argument, relying heavily upon the people's moral code.
Few things in life are more important than freedom. It's the foundation upon which America was built. Before he left the Oval Office, Reagan made one final appeal. He encouraged government leaders not to overstep their bounds because, if they did, it would be detrimental to the people's right to liberty.
Winston Churchill
Another great who was able to effectively use ethos in his speeches was Winston Churchill. In his 1941 address to the joint session, he establishes his credibility as a speaker. He shows the audience what he shares with common people. This excerpt works well to demonstrate his moral values.
"I may confess, however, that I do not feel quite like a fish out of water in a legislative assembly where English is spoken. I am a child of the House of Commons. I was brought up in my father's house to believe in democracy. 'Trust the people.' That was his message. I used to see him cheered at meetings and in the streets by crowds of workingmen way back in those aristocratic Victorian days when as Disraeli said 'the world was for the few, and for the very few.'"
Ethos and Aristotle
The great philosopher Aristotle divided the act of persuasion into three realms: ethos, logos, and pathos. When attempting to persuade someone, either in written or oral form, you might want to appeal to their ethics (ethos), logic (logos) or emotion (pathos). Any approach may prove successful, but that depends on your intent and audience.
Although Aristotle’s preferred method of persuasion was logos (logic), he wasn’t completely opposed to ethos (ethics). He was, however, averse to pathos (emotion). He said, “The arousing of prejudice, pity, anger, and similar emotions has nothing to do with the essential facts.”
To call someone’s moral code into question is tricky terrain. You never want to launch into an ad hominem attack and pretend to know the inner workings of someone else’s mind. Perhaps that’s why Aristotle was a fan of cold, hard, scholarly facts.
Do You Have East of Eden Ethos?
Sure you do. We all have some sort of code we live by. That’s why it’s a great rhetorical device. Whenever writers can reach through the pages and tug at the audience’s sense of right and wrong, they stand a chance of securing a faithful following. Interested to see how else you can hook readers into your work? Scan through these examples of rhetorical devices. Maybe you’ll pair a little hyperbole with your ethos and see how the audience responds!
|
/Artificial Intelligence: A New Genre of Music
Artificial Intelligence: A New Genre of Music
Summary: AI is changing the way music is heard, and made
Original author and publication date: Amnol Saxena – April 10, 2020
Futurizonte Editor’s Note: Anthropologists believe music is the oldest form of communication among humans, older than words. Perhaps a new way of communication is now emerging.
Source: Archive
From the article:
It is raining outside. You are in your bed, cuddled up with your favorite book and listening to your favorite music.
There is a high probability that the music you have been listening to has been recommended to you by your music streaming application, which perfectly suits the weather outside and your current activity (which is reading).
While music tech companies – like Tencent-backed Joox, QQ Music, KKBox, etc. – seem to have different value propositions in terms of the regional music offered to listeners, their monetization models, and so on, they all sing the same song today.
And that is the song of AI.
Artificial Intelligence has gained wide popularity in the music tech industry in the last few years. The rise in the uptake of AI in core music streaming technology has some obvious, and some other not-so-obvious, reasons.
AI Augments Listeners’ Experiences Through Personalized Playlists
The past:
Each artist had a personality of their own, which they presented through their music – while some loved the jazzy nature of Louis Armstrong, others melted every time they heard Elvis Presley sing one of his love songs. Some headbanged to The Beatles, while others swayed to The Doors.
The present:
Music streaming app companies like Joox, QQ Music and KuGou have been using AI to analyze the preferences of their listeners and recommend specially curated playlists for personalized customer experience.
By using AI-based “recommendation engines”, the music streaming applications analyze the existing history of the listeners and recommend new songs.
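To make the idea concrete, here is a minimal sketch of an item-based collaborative-filtering recommender of the kind described above. The play-count matrix and the listener indices are invented for illustration; the production systems at these companies are far more sophisticated.

```python
# A minimal sketch of a "recommendation engine": item-based collaborative
# filtering over a small, made-up play-count matrix.
import numpy as np

# rows = listeners, columns = songs; entries = play counts (illustrative)
plays = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

# cosine similarity between songs (columns of the matrix)
norms = np.linalg.norm(plays, axis=0)
similarity = (plays.T @ plays) / np.outer(norms, norms)

def recommend(listener: int, top_n: int = 2) -> list:
    """Score unheard songs by similarity to the listener's history."""
    history = plays[listener]
    scores = similarity @ history
    scores[history > 0] = -np.inf  # don't re-recommend songs already played
    return list(np.argsort(scores)[::-1][:top_n])

print(recommend(1))  # song indices listener 1 hasn't played yet
```

The design choice of comparing songs rather than listeners is what lets the similarity matrix be precomputed once and reused for every listener.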
The future:
While AI is being used to provide recommendations today, in my opinion there is a great possibility that the music streaming industry will try to offer features that will read body vitals like heart rate, stress levels, breathing rate, maybe even neurological signals from wearable devices. The feature may offer biometric and physiology-based music.
Imagine you are traveling in a metro, filled with people. The rush to reach the office and the excess number of people makes you anxious. The tiny wearable over your ear may identify your anxiety and offer to play music from your favorite artist but in a softer, calmer melody.
A feedback mechanism may autonomously indicate how this softer melody is affecting your health vitals and further improvement in the music will be done to deliver more curative results.
There is a possibility that AI will be able to vary the song’s melody, genre, tonal quality, harmonic rhythm, etc. to suit your body vitals to try and essentially “heal” you.
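As a purely speculative sketch of such a feedback mechanism, the snippet below maps an elevated heart rate to a calmer playback tempo. Every function, threshold, and callback here is hypothetical; no real wearable SDK is being referenced.

```python
# Hypothetical sketch of the biometric feedback loop imagined above.
def choose_tempo(heart_rate_bpm: float, baseline_bpm: float = 70.0) -> float:
    """Map an elevated heart rate to a calmer playback tempo (0.5x to 1.0x)."""
    stress = max(0.0, min(1.0, (heart_rate_bpm - baseline_bpm) / 50.0))
    return 1.0 - 0.5 * stress

def feedback_step(read_heart_rate, set_playback_tempo) -> None:
    """One sense-and-adapt iteration; a real system would loop continuously."""
    bpm = read_heart_rate()                # hypothetical sensor callback
    set_playback_tempo(choose_tempo(bpm))  # hypothetical player callback

# Example with stubbed callbacks:
feedback_step(lambda: 95.0, lambda tempo: print(f"tempo x{tempo:.2f}"))
```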
READ the complete original article here.
|
The Hitler Diaries Hoax: How Germans Faked Something Everyone Would Read
World War II | May 9, 2020
(Der Spiegel)
On April 25, 1983, Stern magazine, Germany’s version of Life, announced to the world that they'd come into possession of Adolf Hitler's diaries. It was an astonishing find: Until that moment, no one knew that Hitler even kept diaries. According to Stern, the diaries were lost in a plane crash in 1945, but they'd finally been recovered for the world to see. The so-called diaries related some of the most salacious details about Hitler that had ever been heard, from accounts of Eva Braun's pregnancy to his jealousy of Stalin. There was only one problem: There were no lost Hitler diaries. They were forgeries. It took weeks for the truth to come to light, so for a brief period in the early '80s, historians and World War II buffs believed that they'd hit upon the holy grail.
An Amateur Forger With Big Ideas
Born and raised in East Germany, Konrad Kujau was a lifelong criminal with a history of selling Nazi memorabilia dating back to the 1970s. There have always been buyers for this kind of thing, but Kujau discovered that he could ask for higher prices if he used his forgery skills to make the pieces seem more authentic. Kujau sold helmets and weapons that he claimed were used by Hitler and documents that he claimed were written by Joseph Goebbels, but his techniques were fairly crude. He used modern printing technology to create stationery and attempted to make documents look era-appropriate by dipping them in tea.
Not His First Rodeo
Kujau had a large enough customer base that he was able to sell a considerable amount of forged artwork to people who wanted original paintings by Hitler. Kujau assumed that, because he was an amateur painter, his work would pass just as well for Hitler's amateur paintings, and his plan actually worked. He went on to copy the manuscript of Mein Kampf by hand to pass it off as an early version of the text, even though the originals were produced by typewriter. After that sold, he produced a false introduction for a third volume of the book, assuming that one would actually show up at some point.
A True Nazi Believer
As with many people, greed took hold of Kujau in the early '80s, and he started boasting that he'd come across a truly groundbreaking item titled Political and Private Notes From January 1935 Until June 1935. Adolf Hitler. The diaries looked official enough: they were sealed with red wax, a black ribbon, and the initials "F.H." in brass-plated tin.
Kujau explained that his brother, Major General Fischer of the East German Army, found the documents along with a series of other collectibles following a plane crash in Dresden that occurred in 1945. Gerd Heidemann, a reporter for Stern, convinced the paper that the diaries were genuine, and they didn't bother to look into it any further.
It was a bad call. By all accounts, Heidemann was obsessed with the Nazi era, even though his brand of fanaticism was and still is looked down on in Germany. It's unclear if he believed the documents were real or just wanted to craft a kinder image of Hitler. Kujau claimed that Heidemann worked alongside him to milk as much cash as they could from Stern, but that's still contested.
(Whale Oil)
Several Bad Moves
After the initial interest in the single volume of Hitler's diaries, Kujau went on to write at least 60 volumes of the Führer's supposed personal thoughts throughout World War II in his own handwriting. All in all, Stern spent 9.3 million Deutsche marks (about $19 million at the time) on the diaries and moved quickly to syndicate them in international outlets like Newsweek and the Times of London.
To keep the findings a secret, Stern rushed the authentication of the diaries and went ahead with a press conference to announce the newfound works of Hitler on April 25, 1983. Just days later, however, the German Federal Archives reported that Stern was in possession of false documents. Additional research showed that the paper, ink, and string used in creating the "personal" seals were all produced after the Second World War, making it impossible for Hitler to have written the diaries.
The Fiction Fooled Experts
In 1983, Newsweek ran a special report on the diaries, including brief excerpts of the "findings" by Stern. The entries from the diary ran alongside commentary by Professor Gerhard L. Weinberg of the University of North Carolina, an academic expert on Hitler, who stated:
On balance, I am inclined to consider the material authentic ... The new find should be read with a giant saltshaker at hand. There is still room—however unlikely—for suspecting that the whole thing is a hoax. An obvious motive would be money. Another would be an attempt to rehabilitate Hitler.
The diaries included notes on Hitler's home and work life at the time, thoughts on Stalin and Churchill, and commentary on his relationship with Eva Braun. All of it was fiction.
Kujau and Heidemann were put on trial
Forger And Journalist Were Convicted
Both Heidemann and Kujau were put on trial in Germany, which turned into a "he said, he said" mudslinging match between the forger and the quasi-journalist. Kujau admitted his forgeries and spent most of the trial attempting to implicate Heidemann in the scheme to bilk millions out of Stern, while prosecutors went out of their way to show the jury Heidemann's large collection of Nazi memorabilia, which included Nazi-era beer mugs, military hats, and a swastika banner that was supposed to have adorned Hitler's opera box.
Kujau treated the trial like a job interview for his post-prison life. When asked if a watercolor purchased by Heidemann was a legit piece by Hitler, Kujau said, "That was a genuine Kujau-Hitler, your honor." In 1985, both men received four years and eight months in prison.
(History Collection)
No Lessons Learned
By the time Kujau was released from prison, he was legitimately famous in the art world. He opened a gallery of forgeries in Stuttgart, Germany and later moved to Majorca, where he was known to practice forgery on demand for tourists. A kind of gentleman bandit to the end, he passed away in 2000; The Guardian noted that "he might still be around somewhere, having pulled off his last grand sting."
Heidemann didn't receive the same treatment. In 2008, he was reportedly destitute and living alone on a government stipend in a small flat in Hamburg.
Jacob Shelton
|
How to Charge Your Phone Using Your Body As A Phone Charger
You can charge your phone using your body electricity by making a capacitor using items commonly found in your pocket.
You need two silver coins, a paper clip, a piece of paper, your charging cable and of course your phone.
The electricity from your body is stored in the capacitor and then sent to the phone to charge it.
The coins act as the two plates of a capacitor, and the air gap and the paper act as the insulator (dielectric). The paper clip connects the inside of your USB connector to the outer plate of the capacitor. Have fun! Try it out.
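For a sense of scale, here is a back-of-the-envelope estimate of the capacitance of the coin-and-paper stack described above. The coin radius, paper thickness, and permittivity are assumed values, not figures from the article.

```python
# Rough parallel-plate estimate for the coin capacitor described above.
# Assumed values (not from the article): coin radius ~12 mm, paper ~0.1 mm
# thick, relative permittivity of paper ~3.5.
import math

EPSILON_0 = 8.854e-12   # vacuum permittivity, F/m
radius_m = 0.012        # assumed coin radius, m
gap_m = 1e-4            # assumed paper thickness (the dielectric), m
eps_r = 3.5             # assumed relative permittivity of paper

area = math.pi * radius_m ** 2
capacitance = eps_r * EPSILON_0 * area / gap_m
print(f"C = {capacitance * 1e12:.0f} pF")  # on the order of 100 picofarads
```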
Written by How Africa
|
JPS to JBG Converter
JPS is a bitmap format developed by the Joint Photographic Experts Group. A raster stereoscopic JPEG image is stored in a file with a .jps extension, and it is this image that is used to create 3D effects from a 2D image. To obtain the stereoscopic effect, a pair of static images of the same size, one for the right eye and one for the left, is placed side by side and stored as a single image. The two views differ from each other only slightly, in particular in a slightly changed perspective; in other words, they are a pair of copies of the same scene, each from a slightly different angle. Images with the .jps extension are made and stored using a stereo camera, which has at least two lenses. Such an image can be viewed in several ways: say, using special devices that allow each eye to see its own picture, which produces the effect of a volumetric image, or, when the pictures are overlaid on top of each other, using 3D glasses. JPS files can also be viewed using programs that can convert a pair of 2D images into one 3D image.
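Since a .jps file is ordinary JPEG data with the two views stored side by side, the pair can be split with a few lines of image code. Below is a minimal sketch using Pillow, assuming the common cross-eyed layout in which the right-eye view occupies the left half; the file names are illustrative.

```python
# Split a side-by-side .jps stereo pair into its two views using Pillow.
from PIL import Image

pair = Image.open("photo.jps")             # JPS content is standard JPEG data
w, h = pair.size
right_eye = pair.crop((0, 0, w // 2, h))   # left half: right-eye view
left_eye = pair.crop((w // 2, 0, w, h))    # right half: left-eye view
left_eye.save("left.png")
right_eye.save("right.png")
```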
JBG is a raster graphics format, usually referred to as the JBIG 1-bit Raster file type. The purpose of the format is to store images, both black-and-white and colour, and compared to other similar formats it produces smaller files. The .jbg file extension reflects the fact that the image is compressed without loss using JBIG, a standard developed by the Joint Bi-level Image Experts Group. A .jbg file stores a 1-bit bitmap in the JBIG graphic format and is used as a standard for fax machines. JBIG is a lossless image compression standard developed primarily to compress facsimile images, but it can be applied effectively to other classes of images. The format uses a variant of arithmetic compression which is patented by IBM and known as the Q-coder. A JBG file can also be a Jarnal background image: Jarnal is a cross-platform program written in Java for notes, thumbnails, and PDF annotations. It functions like a regular notebook, with the small difference that it can also be used with a stylus on a tablet.
|
UBC Undergraduate Research
Fort Langley Cranberries and a San Francisco Market : A Transition in the Role of Natives from 1827 to 1858 Bethune, William
Early Canadian Native-newcomer relations and interactions are important for painting an accurate picture of the times. The changes that were made to Native lifeworlds are numerous. This paper discusses an example of one of these changes: a transitioning Native economic role in the context of the cranberry trade out of Fort Langley. It briefly examines the susceptibility of Native populations to Newcomer influence. The initial participation that they had in every step of resource trade processes is also touched upon in order to provide contrast to their newfound role. A narrative of the cranberry trade is used to illustrate how shifting markets, a changing Newcomer presence, technological advancements, and Fort Langley's position as a centre of production for the cranberry trade affected the economic role of the Natives. The paper finds that the Native population was in transition from close involvement in every or many steps of the resource trade process in a more localized economy to being a supply of labour for a new Newcomer economy. It demonstrates that changing economic scale and Native-newcomer economic dynamics were inextricably linked.
|
Human characteristics - courage
by Firkin - uploaded on January 20, 2017, 8:15 am
In a 19th century book called 'The Nobility of Life, its graces and virtues, portrayed in prose and verse by the best writers', the illustrator drew scenes to represent human characteristics. This one is supposed to represent courage and was subtitled 'The lifeboat'
|
Analyze a change that has occurred in your subject CJ organization.
Welcome to the fourth SLP in this course! In this module, you will continue to analyze the criminal justice organization that you have focused on in the SLP assignments in the first three modules of CJA502.
Required Reading
Refer to the required and optional readings on [subject of module], the theme for this module.
Please write a 2-3 page paper, not including cover and reference pages, in which you:
Analyze a change that has occurred in your subject CJ organization. Consider the following questions:
How did the change come about?
How did the change agent trigger the change? How was it brought forward? What was done and said? Was it implemented in a participatory or an authoritarian fashion?
How did others in the organization react to the change? Was there resistance to change, and if so what was the nature of the resistance? How was it overcome?
What was employees’ initial reaction to the change? Was it supported initially, or were employees won over?
How is the change working out? Looking back, what would you have done differently if you were the change agent?
Please state the NAME OF YOUR CHOSEN ORGANIZATION on the title page of your paper.
Keys to the Assignment
Examining the organizational management functions, processes, and challenges in the selected criminal justice organization.
|
Why is it important for kids to write?
There are many benefits to creative writing that will help your children:
Imagination And Creativity
Creative writing encourages kids to exercise their creative minds and practice using their imaginations. It improves their ability to come up with alternatives. This broadens their thought processes, which can lead to success in many areas, including problem solving and analysis.
Children often have difficulty understanding and expressing how they feel. Through writing, children have a safe place to explore, and this can be a highly beneficial tool for expressing their feelings.
Communication And Persuasion Skills
A well-written piece involves a lot of thought, planning, organization, and use of language to get a point across. What great practice for kids at laying out their thoughts and trying to clearly convince someone of their point of view.
Creativity seems to diminish as we get older. Those crazy stories of fairy tale princesses battling ferocious dragons to save the town later turn into business prose. So, encourage your children to write, to be creative, to use their imagination, and then praise them when they do. Build their confidence to clearly communicate their point of view, their thoughts, and their feelings. Then think about publishing those precious stories to read over and over again at a kid-friendly site such as Scribblitt.com, and hold onto childhood just a little longer.
About the Author
Andrea Bergstein is the founder of Scribblitt.com, the only self-publishing website truly designed for kids to help them write, illustrate, and professionally publish their own books and comics. Andrea previously held the position of VP Marketing at Nelvana Animation Studios, Director of Marketing at Mattel Toys, and Consultant in Marketing Strategy.
|
The elusiveness of divorce in medieval England: the marital troubles of the last Warenne earl of Surrey (d.1347)
In today’s blog Dr Simon Payling from our Commons 1461-1504 project continues our ongoing look into the marriages of Parliamentarians, both happy and unhappy. Divorce in medieval England was infrequent and difficult to secure, but this did not stop individuals from making an attempt…
Medieval England knew two forms of divorce. The first, and overwhelmingly the most important, was divorce a vinculo matrimonii (from the bond of marriage), a ruling by the Church that a marriage had never been valid. This turned on some default in the couple’s consent to it, either that consent had been coerced or they themselves were canonically incapable of giving it (because, for example, they were underage or too closely related to make a valid marriage). The second, what might be termed a separation, was divorce a mensa et thoro (from bed and board), a ruling that the couple need no longer live together on the grounds, most commonly, of cruelty or adultery. No doubt the latter form had some practical benefits, but in one important sense it corresponded poorly to the realities of the human condition. It did not provide for remarriage until after the death of wife or husband. This might have profound dynastic consequences, or at least threaten them, and fear of those consequences could lead to the most dramatic of events. Witness, to cite the prime example, Henry VIII’s campaign to divorce Katherine of Aragon a vinculo.
From the point of view of the Church, these limited grounds for divorce were necessary protections for the sanctity of marriage and the spiritual health of couples; but their value is not to be understood only from a religious perspective. Aristocratic spouses stuck in unhappy marriages may have struggled to see that value, but for aristocratic society as a whole the difficulty of securing a divorce provided stability and diminished one important potential for conflict. Within this elite, marriages represented not simply the union of a couple but a political and financial arrangement between two families which could not be repudiated without damaging consequences.
The arms of Earl Warenne
This fact that no canonically valid marriage could be undone meant that divorces were infrequent, although this did not prevent some hopeful individuals making the attempt. The most notable and protracted case involved John Warenne, earl of Surrey, who, in 1306, in a match brokered by the ageing Edward I, married the King’s granddaughter, Joan, daughter of Henri, late count of Bar. The marriage was prestigious but also problematic: Joan was only ten years old, half the earl’s age. It also appears that the earl did not have a temperament ideally suited to marriage: in the view of one modern commentator he sowed ‘his wild oats rather freely; indeed, he appears to have had a considerable supply and he did not get rid of them all until his death’. At all events, by 1313, the couple were on terms of active hostility.
Warenne spent the next 30 years attempting to secure a divorce and to divert his considerable inheritance into the hands of his illegitimate issue. At every turn he met with failure. The papal curia resisted his claims that he was too closely related to marry Joan (an impediment for which the couple had had a papal dispensation at the time of their marriage), and that he had been precontracted to his mistress, Maud Nereford, by whom he had two sons. Finally, in apparent desperation, he resorted to a radical gambit. In the mid-1340s, when he was approaching sixty and had been married to Joan for nearly 40 years, he claimed, as a new impediment to the validity of his marriage, that, previous to his marriage, he had had a sexual liaison with Joan’s maternal aunt, Mary (b.1278-d. by 1332), daughter of Edward I, a nun in the Wiltshire house at Amesbury. This sensational and discreditable claim got him no further, and, if it were true, it raises the question of why he did not rely upon it when first seeking divorce. He died in 1347 still technically married to Joan. Not only was he unable to secure divorce but his attempts to pass his inheritance to his illegitimate issue foundered on the resistance of his common-law heir, his nephew, Richard Fitzalan, earl of Arundel. Here lies a certain irony, for Arundel was one of the few members of the higher nobility to successfully sue for divorce a vinculo: in 1344, at about the time his uncle was pursuing his desperate claim in respect of his affair with Mary, the papal curia accepted his claim that he had been forced into marriage with Isabel Despenser, his wife of more than 20 years, when under age.
The Warenne Seal
It would be a mistake to view the pursuit of divorce only in terms of aggressive husbands, like Warenne and Arundel, seeking to rid themselves of wives for some dynastic or political advantage. The boot was sometimes on the other foot. In terms of rank the most notable instance dates from 1271 when Alice Lusignan, niece of the half-blood to Henry III, divorced Gilbert de Clare (d.1295), earl of Gloucester, on the grounds of too close a blood relationship.
More vivid are those instances where a wife cited not kinship, coercion or pre-contract but rather the impotence of her husband. This too went to the issue of consent. The Church was inclined to see impotence as a permanent condition and took the view that one who was incapable of having issue was also incapable of consenting to a marriage. Impotence was thus grounds for divorce a vinculo. The unlucky marital history of Maud, a daughter of the northern baronial house of Clifford, is a case in point. In about 1406 she married another northern lord, John Neville, Lord Latimer, but soon after secured a divorce causa frigiditatis of her husband. While, not surprisingly perhaps, he did not remarry, she found a new spouse in Edward III’s grandson, Richard, earl of Cambridge, who was executed for conspiring against Henry V in 1415. Twice bitten, she did not marry again. A more dramatic example involved a family on the borders of the peerage. In the late 1360s Nicholas, grandson of William, Lord Cantilupe, married the young daughter of Sir Ralph Paynell of Casthorpe (Lincolnshire). She immediately sought divorce on the grounds that her husband was without male genitalia. In the face of her husband’s violent objections, she persevered and won a divorce, going on to remarry. Nicholas fared less well, dying in Avignon in 1371 as he attempted to reverse the papal ruling. His brother and successor, William, was perhaps even more unfortunate in the game of marriage, for his ended in murder. In 1375 he was killed by his wife, Maud Neville. The fact that Paynell was also implicated in the murder suggests that the crime was connected, in part, to his daughter’s unhappy experience as Nicholas’s wife.
The lurid stories of the Cantilupe brothers suggest that divorce could serve to magnify rather than lessen the tensions between families that naturally arose from unhappy marriages. In this context, the barriers the medieval Church placed in the way of its accomplishment may, for the aristocracy, have served a social good.
Further reading
F. Royston Fairbank, ‘The Last Earl of Warenne and Surrey, and the Distribution of his Possessions’, Yorkshire Archaeological Journal, xix (1907)
F. Pedersen, ‘Motives for Murder: the Role of Sir Ralph Paynel in the Murder of William Cantilupe’, in Continuity, Change and Pragmatism in the Law: Essays in Memory of Professor Angelo Forte (2016)
B. Wells-Furby, Aristocratic marriage, Adultery and Divorce in the Fourteenth Century: the Life of Lucy de Thweng (1279-1347) (2019)
|
Course Category: Physics
Showing 1-7 of 7 results
Communication Technologies
Biometrics Technology
LTE Technology
Basic Electronics
This course supplies basic information on how to use electronic components and explains the logic behind solid state circuit design. Starting with an introduction to semiconductor physics, the tutorial moves on to cover topics such as resistors, capacitors, inductors, transformers, diodes, and transistors.
Antenna Theory
Antenna Theory is meant to provide readers with a detailed description of the antennas used in communication systems. After completing this tutorial, you will be able to calculate the parameters of an antenna and decide which antenna suits which type of application and why.
Future Proof Fiber
More commonly known as “Future Proof”, Fiber to the Home (FTTH) is a new technology to deliver a communication signal over optical fiber. It is an efficient and economical substitute for existing copper infrastructure, including telephone and coaxial cable. This technology is effective enough to provide much higher bandwidth to…
Artificial Neural Network
|
Sciatica...so many causes, so little time
What is sciatica?
Sciatica is a term used to describe pain, pins and needles and/or numbness running down the back of one or both legs. It is often caused by something in the lumbar spine or low back region impinging the sciatic nerve.
What causes sciatica?
Sciatica is caused by irritation or damage to the sciatic nerve. The sciatic nerve starts in the lower back and extends all the way to the toes. The picture below shows the path of the nerve through the lower back and legs.
Most commonly the nerve is affected in the lumbar spine, either by the spinal discs or the vertebrae. A disc protrusion or “slipped disc”, osteoarthritis, spinal degeneration, and muscle spasm can all irritate and compress the sciatic nerve.
Treatment Options
Treatment will vary and depends on the cause of your sciatica. The Chiropractor at Dubbo Spine Centre will use different methods of treatment based on the cause of your condition, and of course your preference. Treatment usually involves some soft-tissue/massage-type work and some gentle chiropractic manipulations, typically to the low back, hip, and pelvic regions. Each treatment session will be unique, and we will use the best modalities and techniques to get you back into action.
If you have any questions or are unsure whether chiropractic treatment is right for you, please contact the clinic and we will answer any questions you may have.
|
The Evolution of Linoleum
Monday, March 19, 2018
I received an interesting reference question a few months ago from researchers trying to identify a vintage floral linoleum pattern that was used in a work of art created in the 1960s. They asked if we could find the pattern in some of the linoleum trade catalogs in our library collection.
The researchers had searched Hagley’s online library catalog and identified a range of catalogs dating from 1924 to 1952.
I love old linoleum patterns, having lived in a few old houses where the tastes of many generations of inhabitants could be documented through the layers of the kitchen floor, so I looked forward to this challenge!
I pulled the catalogs in question, and examined them one by one for any matching patterns. Unfortunately I wasn’t able to find the pattern in question, but I fell in love with the quirky designs from each decade all over again.
Linoleum was created by an Englishman named Frederick Walton. The story goes that one night in 1855, he forgot to seal a container of linseed oil that he was using as paint thinner, and a skin of solidified oil formed on top of the oil. He peeled it off and began to think of ways that this rubbery substance might be used. Walton experimented with ways to speed up the lengthy process of creating a solid product.
In 1860 Walton applied for the first of a series of patents, and named this new material ‘Linoleum,’ from the Latin words linum (Latin for flax) and oleum (Latin for oil). Walton established the Linoleum Manufacturing Company Ltd., in 1864, in Staines, England. By 1869, Walton’s Linoleum had become so popular that he was exporting to the rest of Europe and the United States.
The chief competition to Linoleum was oil-cloth, which had been an economical and practical floor covering since the 18th century. Linoleum was a superior product because it was thicker, more waterproof and longer-wearing.
Walton faced competition in the United States when Sir Michael Nairn, an established oil cloth manufacturer in Scotland, opened the American Nairn Linoleum Company in 1887 in Kearny, New Jersey. Nairn manufactured and sold his own brand of linoleum. Walton decided to sue Nairn for trademark infringement for the use of the word that Walton had invented.
In 1878, the British Courts ruled against Walton, partly because he had failed to register linoleum as a trade name, but also because linoleum had become so commonplace, they determined that linoleum was a generic term.
Only fourteen years after linoleum was invented, it had become a ubiquitous feature in homes and commercial buildings. Linoleum is considered to be the first product name to become a generic term.
Another important company in the flooring market in the United States was the Armstrong Cork and Tile Company. Thomas Armstrong, the son of Scottish-Irish immigrants from Derry, started his company in 1860 as a two-man cork-cutting shop in Pittsburgh. The company grew to be the largest cork supplier in the world by the 1890s. Searching for new cork-based products, the company decided to add linoleum floor covering to its line and, in 1908, the first Armstrong linoleum was produced in a new plant in Lancaster, Pennsylvania.
Linoleum layer's handbook
Linoleum layer's handbook, containing detailed directions for laying and caring for floors of modern linoleum.
Lancaster, Armstrong Cork Co., circa 1928.
In 1902, the United Roofing and Manufacturing Co. in Pennsylvania started producing “Congo” roofing, supposedly named for the fact that asphalt used in the roofing material came from the African Congo. The company entered into an agreement with the Barrett Manufacturing Co. to have the Barrett plant in Erie, Pennsylvania manufacture Congo roofing. They soon realized that the three foot wide strips of Congo roofing material could easily be used as floor runners to deaden noise and minimize dust and dirt collection. To differentiate between the Congo roofing and the flooring material, the flooring was given the name Congoleum.
In 1924, the Congoleum Corporation acquired Nairn Linoleum Manufacturing Corporation and changed the name to Congoleum-Nairn, Inc. Together the company produced Congoleum felt-based flooring and Nairn linoleum.
Linoleum had many features that made it a popular choice for a floor covering. It provided a cushioned, comfortable floor, it was easy to clean, and it was durable. Linoleum was produced in an array of colors and patterns, including mosaics, tiles, marbles, and carpet patterns.
Here are some examples from some several years of the Congoleum-Nairn pattern book. First, from 1928:
Rug patterns in the catalog
Detailed rug patterns
Pattern book pages.
Pattern book. Philadelphia, Pa. Congoleum- Nairn, 1928.
And some examples from 1939:
The same pattern in different color options
Utility mat patterns
Congoleum-Nairn pattern book
Congoleum-Nairn patterns, Kearny (N.J.) Congoleum- Nairn, Inc., 1939.
And finally, from 1947:
Rug pattern
1947 Congoleum-Nairn pattern book.
1947 Congoleum-Nairn pattern book. Kearny, N.J. Congoleum- Nairn, Inc., 1947.
Linoleum was eventually replaced in the 1950s and 1960s with plastic-based products. Today, what people may refer to as linoleum is actually made from polyvinyl chloride, which has similar flexibility and durability to linoleum, but is relatively less flammable, and can be produced with greater brightness.
While linoleum is used less commonly today, high-quality linoleum is still available! It is used in many places, like hospitals and healthcare facilities, because it is made of organic materials, is antibacterial, and is non-allergenic. In fact, in 1997, vinyl-flooring giant Armstrong bought the world's second-largest linoleum maker, DLW (Deutsche Linoleum Werke), reentering a market it had left in the 1970s.
A 2001 trade catalog from Forbo Flooring systems states that Forbo is the “world leader in linoleum." The Forbo Group is headquartered in Baar, Switzerland, but Forbo Flooring’s North American headquarters is in Hazelton, Pennsylvania.
Forbo's main brand for its core Linoleum collections is called Marmoleum. The designs range from subtle marbled designs to modern concrete and intriguing striped patterns. The catalog says that “There are over 300 colors and 12 different design structures to choose from.”
With so many choices, and with its natural composition and durability, linoleum may be found under our feet for many years to come.
Forbo Linoleum colorful samples.
Forbo Linoleum Inc. Introducing the new Marmoleum collection, Hazleton, Pa.,The Company, 2001.
Linda Gross is the Reference Librarian at Hagley Museum and Library.
|
In group theory, given a group G under a binary operation *, a subset H of G is called a subgroup of G if H also forms a group under the operation *. More precisely, H is a subgroup of G if the restriction of * to H × H is a group operation on H. This is usually represented notationally by H ≤ G, read as "H is a subgroup of G".
A proper subgroup of a group G is a subgroup H which is a proper subset of G (i.e. H ≠ G). The trivial subgroup of any group is the subgroup {e} consisting of just the identity element. If H is a subgroup of G, then G is sometimes called an overgroup of H.
The same definitions apply more generally when G is an arbitrary semigroup, but this article will only deal with subgroups of groups. The group G is sometimes denoted by the ordered pair (G,*), usually to emphasize the operation * when G carries multiple algebraic or other structures.
This article will write ab for a*b, as is usual.
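A compact way to verify the subgroup condition in a single step is the standard subgroup criterion, stated here with a small worked example (standard textbook material, included for convenience):

```latex
\textbf{Subgroup criterion.} A nonempty subset $H \subseteq G$ is a subgroup of
$G$ if and only if $ab^{-1} \in H$ for all $a, b \in H$.

\emph{Example.} In $(\mathbb{Z}_8, +)$, take $H = \{0, 2, 4, 6\}$. For any
$a, b \in H$, the element $a - b \bmod 8$ is again even, so $H \le \mathbb{Z}_8$.
```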
Read more about Subgroup: Basic Properties of Subgroups, Cosets and Lagrange's Theorem, Example: Subgroups of Z8, Example: Subgroups of S4
|
Why One Should Put an Emphasis on a Puppy’s Socialization
There’s an old saying which states that one can’t teach an old dog new tricks. It’s not true on a purely technical level. But this can be seen as a rather pedantic view of things. The spirit of the statement is instead just saying that youth comes with a certain mental flexibility not found in old age. While the saying is meant as a metaphor for humans, it really is true for dogs as well.
A puppy and an adult dog can both learn about a new setting. But the puppy will adapt to it faster and more fully than the adult dog. This is one of the main reasons why it’s so important to train dogs as early on as possible. And it’s also why socialization is a vital part of this process.
Consider how children learn to talk. Speech is a fairly natural part of human life. But it still takes constant immersion for a child to start to get the hang of it. Now consider how much more difficult it would be for any other species to learn human language. Every species has natural instincts intended to make intercommunication easier.
But it’s quite a different thing for them to have to learn to communicate with a whole other type of mammal. But that’s exactly what we expect from dogs. And they are up for the task. However, success is often dependent on just when the training begins.
An adult dog will struggle if he’s trying to learn how to fit in with humans for the first time. But a puppy is an altogether different story. As with human babies, a puppy is hardwired to pick up on social cues. He’s learning just what it means to be a dog. Instincts will cover some of that. But the more intelligent an animal is the more it’s able to learn when it’s a baby. And as most people know, dogs are extremely intelligent.
When dogs get older that learning phase closes to some extent. Much of this has to do with survival in natural environments. When an animal is a baby it can depend on its mother to protect it.
This means it’s able to spend every minute learning and playing. It’s learns what is and isn’t dangerous by watching mom. But by the time an animal reaches adulthood it can’t afford to just stand idly by while potential dangers approach. It needs to be more wary of the world as a survival method.
Of course all this leads to an important question. How does one actually ensure a dog learns socialization to the full extent of his potential? It begins with looking for local classes. For example, someone in Houston would look for a puppy class in the Houston area.
There’s a few reasons why someone in this position would want to look for a local class. But one of the most significant is the simple fact that different areas have very different environments. Humans don’t have much sense of smell. A human relies on his eyes, while dogs tend to rely on their sense of smell. And they learn about their environment through local scents. Staying closer to home makes them more comfortable. In this case that would mean they’d learn more with a puppy class Houston adjacent.
Finally, by starting to learn about humans early in life they’re also reinforcing another positive behavior. The young puppy gets a chance to learn alongside other puppies. Human children learn best when their peers are part of the process. And puppies are no different in that respect. When they’re around other puppies they’re not just learning from the humans. The puppies are also learning by watching how other puppies adapt to the process.
|
Drug-Producing Countries and U.S. Relations
The United States of America has invested vast amounts of money, worth up to trillions of dollars, to combat the illegal drug trade. (Macias 26). The war has not been won yet. Many have blamed the government for using insufficient tactics to combat a threat that has become monstrous in the US. The war on illegal drugs has led to the arrest and jailing of a number of people in the US, particularly marginalized people and people of color. (Matthew 27-8). More than 25 percent of people locked up in American jails are there on illegal drug-related cases. Although the US is viewed as aggressive in the war on illicit drugs both within and outside her borders, the number of citizens, especially youths, who have been affected by drugs is large. (Macias 28). In fact, the US is among the nations in the world which have been hit hardest by the effects of drugs among citizens. Many people have either died or been maimed due to excessive use of drugs. (Macias 28). The population of drug dependents in the US has also been on the rise over the years. For example, in 2015, close to 52,000 deaths in the US were caused by an opiate epidemic, a clear indication that the US will not give up the fight against illicit drugs until the war is over. (Macias 29). The war on drugs has led to various policies being formulated, for instance the foreign policy on drugs, which has redefined US relations with other drug-producing countries, especially in South America.
History of the American War on Illicit Drugs
The war on drugs both within and outside the US has been almost every President's ambition since the 1960s. Each President in power since then has appeared to be doing something, in their own style, to eradicate the menace of drugs on American soil. President Richard Nixon started the fight against illicit drugs, wanting to eliminate the cultural divide which had existed in the US for decades. (Livingston 355). In the process, he christened a plan to end drugs in the US, aiming at a drug-free America where citizens could prosper. (Livingston 355). However, these wars on illicit drugs have failed terribly to meet their intended goals. In fact, many studies show that drug abuse in America today is higher than it was before, even though the government spends trillions of dollars on the fight and on the rehabilitation of those already affected by drugs.
From the early 1970s until today, the war on drugs has been viewed by many Americans as a controversial matter. It is controversial because, as much as some people support the noble war and wish it to be won, a majority of Americans see the war as unnecessary. (Livingston 356). Those against the war on drugs have argued that humans have the right to use what they want to use, that some drugs have medicinal value and people should therefore not be barred from using them, and that they are dissatisfied with the government's high allocation of money to the war on drugs. (Livingston 358). Although both the federal and state governments have kept changing the policies on the war against illicit drugs over time, the principle behind these policies has remained unchanged: to end illicit drugs on American soil completely, at all costs.
Timeline of Events in the War on Drugs
As early as the 1870s, the US had started adopting laws and policies prohibiting the use of illicit drugs. The early policies criminalized the use of opium, although they were not part of an entire drug eradication program. (Livingston 359). It took almost a hundred years before President Richard Nixon could officially launch the war against illicit drugs in 1971. Later, President Carter promised during his campaigns to decriminalize marijuana if elected but ended up doing little when in office. (Livingston 361). During the 1980s, First Lady Nancy Reagan also campaigned overwhelmingly against the abuse of drugs. In fact, she is remembered for her "just say no" campaign in 1986.
As the war reached its peak, drug prices skyrocketed and the profits made from trafficking were abnormal. International drug traffickers consequently attained immense power and influence, and they could bribe their leaders so as to operate illegally in their home countries, especially in Latin America. By the 1990s, the fight had escalated to another level. Military operations started being used to deal with trafficking: drug ships were intercepted by the US Coast Guard, and paramilitary raids on drug dens became very common. According to the Drug Policy Alliance, about 40,000 such raids took place annually in the 1990s.
In 2000, President Bill Clinton provided one billion US dollars to Colombia. The money was to be used to purchase herbicides to kill the coca plant, purchase helicopters for spraying, and provide training. The reason for taking this action was that the flow of cocaine into America from Colombia had increased. (Livingston 365). Despite these struggles, there are states within the US that have decriminalized marijuana for medical use. The State of California started in 1996, and by 2012 twenty states had followed suit. This has become a tug of war in which the national government fights all forms of illicit drugs while states within the nation decriminalize the very drugs being fought by the Federal Government. (Livingston 370). In 2009, President Barack Obama gave a directive that stopped the Department of Justice from pursuing users of marijuana for medical reasons. The directive is seen as a loophole for non-medical users of marijuana to continue using it under the pretense of medical reasons.
International Drug Trafficking and American Foreign Policies about Illicit Drugs
Drug trafficking is not a problem of the Americas alone, but rather a problem which has hit the whole world. Reports show that international production of both natural and synthetic drugs has increased: opium and marijuana production has doubled, whereas coca production has tripled. These are statistics which scare world leaders and the fighters of drug wars.
Currently, the US has various policies aimed at controlling drug use and trafficking, called international drug control policies. The primary goal of the policies is to reduce the inflow of illicit drugs into the US. The second goal is to control the levels of illicit drugs being cultivated, processed, and supplied for consumption worldwide. The current US international drug trafficking control strategies have the following elements: eradication of the crops used for the production of drugs, interdiction and law enforcement, international cooperation, sanctions and/or economic support, and institutional development.
Eradication of the Crops
Among the policies on drug trafficking the US has long maintained is the reduction of cultivation and production of illicit narcotics by eradicating the source crops. (Fuentes 44). The US supports eradication of marijuana, opium, and coca in many countries through the initiation and supervision of various programs in the producing countries, facilitated by the US government. (Fuentes 44). The US supports the producer countries by offering technical assistance, herbicides, aircraft, and any other support needed to ensure the crops are eradicated completely.
Interdiction and Enforcement of Law
In this case, the US normally assists host countries in seizing narcotic drugs before they reach American soil (Fuentes 32). The US also penetrates and attacks criminal rings to destabilize their economies and impede their efforts to ferry drugs to various locations around the world. Additionally, the US government trains foreign anti-narcotics officers and equips them with necessary materials to help prevent drugs from reaching American borders (Fuentes 34). Other elements of US international drug policy that are also pivotal in the fight against narcotics are international cooperation, economic assistance, and institutional development.
US-South America Relations on War against Illicit Drugs
The US-funded war on drugs in South America has been intense, although the efforts have been limited by geography. Since the 1970s, the US government has invested billions of dollars in the war against illicit drugs in South America, attempting to dismantle the drug cartels within Latin America. The aerial fumigation campaigns and foreign drug policies to which Latin America was subjected have, however, been criticized as the main reason drug trafficking shifted northwards.
In the 1980s, South America was the leading producer of coca, the plant used to make cocaine; Peru, Bolivia, and Colombia produced 65%, 25%, and 10% of the world's coca, respectively. The war led to the deaths of over 15,000 people, some of whom were innocent (Lynch 48). After the big cartels had been broken up, paramilitaries displaced many small-scale farmers in an attempt to control land use and drug trade routes (Lynch 48). Between 2000 and 2010, US expenditure on military and economic aid under Plan Colombia amounted to some 7.3 billion US dollars (Murphy and Davis 2). The program succeeded in bringing many coca-growing areas under control, although to date Colombia remains the world's leading producer of coca and cocaine. Another major consequence of Plan Colombia is that it shifted the drug trade to Peru, Bolivia, and Central America, Mexico included.
New realities are threatening US-South American relations with regard to the fight against illicit drugs. Uruguay is the first Latin American nation ever to decriminalize the use of marijuana (James and George 3-5). Colombia and Guatemala have also been in the limelight supporting Uruguay's move, although they lack the capacity to implement the same policies in their respective countries (Fuentes 348-52). The US has also been viewed as loosening the fight, because many US states have legalized the use of marijuana and 55% of US citizens see nothing wrong with using marijuana for any reason, whether pleasure or medical.
Many Latin American countries today oppose the liberalization of drug laws, even as key countries like Bolivia and Uruguay relent and loosen theirs. The debate about drug trafficking in Latin America is broadening, and reviewers think the influence of US policies in the region is declining (Livingston 378). It is becoming evident that newer approaches should be sought, as American countries debate possible alternatives to the US-led war on drugs. The first step is putting in place a US foreign drug policy which states that the war on drugs is a shared responsibility requiring a cumulative approach by both the US and the affected region (Livingston 379). That some countries, such as Bolivia and Uruguay, have declined US aid, coupled with compromised drug certification and a willingness to resist US pressure, is a clear indication that relations between the US and the Latin American countries have been strained and that the US can no longer dictate the implementation of policies in these countries as it did in the 1970s and 80s.
Another country to the south that has had long relations with the US regarding the fight against drugs is Mexico. In 2006, the US and Mexico launched a crackdown on the drug cartels and organizations, thereby escalating the level of conflict over these illegal businesses (Foley 4). The consequence of these conflicts, however, was that tens of thousands of people were killed in drug-related violence. The US has invested both money and intelligence in Mexico in an effort to combat the menace (Foley 4-5). The primary goal of these wars has been to suppress the flow of illicit drugs into the US. Analysts have found discrepancies in the US's moves and have termed the efforts fruitless (Foley 5). They have therefore suggested new approaches to addressing drug trafficking within the Latin American countries.
Strategies for the Future
As much as the governments of the US and the rest of Latin America have invested considerably in fighting illicit drugs, there is still a high inflow of narcotics into the US (Gomez 354). Anti-narcotics policies and initiatives have not yet yielded the expected results, and much still needs to be done by all the governments, particularly a change of tactics in how this menace is addressed (Gomez 355). Leaders in all the countries remain optimistic that the war will eventually be won. There have been several bilateral agreements between the US and the rest of the Latin American countries concerning the war on narcotics, and it is expected that these agreements will yield positive results (Gomez 358). However, a big threat still emanates from countries that have refused to criminalize certain drugs and are resisting US pressure to enact laws criminalizing the production and processing of illicit drugs; examples are Bolivia and Guatemala.
It is clear from reports and studies that despite all the efforts the US is applying to end the problem of drug abuse, several impediments are countering those efforts. It is prudent to say, therefore, that the US should resort to other means of addressing drug trafficking both within its borders and abroad. Arguably, there is no point in the US fighting marijuana production in other nations when several states within its own jurisdiction have legalized the herb for medical purposes, and little point in using marijuana for medical purposes when alternative drugs exist that can serve in its place. The approaches to fighting these drugs need to be revised if the war is to be won soon.
Works Cited
Foley, J. Bowen. Mexico: Unprecedented Cooperation at Sea. U.S. Department of State Press Release, 2012, pp. 4-6.
Fuentes, Guidetti. Valor Scorned: The Disarming of Highway Drug Interdiction in America. Unpublished paper, 2000, pp. 32-6.
Fuentes, Kelly. "Drug Supply and Demand: The Dynamics of the American Drug Market and Some Aspects of Colombian and Mexican Drug Trafficking." Journal of Contemporary Criminal Justice, vol. 15, no. 4, 2009, pp. 344-55.
Gomez, Cespedes. "The Federal Law Enforcement Agencies: An Obstacle in the Fight against Organized Crime in Mexico." Journal of Contemporary Criminal Justice, vol. 15, no. 4, 2010, pp. 356-9.
James, Finckenauer, and George L. Ward. Mexico and the United States: Neighbors Confront Drug Trafficking. National Institute of Justice, 2007, pp. 1-9.
Livingston, Grace. Inside Colombia: Drugs, Democracy, and War. Rutgers University Press, 2004, pp. 377-89.
Lynch, Tuomy. "War No More: The Folly and Futility of Drug Prohibition." National Review, vol. 53, no. 2, 2001, pp. 47-8.
Macias, Steven. Remarks at Opening of U.S.-Mexico Binational Commission. Washington, DC: U.S. Department of State, 2000, pp. 25-33.
Murphy, Philip, and Davis L. Liston. Improving Anti-drug Budgeting. RAND Publication, 2005, p. 2.
According to Mankiw (2014), consumer utility is a central concept in microeconomics: utility is the satisfaction that consumers derive from consuming a good or service. Understanding whether a consumer's utility increases or decreases is important in shaping that individual's consumption behavior. On the first day of being supplied with my favorite pizza for lunch and dinner, my excitement would be at its highest. As Grady (2012) observes, total utility rises rapidly with the first pizza consumed, then increases at a decreasing rate as more and more pizzas are consumed, and eventually declines sharply. This means my utility would be highest on the first few days of consuming the pizza, would continue to increase at a decreasing rate toward the tenth day as I consume more and more pizza, and would decline sharply toward the thirtieth day of the month.
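This pattern can be made concrete with a small numerical sketch. The code below is purely illustrative and not drawn from Mankiw or Grady; it assumes a logarithmic total utility function, which captures the diminishing-returns phase (though not the eventual sharp decline).

```python
import math

def total_utility(quantity: int) -> float:
    """Illustrative total utility, assuming U(q) = 10 * ln(1 + q)."""
    return 10 * math.log(1 + quantity)

# Marginal utility is the extra satisfaction from one more pizza:
# MU(q) = U(q) - U(q - 1). It shrinks as consumption grows.
for q in range(1, 6):
    mu = total_utility(q) - total_utility(q - 1)
    print(f"pizza #{q}: total utility = {total_utility(q):5.2f}, marginal = {mu:4.2f}")
```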
Supposing my national-brand gasoline company informed me of free gasoline every day for a period of one year, my excitement would again be very high on the first day and would then decrease as more and more days went by. The utility derived from consuming the gasoline would remain high on the tenth day, though it would be decreasing as I continued to consume more and more gasoline. By the thirtieth day of gasoline consumption, my excitement and utility would have declined to almost nothing.
The two scenarios are similar in that both involve the utility derived from consuming a good or service, but they differ in the length of time the good or service is consumed: the longer the time, the greater the decline in utility and in excitement about consuming the good. They can also be considered different in terms of elasticity; pizza is more elastic than gasoline, which is less elastic, since an individual can do without pizza (Rossella, 2010).
Grady, K. (2012). Core Microeconomics, Healthcare and Network Goods. Worth Publishers. http://www.dictionaryofeconomics.com/article?id=pde2008_N000138
Rossella, A. (2010). Differentiated Networks: Equilibrium and Efficiency. Rand Journal of Economics, Vol. 39, No. 3, pp. 747-769.
Holly, C. (2013). The Sherman Antitrust Act: Getting Big Business Under Control. Rosen Publishing Group. http://www.amazon.com/Sherman-Antitrust-Act-Landmark-Legislation/dp/1608704874
McNeese, T. (2012). The Robber Barons and the Sherman Antitrust Act: Reshaping American Business. Chelsea House Publishers. http://www.amazon.com/The-Robber-Barons-Sherman-Antitrust/dp/1604130083
Mankiw, G. (2014). Principles of Microeconomics. South-Western College Publishers.
"Environmental Crisis" in the Late 1960s
Shoreline at Gaviota State Beach, Santa Barbara County, California, under threat from oil processing platforms in the late 1960s
During the late 1960s, an “environmental crisis” took shape as a series of environmental catastrophes and revelatory books transformed the American environmental consciousness. Shortly before the crisis took its final form, several immensely popular books including Rachel Carson’s 1962 Silent Spring and Ralph Nader’s 1965 Unsafe at Any Speed pushed the public to question the relationship between the government, tasked with protecting the public interest, and industries, incentivized to act in their own economic interests. Pervasive smog in New York City and Los Angeles, the Santa Barbara oil spill, and the Cuyahoga River fires made headlines and frightened Americans across the country. Paul Ehrlich’s book The Population Bomb (1968) connected the dots and helped the public realize that the issues were all related: an exponentially growing population meant an increasing demand for limited resources, which led to unwise decisions about resource use. Driven by fear and empowered with information, the American public was poised to demand change.
Smog in Los Angeles and New York City
Nearly 100 million vehicles filled American roads by 1970, producing more than half of the country’s emissions of hydrocarbons and carbon monoxide. In the 1960s, the US did not yet have strong air quality standards, and the emissions of automobiles and industries polluted the air, sometimes resulting in deadly smog. Smog occurs when the compounds that vehicles, power plants, and factories emit interact with sunlight in the atmosphere to create ground-level ozone which can exacerbate respiratory diseases including asthma, bronchitis, and emphysema. Smog production depends on weather conditions but affects those living in cities most severely because of the high density of vehicles and industries found there. In the late 1960s, Los Angeles’s persistent smog problems and deadly smog events in New York City drew public attention to the consequences of air pollution.
Smog in downtown Los Angeles, 1973.
Los Angeles County was home to approximately 4 million cars by the mid-1960s. Because Los Angeles is in a basin, automotive emissions hung in the air and smog became a part of daily life. Public concern about the health effects of smog prompted several groups to mobilize around the issue. A group of Beverly Hills housewives formed Stamp Out Smog in the 1950s and drew attention to the city’s problems through demonstrations in which they wore gas masks. At one media event in 1964, the group’s president cut a cake celebrating smog’s twenty-first birthday. Stamp Out Smog and other groups did more than protest—they lobbied city and state officials. As a result of citizen pressure, California enacted the country’s toughest automotive emission standards in the late 1960s. Los Angeles’s air quality improved gradually, though smog continued to plague the city.
Smog in New York City
New York City, the country’s economic center, suffered from smog under certain weather conditions. Two of its most severe smog events occurred in the Novembers of 1953 and 1966 when, as an article in the EPA Journal reported, “Indian summer heat inversions trapped the chemicals and particulates from industrial smokestacks, chimneys, and vehicles that crammed the city streets, keeping the pollutants from rising.” The 1953 event closed at least two airports, caused respiratory distress among many New York residents, and was later linked to the deaths of between 170 and 200 people, though at the time its severity was not known. Thirteen years later, a similar smog event occurred over Thanksgiving weekend, again linked to the deaths of about 200 people. This event was more widely-publicized, thanks to a growing awareness of the health effects of air pollution, and caused people across the country to connect these severe events to the pollution in their communities. As a result, air quality emerged as an issue of great concern for the growing environmental movement.
The Santa Barbara Disaster and Cuyahoga Fire
Oil platform off the Santa Barbara coast
Michigan Daily: "Interior Secretary Admits Lax Drilling Enforcement"
In January 1969, a Union Oil drilling platform exploded off the coast of Santa Barbara, California, dumping around 100,000 barrels worth of crude oil into the ocean, killing wildlife and washing up on the beaches enjoyed by coastal residents. The episode received massive local and national media coverage, outraged public opinion, and contributed to a widespread sense that Southern California was "losing the fight against pollution of its irreplaceable water resources," in the words of the Los Angeles Times. Local politicians, residents, and environmental organizations had already been demanding action to address the smog crisis, chemical toxins such as DDT, and pollutants in the drinking supply. Investigations revealed that industrial drainage was washing up on Venice Beach, the Greater Los Angeles Zoo was flushing raw sewage directly into the Los Angeles River, and corporations were dumping solid waste into the Inner Harbor. And then during January-February 1969, the residue from the Santa Barbara oil spill covered more than 35 miles of Southern California coastline and had a catastrophic ecological impact. This episode created a public outcry, placed oil corporations on the defensive, and inspired politicians at the local, state, and national levels to take action. The state of California strengthened its environmental regulatory agencies, including fines for corporate polluters, and enacted a moratorium on new offshore oil drilling platforms.
Nixon's Santa Barbara statement: Nixon calls for safer resource development
The Santa Barbara oil spill also had an immediate impact in national politics. Interior Secretary Walter Hickel informed Congress that oil companies should have to pay for the costs of the clean-up and admitted that his agency had been too lax in granting drilling permits in the past. Richard Nixon visited the area and promised that his administration would create stricter regulations on oil drilling in coastal waters in response to what he called the "Santa Barbara tragedy." The president also established a panel of scientists and engineers to advise the federal government on the coastal clean-up operation. Nixon, however, embraced resource conservation management and not ecological preservation, telling Americans that "the obligation to develop our natural resources carries with it the duty to protect our human resources. This country can no longer afford to squander valuable time before developing answers to pollution and oil slicks from wells, tankers, or any other source."
"Congress May End Santa Barbara Oil Leases"
Santa Barbara Activists
Denounce Nixon
Many Californians and other Americans, instead, concluded that public policy should ban oil drilling platforms near highly populated coastal areas altogether. Under pressure, the Nixon administration proposed to cancel all oil drilling permits off the immediate coast of Santa Barbara and designate the waters a marine sanctuary. Democratic politicians in California denounced the administration's actions as "too little, too late." Environmentalists also opposed the administration's plan to compensate oil companies for any financial losses caused by the creation of the sanctuary. Get Oil Out, a citizen activist group based in Santa Barbara, criticized the president because an oil spill beyond the boundaries of the sanctuary would be just as damaging. One hundred thousand people signed the Get Oil Out petition for a complete ban on offshore oil production, and in response California officials went further than the federal government by halting all drilling in state waters. Under pressure, federal agencies placed an effective moratorium on offshore drilling in California as well. Environmental Action, the organization planning the first Earth Day, expressed the consensus view that the "nation's worst oil spill" near Santa Barbara had "helped spur nationwide concern with the environment."
Industry on the Cuyahoga River in Cleveland, Ohio (1969)
The Cuyahoga River fire in June 1969 also resulted from industrial oil pollution in an American waterway and raised concerns about the fate of the Great Lakes in general. The Cuyahoga runs through the center of Cleveland, Ohio, and had long been a dumping ground for waste and sewage from the industrial development along its banks. The river had caught on fire more than a dozen times during the past century, but the 1969 incident drew massive national attention even though the damage was minimal compared to previous occurrences. Time magazine ran a cover story highlighting the Cuyahoga River as the national symbol of the transformation of urban rivers across America into "sewers"--the once great but now filthy Mississippi River, the stinking Potomac in Washington, D.C., and a Cleveland waterway with no visible animal life that "oozes rather than flows." Industrial corporations in Cleveland discharged toxic chemicals and waste into the Cuyahoga with barely any regulatory oversight, and the city's antiquated sewer system spilled 25 million gallons of raw sewage daily into the river, and therefore into Lake Erie as well. The Cuyahoga fire brought national attention to the near "death" of Lake Erie, which was receiving 1.5 billion gallons of noxious waste per day from industrial pollution and sewage overflows in Cleveland, Detroit, and other cities. Along with the Santa Barbara oil spill, the Cuyahoga fire and the pollution of Lake Erie and other Great Lakes helped galvanize environmental consciousness, shift public attitudes, and create the climate for federal laws such as the National Environmental Policy Act of 1969 and the Clean Water Act of 1972.
"The Population Bomb"
Quote in the Michigan Daily about population growth, 1969.
The Population Bomb, 1968
In his controversial 1968 book The Population Bomb, biologist Paul Ehrlich argued the environmental crises of the past decade could be traced to a single cause: overpopulation. It was a simple “numbers game:” the Earth had too many people and too few resources to support them. Ehrlich warned that attempts to stretch the Earth’s resources to support the population would result in mass starvation, epidemics, and, ultimately, the breakdown of social order. He saw population control as the only solution to the problem because “the birth rate must be brought into balance with the death rate or mankind will breed itself into oblivion.” Given the severity of this threat, he argued the US government should incentivize having fewer children and improve access to birth control and abortion. He recommended that the US also encourage population control in developing countries, where birth rates were highest, by making population control measures a condition of foreign aid.
Nixon's Special Message on Population Growth, 1969.
Ehrlich’s ominous message caught the public’s attention and placed the issue on the national agenda. On July 18, 1969, President Nixon addressed Congress in a special message about population growth, which he called “one of the most serious challenges to human destiny in the last third of this century.” He highlighted population growth's impact in the US and abroad, and asked Congress to create the Commission on Population Growth and the American Future to study the issue and discuss how the country would house, educate, employ, transport, and protect “the next hundred million Americans” who would be born in the next fifty years. Nixon warned that "the ecological system upon which we now depend may seriously deteriorate if our efforts to conserve and enhance the environment do not match the growth of the population."
A Pledge of Social Responsibility
"Pledge of Social Responsibility"
Michigan Daily, 1970.
Though the problem began to be studied at the federal level, most of the action on the issue took place in communities and on college campuses. Thousands of college students read The Population Bomb and decided to participate in Earth Day and join the environmental movement. Several leaders of Environmental Action, the group which would coordinate the first Earth Day, credit Ehrlich’s book for motivating them to action. In this “Pledge of Social Responsibility” printed in the Michigan Daily, U-M’s chapter of Zero Population Growth, a movement that sought to bring the population growth rate to zero, asked students to pledge to have two or fewer children “in recognition of the modern crisis of uncontrolled population growth.” As the urgency of the pledge suggests, The Population Bomb helped generate the energy that propelled the country toward Earth Day.
DOCUMERICA: The Environmental Protection Agency's Program to Photographically Document Subjects of Environmental Concern, 1972-1977, National Archives, https://catalog.archives.gov/id/542493
New York Times, November 21, 23, 1953
Michigan Daily Digital Archives
Environmental Protection Agency, “Two ‘Killer’ Smogs the Headlines Missed,” EPA Journal 12:10 (December 1986)
Sarah Gardner, “LA Smog: the Battle against Air Pollution” Marketplace, July 14, 2014, https://www.marketplace.org/2014/07/14/sustainability/we-used-be-china/la-smog-battle-against-air-pollution
Los Angeles Times, January 22, February 26-27, December 31, 1968, September 19, December 21, 1969
National Oceanic and Atmospheric Administration, "45 Years after the Santa Barbara Oil Spill, Looking at a Historic Disaster Through Technology," January 28, 2014, https://response.restoration.noaa.gov/about/media/45-years-after-santa-barbara-oil-spill-looking-historic-disaster-through-technology.html
Keith C. Clarke and Jeffrey J. Hemphill, "The Santa Barbara Oil Spill, A Retrospective," Yearbook of the Association of Pacific Coast Geographers (2002), 157-162, https://www.scribd.com/document/34113674/1969-Santa-Barbara-Oil-Spill
"America's Sewage System and the Price of Optimism," Time (August 1, 1969)
Ohio History Central, "Cuyahoga River Fire," http://www.ohiohistorycentral.org/w/Cuyahoga_River_Fire
Media Resources Center (University of Michigan) Records, 1948-1987, Bentley Historical Library, University of Michigan
Paul Ehrlich, The Population Bomb (New York: Ballantine Books Inc., 1968)
Public Papers of the Presidents 1969
Standard Normal Distribution Table
Use the Normal Table to find the following:
a) Probability (z < -2.65)
b) Probability (z > -1.55)
c) Probability (-2.00 < z < 2.25)
d) Probability ( 1.25 < z < 2.40)
Please show the steps you used to get to the answers.
Solution Preview
a) Probability (z < -2.65)
The table gives probability values only for positive z, so we look up the table area for Z = 2.65.
This is equal to 0.4960, meaning the probability from z = 0 to z = 2.65 is 0.4960.
Since the normal distribution is symmetric, this also equals the probability from z = -2.65 to z = 0.
Probability (z < -2.65) = 0.5 - Prob(z = 0 to 2.65) = 0.5 - 0.4960 = 0.0040
Solution Summary
Using standard normal distribution table has been used to calculate probabilities in the solution.
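As a quick cross-check, the same four probabilities can be computed directly from the standard normal CDF rather than the printed table. This snippet is an addition for convenience, not part of the original solution, and assumes SciPy is available:

```python
# Cross-check of the table lookups using the standard normal CDF,
# Phi(z) = P(Z < z).
from scipy.stats import norm

print(norm.cdf(-2.65))                   # a) P(z < -2.65)        ~ 0.0040
print(1 - norm.cdf(-1.55))               # b) P(z > -1.55)        ~ 0.9394
print(norm.cdf(2.25) - norm.cdf(-2.00))  # c) P(-2.00 < z < 2.25) ~ 0.9650
print(norm.cdf(2.40) - norm.cdf(1.25))   # d) P(1.25 < z < 2.40)  ~ 0.0975
```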
Chlorine in its various forms is the most commonly used pool disinfectant. In addition to killing bacteria, chlorine helps to kill algae and destroy waste material not removed by the filter system. In its natural or elemental state, chlorine is a gas. But chlorine gas is very toxic and hard to handle safely. This is why chlorine is combined with other compounds to form several liquids and solids which are effective sanitizers and safer to handle than chlorine gas. The use of chlorine in a swimming pool as a sanitizer has long been recognized.
No matter what form of chlorine you use, its primary purpose is to combine with water to form what is called FREE CHLORINE, or "Hypochlorous Acid" (HA). It is only the chlorine in the form of HA that sanitizes and disinfects: over 90% of the sanitizing power of any chlorine comes in the form of this all-important HA. HA has certain limitations. It tends to be unstable in the presence of sunlight, high temperatures, and low pH levels, and these conditions cause rapid chlorine loss. The amount of Hypochlorous Acid (disinfectant) your chlorine forms depends on the pool water pH: as the pH increases, the percentage of chlorine that forms disinfectant decreases. For example, at a pH of 8, only 23% of the chlorine is forming disinfectant. This is one reason why pH should be monitored.
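The pH dependence described above can be estimated from the standard acid-base equilibrium for hypochlorous acid. The sketch below is an illustration added here, not part of the original text; it assumes the textbook HOCl/OCl- dissociation with a pKa of about 7.5 at typical pool temperatures, which reproduces the roughly one-quarter figure quoted for pH 8.

```python
def hocl_fraction(ph: float, pka: float = 7.5) -> float:
    """Fraction of free chlorine present as hypochlorous acid (HOCl),
    from the Henderson-Hasselbalch relation for HOCl <-> H+ + OCl-."""
    return 1.0 / (1.0 + 10 ** (ph - pka))

for ph in (7.0, 7.5, 8.0):
    print(f"pH {ph}: {hocl_fraction(ph):.0%} of free chlorine is HOCl")
```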
All swimming pools can develop “CHLORINE DEMAND” when insufficient chlorine is present. Dissolved iron, bacteria, perspiration, algae, pollen spores, and other organic materials create a “CHLORINE DEMAND” in pool water. If enough chlorine were added to form the suitable amount of HA to oxidize all of the pollutants present, a “CHLORINE DEMAND” would no longer exist.
Calcium Hypochlorite is a dry "unstabilized" granular product with a calcium base. It is 65% available chlorine with a pH of 12 to 13. It is slow dissolving and should be dissolved in a bucket of water prior to adding to the pool. Constant use of a calcium-based chlorine will increase the calcium hardness and cause the pH to climb upward. In the presence of heat and sunlight, it is relatively unstable and precipitates out of the water rapidly.
Sodium Hypochlorite is an "unstabilized" liquid chlorine used in many large commercial as well as smaller private pools where handling and storage are not a problem. It has 10% to 15% available chlorine with a pH of 13 to 14. It does not store well in heat or sunlight and should be used as soon as possible after it is manufactured. The best results are obtained when it is fed automatically through a sodium hypochlorite feeder directly into the water lines. Once in the pool water, liquid chlorine is adversely affected by sunlight; for this reason it is best to use liquid only when a well-balanced stabilizer reading is present. This product is sold fresh at 10-12% available chlorine in 2.5-gallon yellow jugs.
Lithium Hypochlorite is an "unstabilized," fast-dissolving granular material with 35% available chlorine and a pH of 2 to 3. It is not generally available in all parts of the country and is used primarily as a shock treatment. Most stores do not carry this type of chlorine due to its high cost.
Sodium Dichlor is a "stabilized" granular chlorine with 56% to 62% available chlorine and a pH of 7. It is highly resistant to sunlight and heat, thus remaining in the water much longer than the previously mentioned chlorines. It can be hand-fed as a shock or granular treatment.
Sodium Trichlor is a "stabilized" tablet or stick form of chlorine with 90% available chlorine and a pH of 2 or 3. It is usually fed by means of a floating chlorinator or a by-pass type erosion feeder connected to the return lines of the filter system. It is a chlorine that is resistant to sunlight and has a relatively long life in the pool water. DownTown Pools carries this product in stock under the name KING SIZE TABS and in granular form as SUPER ALGAE KILL, used primarily to kill black algae or heavy algae problems.
The fertile window is the handful of days during a woman’s menstrual cycle when it is possible to conceive. In order to conceive, sex has to take place within the fertile window. (If this is the first time you are hearing this critical information, you are not alone. Sex ed classes typically teach that sex at any time can lead to pregnancy).
The fertile window exists because egg cells do not last forever after ovulation. Once an egg cell is released from the ovary, that egg cell lasts only about twelve hours. At the most, if your egg cells last exceptionally long and you ovulate two eggs, you are fertile for up to 48 hours. That is a rare exception.
Fortunately, sperm cells can last for up to five days in a supportive vagina/uterus which greatly extends the opportunities for sex that leads to conception.
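Taken together, these lifespans imply a fertile window of roughly six days that closes shortly after ovulation. The sketch below simply encodes that arithmetic; the one-day post-ovulation allowance is a rounding assumption based on the egg lifespans described above.

```python
from datetime import date, timedelta

def fertile_window(ovulation_day: date) -> tuple[date, date]:
    """Estimate the fertile window around an expected ovulation date,
    assuming sperm survive up to 5 days and the egg up to ~1 day
    (per the lifespans described in the text above)."""
    return ovulation_day - timedelta(days=5), ovulation_day + timedelta(days=1)

start, end = fertile_window(date(2024, 6, 14))
print(f"Fertile window: {start} to {end}")  # 2024-06-09 to 2024-06-15
```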
There are many ways of detecting the fertile window.
1. Testing luteinizing hormone (LH) levels. The surge of LH triggers ovulation. LH levels can be detected in urine using LH strips, ovulation predictor kits, or electronic fertility monitors.
2. Basal body temperature monitoring. After ovulation occurs, body temperature rises in response to higher levels of the hormone progesterone. This change occurs after ovulation, so it cannot predict ovulation. However, if you have regular cycles, you can track your temperature change over a few cycles and then know pretty well when you typically ovulate (a simple rule for spotting this shift is sketched after this list).
3. Detecting cervical fluid changes. To help with conception, fertile quality cervical fluid, which looks and feels like egg white, is produced around the time of ovulation. You may see it on your underwear, on toilet paper, or on your finger if you do an internal exam. You know it is fertile quality if it stretches between your fingers.
There are other methods of detecting ovulation, but these are the three most reliable ones that can be used at home.
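For readers who chart their temperatures digitally, the post-ovulatory shift described in method 2 is often flagged with a "three over six" rule: three consecutive readings above the maximum of the six readings before them. The rule's parameters and the code below are illustrative charting conventions, not something prescribed by the sources cited here.

```python
from typing import Optional

def detect_temperature_shift(temps: list[float], rise: float = 0.2) -> Optional[int]:
    """Return the index of the first day of a sustained post-ovulatory
    temperature rise, using the common 'three over six' charting rule:
    three consecutive readings at least `rise` degrees C above the
    maximum of the six readings before them. None if no shift is found."""
    for i in range(6, len(temps) - 2):
        baseline = max(temps[i - 6:i])
        if all(t >= baseline + rise for t in temps[i:i + 3]):
            return i
    return None

# Example chart (deg C): low follicular readings, then a sustained rise.
chart = [36.4, 36.5, 36.4, 36.3, 36.5, 36.4, 36.8, 36.9, 36.9, 36.8]
print(detect_temperature_shift(chart))  # -> 6 (the first elevated reading)
```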
Trying to time sex to correspond to the fertile window is called ‘timed intercourse’ in the scientific literature. A large review of timed intercourse had these findings:
• The overall quality of the evidence we have about whether or not timed intercourse is helpful is poor
• As best the authors could determine given the quality of the evidence, timed intercourse does improve the chance of becoming pregnant
Guidelines for doctors assessing infertility in the US and the UK advise against recommending timed intercourse because it can be stressful. I find this viewpoint to be paternalistic. Some couples might find it stressful but others will feel empowered by having the information and increasing their knowledge of their cycle patterns. Whether or not to do it should be the couple’s decision.
As a naturopathic doctor with a special interest in this area, information from LH levels, basal body temperature charts and pattern of cervical fluid can provide me with useful information that can impact the treatment plan I recommend. In most cases, a little extra stress is a good trade-off for that information.
Dr. Andrea Hilborn is a naturopathic doctor in Kingston, Ontario
Cochrane Database Syst Rev. 2015 Mar 17;(3):CD011345. doi: 10.1002/14651858.CD011345.pub2. Timed intercourse for couples trying to conceive. Manders M1, McLindon L, Schulze B, Beckmann MM, Kremer JA, Farquhar C.
Fertil Steril. 2012 Aug;98(2):302-7. Epub 2012 Jun 13. Diagnostic evaluation of the infertile female: a committee opinion. Practice Committee of American Society for Reproductive Medicine.
Fertility: Assessment and Treatment for People with Fertility Problems. National Collaborating Centre for Women's and Children's Health. Commissioned by the National Institute for Clinical Excellence. Feb 2004. RCOG Press.
Strychnine (/ˈstrɪkniːn/ or /-nɪn/; US mainly /ˈstrɪknaɪn/)[5][6] is a highly toxic, colorless, bitter, crystalline alkaloid used as a pesticide, particularly for killing small vertebrates such as birds and rodents. Strychnine, when inhaled, swallowed, or absorbed through the eyes or mouth, causes poisoning which results in muscular convulsions and eventually death through asphyxia.[7] While it is no longer used medicinally, it was historically used in small doses to strengthen muscle contractions, such as a heart and bowel stimulant,[8] and as a performance-enhancing drug. The most common source is the seeds of the Strychnos nux-vomica tree.
CAS numbers: 57-24-9 (base); 60-41-3 (sulfate)
ECHA InfoCard: 100.000.290
RTECS number: WL2275000
UN number: 1692
InChI: InChI=1S/C21H22N2O2/c24-18-10-16-19-13-9-17-21(6-7-22(17)11-12(13)5-8-25-16)14-3-1-2-4-15(14)23(18)20(19)21/h1-5,13,16-17,19-20H,6-11H2/t13-,16-,17-,19-,20-,21+/m0/s1
SMILES: O=C7N2c1ccccc1[C@@]64[C@@H]2[C@@H]3[C@@H](OC/C=C5\[C@@H]3C[C@@H]6N(CC4)C5)C7
Molar mass: 334.419 g·mol−1
Appearance: white or translucent crystal or crystalline powder; bitter tasting
Odor: odorless
Density: 1.36 g cm−3
Melting point: 270 °C; 518 °F; 543 K
Boiling point: 284 to 286 °C; 543 to 547 °F; 557 to 559 K
Solubility in water: 0.02% (20 °C)[2]
Acidity (pKa): 8.25[3]
Main hazards: extremely toxic
GHS pictograms: GHS06 (toxic); GHS09 (environmental hazard)
GHS signal word: Danger
Hazard statements: H300, H310, H330, H410
Precautionary statements: P260, P264, P273, P280, P284, P301+310
Flash point: non-flammable
Lethal dose or concentration (LD, LC):
0.5 mg/kg (dog, oral)
0.5 mg/kg (cat, oral)
2 mg/kg (mouse, oral)
16 mg/kg (rat, oral)
2.35 mg/kg (rat, oral)[4]
0.6 mg/kg (rabbit, oral)[4]
NIOSH (US health exposure limits):
PEL (permissible): TWA 0.15 mg/m3[2]
REL (recommended): TWA 0.15 mg/m3[2]
IDLH (immediate danger): 3 mg/m3[2]
Strychnine is a terpene indole alkaloid belonging to the Strychnos family of Corynanthe alkaloids, and it is derived from tryptamine and secologanin.[9][10] The enzyme, strictosidine synthase, catalyzes the condensation of tryptamine and secologanin, followed by a Pictet-Spengler reaction to form strictosidine.[11] While the enzymes that catalyze the following steps have not been identified, the steps have been inferred by isolation of intermediates from Strychnos nux-vomica.[12] The next step is hydrolysis of the acetal, which opens the ring by elimination of glucose (O-Glu) and provides a reactive aldehyde. The nascent aldehyde is then attacked by a secondary amine to afford geissoschizine, a common intermediate of many related compounds in the Strychnos family.[9]
A reverse Pictet-Spengler reaction cleaves the C2–C3 bond, while subsequently forming the C3–C7 bond via a 1,2-alkyl migration, an oxidation from a cytochrome P450 enzyme to a spiro-oxindole, nucleophilic attack from the enol at C16, and elimination of oxygen forms the C2–C16 bond to provide dehydropreakuammicine.[13] Hydrolysis of the methyl ester and decarboxylation leads to norfluorocurarine. Stereospecific reduction of the endocyclic double bond by NADPH and hydroxylation provides the Wieland-Gumlich aldehyde, which was first isolated by Heimberger and Scott in 1973, although previously synthesized by Wieland and Gumlich in 1932.[12][14] To elongate the appendage by 2 carbons, acetyl-CoA is added to the aldehyde in an aldol reaction to afford prestrychnine. Strychnine is then formed by a facile addition of the amine with the carboxylic acid or its activated CoA thioester, followed by ring-closure via displacement of an activated alcohol.
Chemical synthesis
As early researchers have noted, the strychnine molecular structure, with its specific array of rings, stereocenters, and nitrogen functional groups, is a complex synthetic target, and has stimulated interest for that reason and for interest in the structure-activity relationships underlying its pharmacologic activities.[15] An early synthetic chemist targeting strychnine, R.B. Woodward, quoted the chemist who determined its structure through chemical decomposition and related physical studies as saying that "for its molecular size it is the most complex organic substance known" (attributed to Sir Robert Robinson).[16]
The first total synthesis of strychnine was reported by the research group of R. B. Woodward in 1954, and is considered a classic in this field.[17][9] The Woodward account published in 1954 was very brief (3 pp.),[18] but was followed by a 42-page report in 1963.[19] The molecule has since received continuing wide attention in the years since for the challenges to synthetic organic strategy and tactics presented by its complexity; its synthesis has been targeted and its stereocontrolled preparation independently achieved by more than a dozen research groups since the first success (see main strychnine total synthesis article).
Mechanism of action
Strychnine is a neurotoxin which acts as an antagonist of glycine and acetylcholine receptors. It primarily affects the motor nerve fibers in the spinal cord which control muscle contraction. An impulse is triggered at one end of a nerve cell by the binding of neurotransmitters to the receptors. In the presence of an inhibitory neurotransmitter, such as glycine, a greater quantity of excitatory neurotransmitters must bind to receptors before there will be an action potential generated. Glycine acts primarily as an agonist of the glycine receptor, which is a ligand-gated chloride channel in neurons located in the spinal cord and in the brain. This chloride channel will allow the negatively charged chloride ions into the neuron, causing a hyperpolarization which pushes the membrane potential further from threshold. Strychnine is an antagonist of glycine; it binds noncovalently to the same receptor, preventing the inhibitory effects of glycine on the postsynaptic neuron. Therefore, action potentials are triggered with lower levels of excitatory neurotransmitters. When the inhibitory signals are prevented, the motor neurons are more easily activated and the victim will have spastic muscle contractions, resulting in death by asphyxiation.[7][20] Strychnine binds the Aplysia californica acetylcholine binding protein (a homolog of nicotinic receptors) with high affinity but low specificity, and does so in multiple conformations.[21]
In high doses, strychnine is very toxic to humans (the minimum lethal oral dose in adults is 30–120 mg) and many other animals (oral LD50 = 16 mg/kg in rats, 2 mg/kg in mice),[22] and poisoning by inhalation, swallowing, or absorption through the eyes or mouth can be fatal. S. nux-vomica seeds are generally effective as a poison only when they are crushed or chewed before swallowing, because the pericarp is quite hard and indigestible; poisoning symptoms may therefore not appear if the seeds are ingested whole.
Animal toxicity
Strychnine poisoning in animals usually occurs from ingestion of baits designed for use against gophers, moles, and coyotes. Strychnine is also used as a rodenticide, but it is not specific to such unwanted pests and may kill other small animals.[23] In the United States, most baits containing strychnine have been replaced with zinc phosphide baits since 1990. In the European Union, rodenticides with strychnine have been forbidden since 2006. Some animals are immune to strychnine, usually species such as fruit bats that have evolved resistance to poisonous alkaloids in the fruit they eat. The drugstore beetle has a symbiotic gut yeast that allows it to digest pure strychnine.
Strychnine toxicity in rats depends on sex: it is more toxic to females than to males when administered via subcutaneous or intraperitoneal injection, a difference due to higher rates of metabolism by male rat liver microsomes. Dogs and cats are the more susceptible domestic animals, pigs are believed to be as susceptible as dogs, and horses can tolerate relatively large amounts of strychnine. Birds affected by strychnine poisoning exhibit wing droop, salivation, tremors, muscle tenseness, and convulsions, with death occurring as a result of respiratory arrest. The clinical signs of strychnine poisoning relate to its effects on the central nervous system. The first signs include nervousness, restlessness, twitching of the muscles, and stiffness of the neck. As the poisoning progresses, the muscular twitching becomes more pronounced and convulsions suddenly appear in all the skeletal muscles. The limbs are extended, the neck is curved in opisthotonus, and the pupils are widely dilated. As death approaches, the convulsions follow one another with increasing rapidity, severity, and duration. Death results from asphyxia due to prolonged paralysis of the respiratory muscles. Following ingestion of strychnine, symptoms of poisoning usually appear within 15 to 60 minutes. The LD50 values for strychnine in animals are listed in table 1 below.
The LD50 values for strychnine in animals
Organism Route LD50 (mg/kg)
Bird-wild[24] Oral 16
Cat[25] Intravenous 0.33
Cat[26] Oral 0.5
Dog[27] Intravenous 0.8
Dog[25] Subcutaneous 0.35
Dog[26] Oral 0.5
Duck[24] Oral 3.0
Mouse[28] Intraperitoneal 0.98
Mouse[29] Intravenous 0.41
Mouse[30] Oral 2.0
Mouse[31] Parenteral 1.06
Mouse[32] Subcutaneous 0.47
Pigeon[24] Oral 21.0
Quail[24] Oral 23.0
Rabbit[27] Intravenous 0.4
Rabbit[25] Oral 0.6
Rat[33] Oral 16.0
Rat[34] Intravenous 2.35
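Because the values in table 1 are normalized per kilogram of body weight, comparing species requires scaling by an assumed body mass. The short sketch below only illustrates how such per-kilogram figures are read; the body masses are hypothetical round numbers, and this is in no way a toxicological tool.

```python
# Reading table 1: LD50 values are per kilogram of body mass, so the
# absolute quantity implied for an animal scales with its weight.
# Body masses here are hypothetical round numbers for illustration.
oral_ld50_mg_per_kg = {"cat": 0.5, "dog": 0.5, "mouse": 2.0, "rat": 16.0}
assumed_mass_kg = {"cat": 4.0, "dog": 20.0, "mouse": 0.02, "rat": 0.3}

for species, ld50 in oral_ld50_mg_per_kg.items():
    total_mg = ld50 * assumed_mass_kg[species]
    print(f"{species}: {ld50} mg/kg x {assumed_mass_kg[species]} kg = {total_mg} mg")
```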
Human toxicity
An 1809 painting depicting opisthotonus
The symptoms of poisoning in humans are generally similar to those in other animals, because the mechanism of action is apparently similar across species. The toxicity of strychnine in humans cannot be ethically studied, so most of what is known comes from cases of strychnine poisoning, both unintentional and deliberate.
After injection, inhalation, or ingestion, the first symptoms to appear are generalized muscle spasms. They appear very quickly after inhalation or injection — within as few as five minutes — and take somewhat longer to manifest after ingestion, typically approximately 15 minutes. With a very high dose, the onset of respiratory failure and brain death can occur in 15 to 30 minutes. If a lower dose is ingested, other symptoms begin to develop, including seizures, cramping, stiffness,[35] hypervigilance, and agitation.[36] Seizures caused by strychnine poisoning can start as early as 15 minutes after exposure and last 12 – 24 hours. They are often triggered by sights, sounds, or touch and can cause other adverse symptoms, including hyperthermia, rhabdomyolysis, myoglobinuric kidney failure, metabolic acidosis, and respiratory acidosis. During seizures, mydriasis (abnormal dilation), exophthalmos (protrusion of the eyes), and nystagmus (involuntary eye movements) may occur.[23]
As strychnine poisoning progresses, tachycardia (rapid heart beat), hypertension (high blood pressure), tachypnea (rapid breathing), cyanosis (blue discoloration), diaphoresis (sweating), water-electrolyte imbalance, leukocytosis (high number of white blood cells), trismus (lockjaw), risus sardonicus (spasm of the facial muscles), and opisthotonus (dramatic spasm of the back muscles, causing arching of the back and neck) can occur. In rare cases, the affected person may experience nausea or vomiting.[23]
The proximate cause of death in strychnine poisoning can be cardiac arrest, respiratory failure, multiple organ failure, or brain damage.[23]
The minimum lethal dose values estimated from different cases of strychnine poisoning are listed below in table 2.
Minimum lethal dose estimates for strychnine in humans
Subject Route Dose (mg)
Human[37][38] Oral 100–120
Human[39] Oral 30–60
Human (child)[40][41] Oral 15
Human (adult)[42] Oral 50–100
Human (adult)[41] Oral 30–100
Human[43] Intravenously 5–10 (approximate)
For occupational exposures to strychnine, the Occupational Safety and Health Administration and the National Institute for Occupational Safety and Health have set exposure limits at 0.15 mg/m3 over an 8-hour work day.[2]
Because strychnine produces some of the most dramatic and painful symptoms of any known toxic reaction, strychnine poisoning is often portrayed in literature and film, including in the works of authors Agatha Christie and Arthur Conan Doyle.[44]
Strychnine may be introduced into the body orally, by inhalation, or by injection. It is a potently bitter substance, and in humans has been shown to activate bitter taste receptors TAS2R10 and TAS2R46.[45][46][47] Strychnine is rapidly absorbed from the gastrointestinal tract.[48]
Strychnine is transported by plasma and erythrocytes. Due to slight protein binding, strychnine leaves the bloodstream quickly and distributes to the tissues; approximately 50% of the ingested dose can enter the tissues within 5 minutes. Also within a few minutes of ingestion, strychnine can be detected in the urine. Little difference was noted between oral and intramuscular administration of strychnine at a 4 mg dose.[49] In persons killed by strychnine, the highest concentrations are found in the blood, liver, kidney, and stomach wall. The usual fatal dose is 60–100 mg strychnine, fatal after a period of 1–2 hours, though lethal doses vary depending on the individual.
Strychnine is rapidly metabolized by the liver microsomal enzyme system, which requires NADPH and O2. Strychnine competes with the inhibitory neurotransmitter glycine, resulting in an excitatory state. However, the toxicokinetics after overdose have not been well described. In most severe cases of strychnine poisoning, the patient dies before reaching the hospital. The biological half-life of strychnine is about 10 hours. This half-life suggests that normal hepatic function can efficiently degrade strychnine even when the quantity ingested is high enough to cause severe poisoning.
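A 10-hour biological half-life implies first-order elimination, in which the fraction remaining after time t is (1/2)^(t/t½). The snippet below is a simple illustration of that relationship added here, not a pharmacokinetic model from the source; it is broadly consistent with the near-complete excretion within 48 to 72 hours described next.

```python
def fraction_remaining(hours: float, half_life_hours: float = 10.0) -> float:
    """First-order elimination: fraction left after `hours`,
    assuming the ~10 h biological half-life quoted in the text."""
    return 0.5 ** (hours / half_life_hours)

for t in (10, 24, 48, 72):
    print(f"after {t:2d} h: {fraction_remaining(t):.1%} remaining")
```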
A few minutes after ingestion, strychnine is excreted unchanged in the urine, and accounts for about 5 to 15% of a sublethal dose given over 6 hours. Approximately 10 to 20% of the dose will be excreted unchanged in the urine in the first 24 hours. The percentage excreted decreases with the increasing dose. Of the amount excreted by the kidneys, about 70% is excreted in the first 6 hours, and almost 90% in the first 24 hours. Excretion is virtually complete in 48 to 72 hours.[50]
There is no specific antidote for strychnine, but recovery from exposure is possible with early supportive medical treatment. Strychnine poisoning demands aggressive management with early control of muscle spasms, intubation for loss of airway control, toxin removal (decontamination), intravenous hydration, and potentially active cooling efforts in the context of hyperthermia, as well as hemodialysis in kidney failure (though strychnine has not been shown to be removed by hemodialysis).[23] Strychnine poisoning in the present day generally results from herbal remedies and strychnine-containing rodenticides.[51] Management should be tailored to the patient's presenting complaint and workup to rule out other causes. If a poisoned person is able to survive for 6 to 12 hours after the initial dose, they have a good prognosis.[23] The patient should be kept in a quiet and darkened room, because excessive manipulation and loud noises may cause convulsions. Because these convulsions are extremely painful, appropriate analgesics should be administered. Treatment involves oral administration of activated charcoal, which adsorbs strychnine within the digestive tract; unabsorbed strychnine is removed from the stomach by gastric lavage, along with tannic acid or potassium permanganate solutions to oxidize strychnine. Activated charcoal may be beneficial, but its benefit remains unproven, and its use should be avoided in any patient with a tenuous airway or altered mental status.[52] Seizures are controlled by anticonvulsants such as phenobarbital or diazepam,[23] along with muscle relaxants such as dantrolene to combat muscle rigidity. Historically, chloroform or heavy doses of chloral, bromide, urethane, or amyl nitrite were used to restrain the convulsions. Because medications such as diazepam are not effective at relieving convulsions in all cases, concurrent use of barbiturates and/or propofol can be employed.
The sine qua non of strychnine toxicity is the "awake" seizure, in which tonic-clonic activity occurs but the patient is alert and oriented throughout and afterwards.[53] Accordingly, George Harley (1829–1896) showed in 1850 that curare (wourali) was effective for the treatment of tetanus and strychnine poisoning. It is important to note that if seizure activity is present, the use of muscle paralysis will only mask the signs of ongoing seizure activity while brain damage continues.[54]
Strychnine was the first alkaloid to be identified in plants of the genus Strychnos, family Loganiaceae. Strychnos, named by Carl Linnaeus in 1753, is a genus of trees and climbing shrubs of the Gentianales order. The genus contains 196 species and is distributed throughout the warm regions of Asia (58 species), America (64 species), and Africa (75 species). The seeds and bark of many plants in this genus contain strychnine.
The toxic and medicinal effects of Strychnos nux-vomica have been well known from the times of ancient India, although the chemical compound itself was not identified and characterized until the 19th century. The inhabitants of these countries had historical knowledge of the species Strychnos nux-vomica and Saint-Ignatius' bean (Strychnos ignatii). Strychnos nux-vomica is a tree native to the tropical forests on the Malabar Coast in Southern India, Sri Lanka and Indonesia, which attains a height of about 12 metres (39 ft). The tree has a crooked, short, thick trunk and the wood is close grained and very durable. The fruit has an orange color and is about the size of a large apple with a hard rind and contains five seeds, which are covered with a soft wool-like substance. The ripe seeds look like flattened disks, which are very hard. These seeds are the chief commercial source of strychnine and were first imported to and marketed in Europe as a poison to kill rodents and small predators. Strychnos ignatii is a woody climbing shrub of the Philippines. The fruit of the plant, known as Saint Ignatius' bean, contains as many as 25 seeds embedded in the pulp. The seeds contain more strychnine than other commercial alkaloids. The properties of S. nux-vomica and S. ignatii are substantially those of the alkaloid strychnine.
Strychnine was first discovered by French chemists Joseph Bienaimé Caventou and Pierre-Joseph Pelletier in 1818 in the Saint-Ignatius' bean.[55][56] In some Strychnos plants a 9,10-dimethoxy derivative of strychnine, the alkaloid brucine, is also present. Brucine is not as poisonous as strychnine. Historic records indicate that preparations containing strychnine (presumably) had been used to kill dogs, cats, and birds in Europe as far back as 1640.[50] It was also used during World War II by the Dirlewanger Brigade against civilian populations.[57]
The structure of strychnine was first determined in 1946 by Sir Robert Robinson and in 1954 this alkaloid was synthesized in a laboratory by Robert B. Woodward. This is one of the most famous syntheses in the history of organic chemistry. Both chemists won the Nobel prize (Robinson in 1947 and Woodward in 1965).[50]
Strychnine has been used as a plot device in the author Agatha Christie's murder mysteries.[58]
Performance enhancer
Strychnine was popularly used as an athletic performance enhancer and recreational stimulant in the late 19th century and early 20th century, due to its convulsant effects. Maximilian Theodor Buch proposed it as a cure for alcoholism around the same time. It was thought to be similar to coffee.[59][60] Its effects are well-described in H. G. Wells' novella The Invisible Man: the title character states "Strychnine is a grand tonic ... to take the flabbiness out of a man." The protagonist replies: "It's the devil, ... It's the palaeolithic in a bottle."[61]
References
1. ^ Retrieved from SciFinder. [May 7, 2018]
2. ^ a b c d e "Strychnine". CDC - NIOSH Pocket Guide to Chemical Hazards.
3. ^ Everett AJ, Openshaw HT, Smith GF (1957). "The constitution of aspidospermine. Part III. Reactivity at the nitrogen atoms, and biogenetic considerations". Journal of the Chemical Society: 1120–3. doi:10.1039/JR9570001120.
4. ^ a b "Strychnine". Immediately Dangerous to Life or Health Concentrations (IDLH). National Institute for Occupational Safety and Health (NIOSH).
5. ^ Wells JC (2008). Longman Pronunciation Dictionary (3rd ed.). Longman. ISBN 978-1-4058-8118-0.
6. ^ Jones D (2011). Roach P, Setter J, Esling J (eds.). Cambridge English Pronouncing Dictionary (18th ed.). Cambridge University Press. ISBN 978-0-521-15255-6.
7. ^ a b Sharma RK (2008). Consice textbook of forensic medicine & toxicology. Elsevier.
8. ^ Munro, J. M. H. (1914-04-18). "Veronal Poisoning: Case of Recovery from 125 Grains". British Medical Journal. 1 (2781): 854–856. doi:10.1136/bmj.1.2781.854. ISSN 0007-1447. PMC 2300683. PMID 20767090. An attempt was made to administer a soap-and-water enema, but the sphincter was not acting. After hypodermic injection of 1/45 grain [1.44 mg] strychnine, a second attempt was made, and a good evacuation of the bowel followed, after which half a pint [284 ml] of normal saline was injected and retained. [...] We decided to adhere to the treatment already commenced--namely, periodical rectal injection of saline and withdrawals of urine by catheter, with oxygen inhalation for cyanosis, and strychnine hypodermically as the pulse weakened.
9. ^ a b c Bonjoch J, Solé D (September 2000). "Synthesis of Strychnine". Chemical Reviews. 100 (9): 3455–3482. doi:10.1021/cr9902547. PMID 11777429.
10. ^ Dewick PM (2009). Medicinal natural products: a biosynthetic approach (3rd ed.). Chichester: A John Wiley & Sons. pp. 377–378. ISBN 978-0-470-74167-2.
11. ^ Treimer JF, Zenk MH (November 1979). "Purification and properties of strictosidine synthase, the key enzyme in indole alkaloid formation". European Journal of Biochemistry. 101 (1): 225–33. doi:10.1111/j.1432-1033.1979.tb04235.x. PMID 510306.
12. ^ a b Heimberger SI, Scott AI (1973). "Biosynthesis of strychnine". Journal of the Chemical Society, Chemical Communications (6): 217–8. doi:10.1039/C39730000217.
13. ^ Tatsis EC, Carqueijeiro I, Dugé de Bernonville T, Franke J, Dang TT, Oudin A, et al. (August 2017). "A three enzyme system to generate the Strychnos alkaloid scaffold from a central biosynthetic intermediate". Nature Communications. 8 (1): 316. Bibcode:2017NatCo...8..316T. doi:10.1038/s41467-017-00154-x. PMC 5566405. PMID 28827772.
14. ^ Wieland H, Gumlich W (1932). "Über einige neue Reaktionen der Strychnos - Alkaloide. XI" [On some new reactions of the Strychnos alkaloids. XI]. Justus Liebig's Annalen der Chemie (in German). 494: 191–200. doi:10.1002/jlac.19324940116.
16. ^ Robinson R (1952). "Molecular structure of Strychnine, Brucine and Vomicine". Progress in Organic Chemistry. 1: 2.
17. ^ Nicolaou KC, Sorensen EJ (1996). Classics in Total Synthesis: Targets, Strategies, Methods. Wiley. ISBN 978-3-527-29231-8.[page needed]
18. ^ Woodward RB, Cava MP, Ollis WD, Hunger A, Daeniker HU, Schenker K (1954). "The total synthesis of strychnine". Journal of the American Chemical Society. 76 (18): 4749–4751. doi:10.1021/ja01647a088.
19. ^ Woodward RB, Cava MP, Ollis WD, Hunger A, Daeniker HU, Schenker K (1963). "The total synthesis of strychnine". Tetrahedron. 19 (2): 247–288. doi:10.1016/S0040-4020(01)98529-1.
20. ^ Waring RH, Steventon GB, Mitchell SC (2007). Molecules of death. Imperial College Press.[page needed]
21. ^ Brams M, Pandya A, Kuzmin D, van Elk R, Krijnen L, Yakel JL, et al. (March 2011). "A structural and mutagenic blueprint for molecular recognition of strychnine and d-tubocurarine by different cys-loop receptors". PLOS Biology. 9 (3): e1001034. doi:10.1371/journal.pbio.1001034. PMC 3066128. PMID 21468359.
22. ^ "Strychnine". INCHEM: Chemical Safety Information from Intergovernmental Organizations.
23. ^ a b c d e f g "CDC - The Emergency Response Safety and Health Database: Biotoxin: STRYCHNINE - NIOSH". Retrieved 2016-01-02.
24. ^ a b c d Tucker RK, Haegele MA (September 1971). "Comparative acute oral toxicity of pesticides to six species of birds". Toxicology and Applied Pharmacology. 20 (1): 57–65. doi:10.1016/0041-008X(71)90088-3. PMID 5110827.
25. ^ a b c RTECS (1935)
26. ^ a b Moraillon R, Pinoult L (1978). "Diagnostic et traitement d'intoxications courantes des carnivores" [Diagnosis and treatment of common poisoning of carnivores]. Rec Med Vet (in French). 174 (1–2): 36–43.
27. ^ a b Longo VG, Silvestrini B, Bovet D (May 1959). "An investigation of convulsant properties of the 5-7-diphenyl-1-3-diazadamantan-6-01 (1757-I. S.)". The Journal of Pharmacology and Experimental Therapeutics. 126 (1): 41–9. PMID 13642285.
28. ^ Setnikar I, Murmann W, Magistretti MJ, Da Re P (February 1960). "Amino-methylchromones, brain stem stimulants and pentobarbital antagonists". The Journal of Pharmacology and Experimental Therapeutics. 128: 176–81. PMID 14445192.
29. ^ Haas H (October 1960). "[On 3-piperidino-1-phenyl-1-bicycloheptenyl-1-propanol (Akineton). 2]". Archives Internationales de Pharmacodynamie et de Therapie. 128: 204–38. PMID 13710192.
30. ^ Prasad CR, Patnaik GK, Gupta RC, Anand N, Dhawan BN (November 1981). "Central nervous system stimulant activity of n-(delta 3-chromene-3-carbonyl)-4 iminopyridine (compound 69/224)". Indian Journal of Experimental Biology. 19 (11): 1075–6. PMID 7338366.
31. ^ Zapata-Ortiz V, Castro De La Mata R, Barantes-Campos R (July 1961). "[The anticonvulsive action of cocaine]" [The anticonvulsive action of cocaine]. Arzneimittel-Forschung (in German). 11: 657–62. PMID 13787891.
32. ^ Sandberg F, Kristianson K (September 1970). "A comparative study of the convulsant effects of strychnos alkaloids". Acta Pharmaceutica Suecica. 7 (4): 329–36. PMID 5480076.
33. ^ Spector WS (1956). Handbook of Toxicology. 1. Philadelphia: W. B. Saunders Company. p. 286.
34. ^ Ward JC, Crabtree DG (1942). "Strychnine X. Comparative accuracies of stomach tube and intraperitoneal injection methods of bioassay". Journal of the American Pharmaceutical Association. 31 (4): 113–5. doi:10.1002/jps.3030310406.
35. ^ Duverneuil C, de la Grandmaison GL, de Mazancourt P, Alvarez JC (April 2004). "Liquid chromatography/photodiode array detection for determination of strychnine in blood: a fatal case report". Forensic Science International. 141 (1): 17–21. doi:10.1016/j.forsciint.2003.12.010. PMID 15066709.
36. ^ Santhosh GJ, Joseph W, Thomas M (July 2003). "Strychnine poisoning". The Journal of the Association of Physicians of India. 51: 739–40. PMID 14621058.
37. ^ Zenz C, Dickerson OB, Horvath EP (1994). Occupational Medicine (3rd ed.). St Louis. p. 640.
38. ^ Palatnick W, Meatherall R, Sitar D, Tenenbein M (2008). "Toxicokinetics of acute strychnine poisoning". Journal of Toxicology. Clinical Toxicology. 35 (6): 617–20. doi:10.3109/15563659709001242. PMID 9365429.
39. ^ Lewis RG (1996). Sax's Dangerous Properties of Industrial Materials. 1–3 (9th ed.). New York: Van Nostrand Reinhold. p. 3025.
40. ^ Goodman LS, Gilman AG, Gilman AM (1985). The pharmacological basis of therapeutics. New York: Macmillan Publishing Co., Inc.
41. ^ a b Gossel TA, Bricker JD (1994). Principles of Clinical Toxicology (3rd ed.). New York: Raven Press. p. 351.
42. ^ Migliaccio E, Celentano R, Viglietti A, Viglietti G (1990). "[Strychnine poisoning. A clinical case]". Minerva Anestesiologica. 56 (1–2): 41–2. PMID 2215981.
43. ^ Ellenhorn MJ, Schonwald S, Ordog G, Wasserberger J, eds. (1997). "Strychnine". Medical Toxicology: Diagnosis and Treatment of Human Poisoning. Baltimore: Williams & Wilkins. pp. 1660–62.
44. ^ "Chemistry in its element - strychnine". Royal Society of Chemistry. Retrieved 18 May 2016.
45. ^ Meyerhof W, Batram C, Kuhn C, Brockhoff A, Chudoba E, Bufe B, et al. (February 2010). "The molecular receptive ranges of human TAS2R bitter taste receptors". Chemical Senses. 35 (2): 157–70. doi:10.1093/chemse/bjp092. PMID 20022913.
46. ^ Born S, Levit A, Niv MY, Meyerhof W, Behrens M (January 2013). "The human bitter taste receptor TAS2R10 is tailored to accommodate numerous diverse ligands". The Journal of Neuroscience. 33 (1): 201–13. doi:10.1523/JNEUROSCI.3248-12.2013. PMC 6618634. PMID 23283334.
47. ^ Meyerhof W, Born S, Brockhoff A, Behrens M (2011). "Molecular biology of mammalian bitter taste receptors. A review". Flavour and Fragrance Journal. 26 (4): 260–8. doi:10.1002/ffj.2041.
48. ^ Lambert JR, Byrick RJ, Hammeke MD (May 1981). "Management of acute strychnine poisoning". Canadian Medical Association Journal. 124 (10): 1268–70. PMC 1705440. PMID 7237316.
49. ^ Gupta RC (2009-01-01). Handbook of toxicology of chemical warfare agents. Elsevier/Academic Press. ISBN 978-0-12-800159-2. OCLC 433545336.
50. ^ a b c Gupta RC, Patocka J (2009). Handbook of Toxicology of Chemical Warfare Agents. London: Academic Press. p. 199. ISBN 9780080922737.
51. ^ Katz J, Prescott K, Woolf AD (September 1996). "Strychnine poisoning from a Cambodian traditional remedy". The American Journal of Emergency Medicine. 14 (5): 475–7. doi:10.1016/S0735-6757(96)90157-6. PMID 8765115.
52. ^ Smith BA (1990). "Strychnine poisoning". The Journal of Emergency Medicine. 8 (3): 321–5. doi:10.1016/0736-4679(90)90013-L. PMID 2197324.
53. ^ Boyd RE, Brennan PT, Deng JF, Rochester DF, Spyker DA (March 1983). "Strychnine poisoning. Recovery from profound lactic acidosis, hyperthermia, and rhabdomyolysis". The American Journal of Medicine. 74 (3): 507–12. doi:10.1016/0002-9343(83)90999-3. PMID 6829597.
54. ^ "Rapid Sequence Termination (RST) of status epilepticus". 2014-06-04.
55. ^ Pelletier PP, Caventou JB (1818). "Note sur un nouvel alkalai" [Note on a new alkali]. Annales de Chimie et de Physique (in French). 8: 323–324.
56. ^ Pelletier PP, Caventou JB (1819). "Mémoire sur un nouvel alcali vegetal (la strychnine) trouvé dans la feve de Saint-Ignace, la noix vomique, etc" [Memoir on a new vegetable alkali (strychnine) found in the St. Ignatius bean, the nux-vomica, etc)]. Annales de Chimie et de Physique (in French). 10: 142–176.
57. ^ Grunberger R (1971). The 12-Year Reich: A Social History of Nazi Germany, 1933–1945. Holt, Rinehart and Winston. p. 104.
58. ^ "Killed by Agatha Christie: Strychnine and the detective novel". Open university. Retrieved 27 July 2017.
59. ^ Inglis-Arkell E (11 June 2013). "Rat poison strychnine was an early performance-enhancing drug". io9. Gawker Media. Retrieved 23 Nov 2015.
60. ^ "Strictly strychnine - medicines to be avoided by athletes".
61. ^ Wells HG. The Invisible Man.
|
September 23, 2021
Hankering for History
Nazi War Criminals Hanged to Death
John C. Woods
John C. Woods Preparing the Gallows
Here is a new one for you. What happened in history, yesterday… I know, I know. A day late, and a dollar short. But it was too good not to go back and cover it. Actually, I looked into this specific event months ago and had been waiting for yesterday to write about it. Unfortunately, I spent eight hours traveling yesterday and didn’t get the opportunity to write about it.
Yesterday in history, October 16, 1946, ten Nazi war criminals were hanged as a result of the Nuremberg Trials. While the trials themselves are historically famous, a part of them that is often overlooked is–for lack of a better term–the ‘brutal’ executions that these men faced. There are some who give in to conspiracy theories and believe that the job was purposely botched; there are those who blame it on hurried and shoddy craftsmanship of the nooses and the gallows; and of course there are those who believe that the hangings were unfortunate, but accidentally brutal.
The hangman was John C. Woods, an American master sergeant who, as the hangman for the Third United States Army, would execute three hundred and forty-seven (347) criminals over his fifteen years of service. Woods, with the help of Joseph Malta, a United States Army military policeman who volunteered to assist him, hanged all ten men on two separate gallows. The gallows and the hangman's nooses were constructed by Woods, and the executions took place in the prison gymnasium. Unfortunately, there were issues with both the gallows and the nooses.
The gallows had a small trapdoor with improper bungs. The rubber bungs on a trapdoor are there to make sure that the door doesn't swing back after release. This one did, slapping the hanged men in the face or the back of the head as it swung back. As you can see in the pictures (below), many of the men showed signs of bruising and bleeding on their faces. While these men were being slapped with wood, they slowly suffocated. Instead of measuring out each rope for a proper drop and an instant neck-break, Woods used the standard military six-foot drop. This technique was outdated and not nearly as effective as the contemporary British technique developed by Albert Pierrepoint, which was tailored to each person's height and weight. Pierrepoint's hanging method resulted in an almost instantaneous death, unlike Woods'. With Woods' standard drop, the condemned men took ten to twenty minutes to slowly and painfully suffocate to death. The following are excerpts from The Execution of Nazi War Criminals by Kingsbury Smith. Smith was one of the eight reporters allowed to be present during the execution.
At that instant the trap opened with a loud bang. He went down kicking. When the rope snapped taut with the body swinging wildly, groans could be heard from within the concealed interior of the scaffold. Finally, the hangman, who had descended from the gallows platform, lifted the black canvas curtain and went inside. Something happened that put a stop to the groans and brought the rope to a standstill. After it was over I was not in the mood to ask what he did, but I assume that he grabbed the swinging body and pulled down on it. We were all of the opinion that Streicher had strangled.
More than one of the hanged men was described as “moaning” as he slowly died.
With both von Ribbentrop and Keitel hanging at the end of their rope there was a pause in the proceedings. The American colonel directing the executions asked the American general representing the United States on the Allied Control Commission if those present could smoke. An affirmative answer brought cigarettes into the hands of almost every one of the thirty-odd persons present.
The hangings were taking so long that there were multiple pauses, and even smoke breaks.
As the black hood was raised over his head Kaltenbrunner, still speaking in a low voice, used a German phrase which translated means, ‘Germany, good luck.’
His trap was sprung at 1.39 a.m.
There was a brief lull in the proceedings until Kaltenbrunner was pronounced dead at 1.52 a.m.
From here you can see that it took thirteen (13) minutes for Kaltenbrunner to be pronounced dead. And as for the pause? They had to pause because while Kaltenbrunner was dying, Keitel was still suffocating. It was twenty-four (24) minutes before they could pronounce Keitel as deceased.
As for John C. Woods, he had two quotes that showed his feelings about the hangings:
I hanged those ten Nazis… and I am proud of it… I wasn’t nervous…. A fellow can’t afford to have nerves in this business…. I want to put in a good word for those G.I.s who helped me… they all did swell…. I am trying to get [them] a promotion…. The way I look at this hanging job, somebody has to do it. I got into it kind of by accident, years ago in the States….
Ten men in 103 minutes. That’s fast work.
15 thoughts on “Nazi War Criminals Hanged to Death”
1. John C. Woods is my Great Uncle. I enjoyed your story. Thanks for the pictures of the men hanged; it makes it more real. Growing up I have heard the story, but this brings more reality to it. I never met John, but I grew up knowing a great lady, my Great Aunt Hazel (his wife). I am curious about Uncle John's death by accidental electrocution… Hmm, that would be a story?
1. My dad, age 93, was a young private on that Pacific island when Sgt. Woods died, and he offers to share his memories with anyone who may be interested. Michael White
2. We were never quite as good at hanging as the British… but then again we never really gave a damn if a condemned man suffered or not.
3. The judicial death sentence is just that! Death. Nothing else. No tortuous strangling, unnecessary delay or psychological torment.
|
Sharing tools for stakeholder engagement and collaboration at the Chesapeake Watershed Forum
Suzanne Webster ·
4 December 2017
Environmental Literacy | Environmental Report Cards | Science Communication | Applying Science |
Last month, several IAN staff members traveled to the National Conservation Training Center in Shepherdstown, West Virginia, to attend the Chesapeake Watershed Forum. The Forum is an annual regional conference hosted by the Alliance for the Chesapeake Bay. This year IAN was represented by Caroline, Emily, Dylan, Vanessa, and Suzi.
The 2017 conference theme was Healthy Lands, Healthy Waters, Healthy People. Presenters and attendees were encouraged to consider how the Chesapeake Bay and its health affect communities living within the watershed. With this theme in mind, IAN was excited to present two sessions this year, each focused on how different aspects of science communication can help individuals and communities explore their own connections with the natural world. Emily will discuss her session on art and science in a forthcoming blog post, and I will focus the current discussion around our team-led session on stakeholder engagement and collaboration.
Our presentation team sharing a meal before our session.
During our 90-minute session, “Collaboration in practice: Tools for engagement,” we shared several strategies for bringing together diverse stakeholders and having robust conversations that translate into collective actions that protect environmental resources and improve the social, mental, and physical health of communities. In order to help the environment, scientists must accept that it is insufficient to simply study an ecosystem and make management recommendations based solely on observations of the natural world. Instead, protecting the environment necessitates both understanding how people value nature and managing human use of natural resources.
It is therefore important that we, as science communicators, move beyond one-way communication and dissemination of information, and instead encourage two-way dialogue, co-creation of knowledge, and collaborative problem solving between scientists and others. Inviting other stakeholders into the science and management discourse will lead to a more holistic understanding of the natural world and our relationship with it, and consequently, more effective management. Encouraging more representative participation in science could have the added benefits of increasing public environmental stewardship and support of science or management policies, as well as uniting and empowering communities.
Engaging diverse groups of people in environmental science has many benefits.
In recent years, our work at IAN has increasingly involved communicating WITH others, rather than simply communicating TO them. During our interactive session at the Forum, we shared several tools for engaging groups in environmental science:
1. Role-playing exercises
Role playing requires participants to assume the roles of characters in a fictional setting. During a game, exercise, or skit, participants must consider situations and make decisions with the frame of mind of their fictional persona. An example of a role-playing exercise that we use at IAN is the "Get the Grade!" report card game. During this game, players are each assigned to play the part of a particular stakeholder who is involved in creating a report card for a fictional ecosystem. They must take turns voting on policy decisions, forming strategic partnerships, and making decisions that are in the best interest of their character, considering what their assigned stakeholder perceives to be the values and threats to the ecosystem.
We find that these types of activities are useful for engaging groups of people during workshops. Role-playing games are fun ice-breakers and provide participants with the opportunity to have a shared immersive learning experience that encourages everyone not only to participate and voice opinions in a safe, structured, judgement-free atmosphere, but also to be more open-minded and accommodating to other perspectives. Participants also enjoy many other benefits of role-playing—they can practice integrating new knowledge to solve complex problems, identifying shared goals, making decisions effectively as a group, empathizing with people with varying viewpoints, and other communication skills such as listening and persuasive speaking.
Participants play the "Get the Grade!" report card game.
2. Citizen science
Citizen science is a way to involve non-researchers in the scientific research process. Citizen science projects are extremely variable—for example, some projects, such as BioBlitzes, are one-time events that encourage volunteers in a small geographic area to participate in the data collection phase of biodiversity research, while other projects, such as water quality monitoring efforts with the Chesapeake Monitoring Cooperative, can be much more long-term and large-scale, and in some cases, can involve a far deeper degree of community engagement. Citizen science is a great tool to unite researchers with management practitioners, interested members of the public, and professionals from other fields. Citizen science also provides an active learning opportunity and promotes a deeper community stewardship ethic that can help diverse groups of people become more informed and empowered to improve their environment.
3. SNAP
We play SNAP with groups of stakeholders in order to articulate individual priorities and concerns, understand how individual opinions fit within the context of the group, and identify areas of agreement. At the Forum, we asked participants “What do people in your communities value about the Chesapeake Bay?” and instructed each person to provide a few one- or two-word answers written on separate Post-it notes.
We shared answers as a large group and when anyone heard another answer that matched their own, they yelled "Snap!" and joined the matching Post-its to form a chain. Individual Post-its and chains were organized into diverse, emergent categories, such as "Human Health," "Biodiversity," and "Economy". Finally, participants were asked to place two stickers on the categories that they felt deserved to be highest priority, considering their own values, as well as what they learned from the other responses and discussions during the group activity.
SNAP helped our group identify major themes and visualize consensus. For example, during our session, the categories of "Recreation" and "Food" proved to be the most populated by Post-it notes; however, the "Water Quality" category was eventually voted the group's highest priority, despite receiving fewer than a quarter as many Post-it notes as the most-populated categories. Another similar technique that we use to visualize consensus hidden within a group of one-word answers is to create a word cloud, which clusters all of the words and adjusts the font size according to how often they are repeated.
We identified many values of the Chesapeake Bay and used SNAP to sort responses and visualize consensus. (Photos by M. Smith (top), S. Spitzer (middle), and C. Donovan (bottom))
4. Conceptual diagramming
Collaboratively creating conceptual diagrams is a way to creatively integrate multiple stakeholders' ideas and combine diverse perspectives into a cohesive image that represents a shared vision of an ecosystem, or a "coupled" system. During this process, a group of stakeholders receives a base map of an ecosystem and works together to "fill in" the values and threats of the place of interest, using symbols or other artistic representations. This process necessitates planning, discussion, flexibility, and strategy in order to incorporate everyone's priorities, and also requires that stakeholders communicate (and listen!) effectively so the final version of the diagram is representative of the entire group.
5. Stakeholder mapping
Finally, stakeholder mapping is a useful activity that encourages groups to take a step back from thinking about the specifics of an ecosystem, and instead focus on the many people who hold a stake in the ecosystem. During this activity, participants identify all stakeholders relevant to a particular situation and then "map" them according to their degree of influence and interest in the situation. The process of stakeholder mapping is useful for identifying who should be involved in environmental management or reporting efforts and how stakeholders should prioritize either engaging or empowering one another, depending on current positions on the map.
This activity is particularly useful after completing one of the other activities described, such as SNAP or conceptual diagramming, which identify values and threats to an ecosystem. Once values are collaboratively identified, for example, a stakeholder mapping exercise can prompt people to consider such questions as, “Who should be included in a discussion to preserve these values?” or "What communication strategies could help our stakeholder network accomplish our collective goal?" Stakeholder mapping can also be extended to include a basic social network analysis activity, during which groups can use the map to identify any interest groups by noticing clusters of similarly-positioned stakeholders, and also to visualize network connectivity by drawing lines between stakeholders and noticing who is more centrally vs. peripherally connected and who has the strongest relationships with other stakeholders.
Participants consider degree of interest and influence to create a stakeholder map (Photo by E. Nastase (top), map co-created by workshop participants (bottom))
Overall, we believe our conference session was a success. We had fun teaching participants about several tools that they can use to engage groups of people in their own places of work, and we especially enjoyed the opportunity to give attendees a taste of three games we use often in our own workshops. We are already looking forward to next year's Forum!
About the author
Suzanne Webster
Suzi Webster is a PhD Candidate at UMCES. Suzi's dissertation research investigates stakeholder perspectives on how citizen science can contribute to scientific research that informs collaborative and innovative environmental management decisions. Her work provides evidence-based recommendations for expanded public engagement in environmental science and management in the Chesapeake Bay and beyond. Suzi is currently a Knauss Marine Policy Fellow, and she works in NOAA’s Technology Partnerships Office as their first Stakeholder Engagement and Communications Specialist.
Previously, Suzi worked as a Graduate Assistant at IAN for six years. During her time at IAN, she contributed to various communications products, led an effort to create a citizen science monitoring program, and assisted in developing and teaching a variety of graduate- and professional-level courses relating to environmental management, science communication, and interdisciplinary environmental research. Before joining IAN, Suzi worked as a research assistant at the Marine Biological Laboratory in Woods Hole, MA and received a B.S. in Biology and Anthropology from the University of Notre Dame.
Next Post > The Belmont Forum goes to Brazil
Post a comment
|
KAREN ELPHICK. United States Senate shows President a red light on war powers as Labor promises a war powers inquiry in Australia (Australia Parliamentary Blog 21.12.2018)
Jan 17, 2019
For several years, Yemen has been in a state of civil war between a Saudi-led coalition supporting the Yemeni Government and Houthi forces. The US armed forces are not directly engaged in Yemen but have been supporting Saudi military efforts with aerial targeting and intelligence sharing.
On 13 December 2018, the United States (US) Senate passed Resolution S.J. Res. 54 – 115th Congress (Res. 54) directing the US President to withdraw US Forces from hostilities in Yemen. Res. 54 provides (emphasis added):
This joint resolution directs the President to remove U.S. Armed Forces from hostilities in or affecting Yemen within 30 days unless Congress authorizes a later withdrawal date, issues a declaration of war, or specifically authorizes the use of the Armed Forces. Prohibited activities include providing in-flight fueling for non-U.S. aircraft conducting missions as part of the conflict in Yemen. This joint resolution shall not affect any military operations directed at Al Qaeda.
The President must submit to Congress, within 90 days, reports assessing the risks that would be posed: (1) if the United States were to cease supporting operations with respect to the conflict in Yemen, and (2) if Saudi Arabia were to cease sharing Yemen-related intelligence with the United States.
Res. 54 has not yet been debated in the House of Representatives and currently has no legal force. However, it is a historic public message from the Senate: this is the first time since the US War Powers Resolution (WPR) was made in 1973 that a measure introduced to restrict a President's use of armed force has proceeded to a vote and passed either chamber of Congress.
The Law Library of Congress explains the background to the WPR:
The Constitution of the United States divides the war powers of the federal government between the Executive and Legislative branches: the President is the Commander in Chief of the armed forces (Article II, section 2), while Congress has the power to make declarations of war, and to raise and support the armed forces (Article I, section 8). Over time, questions arose as to the extent of the President’s authority to deploy U.S. armed forces into hostile situations abroad without a declaration of war or some other form of Congressional approval. Congress passed the War Powers Resolution in the aftermath of the Vietnam War to address these concerns and provide a set of procedures for both the President and Congress to follow in situations where the introduction of U.S. forces abroad could lead to their involvement in armed conflict. …
U.S. Presidents have consistently taken the position that the War Powers Resolution is an unconstitutional infringement upon the power of the executive branch.
The WPR asserts that the President has power to use the armed forces in ‘hostilities,’ but only where there has been:
• a declaration of war
• specific statutory authorisation, or
• an attack on the US or its armed forces
According to Dr Stephen Schwalbe, an academic at the American Public University, the WPR also requires:
• the President to consult with Congress before deploying armed forces abroad
• the President to provide written notification to the Congress within 48 hours of deploying armed forces abroad and
• all armed forces to be withdrawn after 60 days, unless there is a declaration of war or a Congressional joint resolution to continue military operations.
Despite disputing the constitutional validity of the WPR, Presidents since Nixon have generally chosen to formally advise Congress of deployments where force is likely to be used, but usually after some delay and deliberately not in response to the WPR requirements.
On 13 December 2018, the House of Representatives attached a procedural rule change to an important Agriculture Improvement Bill to ensure that there would be no expedited consideration of a House Resolution almost identical to Res. 54. This does not defeat Res. 54, but it suggests the resolution is unlikely to be considered until the next session of Congress commences in January 2019. Even if Res. 54 passes the House in the new year, President Trump has the power to veto the Bill, which would send it back to Congress for a vote to override the veto. To pass a bill over the President's veto requires a two-thirds vote in each Chamber. Res. 54 was passed in the Senate by 56 votes to 41, so it is unlikely to achieve a two-thirds vote. Even if Congress voted to override the veto, its next action is uncertain.
Although judicial review of executive action is theoretically available, so far the US courts have not allowed judicial review of military deployment decisions, and no congressional effort to hold the President accountable for breaching the WPR has succeeded. The only effective enforcement powers Congress has are to cut off funding for military operations completely or to remove the President from office. Both options are, in this context, almost unachievable for pragmatic political reasons. The WPR has therefore not been effective as a legal tool to control the war powers of the executive government.
Parliamentary powers in Australia
Australia does not have a law similar to the WPR. The uniform practice in Australia has been for the Executive Government to commit Australian military forces to operations. No law requires the Government to consult or inform Parliament when deploying military forces. However, on most occasions the Prime Minister or the Minister for Defence has informed Parliament of Cabinet’s decision through a ministerial statement or tabled paper; the Governor-General has rarely issued an order or proclamation. The Executive’s decision to declare war or deploy forces overseas has on most occasions been taken prior to Parliament debating the issue. Historical examples of the process are documented in Parliamentary involvement in declaring war and deploying military forces. The current practice is increasingly being contested.
There have been unsuccessful attempts since 1985 by the Australian Democrats and more recently by the Australian Greens to amend the Defence Act 1903 to remove the exclusive power of the government to commit Australia to war. The latest attempt is the Defence Legislation Amendment (Parliamentary Approval of Overseas Service) Bill 2015.
Also in 2015, the Australians for War Powers Reform (AWPR) published a free online book, How does Australia go to war? A call for accountability and change. In September 2018, AWPR launched a public campaign, Be Sure On War: No War Without Parliament, with a petition which aims to ‘[s]eek reform of the War Powers under which the executive government can commit troops to international conflict’.
More recently, the National Labor Party Conference passed the following resolution on 18 December 2018:
A Shorten Labor Government will refer the issue of how Australia makes decisions to send service personnel into international armed conflict to an inquiry to be conducted by the Joint Standing Committee on Foreign Affairs, Defence and Trade. This inquiry would take submissions, hold public hearings and produce its findings during the term of the 46th parliament.
The Australian Constitution does not explicitly refer to the power of the Executive to declare war or deploy military forces. At the time the Constitution was written, the Royal prerogatives with respect to foreign affairs and war ‘may have been more properly regarded as falling within the executive power of the British Imperial Government’. From settlement to 1913 the Royal Navy provided Australia's naval defence, and until then Australia had no independent capacity to deploy military forces overseas.
Interpretation of the executive powers under section 61 of the Constitution has changed over time. Section 61 is now considered to encompass all traditional Royal prerogatives. The power to authorise use of military force is therefore generally accepted to reside in the Governor-General who is required by firm convention to act on the advice of the Executive Council.
The history of the Australian Parliament’s involvement in decisions to deploy Australian forces is covered in detail in a 2010 Parliamentary Library Background Note, Parliamentary involvement in declaring war and deploying military forces. The Library has also acquired a recent academic work on the subject by Dr Cameron Moore, Crown and Sword: Executive power and the use of force by the Australian Defence Force.
|
College informative speech
The purpose of the United Nations. Be explicit regarding the subject and avoid straying from it.
The journey to becoming a nuclear physicist.
Why Lincoln was the best President. Profound study and comprehension are ways of ensuring that your speech is noteworthy and exciting.
Boys and girls should be taught in separate classrooms. Stage make-up, costumes, and props are prohibited. The value of information provided through school libraries.
Alternatively, ponder on how you usually spend your time. How to pick a name for your children.
509 Informative Speech Ideas and Topics
High School will be the best time of your life. The artifact may be anything of rhetorical significance, such as a book, a speech, an advertising campaign, or a protest movement. Attempt to deliver the speech while sticking to the time limit. Explain the importance of your subject and illustrate the primary ideas by introducing a few fascinating examples as well as citations.
Positive thinking is the key to peaceful living. Human rights should be advanced all over the world. Amphibian vehicles — search for information about those rare car-boat vehicles, and you have lots of fun informative speech topics to talk about.
The different types of tropical fish. Katniss Everdeen would alienate Harry Potter. Students must not be afraid to ask questions. It is important that students check their emails often.
Students should be paid for getting good grades. You ought to add dramatic breaks to render the speech more compelling. Is netball or hockey more dangerous? School: school is a whole new world, where students discover more about themselves and life around them. Acting and interpretation events: though the purpose of each event differs based on whether it is an acting event or an interpretation event, all of these events seek to use different forms of literature to tell a certain theme or story.
See this page for a full list of Family Informative Speech Topics. The importance of formal education for building a successful career. Female sports should be given equal coverage by the media. Teens should have weekend jobs. Are they sufficient? Informative Speech Topics for College Students: speeches about speeches. Sometimes, oddly enough, you have to write a speech about speeches! And that's not a joke.
And that’s not a joke –. University of Hawai'i Maui Community College Speech Department Topic Selection Helper Click on any of the following categories to view a selection of possible speech topics. Informative speech topics give you the chance of sharing your knowledge on a given issue with your listeners.
They bring exciting and useful information to light. Top 99 College Speech Topics: here's my list of 99 college speech topics!
They cover all sorts of subjects and you should be able to find something suitable for whatever type of speech. The National Speech & Debate Tournament returns to Dallas, Texas, the week of June. We look forward to hosting you during the largest academic competition.
Commencement Speech, High School Graduation Speeches, How To, Information
|
5 Fascinating Facts about Aluminium you were Unaware of
Aluminium can be considered a modern man's metal. According to historical evidence, copper and iron came into use by early humans 9,000 years ago, whereas aluminium dates back to less than 200 years of usage. The first person to produce aluminium, in 1825, was Hans Christian Ørsted in Copenhagen, Denmark. In fact, aluminium was considered so much more precious than gold and silver in its early days that Napoleon III is said to have served dinner on aluminium plates!
Apart from fulfilling many roles in the domain of basic home needs, aluminium has many more common and useful applications in modern society and its infrastructure. Thus, aluminium companies and aluminium extrusion manufacturers cater to a wide variety of market needs. Let us explore some of the interesting facts about this metal that has made life easier for us in many ways.
1. Aluminium is the most abundant metal
Aluminium, extracted mainly from bauxite ore, is the third most abundant element found in the earth's crust after oxygen and silicon, and the most abundant metal. Aluminium comprises 8.09% of the earth's crust and iron 5%. Its availability far exceeds that of iron or any precious metal, and there is enough aluminium around to last us for generations.
2. Aluminium is very ductile
Aluminium is ductile and has a low melting point and low density. The mildest form of steel has only 50% of the ductility of aluminium. Hence aluminium can be used in anything from automobile parts to kitchen utensils with ease. Aluminium alloys are made with elements like copper, zinc, magnesium, silicon, or manganese to enhance the properties of the metal. The ductility of aluminium is intact in both hot and cold conditions, so the options available for customization are limitless.
3. Aluminium is 100 percent recyclable and infinitely so
Aluminium is infinitely recyclable and is one of the few materials that pays for the cost of its own collection. In fact, 75% of all aluminium ever produced is still in circulation, and recycling it requires only 5% of the energy needed for the initial production of the metal. It retains its physical properties indefinitely, too. Discarded aluminium is one of the most valuable items in your recycling bin. Global production of aluminium is around 40 million tonnes a year, and recycling brings a similar amount back into circulation. So next time, before throwing away your used coke can, remember that recycling a single aluminium can saves enough energy to run your TV for three hours!
4. Aluminium is a lightweight metal
Aluminium weighs only one-third as much as steel; its density is 2.7 g/cm3. This makes it a very economical metal for industrial use, as it is easier and cheaper to handle and to transport. The reduced energy consumption and versatility also make aluminium a favourite choice with aluminium extrusion manufacturers. Due to its corrosion resistance, it is viable to store it for long periods too.
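As a quick sanity check on the "one-third" figure, take a typical mild steel density of about 7.85 g/cm3 (an assumed representative value, not a number from this article):

\[
\frac{\rho_{\mathrm{Al}}}{\rho_{\mathrm{steel}}} \approx \frac{2.70\ \mathrm{g/cm^3}}{7.85\ \mathrm{g/cm^3}} \approx 0.34 \approx \frac{1}{3}
\]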
5. Aluminium is highly heat resistant
Apart from being lightweight, the metal is highly heat-resistant as well. It takes more than 1220 degrees Fahrenheit to melt aluminium from solid to liquid state. The natural oxide coating found on solid aluminium prevents the metal underneath from reacting with atmospheric air, thereby contributing to its high resistance to burning. Its heat-resistance property makes aluminium one of the most durable industrial materials. All these unique features make aluminium a popular choice across industries like transportation, construction, technology, and aviation. To know more about one of the best aluminium companies in Chennai that can cater to a wide variety of aluminium extruded products, log on to https://kmcaluminium.com/
Pic courtesy: unsplash
|
How do I access the Pan tool in AutoCAD?
You can also pan by clicking and dragging with the mouse wheel, using it as you would a mouse button. When you press and hold the mouse wheel, the cursor will turn into a hand icon and the Pan command will temporarily be activated.
What is Pan command in AutoCAD?
Pan: Hold down the scroll wheel as you move the mouse. (The scroll wheel is also considered a button.) Zoom to the extents of a drawing: Double-click the scroll wheel. This method is particularly useful when you accidentally press Enter at the wrong time during a Move or Copy operation.
Where is the Tools menu in AutoCAD?
To display the menu, click Quick Access Toolbar drop-down > Show Menu Bar. To display a toolbar, click Tools menu > Toolbars and select the required toolbar.
How do I change the Pan button in AutoCAD?
If you open the CUI (type CUI on the command line) and go to the Customize tab, then scroll down to the Mouse Buttons options, you can customize your clicks there.
How do I pan in Autodesk?
1. Click Pan or press F2. The cursor changes to the pan cursor .
2. Use the arrow cursor to drag the view in the graphics window. You can also pan the view using the Intellimouse. Hold down the wheel button, move the mouse in the direction you want to pan, and release the wheel button to stop panning.
27 Jul 2020
What type of program is AutoCAD?
AutoCAD is a commercial computer-aided design (CAD) and drafting application developed by Autodesk.
What is Zoom command?
Press and hold the Ctrl key and scroll the wheel on your mouse up to zoom in or down to zoom out. For example, you can do this now to zoom in and out on your browser.
What is AutoCAD Express tools?
Express Tools is a library of productivity tools designed to help you extend the power of your AutoCAD product.
What is AutoCAD menu bar?
The menu bar is the row of drop-down menus across the top of the application window; as noted above, it can be shown from the Quick Access Toolbar.
What’s the difference between CAD and AutoCAD?
CAD refers to computer-aided design in general, while AutoCAD is one specific CAD software product, made by Autodesk.
How do you zoom and pan in AutoCAD?
Zooming and panning in AutoCAD can be done entirely with the mouse wheel. Point the cursor to where you would like to zoom and turn the mouse wheel to zoom in and out. You can also pan by clicking and dragging with the mouse wheel, using it as you would a mouse button.
What is the function key for the Ortho command?
The keyboard function keys F1 – F12 control settings that are commonly turned on and off as you work in the product.
F7 (Grid display): Turns the grid display on and off.
F8 (Ortho): Locks cursor movement to horizontal or vertical.
How do I change the middle button to a pan in AutoCAD?
Turn on AutoScroll in the system
1. In the Windows control panel, double-click the Mouse icon to open the Mouse Properties dialog window.
2. Click the Buttons tab.
3. In the Wheel button drop-down list, select AutoScroll.
4. Click OK.
20 Dec 2020
How do I move my mouse in AutoCAD?
Using Your Mouse to Move, Copy or Clear a Selection Set
1. Select the objects to move.
2. Click and hold with the left mouse button over some geometry (not a grip) until your cursor changes from the cross-hairs to the Windows arrow cursor and a little box appears next to the cursor.
3. Move your mouse and release.
How do you grab in AutoCAD?
Hold down the Shift key, right-click anywhere in the drawing area, and release the Shift key. The Object Snap menu appears. Choose an Object Snap mode, such as Endpoint, from the Object Snap menu.
What is mirror in AutoCAD?
The MIRROR command creates a mirrored copy of selected objects about a specified axis.
|
7 Smallest Countries in the World
A country is a region identified as a distinct national entity. Some countries are so small that individual cities of other nations are bigger. These countries have their own cultures, governments, etc. This article deals with the 7 smallest countries in the world.
Created On: Mar 14, 2020 13:22 IST
Modified On: Mar 14, 2020 14:31 IST
A country is a territory distinguished by its people, culture, language, geography, etc., or in other words, a region identified as a distinct national entity. Some nations are large and some are small, depending on their area; some countries are so small that single cities of other nations are bigger. These small countries have their own governments and their own cultures, and some are even among the richest in the world. Let us find out about the 7 smallest countries in the world.
7 Smallest Countries in the World
1. Vatican City
Vatican City the smallest country in the world
Area: 0.44 km2
Location: Europe
Population: about 800 (2017)
Vatican City is the smallest country in the world. It is also known as the Holy See. It is located within the city of Rome, Italy, and is the center of the Catholic Church. St Peter's Basilica, the biggest church in the world, resides here, and world-famous paintings and sculptures like the Pietà and The Creation of Adam are featured here. A unique feature of this country is that it has no permanent citizens: citizenship is given only to those who work at the Vatican, as well as their spouses and children, and is revoked when they stop working there.
The official languages of Vatican City are Italian and Latin. In fact, it is the only city in the world whose ATMs offer services in the Latin language.
2. Monaco
Monaco smallest country
Area: 2.02 km2
Location: Western Europe
Population: 38,036 (2017)
Monaco is the second smallest country in the world. But do you know that it is home to the most millionaires and billionaires per capita in the world? It is located on the French Riviera on the Mediterranean. It is surrounded by France on three sides, with the fourth side facing the Mediterranean Sea. Its economy depends largely upon casinos and tourism, and it is famous for gambling, luxury goods, and the service industry. The most popular annual event is the Formula 1 Grand Prix race.
3. Nauru
Nauru smallest country
Area: 21 km2
Location: Oceania, east of Australia
Population: 10,308 (2017)
Nauru is the smallest island nation in the world, located in the Pacific Ocean to the east of Australia. It is an independent republic and the only country without an official capital. It is also known as Pleasant Island. In the 1980s its economy boomed because of phosphate mining, but the deposits are now largely exhausted, the island has gone quiet, and unemployment runs toward 90%. Do you know that this is the country with the most obese people in the world, which is why it is home to the world's highest level of type 2 diabetes?
4. Tuvalu
Tuvalu smallest country
Area: 26 km2
Location: North-East of Australia
Population: 9981 (2017)
Tuvalu is a Polynesian island nation located in the Pacific Ocean, north-east of Australia. It was formerly known as the Ellice Islands. Earlier it was a British territory, but it gained independence in 1978. Its capital is Funafuti. Do you know that the country's only secondary school is located on the island of Vaitupu? Its official languages are Tuvaluan, English, and Samoan.
5. San Marino
San Marino
Area: 61 km2
Location: Europe
Population: 32,131 (2017)
It is also known as the Most Serene Republic of San Marino and is completely surrounded by Italy. It is the oldest surviving sovereign state in the world. The capital of the country is the city of San Marino itself. In terms of GDP per capita, it is one of the wealthiest countries.
6. Liechtenstein
Liechtenstein smallest country
Area: 160 km2
Location: Between Switzerland and Austria
Population: 38,022 (2017)
It is one of the richest countries in the world by GDP per capita and has one of the lowest unemployment rates. It is the only nation in the world located completely in the Alps. It has no airport, and its official language is German.
7. Saint Kitts and Nevis
Saint Kitts and Nevis
Area: 261 km2
Location: North America (Caribbean)
Population: 56,780 (2017)
Saint Kitts and Nevis's full name is the Federation of Saint Christopher and Nevis. Its capital city is Basseterre. These were among the first islands in the Caribbean to be occupied by Europeans. In 1983 Saint Kitts and Nevis gained full independence from the United Kingdom. Its economy depends on tourism, agriculture, and small manufacturing industries. English is the sole official language of Saint Kitts and Nevis, but Creole is also widely spoken.
From the above article, we come to know about the seven smallest countries in the world by area, along with their location, population, official language, etc.
Source: worldatlas.com
|
Is Rollback Possible In Truncate?
How do I use rollback?
ROLLBACK in SQL is a transaction control language (TCL) command which is used to undo transactions that have not yet been saved to the database. The command can only be used to undo changes made since the last COMMIT.
The difference between COMMIT and ROLLBACK: when a transaction is successful, COMMIT is applied; when a transaction is aborted, ROLLBACK occurs.
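A minimal sketch of the two commands in action (the accounts table and its columns are illustrative assumptions, not from the source; the syntax shown is SQL Server style, and MySQL would use START TRANSACTION instead):

BEGIN TRANSACTION;

UPDATE accounts
SET balance = balance - 100
WHERE id = 1;

COMMIT;       -- saves the change permanently
-- ROLLBACK;  -- run before COMMIT, this would instead undo the change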
Can we rollback insert statement?
For example, you may want to roll back a transaction that inserts a record in the books table if a book with the same name already exists. In that case, you can use the ROLLBACK SQL statement.
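A hedged sketch of that books scenario in SQL Server style T-SQL (the table, columns, and title are illustrative assumptions):

BEGIN TRANSACTION;

INSERT INTO books (title, author)
VALUES ('The Invisible Man', 'H. G. Wells');

-- Undo the insert if a book with the same title already existed.
IF (SELECT COUNT(*) FROM books WHERE title = 'The Invisible Man') > 1
    ROLLBACK TRANSACTION;
ELSE
    COMMIT TRANSACTION;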
Does delete need commit?
TRUNCATE is a DDL command so it doesn’t need an explicit commit because calling it executes an implicit commit. From a system design perspective a transaction is a business unit of work. It might consist of a single DML statement or several of them. It doesn’t matter: only full transactions require COMMIT.
What is rollback after commit?
Once a transaction has been committed, ROLLBACK has no effect; the changes are permanent.
Can we rollback delete and truncate?
A TRUNCATE operation cannot be rolled back. DROP and TRUNCATE are DDL commands, whereas DELETE is a DML command. DELETE operations can be rolled back (undone), while DROP and TRUNCATE operations cannot.
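The contrast can be sketched with the same illustrative books table (the comments describe Oracle-style behaviour, which this answer assumes; exact TRUNCATE semantics vary by database):

BEGIN TRANSACTION;
DELETE FROM books;     -- DML: every deleted row is written to the log
ROLLBACK;              -- all the deleted rows are restored

TRUNCATE TABLE books;  -- DDL: commits implicitly in Oracle and MySQL,
                       -- so there is nothing left to roll back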
Can we rollback after truncate in Oracle?
Quote: A TRUNCATE statement does not generate any undo information and it commits immediately. It is a DDL statement and cannot be rolled back. TRUNCATE does apparently do a COMMIT before and after it executes, which would explain why there is no ROLLBACK.
Is truncate faster than delete?
TRUNCATE is faster than DELETE , as it doesn’t scan every record before removing it. TRUNCATE TABLE locks the whole table to remove data from a table; thus, this command also uses less transaction space than DELETE . Unlike DELETE , TRUNCATE does not return the number of rows deleted from the table.
Which is better truncate or delete?
Truncate removes all records and doesn’t fire triggers. Truncate is faster compared to delete as it makes less use of the transaction log. Truncate is not possible when a table is referenced by a Foreign Key or tables are used in replication or with indexed views.
Can we rollback after commit?
You cannot roll back a transaction once it has been committed. You will need to restore the data from backups, or use point-in-time recovery, which must have been set up before the accident happened.
Does truncate free space?
If you’re using innodb_file_per_table=ON, or you’re using MyISAM, TRUNCATE TABLE will delete the table files used by the table in question (and create new, empty ones). So, the space used will be released to the file system, and in Unix/Linux, “df” on the file system will show new space.
Can we rollback DDL commands?
Statements That Cannot Be Rolled Back. Some statements cannot be rolled back. In general, these include data definition language (DDL) statements, such as those that create or drop databases, and those that create, drop, or alter tables or stored routines.
Can truncate have where condition?
TRUNCATE cannot be executed with a WHERE clause, which means that all records will be removed by the statement. However, individual partitions can be truncated, as shown in the T-SQL statement below: partitions 2, 4, 6, 7, and 8 will be truncated, while data in the other partitions is retained.
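In SQL Server, for example, the partition-level truncate described above looks like this (sales is a hypothetical partitioned table):

-- Truncate only partitions 2, 4, and 6 through 8.
TRUNCATE TABLE sales
WITH (PARTITIONS (2, 4, 6 TO 8));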
Which commands can we roll back?
The COMMIT command is the transactional command used to save changes invoked by a transaction to the database. The COMMIT command saves all the transactions to the database since the last COMMIT or ROLLBACK command.
What are the after triggers?
AFTER Triggers. AFTER triggers are executed after the DML statement completes but before it is committed to the database. … INSTEAD OF Triggers. INSTEAD OF triggers are triggers which get executed automatically in place of the triggering DML action (i.e. INSERT, UPDATE, or DELETE).
Is truncate DDL or DML?
Although TRUNCATE TABLE is similar to DELETE , it is classified as a DDL statement rather than a DML statement. It differs from DELETE in the following ways: Truncate operations drop and re-create the table, which is much faster than deleting rows one by one, particularly for large tables.
How do you truncate in SQL?
Truncate (SQL):
1. In the Oracle Database, TRUNCATE is implicitly preceded and followed by a commit operation.
2. Typically, TRUNCATE TABLE quickly deletes all records in a table by deallocating the data pages used by the table.
3. You cannot specify a WHERE clause in a TRUNCATE TABLE statement—it is all or nothing.
What is rollback in SQL?
In SQL, ROLLBACK is a command that causes all data changes since the last BEGIN WORK , or START TRANSACTION to be discarded by the relational database management systems (RDBMS), so that the state of the data is “rolled back” to the way it was before those changes were made.
Can we recover data after truncate in SQL?
If you’ve accidentally executed a TRUNCATE statement and you have a full database backup, given that no changes occurred after the table was truncated, you can simply recover the data by overwriting the original database with the backup.
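In SQL Server, for instance, that recovery could look like the following sketch (the database name and backup path are illustrative assumptions):

-- Overwrite the damaged database with the last full backup.
RESTORE DATABASE Shop
FROM DISK = 'C:\backups\shop_full.bak'
WITH REPLACE;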
Can we commit inside a trigger?
You can’t commit inside a trigger anyway. A trigger should not commit and cannot commit. Committing in a trigger usually raises an exception unless it happens in an autonomous transaction.
What happens if we truncate a table?
The TRUNCATE TABLE statement is used to remove all records from a table in Oracle. It performs the same function as a DELETE statement without a WHERE clause. Warning: If you truncate a table, the TRUNCATE TABLE statement can not be rolled back.
Why use truncate instead of delete?
TRUNCATE TABLE is faster and uses fewer system resources than DELETE, because DELETE scans the table to generate a count of the rows affected, then deletes the rows one by one, recording an entry in the database log for each deleted row, while TRUNCATE TABLE deletes all the rows without logging each row individually.
|
Jill Staake
Teachers and kids both love STEM challenges. They encourage problem-solving skills and let kids prove just how much they already know. They're hands-on learning at its best! These fourth grade STEM challenges require only simple supplies and are terrific for partners or groups.
All you need to do is post one of these STEM challenges on your whiteboard or projector screen. Then hand out the supplies and let kids take it from there! Get a free printable list of these fourth grade STEM challenges here, too.
Want this entire set of STEM challenges in one easy document? Get your free PowerPoint bundle of these fourth grade STEM challenges by submitting your email here, so you’ll always have the challenges available.
25 Fourth Grade STEM Challenges
1. Create the longest possible paper chain from a single sheet of construction paper.
2. Build a domino chain reaction that incorporates at least two non-domino items.
3. Using only 10 index cards, construct the tallest possible free-standing tower.
4. Engineer a catapult from wood craft sticks and rubber bands that can launch a marshmallow the furthest distance.
5. Use newspapers and masking tape to build a chair that a member of your team can sit on.
6. Design a bubble wand that makes the largest bubbles using pipe cleaners and string.
7. Create a model of a new piece of playground equipment using construction paper, cardboard tubes, plastic straws, scissors, and tape.
8. Stack 20 plastic cups into the tallest tower you can.
9. Use paper plates, plastic straws, and masking tape to construct a new kind of vehicle.
10. Build a bridge from wood craft sticks and binder clips that can support the weight of at least one book.
11. Put together a working windmill from a cardboard tube, construction paper, and plastic straws.
12. Design a marble maze on a paper plate using plastic straws.
13. Build a paper airplane that can fly through a hula hoop from 8 feet away.
14. Engineer the tallest possible tower from toothpicks and mini-marshmallows.
15. Build a sailboat raft from wood craft sticks, construction paper, and glue.
16. Use scissors to cut a paper plate in a way that makes it stretch out as long as possible.
17. Make a parachute for a small toy from a drinking straw, a plastic bag, and Scotch tape.
18. Construct a building using only wood craft sticks.
19. Use balloons and masking tape to construct the tallest tower you can.
20. Build a boat from aluminum foil that holds 100 pennies.
21. Engineer a 3-foot-tall structure that can hold a basketball from newspapers and masking tape.
22. Come up with a creative new use for an empty plastic bottle. You can use other supplies to alter it however you like.
23. Use items from around the classroom to figure out a way to pick up and carry a beach ball from one side of the room to the other, without touching it with your hands.
24. Construct a building from paper and masking tape that won’t blow over in the breeze from a fan.
25. Use pipe cleaners and drinking straws to build as many 3-D shapes as you can in 10 minutes.
Enjoying these fourth grade STEM challenges? Try these 30 Impressive 4th Grade Science Experiments and Activities.
Plus, 50 Easy Science Experiments Kids Can Do With Stuff You Already Have.
|
The word “bride” is said to originate from the Old French word “brise”, meaning “bitter comb”. The term “bride” at some point developed into the modern term “bridal”, from the Latin “braculum”, meaning “a comb worn in the hair”. A far more likely origin is the older word “krate”, meaning “a comb”. The word “bride” may also derive from the Greek word “peg”, which originally meant “grapefruit tree”. The true source of the term, however, is probably the French word “fain”, meaning “a comb”. This is why the modern groom often describes his bride as a “brush with teeth”.
A bride’s future husband is referred to as the groom in legal marriages, while a ring bearer is called simply the “ring bearer”. In casual weddings, the groom may be referred to simply as “boy” or “young man”. Historically, it was not uncommon for a groom to have children with his bride. Often this happened in royal marriages where there were two families with one head and two destinies. Such unions were sometimes referred to as blood ties. Even in these conditions, it was common for the bride’s family to give the groom a ring in recognition of his taking on the bride’s responsibilities.
Modern brides are often expected to continue their family line by giving birth to a child or by marrying someone who carries the bride’s family name. A more conservative approach to the bride’s groom is taken when there is already a young family member involved in another relationship. Traditionally, the groom is responsible for caring for his wife until she is able to look after herself. Where this happens, the groom may be offered primary custody of their child (or children), although this is not always the case.
|
Published by De Gruyter, November 6, 2019
Artificial Intelligence in Basic and Clinical Neuroscience: Opportunities and Ethical Challenges
Philipp Kellmeyer ORCID logo
From the journal Neuroforum
The analysis of large amounts of personal data with artificial neural networks for deep learning is the driving technology behind new artificial intelligence (AI) systems for all areas in science and technology. These AI methods have evolved from applications in computer vision, the automated analysis of images, and now include frameworks and methods for analyzing multimodal datasets that combine data from many different sources, including biomedical devices, smartphones and common user behavior in cyberspace.
For neuroscience, these widening streams of personal data and machine learning methods provide many opportunities for basic data-driven research as well as for developing new tools for diagnostic, predictive and therapeutic applications for disorders of the nervous system. The increasing automation and autonomy of AI systems, however, also creates substantial ethical challenges for basic research and medical applications. Here, scientific and medical opportunities as well as ethical challenges are summarized and discussed.
Artificial intelligence (AI) seems to be everywhere now: from navigational tools, digital assistants and self-driving vehicles, to social robots, autonomous weapons, analytic and predictive tools in science, and decision-support systems in medicine and many other domains and applications.
This development is in large part the result of a particular technological convergence in recent years: the concomitant rise of big data, advanced methods of machine learning (e. g. deep learning) and increasing computing power and efficiency. This perfect technological storm is driving a large-scale techno-social transformation across all sectors of society (work, health, research and technology, and the social domain), which is often indiscriminately referred to as digitalization.
But what is AI exactly and why does it capture the imagination so vividly and often disquietingly? What is the current and future impact of AI for neuroscience and the clinical fields occupied with treating brain diseases and mental health disorders? What are the ethical, legal, social and political tensions and challenges that emerge from this techno-social constellation?
Here, I will first provide short and succinct background information on the technological aspects of the current wave of AI methods and contextualize these developments in terms of their putative current and future applications in neuroscience. This will provide the basis to then discuss important ethical, legal and social challenges. The focus in that regard will be on the question of how societies can benefit from the many promising applications of AI in neuroscience and neuromedicine while ensuring the responsible design, development and use of this transformative technology.
Background: Artificial intelligence, big data, machine learning and neurotechnology
According to the latest analysis of the innovation dynamics of emerging technologies from 2018—the Gartner®[1] Hype Cycle for Emerging Technologies—artificial neural networks (ANNs) for deep learning are currently located at the very “peak of inflated expectations”. This represents a snapshot of the cacophonous media buzz and hype surrounding the putatively transformative power of AI for all sectors of society. As a basis for our discussion here, we need to recognize that the main driving force of what is usually referred to as AI today is the convergence of several technological innovations and components[2]:
• Ubiquitous data-collecting technology: in the environment (e. g. public closed-circuit television), in machines (e. g. cars), in personal devices (e. g. smartphones for collecting personal data on user behavior, movement, geolocation and many other parameters), as well as the traditional arenas in biomedicine such as medical centers and research institutions.
• The (mostly cloud-based) server infrastructure to store and process large amounts of these personal data (big data);
• High-performance analyses of these data with graphics processing units (GPUs), particularly with
• Machine learning (ML) methods, in particular artificial neural networks for deep learning;
• Dynamic user interfaces to facilitate human-AI interaction.
These infrastructural and technical components provide the basis for many applications of AI in research, technology development and clinical medicine. One illustrative and highly dynamic translational research area is the field of neurotechnology. Figure 1 illustrates how many of the components mentioned above can be fully integrated to build an AI-based brain-computer interface that could provide a paralyzed individual with the means to operate a computer-based communication system. But neurotechnology is not confined to the assistive treatment of relatively rare neurological disorders, such as severe paralysis / locked-in syndrome; it has recently also entered the consumer market with various devices for neurofeedback-based relaxation or well-being applications (Ienca et al., 2018; Kellmeyer, 2018).
Current and future applications of AI for basic and clinical neuroscience
In neuroscience, as in most other research areas, AI systems based on artificial neural networks have a wide spectrum of applications. As we have discussed, machine learning with ANNs has proven particularly successful in computer vision tasks. Therefore, a primary domain of application in neuroscience is the processing and classification of large amounts of images. Examples are the classification of histopathological images (Litjens et al., 2016), the segmentation of tumors in brain MRI images (Pereira et al., 2016) and many other processing applications in neuroimaging (Akkus et al., 2017; Milletari et al., 2017; Kleesiek et al., 2016). In addition to such computer vision tasks, however, AI methods based on ANNs are also successfully used in the analysis of bioelectric and hemodynamic brain signals, particularly electroencephalography (EEG) (Schirrmeister et al., 2017a; Schirrmeister et al., 2017b). In that research area, EEG signal analysis with deep learning could be used, inter alia, to operate an autonomous robot via a brain-computer interface (Burget et al., 2017) or to classify EEG recordings as normal or pathological (Schirrmeister et al., 2018). Generative adversarial networks (GANs), another emerging machine learning method, have recently been applied in neuroscience to generate naturalistic EEG signals (for data augmentation purposes) (Hartmann et al., 2018), among other applications (Wang et al., 2019).
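As a minimal illustration of this kind of EEG decoding, the following PyTorch sketch classifies synthetic EEG windows into two classes (e. g. normal vs. pathological). The architecture, layer sizes and data are illustrative assumptions only and do not reproduce the published models cited above.

```python
import torch
import torch.nn as nn

# Toy EEG batch: 8 trials, 21 channels, 500 time samples (synthetic data).
x = torch.randn(8, 21, 500)

# Minimal 1-D convolutional classifier: temporal convolutions,
# pooling over time, then a linear read-out into two classes.
model = nn.Sequential(
    nn.Conv1d(in_channels=21, out_channels=16, kernel_size=11),  # temporal filters
    nn.ReLU(),
    nn.MaxPool1d(kernel_size=4),
    nn.Conv1d(16, 32, kernel_size=11),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),  # collapse the time axis
    nn.Flatten(),
    nn.Linear(32, 2),  # two classes, e.g. "normal" vs. "pathological"
)

logits = model(x)
print(logits.shape)  # torch.Size([8, 2]): one score per class per trial
```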
Figure 1: Example of an AI-based brain-computer interface that integrates ❶ intracranial electroencephalography (iEEG) to sense bioelectric brain activity and ❷ transmit large amounts of brain data to a ❸ computer-based processing unit with a ❹ high-end GPU that uses deep learning to analyze the brain data which in turn is used to operate a ❺ dynamic user interface, e. g. for communication.
Apart from these applications in data analytics in neuroscience, a comprehensive and high-impact review (Hassabis et al., 2017) of how neuroscientific knowledge and methods can inspire AI methods, and vice versa (Marblestone et al., 2016), has shown that ANNs for deep learning have contributed substantially to the understanding of complex cognitive functions, such as attention, memory and learning, at the level of regional and inter-regional brain networks.
In clinical neuroscience, understood here as the overlapping domains of clinical research and clinical provision in neurology and psychiatry, these AI methods also provide fertile ground for new applications in diagnosing, predicting and treating brain diseases and mental health disorders.
To highlight a few developments here: (a) in the area of diagnostics, AI-based image processing methods could be used for various groundbreaking applications, e. g. the differentiation between healthy and pathological brain images, the segmentation of tumor tissue from brain MRI images, or the diagnosis and sub-classification of neurodegenerative movement disorders from tracer-based imaging. (b) In the area of prediction, the same methods could be used to predict the onset of dementia, the likelihood / risk of epileptic seizures from implanted cortical electrodes, or the fluctuations of disabling movement symptoms in Parkinson’s disease from deep brain electrodes, among many other applications. (c) In the area of therapy, deep learning with ANNs could be used to develop new targeted drugs (Popova et al., 2018; Gawehn et al., 2016), e. g. based on antibodies and fusion proteins (“biologicals”) for treating neuroimmunological diseases such as multiple sclerosis, or for closed-loop control of impending epileptic seizures via a real-time cortical monitoring and electrostimulation system (Berényi et al., 2012).
The breadth of the actual and potential applications of AI methods can only be sketched here; the reader’s imagination is trusted, however, to visualize the full extent and importance of this development for neuroscience and medicine in general, which has also been treated comprehensively by other authors, see e. g. (Topol, 2019). Such a profound and cross-cutting socio-technological change, one might say paradigm shift, of course creates substantial ethical, legal and social challenges, some of which shall be highlighted here.
Ethical challenges of human-AI interaction in basic and clinical neuroscience
In this section, I highlight some of the most widely discussed current ethical concerns and tensions in neuroethics, neurolaw and related disciplines that engage with these issues. As a disclaimer, given the limited scope here, I aim neither to provide a complete overview nor to offer anything other than my own subjective view on these issues; for a selection of further recent contributions and views, please also see (Ienca et al., 2018; Amadio et al., 2018; Illes, 2017; Yuste et al., 2017; Ienca et al., 2017; Mittelstadt et al., 2016).
Shared agency and autonomy in human-AI interaction
In the context of very close human-AI interaction, for example in a closed-loop brain-computer interface in an epilepsy patient, the degree to which the underlying AI system is granted decision-making capacity and—conversely—how much the human subject is kept in the loop in these interactions may lead to new hybrid forms of human-machine or human-AI actions.
Imagine, for example, Maria, a 45-year-old woman with severe motor paralysis of the upper and lower limbs, who has been implanted with a closed-loop electrode system that allows her to use her brain activity to operate a service robot that can reach, grasp and bring her objects.
Now, any particular action sequence by Maria, say for example fetching a cup and drinking tea, is only realizable by decoding her brain activity and having the robot perform the required tasks. Suppose further that the robot itself also has some degree of autonomy in terms of how it realizes this goal, for example it may have the capacity to freely roam the room and grasp the cup any way that is optimal for realizing the set goal. In such a scenario, it would be reasonable to consider the human-robot interaction necessary for realizing Maria’s goals as requiring a form of shared agency (and autonomy) between Maria and the service robot.
This may seem perfectly fine for all instances in which the interaction works as intended by Maria and her goals are fully realized without significant deviations by the robot. But what happens in cases of unintended yet substantial failures—what if, for example, the robot spills the hot tea and injures a third person or Maria herself? Such interactions gone awry lead to questions of responsibility and accountability in human-AI interaction that are difficult to navigate both ethically and legally.
Accountability, responsibility and the question of trust
Without having the space to provide a detailed and philosophically grounded conceptual analysis here, I urge the reader to consider the difference—ethically and legally—between the concepts of accountability and responsibility. Both denote the ascription to an individual (or active claim by an individual) of some kind of causal agency in a particular action or sequence of actions; for example: “Margret was responsible for writing the letter to the president” or “The policeman is accountable for explaining his use of his service weapon.”
In cases of very close, shared or even hybrid actions that are performed in concert by a human and an AI system, however, one might encounter a gap in our ability to unequivocally ascribe responsibility and/or accountability for particular actions. This “accountability gap” (Kellmeyer et al., 2016) may arise in many situations in which decision-making capacity is delegated to an AI system—e. g. a deep-learning-based brain implant or a self-driving car—whose internal learning dynamics and decision-making processes we cannot sufficiently infer: the so-called “black box” aspect of AI (Castelvecchi, 2016). In the ethical and legal domain, we do not yet have effective and resilient norms for ascribing (let alone adjudicating) responsibility in cases of system failures for such black box systems.
Therefore, the topic of → interpretability of machine learning algorithms, particularly ANNs for deep learning, is not only of great interest for computer scientists and engineers, but also an indispensable prerequisite for providing a reasonable ethical understanding and precise legal instruments to adjudicate future cases of liable human-AI interactions.
Intrusive AI and the protection of brain data, mental privacy and personal identity
Today, our methods for observing brain activity (mainly EEG, functional MRI and related methods) have inherent limits in the ways in which they can measure the temporal, spatial and frequency-related characteristics of brain signals. This limits the amount and quality of information that we can extract from these signals with our current analytical methods, yet we already see how emerging machine learning methods, specifically deep learning, improve our information extraction capabilities substantially (Akkus et al., 2017; Milletari et al., 2017; Schirrmeister et al., 2017a).
If this progress in data analysis is complemented by substantial improvements in our measurement methods, for example with intracortical microelectrode grids that measure EEG directly from the cortical surface or with as yet unproven methods such as “neural dust” (a system of intracortical nanoparticles and ultrasound) (Neely et al., 2018), we can expect substantial further progress in the types and amounts of information that can be extracted from neurotechnological measurements.
The practical limits on the amount and specificity of information that can be extracted from brain signals at the individual level—now and in the near future—mean that scenarios involving “reading” the “mind” or “thoughts” will remain elusive for the time being. This will not deter scientists in public (or private-public) research institutions, nor researchers in technology companies (which have invested substantially in their own neuroscience and neurotechnology research in recent years (Kellmeyer, 2018; Strickland, 2017; Regalado, 2017; Clark, 2017)), from using brain data as an interesting class of personal data in multimodal deep learning analysis frameworks. In this scenario, we do not yet know whether: a) the combination of many different classes of data (e. g. user behavior, geolocation data, data from devices, brain data etc.) allows for hitherto unprecedented inferences on an individual’s first-person subjective (i. e. “mental”) experience and/or her personal identity (Kreitmair et al., 2017); or b) the aggregation of such multimodal data from an unprecedented number of individuals (e. g. in a large-scale “experiment” on an internet platform in which a company outfits thousands of users with a consumer neurotechnology device, e. g. a dry-cap EEG, for measuring and uploading brain data to the company’s servers) would allow particular groups of individuals to be identified based on social, biological or other markers—which would raise concerns regarding the protection of group privacy (Ienca et al., 2018; Taylor et al., 2017).
In the neuroethics community and other research fields, these concerns have precipitated a discussion around whether brain data should be treated as a special class of data that needs extra protection in data protection guidelines and regulations (as is already the case for genetic data) (Kellmeyer, 2018), perhaps even warranting “neurorights” that refer to basic human rights (Ienca and Andorno, 2017)—or whether, in fact, the question of what actually constitutes biomedical or health-related data becomes increasingly meaningless, as AI methods can make health-related inferences from many different types of data (and their combination and aggregation) that would previously not have been considered special health-related data (e. g. your movement patterns from your mobile phone, or your user behavior on the web).
Bias in human and artificial intelligence in interaction
The propensity to take “mental shortcuts” (also known as a → heuristic) for judgement is an inherent feature of human cognition and serves important purposes in everyday decision-making. If these heuristics, however, produce systematic skews in our decision-making, they are called biases, which, if accumulated over time, can produce substantial distortions of knowledge and behavior at both the individual and societal level. These individual and societal biases are also an important driver in creating and maintaining social injustices, e. g. rooted in prejudice, stereotyping, discrimination and other negative social attributions.
The data streams that power current large-scale AI systems, e. g. in translation engines, navigation systems or computer vision (e. g. face recognition technology), are today based on human-derived knowledge structures (ontologies), and most artificial neural networks for deep learning are trained with data that require the input of human experts (e. g. for selecting and labeling the data). Therefore, any bias that is ingrained at the level of data selection, structuring, labeling and so forth may be reproduced, inflated and disseminated by an AI system that is trained on these biased data. Many examples in recent years show how this can lead to a perpetuation of social injustices and discrimination that are based on human biases, e. g. with respect to ethnicity, gender and other social markers (Knight, 2017; Baeza-Yates, 2016).
Now, there is no easy fix for this deeply entrenched and interlocked problem of human biases and their spillover effects into AI bias. For one, human cognitive biases are almost impossible to contain effectively at the individual level; most behavioral training methods for so-called de-biasing have failed to show substantial, let alone sustainable, effects in reducing cognitive biases in humans (Smith and Slack, 2015; Croskerry et al., 2013a; Croskerry et al., 2013b). Furthermore, there is no straightforward computational way to effectively de-bias AI systems, either in terms of reducing the technical aspect of bias in algorithms (see → bias) or in terms of the human-derived biased data structures and ontologies (Krywko, 2017; Geman et al., 1992). On the bright side, many computer scientists and data scientists have now recognized the problem and are actively working on potential ways to mitigate it (Courtland, 2018).
Meanwhile, however, it is important for researchers in neuroscience and clinicians to be aware (and to raise critical awareness) that automated AI systems may contain biases in their decision-making procedures.
The problem of “perpetual ethics”, governance of AI systems and fair access
In the legal, political and regulatory sphere, these developments raise questions as to whether existing regulatory and legal procedures suffice for ensuring the responsible research and effective governance of AI across all sectors of society, while preserving the innovation dynamics of the beneficial applications of this emerging technology (Voeneky and Neuman, 2018; Kaebnick et al., 2016). Raising awareness and promoting a participatory societal discourse on the ethical issues around AI is a commendable and necessary first step for achieving a more inclusive process of deliberation and technology governance.
At the same time, however, the inherent complexity of human-AI interaction and the many stakeholders in AI could also produce a potential problem of “perpetual ethics”—an infinite loop of inter- and transdisciplinary debate without a mechanism and route for democratically legitimized and evidence-based sociopolitical adaptations in the form of laws, rules and regulations.
We can already see how the big five technology companies are all too eager to participate in (some might say usurp) the ethical discourse around AI and neurotechnology (Murgia and Shrikanth, 2019; Hoffmann, 2017). A democratically grounded process of multistakeholder deliberation on the ethics of AI and neurotechnology, however, requires equal and fair access to the debate in the public sphere, rather than the oligopolization of ethical discourse by academia, experts and big companies. Importantly, researchers of all career levels involved in AI-related disciplines (whether from a developmental, computer science perspective or in applied areas such as medicine) can actively participate in exerting counterpressure to this domination of the ethical discourse by private companies by engaging in science communication and public outreach at their institutions.
Furthermore, apart from this bottom-up process of a participatory discourse in societies on equal terms, the transnational nature of technology governance also requires the involvement of supranational bodies (such as the EU) and international organizations (such as UNESCO) in developing effective and adaptive instruments of governance (i. e. laws and regulations) that preserve the right and freedom to science while making sure that AI is used to nurture human well-being and flourishing rather than merely feeding the revenue streams of big technology companies.
Conclusions and outlook
The comprehensive technological change associated with big data, deep learning and the expansion of the digital infrastructure offers many reasons to hope for groundbreaking progress in basic and clinical neuroscience.
At the same time, neuroscience and neurotechnology, as academic and professional fields, should actively work towards embedding and integrating research and conceptual analysis on the ethical tensions in human-AI interaction into their activities. Basic ethics curricula, at all levels of secondary education and in all professions engaged in neuroscience and neurotechnology research and development, should become the norm rather than the exception. We need the coming generations of neuroscientists, programmers, engineers and other specialists to add ethical thinking and analysis to their methodological toolbox and professional capabilities.
To this end, the comparatively young academic fields of neuroethics (Kellmeyer et al., 2019) and neurolaw (Meynen, 2014) are emerging as particularly dynamic (and partly overlapping) research and teaching environments for addressing the manifold ethical, legal and social challenges from human-AI interaction in the arena of neurotechnology and neuroscience.
Ultimately, from a professional perspective, the engagement with the profound ethical challenges that are created by large-scale techno-social transformations such as AI (or gene editing) is not only adding value to our identity as researchers and/or clinicians in neuroscience but may, collectively, mitigate negative consequences of this rapid change for society.
This work was (partly) supported by the German Ministry of Education and Research (BMBF) (grant number 13GW0053D) to the Medical Center – University of Freiburg and the German Research Foundation (DFG), grant number EXC1086, to the University of Freiburg, Germany.
Algorithm
Very generally, an algorithm is a procedure (e. g. a computation) for solving a particular problem by following a set of instructions step-by-step. In order to function properly, an algorithm needs certain features: the set of instructions must be definite and without contradictions; each step must be realizable; the description must be finite; the final step should produce a result; and it should be determinate, in the sense that when repeated under the exact same circumstances the procedure produces the same result, and that at any given step in the procedure there is only one option to proceed.
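As a small illustration (my example, not part of the original glossary), Euclid's algorithm for the greatest common divisor exhibits all of these properties: every step is definite and realizable, the description is finite, and the same inputs always produce the same result.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, definite, deterministic procedure."""
    while b != 0:
        a, b = b, a % b  # each step is realizable and uniquely determined
    return a  # the final step produces the result

print(gcd(48, 36))  # 12 -- repeated runs always yield the same output
```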
Artificial Intelligence (AI)
Artificial intelligence is an umbrella term that has many different definitions that to some degree also depend on the goals that an AI system is designed to achieve. Most commonly, it refers to a subfield of computer science that aims at creating computer programs that can perform tasks that under usual circumstances would require human intelligence; e. g. speech perception, facial recognition, navigation, or other tasks.
Artificial Neural Networks (ANN)
In the field of → machine learning, an artificial neural network is a computing program architecture that is inspired by the structure of neural networks in animal brains. An ANN in its most basic form consists of different layers of interconnected units (called nodes or artificial neurons)—e. g. an input layer, intermediate layer and output layer. Each artificial neuron receives inputs (real numbers), combines them according to connection “weights” and applies a (typically non-linear) function; learning consists of adjusting these weights. The performance of an ANN depends, among other factors, on the quality of the input data, the number of intermediate layers, the degree of connectedness between the nodes and the type of learning scenario (e. g. reinforcement learning).
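A minimal sketch of a forward pass through such a network in NumPy (the layer sizes and input values are arbitrary, chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny ANN: 4 inputs -> 3 hidden units -> 1 output.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)  # hidden-layer weights
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)  # output-layer weights

def forward(x: np.ndarray) -> np.ndarray:
    h = np.tanh(W1 @ x + b1)  # hidden layer: weighted sum + non-linearity
    return W2 @ h + b2        # output layer (linear read-out)

print(forward(np.array([0.5, -1.0, 2.0, 0.1])))
```

Training would then consist of adjusting W1, b1, W2 and b2 so that the outputs match known targets.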
Bias
In everyday language, bias refers to systematically skewed decision-making that is often associated with discrimination and other forms of unfairness. More systematically, e. g. in the field of psychology, a cognitive bias refers to a systematic tendency in human decision-making that skews decisions in a particular way (Tversky and Kahneman, 1975). One example, from the groundbreaking work on cognitive biases by the Israeli psychologists Tversky and Kahneman, would be the “availability bias”, i. e. the tendency to use information that is readily at hand for judgement (rather than including information that needs some sourcing). Cognitive biases can be a useful and adaptive → heuristic under circumstances that require rapid action but may equally be maladaptive or irrational in situations that require deeper deliberation or reflection.
In → machine learning and statistics, in contrast, bias refers to the difference between a calculated expected estimate (or value) of a parameter and this parameter’s true value.
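In standard statistical notation (added here for illustration), the bias of an estimator $\hat{\theta}$ of a true parameter value $\theta$ is the expected deviation

$$\operatorname{Bias}(\hat{\theta}) = \mathbb{E}[\hat{\theta}] - \theta,$$

and an estimator is called unbiased when this difference is zero.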
Big Data
There is no universally accepted definition on what parameters qualify a particular data set to be considered “big data” (Mauro et al., 2015). An early definition, still in use today in some form or another, by the technology consultancy Gartner® emphasized the aspects of “high-volume, high-velocity and/or high-variety information assets” (Gartner, 2003) as being characteristic of big data sets.
Black Box (aspect of AI / Deep Learning)
The black box aspect of AI is a concern which is often invoked in discussions around questions of opaqueness, transparency and → interpretability of → deep learning (and some other AI methods). Usually it refers to the inability to retro-infer the information content and processes that have occurred in a trained deep neural network. One reason is that, unlike in your computer’s random access memory (RAM), information in deep neural networks is diffused throughout the layers and nodes, which makes it next to impossible to extract. In analogy to the brain, information storage in ANNs is reflected in the strength of the connections between units rather than in any particular set of nodes or layers. Many computer scientists are now working on opening this black box, but no general solution to the problem has been developed yet (Castelvecchi, 2016).
Brain Data
Data on the structure or function of the brain and its various components (networks, cells etc.), examples are MRI images, EEG recordings and other data types.
Convolutional Neural Network
A particular class of → artificial neural network based on deep learning (also: “deep neural network”) that is inspired by the connectivity patterns in the visual cortex. In a convolutional network, each node (a.k.a. “neuron”) in one layer is connected only to a small local region of the previous layer (its receptive field), and the same weights are shared across locations; this contrasts with fully connected layers, in which each node in one layer is connected to all nodes in the next layer.
Deep Learning
A → machine learning method in which an → artificial neural network (in that case also referred to as a “deep neural network”) with many dozens to hundreds of layers is used for data analysis.
Emerging Technologies
General term that refers to technologies that have been demonstrated to function in a particular way, but are not yet fully developed and/or realized and are typically not yet available in the marketplace on a large scale. Current examples would be self-driving cars or brain-computer interfaces.
Generative Adversarial Networks (GANs)
A → machine learning method in which two → artificial neural networks contest with each other. One network, the generative network, produces data structures, for example human faces, and the other network, the discriminative network, evaluates the output with regard to certain set specifications (e. g. whether the faces resemble the faces of famous people on which the discriminator network has been trained with large amounts of data). The generative network produces data (faces) until the discriminator network is unable to distinguish between real faces (on which it has been trained) and generated faces. This method is a powerful tool for increasing the amount of data available for training neural networks (data augmentation) but can also be used to produce fake content such as images or videos (“deep fakes”).
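For readers who want to see the adversarial setup in code, here is a minimal toy sketch in PyTorch; the one-dimensional "data" distribution, network sizes and training length are invented for illustration and have nothing to do with faces or brain signals.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator learns to mimic samples from N(3, 1).
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) + 3.0  # "real" data samples
    fake = G(torch.randn(64, 8))     # generated data samples

    # Train the discriminator to label real as 1 and fake as 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to make the discriminator label fakes as 1.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(256, 8)).mean().item())  # approaches 3.0 as training succeeds
```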
Governance
The process of governing (by supranational bodies, regional or local authorities) a particular social organization unit or social system, e. g. a state, territory or community, via laws, regulation and power.
Graphics Processing Unit (GPU)
Parallelized circuits that are specialized for graphics and image processing and are generally more efficient for such workloads than conventional central processing units (CPUs).
Heuristic
In cognitive psychology, a heuristic describes a method for problem-solving, e. g. by an individual, that relies on immediate and highly automated patterns and/or actions, for example the application of guesses, “rules of thumb” or other types of intuitive judgment.
Interpretability (of → Machine Learning)
The ability to correctly interpret the results of a machine learning analysis, in particular the distinctive classes or features that the machine learning program has produced when analyzing the data.
Machine Learning (ML)
The most widely used description of machine learning is “learning without being programmed”, which points to the fact that machine learning methods, at the most general level of description, enable a piece of software (an algorithm) to discriminate patterns in data and/or make predictions by learning distinctive features from these data that are not part of the original set of programming instructions. There is now an ever-growing variety of machine learning methods, of which → artificial neural networks for → deep learning have been the most popular and successful in recent years.
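As a minimal illustration of “learning without being programmed” (a toy example using scikit-learn; the two-cluster data are synthetic):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: two clusters in 2-D, labeled 0 and 1.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# No class-specific rules are written by hand: the decision
# boundary is learned entirely from the data.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0, 0], [3, 3]]))  # -> [0 1]
```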
Persuasive Technologies
A concept from human-technology interaction studies in which technologies by virtue of their particular design features and functions may make the interaction very persuasive for humans. Persuasiveness can have a positive connotation, in the sense that a device’s design enables a compelling and intuitive user experience, but can also be perceived as negative, in the sense of being overly manipulative or even deceptive (Fogg, 2003).
User Experience Design
An interdisciplinary research field at the intersection of industrial design, psychology and cognitive science that studies human-technology interaction from a user-centered perspective.
User Interface
A graphics display or other type of output device that lets a user interact with a computer system. It can have different features such as being static, dynamic, touch sensitive, or adaptive.
Technological Solutionism
The societal tendency to turn to technology first, rather than to sociopolitical action, for solving complex problems in the social realm (Morozov, 2014). Examples would be responding to shortages in human caregivers by implementing a large-scale program for care robots, or combatting social isolation and loneliness in elderly people with a program of free virtual reality headsets (with an accompanying virtual platform for online interaction).
Akkus, Z., Galimzianova, A., Hoogi, A., et al. (2017). Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions. J Digit Imaging 30, 449–459.
Amadio, J., Bi, G.-Q., Boshears, P.F., et al. (2018). Neuroethics questions to guide ethical research in the international brain initiatives. Neuron 100, 19–36.
Baeza-Yates, R. (2016). Data and Algorithmic Bias in the Web. In: Proceedings of the 8th ACM Conference on Web Science. ACM, New York, NY, USA, pp. 1–1.
Berényi, A., Belluscio, M., Mao, D., Buzsáki, G. (2012). Closed-Loop Control of Epilepsy by Transcranial Electrical Stimulation. Science 337, 735–737.
Burget, F., Fiederer, L.D.J., Kuhner, D., et al. (2017). Acting thoughts: Towards a mobile robotic service assistant for users with limited communication skills. IEEE, pp. 1–6.
Castelvecchi, D. (2016). Can we open the black box of AI? Nature News 538, 20.
Clark, L. (2017). Elon Musk reveals more about his plan to merge man and machine with Neuralink. Wired UK.
Courtland, R. (2018). Bias detectives: the researchers striving to make algorithms fair. In: Nature. Accessed 29 June 2018.
Croskerry, P., Singhal, G., Mamede, S. (2013a). Cognitive debiasing 1: origins of bias and theory of debiasing. BMJ Qual Saf 22, ii58–ii64.
Croskerry, P., Singhal, G., Mamede, S. (2013b). Cognitive debiasing 2: impediments to and strategies for change. BMJ Qual Saf, bmjqs-2012-001713.
Fogg, B.J. (2003). Persuasive Technology: Using Computers to Change what We Think and Do. Morgan Kaufmann.
Gartner (2003). What Is Big Data? – Gartner IT Glossary – Big Data. Accessed 27 May 2019.
Gawehn, E., Hiss, J.A., Schneider, G. (2016). Deep Learning in Drug Discovery. Molecular Informatics 35, 3–14.
Geman, S., Bienenstock, E., Doursat, R. (1992). Neural Networks and the Bias/Variance Dilemma. Neural Computation 4, 1–58.
Hartmann, K.G., Schirrmeister, R.T., Ball, T. (2018). EEG-GAN: Generative adversarial networks for electroencephalographic (EEG) brain signals. arXiv:1806.01875 [cs, eess, q-bio, stat].
Hassabis, D., Kumaran, D., Summerfield, C., Botvinick, M. (2017). Neuroscience-Inspired Artificial Intelligence. Neuron 95, 245–258.
Hoffmann, A.L. (2017). A Chief Ethics Officer Won’t Fix Facebook’s Problems. In: Slate Magazine. Accessed 28 May 2019.
Ienca, M., Andorno, R. (2017). Towards new human rights in the age of neuroscience and neurotechnology. Life Sciences, Society and Policy 13, 5.
Ienca, M., Kressig, R.W., Jotterand, F., Elger, B. (2017). Proactive Ethical Design for Neuroengineering, Assistive and Rehabilitation Technologies: the Cybathlon Lesson. Journal of NeuroEngineering and Rehabilitation 14, 115.
Ienca, M., Haselager, P., Emanuel, E.J. (2018). Brain leaks and consumer neurotechnology. Nature Biotechnology 36, 805–810.
Illes, J. (2017). Neuroethics: Anticipating the future. Oxford University Press.
Kaebnick, G.E., Heitman, E., Collins, J.P., et al. (2016). Precaution and governance of emerging technologies. Science 354, 710–711.
Kellmeyer, P., Cochrane, T., Müller, O., et al. (2016). The Effects of Closed-Loop Medical Devices on the Autonomy and Accountability of Persons and Systems. Camb Q Healthc Ethics 25, 623–633.
Kellmeyer, P. (2018). Big Brain Data: On the Responsible Use of Brain Data from Clinical and Consumer-Directed Neurotechnological Devices. Neuroethics.
Kellmeyer, P., Chandler, J., Cabrera, L.Y., et al. (2019). Neuroethics at 15: The Current and Future Environment for Neuroethics. AJOB Neuroscience.
Kleesiek, J., Urban, G., Hubert, A., et al. (2016). Deep MRI brain extraction: A 3D convolutional neural network for skull stripping. NeuroImage 129, 460–469.
Knight, W. (2017). Biased algorithms are everywhere, and no one seems to care. MIT Technology Review.
Kreitmair, K.V., Cho, M.K., Magnus, D.C. (2017). Consent and engagement, security, and authentic living using wearable and mobile health technology. Nature Biotechnology 35, 617–620.
Krywko, J. (2017). To fix algorithmic bias, we first need to fix ourselves. In: Quartz. Accessed 17 Aug 2017.
Litjens, G., Sánchez, C.I., Timofeeva, N., et al. (2016). Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis. Scientific Reports 6, 26286.
Marblestone, A.H., Wayne, G., Kording, K.P. (2016). Toward an Integration of Deep Learning and Neuroscience. Front Comput Neurosci 10, 94.
Mauro, A.D., Greco, M., Grimaldi, M. (2015). What is big data? A consensual definition and a review of key research topics. AIP Conference Proceedings 1644, 97.
Meynen, G. (2014). Neurolaw: Neuroscience, Ethics, and Law. Review Essay. Ethic Theory Moral Prac 17, 819–829.
Milletari, F., Ahmadi, S.-A., Kroll, C., et al. (2017). Hough-CNN: Deep learning for segmentation of deep brain regions in MRI and ultrasound. Computer Vision and Image Understanding 164, 92–102.
Mittelstadt, B.D., Allo, P., Taddeo, M., et al. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society 3, 205395171667967.
Morozov, E. (2014). To save everything, click here: the folly of technological solutionism. PublicAffairs, New York.
Murgia, M., Shrikanth, S. (2019). How Big Tech is struggling with the ethics of AI. In: Financial Times. Accessed 28 May 2019.
Neely, R.M., Piech, D.K., Santacruz, S.R., et al. (2018). Recent advances in neural dust: towards a neural interface platform. Current Opinion in Neurobiology 50, 64–71.
Pereira, S., Pinto, A., Alves, V., Silva, C.A. (2016). Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images. IEEE Transactions on Medical Imaging 35, 1240–1251.
Popova, M., Isayev, O., Tropsha, A. (2018). Deep reinforcement learning for de novo drug design. Science Advances 4, eaap7885.
Regalado, A. (2017). Google’s health-care mega-project will track 10,000 Americans. MIT Technology Review.
Schirrmeister, R.T., Springenberg, J.T., Fiederer, L.D.J., et al. (2017a). Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping 38, 5391–5420.
Schirrmeister, R.T., Springenberg, J.T., Fiederer, L.D.J., et al. (2017b). A novel deep learning approach for classification of EEG motor imagery signals. J Neural Eng 14, 016003.
Schirrmeister, R.T., Gemein, L., Eggensberger, K., et al. (2018). P64. Deep learning for EEG diagnostics. Clinical Neurophysiology 129, e94.
Smith, B.W., Slack, M.B. (2015). The effect of cognitive debiasing training among family medicine residents. Diagnosis 2, 117–121.
Strickland, E. (2017). Facebook Announces “Typing-by-Brain” Project. In: IEEE Spectrum: Technology, Engineering, and Science News. Accessed 22 Sep 2017.
Taylor, L., Floridi, L., Van der Sloot, B. (2017). Group privacy: new challenges of data technologies. Springer, Cham.
Topol, E.J. (2019). High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine 25, 44–56.
Tversky, A., Kahneman, D. (1975). Judgment under Uncertainty: Heuristics and Biases. In: Wendt, D., Vlek, C. (Eds.). Utility, Probability, and Human Decision Making. Springer Netherlands, pp. 141–162.
Voeneky, S., Neuman, G.L. (2018). Human rights, democracy, and legitimacy in a world of disorder.
Wang, Z., She, Q., Smeaton, A.F., et al. (2019). Neuroscore: A Brain-inspired Evaluation Metric for Generative Adversarial Networks. arXiv:1905.04243 [cs, eess].
Yuste, R., Goering, S., Agüera y Arcas, B., et al. (2017). Four ethical priorities for neurotechnologies and AI. Nature News 551, 159.
Published Online: 2019-11-06
Published in Print: 2019-11-26
© 2019 Walter de Gruyter GmbH, Berlin/Boston
|
Understanding the cloud
The term “the cloud” is a popular, yet vague, concept that is often thrown around in C-level boardrooms. But what does it actually mean?
You’ve probably used it more than you think. If you draft documents in Google Docs, watch films on Netflix, use Siri or Alexa or store images on Dropbox, you’re using the cloud.
Put simply, the cloud refers to any type of software or service that isn’t physically located on your device, but instead runs on the internet. Any files, videos or images that you save on cloud services are actually stored on servers that belong to third party providers, which you can access from anywhere.
How does cloud hosting work?
Before the cloud existed, traditional websites and applications were hosted on single servers or shared servers in a data centre or on-site (i.e. in your office). However, since the early 2000s, businesses have been slowly waking up to the many benefits of moving data into the cloud.
Instead of having your website, application or data on a single physical server, cloud hosting uses a ‘cluster’ of multiple virtual servers, also called virtual machines (VMs).
What is a Hypervisor?
A dedicated server is split up using a hypervisor to create multiple virtual machines. A hypervisor is a type of lightweight operating system that is installed on the server to allow multiple VMs to run. Hypervisors make cloud-based applications available to users across a virtual environment while still allowing control over the infrastructure, applications and sensitive data.
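For illustration, the VMs running on a hypervisor can typically be queried programmatically. Here is a minimal sketch using the libvirt Python bindings; the QEMU/KVM connection URI and the presence of a running libvirt daemon are assumptions, and this is not tied to any particular hosting provider.

```python
import libvirt  # Python bindings for the libvirt virtualization API

# Read-only connection to a local QEMU/KVM hypervisor (assumed URI).
conn = libvirt.openReadOnly("qemu:///system")

# List the virtual machines (called "domains" in libvirt) on this host.
for dom in conn.listAllDomains(0):
    state = "running" if dom.isActive() else "stopped"
    print(f"{dom.name()}: {state}, max memory {dom.maxMemory()} KiB")

conn.close()
```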
Virtual machines are not physical objects; they exist only virtually. The network of virtual servers taps into an underlying network of physical infrastructure in data centres, all connected and accessible via the internet.
In essence, cloud hosting is a network of virtual servers, storage and networking made available on-demand, with potentially endless processing power. Clouds can also be networked together and be hosted in data centres all over the world.
What is a cloud service provider?
A cloud service provider usually offers infrastructure services in a utility model, where resources can be adjusted depending on your requirements. Customers can boot up, shut down, upgrade and downgrade the virtual servers on-demand.
For example, if your website sees a large spike in traffic, more resources can be added. Similarly, if you see a lull in traffic, fewer resources will be accessed, making it a cost-effective solution.
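The following toy sketch illustrates that utility-style scaling logic; the thresholds, per-VM capacity and cluster limits are invented for illustration and are not any provider's actual policy.

```python
import math

def desired_vm_count(requests_per_second: float,
                     capacity_per_vm: float = 100.0,
                     min_vms: int = 1,
                     max_vms: int = 20) -> int:
    """Scale the VM cluster to current traffic, within fixed bounds."""
    needed = math.ceil(requests_per_second / capacity_per_vm)
    return max(min_vms, min(max_vms, needed))

print(desired_vm_count(40))    # 1  -- lull in traffic, minimal cost
print(desired_vm_count(950))   # 10 -- spike: more resources added
print(desired_vm_count(5000))  # 20 -- capped at the cluster maximum
```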
The cloud-hosting model is a cheaper alternative to a traditional dedicated server model that requires companies to build and manage their own servers – which can be a costly capital and operating expense.
Who should use cloud hosting?
In short, anyone and everyone who has a digital footprint or online operations. IT professionals and business owners should think about cloud hosting when looking to have business access anywhere, improve business agility and accommodate the demand for planned growth in a cost-effective way.
If a website is mission-critical, meaning that the company cannot function without it, then cloud hosting is a good idea – e.g. e-commerce stores or high-traffic sites.
For example, imagine you needed to create a website for a new food delivery business. The data (transactions, delivery addresses, financial information etc.) and computing power to keep the website up and running has to be physically stored somewhere. Without the cloud, you would need to purchase a server, install the right software, learn how to manage your server and ensure your website is secure and accessible at all times.
For anyone without very specific expertise, this could cost thousands in money and even more in your time. That’s where cloud hosting can help – you can simply pay a hosting provider to house your data and ensure it is safe and available to your customers at all times, without the headache of running the infrastructure yourself.
As cloud hosting makes it easy to scale, it makes it appealing to news outlets and ticketing websites who expect traffic surges or large numbers of clicks on their posts. Streaming giant Netflix has to meet huge spikes in demand as well as times of lower activity, so cloud hosting enables it to scale up or down as and when it needs. Similarly, social networking sites such as Facebook and Instagram rely on cloud hosting to adjust to constantly varying levels of traffic.
What is the difference between public and private cloud?
Before the cloud, when you hosted your website or application on a single server, you had the choice of a shared server (sharing the machine with others) or a dedicated server (an individual machine specifically for you). Cloud hosting operates in a similar way.
Public cloud is the equivalent of a shared server, where the hardware is shared with several other virtualized sites, meaning the cost is also lower. Unlike traditional shared servers, you do not share disk space, processing power, or anything else with anyone else – you simply share the racks in the data centres.
With private cloud, however, you don’t share infrastructure with anyone else, which comes with a higher cost, but offers greater security and control. With no hardware shared and the entire virtualized resource sitting behind your own firewall, private cloud is well suited to those who consider data security important.
What are the benefits of the cloud?
• Reliability
If all of the data and compute of your website are stored on one machine in your office and a disaster happens such as a fire or theft, your site will vanish. With cloud hosting, high uptime is built into its structure – if your site is shared between a network of interconnected machines, then when one goes offline, the others can pick up the slack and keep your site online.
• Security
Cloud data centres are highly secure buildings with multiple security measures in place, including 24-hour monitoring, fingerprint locks, ID card scanners, security cameras, armed guards etc. Cloud providers take cybersecurity seriously, constantly updating security measures including patching, firewalls, web application firewalls, DDoS defence systems, VPNs, vulnerability scanning and much more. In addition, data stored in the cloud is usually encrypted, making it very difficult for hackers to gain access to, or make any sense of, your information.
• Scalability
The beauty of cloud hosting is that it can be responsive to demand, meaning you only pay for the resources you use. Such resources can be easily scaled up or down depending on your needs; if you are expecting extra traffic, you can scale up your resources to cater for the spike (e.g. eCommerce over the Black Friday weekend). Without the constraints of a physical server, cloud hosting is a cost-effective and greener way to run your website or application as you are only using the server resources you actually require. Therefore, by streamlining your data and resource use, you are creating less of an environmental impact.
• Versatility
Cloud hosting is also incredibly versatile, with most hosting providers building individual services that are entirely bespoke to the specific needs of the customer. With cloud hosting providers, you are able to specify the exact space, architecture, processing power, operating system and security that you need for your cloud set-up.
• Performance
Cloud hosting allows for easier load balancing between multiple server environments, which puts much less strain on a single server’s resources. This results in lightning-fast speeds and increased traffic capacity.
What is a Hyperscale Cloud?
With a combined market share of 58%, it is no secret that the cloud market is heavily dominated by three big players in the industry – Amazon, Microsoft, and Google.
When infrastructure is provided on such a large scale, it is referred to as “Hyperscale Cloud”. This model uses a huge network of servers to cope with high levels of throughput, performance and spikes in demand.
Advances in Artificial Intelligence (AI) and Machine Learning have accelerated the growth of the hyperscale cloud market, with huge pressure on businesses to keep up with ever-growing demands for complex computing tasks.
However, many organisations will often start with the false premise that they can simply “lift and shift” their on-premises infrastructure and move it over to a hyperscale public cloud. The reality is that cloud migration and ongoing management are complex tasks, so unless you have a team of technical experts in-house, managed cloud hosting could be beneficial.
What is managed cloud hosting?
As cloud platforms continue to become more complex, many companies are left wondering where to begin.
A managed service provider, like us, can work with you all the way through your journey to the cloud, from understanding your business challenges, to selecting the right cloud solution to use, and also architecting, deploying and managing the setup.
Management of security, infrastructure and maintenance also falls with the provider, leaving you the time and resources to focus on your business.
Hyve not only offer this on our own fully managed cloud platforms, but we also act as a partner to provide management on public clouds such as AWS and Azure.
Hyve case study
Broadlight is a DevOps and cloud consultancy, operating at the cutting edge of the digital transformation industry.
As Broadlight has grown, demands on their hosting infrastructure have increased, but the company found themselves unable to scale effectively with their previous hosting provider. The Broadlight website is now live on Hyve’s managed private cloud, and is scaled to give the flexibility and performance levels they were looking for. Read more here.
The future of cloud hosting
Cloud has produced a completely new world of jobs, services and applications, with almost every significant innovation such as artificial intelligence, AR/VR and IoT heavily relying on it.
With many industries now depending on this technology to innovate and grow – from storing digital health records to analysing the earth’s tectonic plate movements – cloud computing is becoming the backbone of modern society.
If you’d like to talk to one of our cloud hosting experts about how to get the most from your hosting why not give us a call on 0800 6122524 or email sales@hyve.com
|
Why is Labor Day in September, and why is it such a big deal?
Labor Day is this Monday. Frankly, it couldn’t come along at a better time.
After all, it’s been two long months since many of us enjoyed the revelry of the Fourth of July. Meanwhile, this whole flip-flops and humidity hair thing is getting old.
Just as Memorial Day has come to be thought of as the "beginning" of summer, Labor Day generally is considered its official unofficial end. But just try telling that to kids, who've already been back in school for weeks.
So why is Labor Day always in September? And why does this somewhat taken-for-granted holiday matter so much for America?
(Hint: It has nothing to do with car or mattress sales).
Here are the seven things you really need to know about Labor Day:
It all began with one "monster labor festival." That might sound more like a description of the goings-on at Bonnaroo or Dragon Con. But in truth, Labor Day's beginnings go back about a century and are rooted in a serious workers' rights movement. Rallies and other events aimed at bettering labor conditions were becoming increasingly frequent in the late 19th century, when workdays were long (10 to 12 hours), a "living" wage was out of reach for many, and the United States had the highest job-related fatality rate of any industrialized nation in the world.
But the event that’s generally considered the tipping point happened Sept. 5, 1882. That’s when New York’s Central Labor Union organized tens of thousands of workers to come together for a self-described “monster labor festival” that began with a parade and proceeded on to a park for a picnic, speeches and -- hey, this is America! -- fireworks.
Like Labor Day? Maybe you have a tuba player from New Jersey to thank. The success of that first event was hardly assured. Police, expecting a riot, showed up in force on the morning of the parade; meanwhile, no one knew how many workers -- most of whom would have to forfeit a day's pay to be there -- would show up to march. Those who did reportedly remained rooted in place at first because there was no music for them to march to. While some urged canceling the whole thing, organizers suddenly received word that "two hundred marchers from the Jewelers Union of Newark Two had just crossed (over on) the ferry," an official history recounts on the U.S. Department of Labor's website. "And they had a band!" Some 10,000 to 20,000 people wound up marching that year, the DOL says.
But let's not overlook the right honorable gentleman from ... South Dakota? Buoyed by that initial success, the CLU repeated the event on Sept. 5, 1883. Soon, individual states started creating so-called "labor days" -- in 1887, Oregon, New York, New Jersey, Massachusetts and Colorado all made it an official state holiday. More states followed suit, and on June 28, 1894, Congress enacted a bill making it a federal holiday. "Being the day celebrated and known as Labor's Holiday," the legislation that had been introduced the previous year by Sen. James Henderson Kyle of South Dakota grandly proclaimed, "is hereby made a legal public holiday."
Take a hint, Fourth of July! The first so-called Labor Day, on Sept. 5, 1882, took place on a Tuesday (the following year, it fell on a Wednesday). Luckily for fans of three-day weekends, the legislation that made Labor Day a federal holiday also established it henceforth and forevermore as being celebrated on "the first Monday of September in each year." The earliest date Labor Day can take place in any year is Sept. 1 (most recently in 2014) and the latest date is Sept. 7 (most recently in 2015).
Without Labor Day, summer would just go on and on and . . . Sounds pretty nifty, right? But without fall, there'd be no Halloween, no college football playoffs and -- gasp! -- no Pumpkin Spice Lattes! Summer may officially end Sept. 22, but like we said before, most people consider Labor Day to be its end point. But here's where it gets really interesting. Memorial Day, aka the "first day of summer," always takes place on the last Monday in May. The earliest date that can be is May 25 -- and that always occurs in the same years in which Labor Day arrives latest, on Sept. 7. In other words, summer is much "longer" in those years (a full five days longer than this year, just as a f'rinstance). The next time it will happen is in 2020.
Meanwhile, kids in Virginia would just be riding roller coasters forever. A 1986 state law there says public schools can't start before Labor Day without a waiver from the Virginia Department of Education. "Supporters of the current law say that it helps protect Virginia's tourism industry," the Virginia Gazette wrote earlier this year about an unsuccessful legislative attempt to end what's been nicknamed the "Kings Dominion Law." Because the holiday period can represent a last chance for family visits, the article explained, "Theme parks like Kings Dominion and Busch Gardens have advocated keeping schools from starting before Labor Day." But before students get any ideas, it seems waivers are increasingly winning out: A Richmond TV station reported Wednesday that an estimated 65 percent of Virginia's public school students had returned to class before Labor Day this year.
Finally, go team! Even the mighty NFL takes a cue from Labor Day. On Sept. 5, 2002, the NFL opened its season for the first time ever on a Thursday night with a game between the New York Giants and the San Francisco 49ers at Giants Stadium. Coming three days after Labor Day and just before the one-year anniversary of the Sept. 11 attacks, the game was preceded by "NFL Kickoff Live" from Times Square, "a football and music festival honoring the resilient spirit of New York and America." Every year since then, the high-profile game has kicked off the NFL season on the Thursday after Labor Day (the lone exception: in 2012, it was played on the Wednesday after the holiday to avoid conflicting with the Democratic National Convention). Since 2004, the defending Super Bowl champion has played in the NFL Kickoff game.
|
Hard hats are most generally associated with construction, but there are tons of industries that require head protection due to the nature of the job. Aside from construction, many Washington State industries have hazards from above, including work on ports, fishing boats, and logging operations. However, some of these industries are less diligent about hard hat use, resulting in potentially serious head injuries.
The real threat of head injuries should not only prompt employers to enforce hard hat rules, but should also inspire workers to wear them habitually. The reason is that one of the worst, deadliest, and costliest injuries is a traumatic brain injury (TBI). A simple washer, for example, falling about 30 feet will strike with a force of 6½ pounds on impact! That's enough to potentially pierce the skull.
Brain Injuries on the Job
People who suffer brain injuries on the job often find themselves on a long road to recovery. Depending on the severity, they may need to relearn how to speak, walk, and even feed themselves. Regardless, even the simplest brain injury could require physical and mental therapy to treat the memory problems, behavioral changes, depression, and personality changes associated with TBI. This is not to mention the significantly shortened life expectancy linked to TBI survivors.
But this can also be linked to the type of injury. You see, the human skull is not perfectly round, so the brain not only spins as a whole but some parts within it spin at different rates. This sets up additional shear forces inside the brain itself. This stress within the brain results in tearing of nerve fibers and tiny veins within the brain. This is called Diffuse Axonal Injury (DAI).
Types of force associated with head injuries:
Linear Force – Caused by a straight and direct impact (such as when a ball hits a wall without rotating); it consists firstly of blunt compression (the hit) and then a reaction (the bounce) causing direct injury to the point of impact and potential further injuries following a straight line into the brain.
Rotational Force – Causes the head to rotate around its point of articulation at the top of the spine as it is hit.
Hard Hat Safety
A survey by the Bureau of Labor Statistics (BLS) of accidents and injuries on a national level showed that most workers who suffered head injuries were not wearing head protection. The majority of workers were injured while performing their normal jobs at their regular worksites.
The BLS survey further found that in most instances head injuries occurred when the injured worker’s employer had not required usage of head protection even though there was a chance of head injury. Of those workers wearing hard hats, all but 5% indicated that they were required by their employers to wear them.
Researchers noted that the majority of workers who wore hard hats habitually at work believed that hard hats were a practical requirement for safety on their jobs.
According to the report:
• In nearly half of the accidents involving head injuries, employees knew of no actions taken by employers to prevent such injuries from recurring.
• More than one-half of the workers were struck on the head while they were looking down, and almost three-tenths were looking straight ahead.
• A third of the unprotected workers were injured when bumping into stationary objects; such actions injured only one-eighth of hard hat wearers.
Duty of Care
According to the Occupational Safety and Health Administration (OSHA) and in compliance with the Occupational Safety and Health Act of 1970, the employer is required to show a Duty of Care when there is a reasonable threat of head injuries in the workplace. They must identify the threats and take measures to protect employees, including requiring or providing hard hats.
They must require hard hats when:
• There is a possibility that a person may be struck on the head by a falling object.
• A person may strike their head against a fixed or protruding object.
• Accidental head contact may be made with electrical hazards.
Washington Brain Injury Lawyer
When a worker is hurt on the job, L&I is tasked with investigating the accident to determine whether any safety issues or violations by the company contributed to the personal injury and subsequent workers' compensation claim. The worker goes through the process of filing a claim, and generally receives benefits within 14 days of the industrial accident, but sometimes, for one reason or another, the claim may be denied. That's when the worker needs help from an experienced legal professional.
If you or someone you know suffers a serious personal injury at work due to improper procedures or a hazardous workplace, then you need a skilled attorney with experience in workers' compensation procedures to get you the compensation you deserve. Call Phillips Law Firm for a free consultation.
|
Prohibiting Some Types of Lying
As referred to previously, it is socially unacceptable to habitually tell lies – but that does not mean that there should be an all-encompassing law against lying. Such a law would be unenforceable in practice, given the enormous number of 'little white lies' that are told to soothe people's feelings, and the questions sometimes raised about whether a lie was intentional. It may be thought appropriate to criminalise some types of lying, though, where damage can be proved – but there are practical difficulties.
Libel and slander can damage people's reputations, so it is appropriate to consider legal remedies against them. People can take each other to court in some jurisdictions, to sue for damages using the law of torts (5.2.4), but that has two serious disadvantages: the cost might prevent many people from being able to bring a case; and the further publicity provided by the court case might exacerbate the damage caused.
An issue that has come to the fore recently is that of subverting political discourse with 'fake news', as discussed later. This is serious, having been credited with changing the outcome of elections, but there are practical difficulties in criminalising it: it isn't possible to prosecute perpetrators in other countries (5.3.4); it spreads so fast on social media that the law could never catch up with it; people might not realise that it is fake; and it is possible to use the formula "it has been reported that…" to evade responsibility for the accuracy of the story. In practice, it is currently being combated by putting pressure on service-providers to remove damaging material.
The issues raised by telling lies to stir up ethnic conflict are raised in the next section (5.4.6), as part of a wider topic.
Next Section
|
Causes and Treatment of Red Spots on Skin
Reviewed on 9/29/2020
Which infectious conditions cause red spots?
A variety of conditions can cause red spots, both infectious and allergic.
Some of the common infections that cause red spots include:
Which allergic conditions cause red spots?
Some of the common allergic conditions with red spots are:
• Contact dermatitis: Allergies to latex, insect bites (mosquitoes, ticks), and diapers (diaper rash in children) are examples of contact dermatitis. Treatment involves oral antihistamine pills and creams for local application to relieve the itch, such as calamine lotion or a moisturizer mixed with a steroid (hydrocortisone).
• Food allergies: People may have an allergy to certain foods, which can appear as rashes. The most common foods are fish, shellfish, peanuts, and tree nuts like walnuts. Problem foods for children can include eggs, milk, peanuts, soy, and wheat.
• Drug rash: A sudden allergic reaction to any medicine can appear in the form of a rash. It is advisable to contact the doctor who will consider your medical history and prescribe an alternate medicine.
• Atopic dermatitis: It often starts in babies and can either go away with age or stay permanently with flares throughout life. Atopic dermatitis treatment is aimed at managing flares and keeping the skin moisturized.
• Poison ivy, poison oak, or poison sumac rash: Touching any of these plants results in blistering red spots with intense itching all over the body in most people. The red spots usually subside on their own within 7 to 21 days. Treatment includes washing the affected part with lukewarm or soapy water, applying calamine lotion, and taking antihistamine medications to relieve the itch.
What are the other conditions that cause red spots?
The other common conditions that cause red spots but not because of any allergy or infection include:
Heat rash: Red spots caused by over-exposure to the sun or heat are known as heat rash. Treatment involves applying a soothing lotion, such as calamine lotion or aloe vera. Preventive measures include applying sunscreen lotion before venturing out and wearing full-sleeved tops.
Swimmer's itch: Swimmer's itch is a rash that comes from being in water where certain infected snails are present. It normally subsides on its own in about a week, and medical treatment is generally not needed.
Acne rosacea: A chronic skin condition in which a red rash appears most commonly on the cheeks and around the nose. The disease is characterized by flares that are triggered by a variety of factors. A doctor will be able to suggest the most effective treatments.
Pityriasis rosea: A scaly reddish-pink rash that sweeps outward like the branches of a pine tree. It appears mostly on the chest, abdomen, and back and is most common in people between 10 and 35 years of age. Treatment includes antihistamine pills.
Psoriasis: Psoriasis appears as silvery or red, scaly, itchy rashes, most commonly over the knee joints, elbows, fingers, and toes. There are several types of psoriasis and each appears slightly different than the other. Treatments involve the application of creams and medications to the skin, light therapy, and injectable medications.
Petechiae: These are red spots that appear due to the rupture of tiny blood vessels (capillaries) under the skin. These may happen in:
If the red spots cause severe discomfort and are associated with fever or unusual signs, it is vital to consult a doctor who can diagnose the condition and start the treatment immediately.
|
What Are the 5 Warning Signs of Prostate Cancer?
Reviewed on 7/2/2020
What is prostate cancer?
The prostate is a walnut-shaped gland located below the bladder.
Five warning signs are bone pain, compression of the spine, painful urination, erectile dysfunction, and blood in the urine.
Prostate cancer affects the prostate gland in men. It is the second-leading cause of cancer deaths for men in the United States.
The prostate is a small organ that lies below the urinary bladder and in front of the rectum (part of the large intestine). In men, it is normal for the size of the prostate to increase with age. In younger men, it is about the size of a walnut. The prostate makes a milky fluid, which is a part of semen. This fluid feeds the sperm.
Growth in the prostate can be of two types: benign growths, such as benign prostatic hyperplasia (BPH), which do not spread, and malignant growths (cancer), which can invade nearby tissue.
Prostate cancer starts in the prostate gland and may spread to other organs.
What causes prostate cancer?
The exact cause of prostate cancer is unknown. One in three men older than 50 years has some cancer cells in the prostate. Luckily, eight out of 10 of these tumors are found to be small and harmless after biopsy. Although the cause of prostate cancer is unknown, many risk factors increase the likelihood of developing it:
What are the 5 warning signs of prostate cancer?
Prostate cancer rarely produces symptoms in the early stage; however, a few signs can help in detecting it early. Five potential warning signs of prostate cancer are:
1. Bone pain (due to spread)
2. Symptoms from compression of the spine
3. Painful urination or ejaculation
4. Sudden erectile dysfunction (trouble in getting an erection)
5. Blood in urine or semen
What are the other symptoms of prostate cancer?
Symptoms of prostate cancer are:
How is prostate cancer diagnosed?
Screening asymptomatic men helps identify early prostate cancer. Screening is recommended in men:
Screening methods include:
How is prostate cancer treated?
The treatment plan for prostate cancer depends on the following factors:
• The stage and grade of cancer
• Age and health
• Risk category
• Patient values and preferences
• Life expectancy
Treatment choices for prostate cancer involve:
What is the survival rate for prostate cancer?
|
Algo: Depth-first search
Depth-first search in undirected graphs
Exploring mazes
Depth-first search is a surprisingly versatile linear-time procedure that reveals a wealth of information about a graph. The most basic question it addresses is, What parts of the graph are reachable from a given vertex?
To understand this task, try putting yourself in the position of a computer that has just been given a new graph, say in the form of an adjacency list. This representation offers just one basic operation: finding the neighbors of a vertex. With only this primitive, the reachability problem is rather like exploring a labyrinth.
Exploring a graph is rather like navigating a maze
You start walking from a fixed place and whenever you arrive at any junction (vertex) there are a variety of passages (edges) you can follow. A careless choice of passages might lead you around in circles or might cause you to overlook some accessible part of the maze. Clearly, you need to record some intermediate information during exploration.
This classic challenge has amused people for centuries. Everybody knows that all you need to explore a labyrinth is a ball of string and a piece of chalk. The chalk prevents looping, by marking the junctions you have already visited. The string always takes you back to the starting place, enabling you to return to passages that you previously saw but did not yet investigate.
How can we simulate these two primitives, chalk and string, on a computer? The chalk marks are easy: for each vertex, maintain a Boolean variable indicating whether it has been visited already. As for the ball of string, the correct cyberanalog is a stack. After all, the exact role of the string is to offer two primitive operations—unwind to get to a new junction (the stack equivalent is to push the new vertex) and rewind to return to the previous junction (pop the stack).
Instead of explicitly maintaining a stack, we will do so implicitly via recursion (which is implemented using a stack of activation records). The resulting algorithm is shown in the following picture.
Finding all nodes reachable from a particular node
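A minimal sketch of the procedure in Python, assuming for illustration that the graph $G$ is an adjacency-list dictionary and ${\tt visited}$ a Boolean map over the vertices:

```python
def explore(G, v, visited, previsit=None, postvisit=None):
    """Mark everything reachable from v, recursing on unvisited neighbors."""
    visited[v] = True              # the chalk mark: never process v twice
    if previsit:
        previsit(v)                # hook: v has just been discovered
    for u in G[v]:                 # the only primitive we need: v's neighbors
        if not visited[u]:
            explore(G, u, visited, previsit, postvisit)
    if postvisit:
        postvisit(v)               # hook: leaving v for the last time
```

The recursion plays the role of the ball of string: each recursive call pushes an activation record (unwinding the string), and each return pops it (rewinding).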
The $\tt{previsit}$ and $\tt{postvisit}$ procedures are optional, meant for performing operations on a vertex when it is first discovered and also when it is being left for the last time. We will soon see some creative uses for them.
More immediately, we need to confirm that $\tt{explore}$ always works correctly. It certainly does not venture too far, because it only moves from nodes to their neighbors and can therefore never jump to a region that is not reachable from $v$. But does it find all vertices reachable from $v$? Well, if there is some $u$ that it misses, choose any path from $v$ to $u$, and look at the last vertex on that path that the procedure actually visited. Call this node $z$, and let $w$ be the node immediately after it on the same path.
So $z$ was visited but $w$ was not. This is a contradiction: while the explore procedure was at node $z$, it would have noticed $w$ and moved on to it.
Incidentally, this pattern of reasoning arises often in the study of graphs and is in essence a streamlined induction. A more formal inductive proof would start by framing a hypothesis, such as “for any $k \ge 0$, all nodes within $k$ hops from $v$ get visited.” The base case is as usual trivial, since $v$ is certainly visited. And the general case—showing that if all nodes $k$ hops away are visited, then so are all nodes $k + 1$ hops away—is precisely the same point we just argued.
The following figure shows the result of running ${\tt explore}$ on our earlier example graph, starting at node $A$, and breaking ties in alphabetical order whenever there is a choice of nodes to visit. The solid edges are those that were actually traversed, each of which was elicited by a call to explore and led to the discovery of a new vertex. For instance, while $B$ was being visited, the edge $B$--$E$ was noticed and, since $E$ was as yet unknown, was traversed via a call to ${\tt explore}(E)$. These solid edges form a tree (a connected graph with no cycles) and are therefore called tree edges. The dotted edges were ignored because they led back to familiar terrain, to vertices previously visited. They are called back edges.
The result of ${\tt explore}(A)$ on the graph given above
The ${\tt explore}$ procedure visits only the portion of the graph reachable from its starting point. To examine the rest of the graph, we need to restart the procedure elsewhere, at some vertex that has not yet been visited. The algorithm shown below, called depth-first search (DFS), does this repeatedly until the entire graph has been traversed.
Depth-first search
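Continuing the same Python sketch, the outer loop simply restarts ${\tt explore}$ at every vertex not yet visited:

```python
def dfs(G, previsit=None, postvisit=None):
    """Traverse all of G, restarting explore at each unvisited vertex."""
    visited = {v: False for v in G}
    for v in G:
        if not visited[v]:
            # each top-level call grows one tree of the DFS forest
            explore(G, v, visited, previsit, postvisit)
    return visited

# Example: two components yield two trees in the forest
# dfs({"A": ["B"], "B": ["A"], "C": []})
```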
The first step in analyzing the running time of DFS is to observe that each vertex is ${\tt explore}$'d just once, thanks to the ${\tt visited}$ array (the chalk marks). During the exploration of a vertex, there are the following steps:
1. Some fixed amount of work—marking the spot as visited, and the ${\tt pre/postvisit}$.
2. A loop in which adjacent edges are scanned, to see if they lead somewhere new.
This loop takes a different amount of time for each vertex, so let's consider all vertices together. The total work done in step 1 is then $O(|V|)$. In step 2, over the course of the entire DFS, each edge $\{x, y\} \in E$ is examined exactly twice, once during ${\tt explore}(x)$ and once during ${\tt explore}(y)$. The overall time for step 2 is therefore $O(|E|)$ and so the depth-first search has a running time of $O(|V| + |E|)$, linear in the size of its input. This is as efficient as we could possibly hope for, since it takes this long even just to read the adjacency list.
The following shows the outcome of depth-first search on a $12$-node graph, once again breaking ties alphabetically (ignore the pairs of numbers for the time being). The outer loop of DFS calls ${\tt explore}$ three times, on $A$, $C$, and finally $F$. As a result, there are three trees, each rooted at one of these starting points. Together they constitute a forest.
(a) A 12-node graph. (b) DFS search forest.
Depth-first search can be easily used to check whether an undirected graph is connected and, more generally, to split a graph into connected components.
Previsit and postvisit orderings
We have seen how depth-first search—a few unassuming lines of code—is able to uncover the connectivity structure of an undirected graph in just linear time. But it is far more versatile than this. In order to stretch it further, we will collect a little more information during the exploration process: for each node, we will note down the times of two important events, the moment of first discovery (corresponding to ${\tt previsit}$) and that of final departure (${\tt postvisit}$). The figure above shows these numbers for our earlier example, in which there are $24$ events. The fifth event is the discovery of $I$. The $21$st event consists of leaving $D$ behind for good. One way to generate arrays ${\tt pre}$ and ${\tt post}$ with these numbers is to define a simple counter ${\tt clock}$, initially set to $1$, which gets updated as follows.
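In the Python sketch, the counter and the two hooks might look as follows (module-level state is an expository shortcut, not a recommendation):

```python
clock = 1
pre, post = {}, {}

def previsit(v):
    global clock
    pre[v] = clock    # moment of first discovery
    clock += 1

def postvisit(v):
    global clock
    post[v] = clock   # moment of final departure
    clock += 1
```

Running ${\tt dfs}$ with these hooks records all $2|V|$ events, one number per discovery and one per departure.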
These timings will soon take on larger significance. Meanwhile, you might have noticed from the graph above that:
Property For any nodes $u$ and $v$, the two intervals $[pre(u), post (u)]$ and $[pre(v), post(v)]$ are either disjoint or one is contained within the other.
Why? Because $[pre(u), post(u)]$ is essentially the time during which vertex $u$ was on the stack. The last-in, first-out behavior of a stack explains the rest.
Depth-first search in directed graphs
Types of edges
Our depth-first search algorithm can be run verbatim on directed graphs, taking care to traverse edges only in their prescribed directions. The following figure shows an example and the search tree that results when vertices are considered in lexicographic order.
DFS on a directed graph
In further analyzing the directed case, it helps to have terminology for important relationships between nodes of a tree. $A$ is the root of the search tree; everything else is its descendant. Similarly, $E$ has descendants $F$, $G$, and $H$, and conversely, is an ancestor of these three nodes. The family analogy is carried further: $C$ is the parent of $D$, which is its child.
For undirected graphs we distinguished between tree edges and nontree edges. In the directed case, there is a slightly more elaborate taxonomy:
• Tree edges are those that belong to the DFS forest itself.
• Forward edges lead from a node to a nonchild descendant in the DFS tree.
• Back edges lead from a node to an ancestor in the DFS tree.
• Cross edges lead to a node that is neither descendant nor ancestor, one whose exploration has already been fully completed.
The graph above has two forward edges, two back edges, and two cross edges. Can you spot them?
Ancestor and descendant relationships, as well as edge types, can be read off directly from ${\tt pre}$ and ${\tt post}$ numbers. Because of the depth-first exploration strategy, vertex $u$ is an ancestor of vertex $v$ exactly in those cases where $u$ is discovered first and $v$ is discovered during ${\tt explore}(u)$. This is to say ${\tt pre}(u) < {\tt pre}(v) < {\tt post}(v) < {\tt post}(u)$, which we can depict pictorially as two nested intervals: the interval $[{\tt pre}(v), {\tt post}(v)]$ sits wholly inside $[{\tt pre}(u), {\tt post}(u)]$.
The case of descendants is symmetric, since $u$ is a descendant of $v$ if and only if $v$ is an ancestor of $u$. And since edge categories are based entirely on ancestor-descendant relationships, it follows that they, too, can be read off from ${\tt pre}$ and ${\tt post}$ numbers. Here is a summary of the various possibilities for an edge $(u, v)$:
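In summary: a tree or forward edge satisfies $pre(u) < pre(v) < post(v) < post(u)$; a back edge satisfies $pre(v) < pre(u) < post(u) < post(v)$; and a cross edge satisfies $pre(v) < post(v) < pre(u) < post(u)$. A sketch in the same Python style, assuming ${\tt pre}$ and ${\tt post}$ come from a completed DFS:

```python
def classify_edge(u, v, pre, post):
    """Classify directed edge (u, v) using pre/post numbers of a finished DFS."""
    if pre[u] < pre[v] < post[v] < post[u]:
        return "tree/forward"   # v is a descendant of u
    if pre[v] < pre[u] < post[u] < post[v]:
        return "back"           # v is an ancestor of u
    return "cross"              # v's interval lies entirely before u's
```

Note that tree and forward edges cannot be told apart from the numbers alone; distinguishing them requires remembering which edges the search actually traversed.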
You can confirm each of these characterizations by consulting the diagram of edge types. Do you see why no other orderings are possible?
Using the edge types just introduced, one can check in linear time whether a given graph is acyclic: a directed graph has a cycle if and only if its depth-first search reveals a back edge. Depth-first search can also be used to find a linearization of a given dag, as well as to split any directed graph into strongly connected components.
DFS visualisation by David Galles:
|
Information Booklets
Age-related macular degeneration (AMD) affects a tiny part of the retina at the back of your eye, called the macula. AMD causes changes to the macula, which leads to problems with your central vision. Read the Understanding Age-related Macular Degeneration Booklet (2019)
Read the Understanding Retinal Detachment Information Booklet (2020)
Read the RNIB Nystagmus Information Booklet (2020)
Read the Understanding Glaucoma (2019)
Read the Understanding Eye conditions related to diabetes (2019)
Dry eye is an eye condition caused by a problem with tears. Dry eye can make your eye feel uncomfortable, red, scratchy and irritated. Despite the name, having dry eye can also make your eyes watery. Typically, dry eye doesn’t cause a permanent change in your vision. It can make your eyesight blurry for short periods of time, but the blurriness will go away on its own or improve when you blink. RCOphth RNIB Understanding Dry Eye (2017)
Charles Bonnet syndrome (CBS) causes people who have lost a lot of sight to see things that aren’t there. Medically, this is known as having hallucinations. CBS hallucinations are only caused by sight loss and aren’t a sign that you have a mental health problem. RCOphth RNIB Understanding Charles Bonnet Syndrome (2017)
Posterior vitreous detachment (PVD) is a condition where your vitreous comes away from the retina at the back of your eye. This detachment is caused by changes in your vitreous gel.
PVD isn’t painful and it doesn’t cause sight loss, but you may have symptoms such as seeing floaters (small dark spots or shapes) and flashing lights.
Read the RNIB Posterior Vitreous Detachment Booklet (2020)
A cataract can make your vision blurry or misty, a bit like trying to look through frosted glass. Some babies are born with cataracts or develop cataracts at a very early age. RNIB Congenital Cataracts Patient Information (2019)
|
Can You Trick Your Brain Into Being Happy?
How does Smiling affect your brain?
Smiling activates tiny molecules in your brain that are designed to fend off stress.
These molecules, called neuropeptides, facilitate communication between neurons in your brain.
Also, when you smile, your brain releases dopamine, endorphins and serotonin.
How can I trick my mind to study?
Here are some tips I actually use to 'trick' myself into studying:
1) Keep your homework open on your desk before you go to bed.
2) Treat yourself to tea at the beginning and end of study sessions.
3) When in doubt, haul your butt to the silent library.
4) Take brain breaks – but don't spend them looking at a screen.
How do you reprogram your brain to be happy?
Here are 5 daily disciplines that will help you reprogram your mind to positive:
• Keep a gratitude journal. Research has shown that practicing gratitude regularly makes your brain healthier and happier.
• Repeat positive affirmations.
• Associate and surround yourself with supportive people.
• Ignore negative thoughts.
• Stay active.
Does fake smiling release endorphins?
Release the Endorphins! One study even suggests that smiling can help us recover faster from stress and reduce our heart rate. In fact, it might even be worth your while to fake a smile and see where it gets you.
Can you force yourself to be happy?
You should never force feelings or emotions upon yourself. However, you can "trick" your brain into feeling happy by doing things like smiling and engaging in things that you love. Telling yourself that you are happy/successful/positive/etc. even if you don't feel that way is another way to do this.
Can you turn off pain receptors?
Scientists have discovered a new pain center in the brain that they may be able to ‘turn off’ to relieve agony for chronic nerve sensitivity. Nerve pain is one of the most difficult types of constant discomfort to treat because most painkillers do not target the correct receptors for it.
How can a sad person be happy?
Here are some positive ways to deal with sad feelings:
• Notice how you feel and why. Knowing your emotions helps you understand and accept yourself.
• Bounce back from disappointments or failures. When things don't go your way, don't give up!
• Think positive.
• Think of solutions.
• Get support.
• Put yourself in a good mood.
How can I trick my brain to like doing hard things?
5 Ways to Trick Your Brain to Do Hard Things:
• Use Micro-Goals to Trick Your Brain to Do Hard Things.
• Use Mindfulness to Trick Your Brain to Do Hard Things.
• Use Exercise to Trick Your Brain to Do Hard Things.
• Get Enough Sleep to Trick Your Brain to Do Hard Things.
• Listen to Music to Trick Your Brain to Do Hard Things.
|
How To Experience The Unique Cultures Of African Tribes
It’s easy to visit Africa and yet see nothing of the local customs and cultures. Safaris tend to focus on wildlife, and many travelers don’t get close enough to see a local village or its people, let alone experience some of their cultures or customs.
The African continent has 54 countries and around 1.3 billion people. There are an estimated 3,000 tribes, speaking more than 2,000 different languages, each with its own style, look, and culture. From shaven heads to intricate braids, brightly colored clothing to intricate beaded jewelry, these are just some of the features of diverse African tribes. And just as they look different, they have different traditions, too. To come to Africa without meeting its people is to miss out on a big part of what makes this continent unique.
What Is A Tribe?
Discussing the definition of a “tribe” would doubtless keep a social anthropologist busy for days. Still, it’s commonly understood that a tribe is a community of people who share the same culture, language, traditions, and ideology. Read on to learn a little about a few of the fascinating and different tribes you could visit on your next African journey.
The hunter-gatherer San people are one of the world’s oldest tribes and probably the first inhabitants of southern Africa. Today their approximately 100,000 descendants are predominantly found in Botswana, Namibia, Angola, and South Africa. San people, also known as Bushmen, are recognizable for the unique clicking sound they make when speaking.
The San's tracking and hunting skills are renowned, helping them survive the desolate and unforgiving landscapes of southern Africa's deserts and vast salt pans. Spending some time with them provides insight into their unique culture and skills. They can show you how to make animal traps, find roots and tubers, and even how to make tobacco from zebra dung! Dressed in loincloths, with bows and arrows slung over their shoulders, they lead the way and you follow, and you can't help but be in awe of their intimate knowledge of the land.
The San were the great artists of southern Africa and were responsible for cave and rock art found across the region, some of which dates back thousands of years. They used pigments made from minerals, ochre, eggs, and blood to paint iconic images of hunters and various animal prey.
Sadly, they are also synonymous with the plight of minorities in Southern Africa, and have been variously hunted, exploited, and pushed off their land. Today the traditional lifestyle of the San Bushmen is restricted to small pockets of land, and their survival and way of life hang in the balance.
Zulu tribe members dancing.
With a population of around 11 million, the Zulu are the largest tribe in South Africa and one of the largest tribes in Africa.
They are a warrior tribe, originally from East Africa, but who migrated south centuries ago, finding a home in KwaZulu-Natal, South Africa’s Indian Ocean Coast. In the early 19th century, the Zulus, under the leadership of King Shaka, became a formidable empire with a fearsome reputation that is still acknowledged today. Shakaland, a cultural village, which has the largest kraal in Zululand, is also the birthplace of the legendary King Shaka. Here, Zulu traditions and culture are kept alive, with demonstrations of Zulu craft, building skills, pottery, brewing, dancing and music. The Zulu are particularly renowned for their beadwork, with bright colored beads woven into intricate patterns that are highly decorative, functional, and symbolic.
The Maasai are arguably the most famous of all the African tribes and live along the Great Rift Valley of Kenya and Tanzania. These homelands are close to many famous game parks, meaning you are bound to come into contact if you venture here on safari. With their red, sarong-like blankets (shuka), pierced ear lobes, and colorful ornaments, you will know when you see a Maasai tribesman! Despite the pressures of the modern world, the Maasai have fought to preserve their way of life. On any east African safari, you are bound to encounter them and some of their famous traditions, including the jumping dance (adamu), and their predilection for spitting and drinking blood.
Adamu is performed as part of initiation rites when young adults become men and eligible bachelors. Accompanied by song, pairs of men take turns to see who can jump the highest, demonstrating their prowess and fitness … he who jumps highest attracts the best bride!
While in Western traditions saliva is a pretty private, personal matter, in Maasai culture it’s considered extremely good luck to be shared! When shaking the hand of an elder, it is important to spit in one’s palm, and to ward off evil spirits, one must spit onto a newborn baby’s head.
Spitting is one thing, but how about drinking blood? Yes, the Maasai drink cow’s blood (often mixed with milk)! (For your peace of mind, let me assure you that the Maasai revere their cattle, and the letting of blood causes no lasting harm.)
Visit a Maasai village to learn their culture and traditions and visit a traditional boma to watch them herding their cattle and making traditional beaded jewelry. Some camps offer guided walks with the Maasai, which are a good opportunity to enjoy the wilderness, watch wildlife, and spend more time with these friendly people.
The unforgiving, desolate Kunene region of northwest Namibia is home to a resilient people called the Himba. This tribe of hunter-gatherers and pastoralists has successfully maintained their culture and traditional way of life, predominantly because the area they call home is so incredibly remote.
Himba women are famous for their appearance with red-tinged complexions and thick, red hair in elaborate hairstyles. Hair for Himba women signifies age and status, starting with shaved heads for young children, then braids and plaits, and graduating to a leather ornament called an Erembe for women who have had children. The unique color comes from a paste made from butter, ochre, and fat. The paste is known as otjize and is applied daily to skin and hair alike. (The Himba men do not use the paste).
Central to the Himba’s cultural beliefs is Okuruwo, the holy fire, which symbolizes their connection to their ancestors, who are believed to be in direct communication with Mukuru, the Himba god. There is a permanent fire at the center of each village to signify this connection. It is tended to by a fire-keeper from each family.
Spending time in a Himba village is a humbling experience. It gives you a chance to learn about the architecture of their houses, the structure of their community, their survival in an unforgiving landscape, and how to create beautiful and intricate jewelry from iron and shell beads.
Samburu warriors in Kenya.
The Samburu tribe from north and central Kenya are pastoralists, primarily herding cattle, but also goats, sheep, and sometimes camels. The Samburu are closely related to their southern neighbors, the Maasai, but are semi-nomadic, wandering in remote, arid areas. Like their Maasai neighbors, the Samburu diet includes milk and animal blood, while eating meat is reserved for special occasions.
The word Samburu means butterfly and refers to their many colorful adornments. Men wear black or pink robes in the style of a Scottish kilt, along with headdresses, anklets, bracelets, necklaces, and long braids. Women have shaven heads and wear two blue or purple cloths, one around the waist and one around the chest, and adorn their bodies further with ochre, similar to the Himba of Namibia.
What sets the Samburu apart is their gerontocracy. Gerontocracy is a social structure where the elders make all the decisions. The oldest members of the society are the leaders and have the final say in all matters and possess the power to curse younger members of the tribe.
The Samburu are one of the few African tribes that still live according to old traditions and customs, making a unique and interesting visit.
Southern Ndebele
The Southern Ndebele are found in South Africa’s north-eastern provinces, and while they share some language with the Zulu, they have unique culture and beliefs that set them apart from other African ethnic groups.
The Ndebele believe that spells or curses cause illness. To cure illness, a sangoma (traditional healer) battles these forces using traditional herbal medicines and bone throwing. While these traditions are interesting, what truly makes the Southern Ndebele unique is their artistic style. Not just clothing and adornments, but homes, too, are decorated with striking geometric patterns filled in with color.
While traditional Ndebele designs were of muted earth-ochres, tastes have evolved, and modern Ndebele designers use a much more vibrant and vivid palette. One such famous Southern Ndebele artist is Esther Mahlangu, whose designs have appeared around the world, from the tails of British Airways jumbo jets to museums and private art collections.
If you're interested in finding out more about African tribes and experiencing their way of life, modern-day tourism makes this possible. Many safari companies can include visits to tribal villages in your itinerary. These can be anything from a visit of an hour or two to an overnight stay or more. If you do decide to experience local African culture in this way, it's a good idea to follow a few basic etiquette rules:
Different cultures view time differently, so focus on the moment and the people you are with rather than the schedule. People and experiences in the present are more valuable than appointments in the future.
Keep smiling! If you feel uncomfortable, awkward, or embarrassed, just smile!
We live in an amazing age where global travel is relatively quick and easy. You no longer need to be an anthropologist to visit these incredible African tribes and make memories that will last a lifetime. So don’t just read about rich African cultures — come and experience them!
Sarah Kingdom
Expert Contributor
|
Brass Cueing Technique – Not Just Breathing
Evidently, integrating a clear and rhythmically decisive breath into a cue can be quite unnatural to many brass students. This is, most likely, simply a result of a lack of practice. When musicians are alone practicing their individual part they rarely incorporate a communicative breath into their drills. When put in a rehearsal or performance situation they then must suddenly add something new to what they’ve practiced in private. When others are relying on you is not the time to be trying to do something new.
The most common problem lies with the timing of the inhale. Some brass students don’t accurately place the beginning of the inhale; others try to spread the audible part of the inhale (which is usually the bulk of the inhale) over too long a period of time.
The sound of the inhale should begin on the ictus before playing commences, that is, squarely on the second beat of the cue (again, the duration of a beat depends on the time signature, tempo and character of the music). Remember, it is of vital importance that the Leader firmly establishes the tempo and counts off in his or her head before attempting to give the cue. The sound of the start of the inhale will aid the Followers in establishing where the Leader’s ictus is, thus where the following attack should occur.
When a Leader attempts to extend the sound of the inhale over the entire beat they can run into one or two problems.
1. Managing the switch from the fastest part of the inhale to an exhale can be difficult and result in an explosive and uncontrolled attack.
2. The sound of a long inhale can give the misguided impression of a slower tempo.
Both issues can be addressed simply by making the audible part of the inhale, when the majority of the air is taken in, no longer than half the length of the time between the beginning of the second beat of the cue and the following entrance. Therefore, if a piece begins on beat one of a 4/4 measure, the inhale would be heard as an eighth note (quaver) on the first half of the preceding beat. If there is a pickup note, the inhale is heard for half the length of time between the beginning of the second beat of the cue and the pickup note. In the case of an eighth note pickup in 4/4 time, the inhale would be heard as a sixteenth (semiquaver). A player could certainly continue to inhale longer if they like, but limiting the time of the audible part of the inhale provides more rhythmic clarity to the Followers (and also ensures that the Leader is subdividing). This is also more natural for the Leader in that he or she will take in more air at the beginning of the inhale (the audible part) than at the end.
Musicians Grunting on Stage!!!
Whenever I introduce the notion of grunting in performance to brass students it always creates a half confused/half disgusted look on their faces. It is important to understand that this “grunt” from the Leader is only for the Followers to hear, not the audience. The reasons that a grunt is effective and efficient are as follows:
1. Its placement can be more precise than using words. A grunt can be sharply accented to clearly show where the Leader is placing his or her ictus (i.e., where the beginning of beat occurs). Conversely, saying “one” or “three” takes slightly longer and both words begin with a soft syllable. Since the goal is to give a rhythmic point of reference, a more concentrated sound is more effective.
2. Pronouncing words distorts the embouchure. By grunting, the embouchure can remain in the ready position.
3. When a piece starts in 3/4, 5/4 or begins with one or more pickup beats, less experienced Leaders often misspeak the beat number in the cue, subsequently confusing the Followers. A grunt is not time signature dependent and relieves the Leader of having to think of what to call the beat that lies two counts before the entrance.
Now the audible inhale on the beat preceding the first note, not only allows the Leader to take a breath and encourages the ensemble to breathe together, but also reinforces to the Followers where the ictus is and, therefore, where to place that first attack.
Brass Players Are Not Neanderthals
Despite grunting from the days when we lived in caves, using a grunt as part of a cue is not intuitive for many brass students and I must often explain to them how to do it.
The pronunciation is "uh". Accent the beginning of it and shorten the sound as much as possible; the idea is to produce a very precise and compact sound that serves as a model for the level of precision of the ensemble's pending entrance. The sound should be made….
See complete article and videos here
|
In the summertime, when the weather is fine, you may have swimming on your mind. A swimming pool dip can cool you off. But some other bodily functions may produce unhealthy chemical reactions. Urinating in a chlorinated pool can lead to small amounts of chemicals that might irritate your lungs, according to this new video from Breakthrough Science and the American Chemical Society.
The culprit is uric acid in urine, says chemist Ernest Blatchley of Purdue University, who admits in the video that he has peed in a pool. But he doesn't anymore, and not just because it is impolite. He and some colleagues discovered that uric acid and chlorine interact to produce small amounts of cyanogen chloride and trichloramine. When inhaled in large volumes these chemicals can damage internal organs. The amount of these chemicals that can be made by the typical amount of urine released by a swimmer--30 to 80 milliliters--is much smaller than anything that has been linked to physical harm, Blatchley hastens to say. But particularly in indoor pools, it is possible they could cause some irritation, he says. He and his colleagues originally reported these results in the journal Environmental Science & Technology.
Peeing in the ocean should be worry-free, however. That's according to some reporting by Lauren Wolf, an editor at Chemical & Engineering News. Urine, she notes, is largely water and salt--and that happens to be, largely, what seawater is made from. There isn't anything like chlorine in ocean water to trigger a bad reaction. Plus, she notes, fish and whales are doing a lot of ocean urinating on their own.
|
22. Effect of circadian rhythms on health
Jamie Zeitzer.
Palo Alto, USA
Defining healthy sleep and circadian rhythms
While healthy sleep and circadian rhythms are an indisputable foundation for overall human health, how we define what constitutes "healthy" sleep and rhythms is often ignored. With the explosion of wearable and at-home devices that can record a variety of physiologic data, we are afforded an opportunity to gain a better understanding of what constitutes normal, healthy sleep. Furthermore, these devices allow us to monitor longitudinal changes in sleep and circadian rhythms, how to specifically target these changes, and how these changes could lead to improved physical and mental health. My laboratory has been exploring the three parts of this equation – (1) What are the physiologic variables that provide the most insight? (2) What are the outcome measures associated with changes in these variables? and (3) How do we manipulate sleep and circadian rhythms in a personalized, actionable manner? Data from both tightly controlled laboratory and ecologically-relevant cohort studies will be discussed to delineate the capacity of the sleep and circadian systems and how these systems can be altered within the context of normal behavior.
Jamie Zeitzer is an associate professor of Psychiatry and Behavioral Sciences in the Division of Sleep Medicine at Stanford University, as well as a science specialist at the U.S. Department of Veterans Affairs. He is a world-expert on sleep and circadian rhythms. He received his undergraduate degree in Biology from Vassar College, PhD in Neurobiology from Harvard University, and completed post-doctoral fellowships at UCLA and Stanford University. His studies have been ongoing for more than 20 years and yielded more than 100 peer-reviewed manuscripts on the impact of light on human biology, translational sleep physiology and pharmacology, and the interaction of sleep and circadian rhythms in a variety of disease states, including traumatic brain injury, bipolar disorder, breast cancer, dementia, and spinal cord injury. His current work examines novel ways in which light can be manipulated to optimize its clinical and biological impact on sleep and circadian rhythms. A parallel line of research aims at using modern statistical and engineering technology to discover new ways of harmonizing objective and subjective measures of sleep quality.
|
Looking Back: What Happened In Pearl Harbor?
The 7th of December 1941 is the day the Japanese attacked Pearl Harbor. Prior to this, America was opposed to the idea of participating in the Second World War. This event pushed America into entering the war.
What happened in Pearl Harbor? Let's take a look back.
Prior events
Prior to the Pearl Harbor attack, tensions between Japan and the United States had been building during World War II. Japan had committed several atrocities, including the invasion of Manchuria and the Nanjing Massacre. In response, the US imposed trade bans on Japan on products like oil, steel and other key ingredients for war. This was an attempt to paralyze Japan's ambition to expand. Instead, it fueled anger towards the interference of the West in the affairs of the East.
In the months leading up to the attack, multiple negotiations between the two parties were conducted. As expected, these negotiations failed.
The day of the attack
With all this laid down, it was clear that a war was inevitable. What made the attack so striking is that it went down on American soil. For quite some time, the isolationist stance of the Americans gave them a false assurance that no one would dare attack them on their own soil. At the time, this was simply unheard of. On top of this, the Americans didn't expect that an attack would be launched first in Hawaii. Pearl Harbor was quite unguarded — an easy target. Japan took this opportunity, carefully crafting a plan that would weaken the US naval forces in the Pacific.
On 7 December 1941, Japan attacked Pearl Harbor. Early in the morning, Japanese bombs, guns, and torpedoes struck Pearl Harbor's airfields and ships.
The attack damaged eight battleships at the base, destroyed or damaged some 300 aircraft, and killed 2,400 people. Within the next 24 hours, the US declared war.
The motivation
By neutralising the US Pacific Fleet, Japan could conquer the current and former Western colonies in the South Pacific.
By 1942, Japan had captured Burma, British Malaya, the Dutch East Indies, and the Philippines. This allowed access to natural resources such as oil and rubber that could propel its expansionism.
The aftermath
The goal of completely neutralising the US naval forces in the Pacific wasn't attained through the Pearl Harbor attack.
While Japan managed to capture South Pacific colonies, they were unaware of the full infrastructure of Pearl Harbor. They missed oil storage facilities, ammunition sites, and repair facilities. On the day of the attack, no US aircraft carrier was present, either.
This oversight came back to haunt Japan six months later in the Battle of Midway. The resources and infrastructure spared at Pearl Harbor contributed to the victory of the US. This shifted the odds in favor of the Allies.
The blunder of forcing the US entry into the war would further work against Japan less than four years later, with the US playing an essential part in its eventual and absolute defeat.
Pearl Harbor today
USS Arizona Memorial, Pearl Harbor, USA
Today, Pearl Harbor remains a base of the US Pacific Fleet. In 2010, it merged with the nearby Hickam Air Force Base, forming Joint Base Pearl Harbor–Hickam.
The remains of the USS Arizona were preserved as a national memorial, and the site is one of the most-visited places in Hawaii.
Previous Article
How We Can Recycle More Buildings
Next Article
The Global Disparity in Carbon Footprints
Related Posts
|
Hackers infect over 700 libraries of Ruby programming language
A recent report suggests that over 700 libraries of the popular programming language Ruby were infected by Bitcoin-stealing hackers. As developers embrace ready-to-use software components, cyberattackers are abusing open source repositories to distribute malicious packages.
The incident was discovered two days ago by ReversingLabs, based in Cambridge, Massachusetts. The hackers first inserted the malicious files into a package manager called RubyGems, which is commonly used to upload and share improvements on existing pieces of software. The report also highlights that the hackers tried to trick developers into downloading malware using a typosquatting method.
The typosquatting technique is used by attackers to intentionally upload malicious packages whose names are misspellings of legitimate packages. Unwitting developers sometimes mistype the name of a package and install the malicious library instead.
According to ReversingLabs, the packages were uploaded to RubyGems between February 16 and 25. Most of these packages were designed to steal funds by redirecting cryptocurrency transactions to a wallet controlled by the attackers.
ReversingLabs said, "Being closely integrated with the programming languages, the repositories make it easy to consume and manage third-party components. Consequently, including another project dependency has become as easy as clicking a button or running a simple command in the developer environment. But just clicking a button or running a simple command can sometimes be a dangerous thing, as threat actors also share an interest in this convenience by compromising developer accounts or their build environments, and by typosquatting package names."
As soon as the hackers get access to a developer's machine, the malware executes a script that starts an infinite loop. The program captures the user's clipboard data and redirects all subsequent cryptocurrency transactions to the designated wallet.
Popular repository platforms like the Python Package Index (PyPI) and GitHub's Node.js package manager npm have also emerged as effective attack vectors for distributing malware. Developers are advised to check that they have used the correct package names.
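As a rough sketch of the kind of name check the report recommends, the snippet below compares dependency names against an allow-list and flags near-misses. This is an illustration only: the allow-list, package names, and similarity threshold are hypothetical, not part of RubyGems or any real tool.

    from difflib import SequenceMatcher

    # Hypothetical allow-list; in practice this would come from a vetted
    # dependency inventory, not be hard-coded.
    KNOWN_GOOD = {"rails", "nokogiri", "rspec", "atlas-client"}

    def likely_typosquat(name, known=KNOWN_GOOD, threshold=0.85):
        """Return the known package this name resembles, or None."""
        if name in known:
            return None  # exact match: fine
        for good in known:
            if SequenceMatcher(None, name, good).ratio() >= threshold:
                return good  # close but not identical: suspicious
        return None

    for dep in ["atlas_client", "nokogirl", "rails"]:
        hit = likely_typosquat(dep)
        if hit:
            print(f"'{dep}' looks like a typosquat of '{hit}'")

A simple edit-distance heuristic like this produces false positives for legitimately similar names, so real tooling would combine it with signals such as package age and download counts.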
|
Tips on Writing an Effective Abstract
Writing an abstract is essential: it provides a first impression of the piece of writing that follows, helps readers decide whether they want to continue reading, and tells them what they can expect to learn from the document. Though you can find many abstracts that only list the document’s contents, an effective abstract tells the reader much more.
A well-composed abstract conveys as much as possible of the quantitative and qualitative information in the document, while also reflecting its reasoning. In general, an informative abstract has to briefly answer the following questions:
• Why did you work on this project?
• What did you do and how?
• What are your findings?
• What do the findings mean?
If your paper is about a new method or apparatus, you can change the last two questions to:
• What are its advantages?
• How well does it work?
You should also note these points about abstracts:
• An abstract is always read along with the title, so avoid repeating or paraphrasing the title. At the same time, it will often be read separately from the document, so make sure it is complete enough to stand on its own.
• Be sure to sum up the purpose of your writing, your methods, main findings, and conclusions. Emphasize the various points in proportion to the significance they have in the body of the paper.
• Do not refer to any data that is not provided in the paper.
• Avoid using I or we, and prefer active verbs to passive ones (for example, “the research showed” instead of “it was shown by the research”).
• Do not include trade names, abbreviations, symbols, or acronyms, since they require overly long explanations.
• Use keywords from your paper.
|
Latin America in colonial times
While the former alternative seeks to produce social transformation from below in civil society, the latter deepens the strategy of producing social change from above by the State. The Making of a Republic, —, Cambridge: Although the initial answers that Hispanic Americans offered were framed in terms of traditional scholastic political thinking, they soon began to appeal to the political ideas of the French Revolution, specifically to the idea of popular sovereignty.
French finally driven away from Brazil. The Constitution exhibited the influence of Enlightenment rationalism, the rationalist natural-rights discourse, and the political ideas of Montesquieu and Rousseau, to name the most salient of the authors referred to (Varela). It is no accident, then, that Christopher Columbus was a Genoese who had long been in Portugal and had visited the Atlantic islands.
Spanish liberalism was a revolutionary ideology that marked a radical break with the monarchical status quo. Farmers During the 18th century, most Americans lived and worked on small farms.
Nevertheless, the liberal legislation also faced strong opposition from established social forces such as the Catholic Church; the new republics were marked by great political instability (regimes were often overthrown), and economic progress did not materialize. Mora warned that any unlimited authority was essentially tyrannical and, following Montesquieu, characterized despotism as the lawless, absolute, and unlimited use of political power regardless of the hands into which it falls and the particular form of government that it takes (Mora). As a result, the indigenous peoples, once in contact, were very vulnerable to the outsiders.
Everyone of importance was there, with only underlings doing essential tasks located in the country. European nations practiced a mercantilist system in Latin America. The slaves were always, as in this case, employed far from their place and culture of origin.
France sends an official governor to Tortuga island. Life of Diego Quispe Tito, the prime early painter of the Cuzco school.
Latin America in Colonial Times
Those of the Jews and Moors who had refused to convert were in time forcibly expelled, and the Inquisition became active in the attempt to enforce the orthodoxy of those who had accepted conversion. Audiencia established at Manila. The Iberians: in most ways the Spaniards and Portuguese shared the characteristics of other European peoples.
Though they did not enjoy the same rights as white citizens, these free black men and women owned property, worked in a wide range of skilled jobs, and made significant contributions to their communities. The profits accumulated by the local elites were wasted in the consumption of superfluous and luxurious goods for pure ostentation, rather than saved and invested in productive sectors of the national and nascent economy.
Marriage of Isabella of Castile and Ferdinand of Aragon. This was the case in Argentina where the local Catholic Church was relatively weak. Alberdi claimed that the much needed social transformation could take place through the interaction of the local population with northern European immigrants who would bring with them the habits of order, discipline, and industry that were necessary for economic progress and republican citizenship.
In passages that evince the influence of Adam Smith, he maintained that social prosperity was not the work of governments, but a spontaneous result.
In the early twentieth century, Hispanic American liberalism became the subject of strong criticisms. At the theological stage, society is subject to the authority of spiritual dogmas and is governed by force. In the sphere of academia, many scholars have enthusiastically welcomed the influence of Anglo-American contemporary liberalism.
A notable exception to this dominant view was the short-lived liberal Colombian constitution, which granted universal male suffrage following the French example after the revolution (Bushnell). In the Reconquest (Reconquista) the Christians had pushed their rivals back through military force; those who carried out the conquests often went to settle among the Moors and were rewarded by the government with grants of land and other benefits.
Potosí Mines
Spanish women Spanish women were an important element in the sedentary urban society growing up in the central areas. Since he also held that an examination of the situation of the South American republicans shows that they are not civilized enough to govern themselves through democratic institutions, he maintained that a possible republic should not grant equal political rights to all citizens.
An important reason for this is that alternative ideologies became prominent. Alfaomega Grupo Editor, 4th ed. Thus Africans were soon a significant group numerically; on the Peruvian coast, at least, it is thought that after several decades they equaled the Spaniards in numbers.
Conquest of Cuba, from Hispaniola. It is a system sustained by a racist ideology where cultural space is developed exclusively for relations of domination. First circumnavigation of globe, by Magellan's expedition.
In the book, Galeano argued that colonial masters drained Latin America of natural resources for three centuries, and that Britain took advantage of the region’s underpriced labor and exports via unequal trade.
Colonialism and Underdevelopment in Latin America
by Vinicius Valentin Raduan Miguel, August 4. The legacy of the colonial times - the concentration of power, wealth and land - led to a stratified society with extreme inequality.
Economic Development of Latin America: A Survey from Colonial Times to the Cuban Revolution (Cambridge Latin American Studies, 8). By Celso Furtado; tr. by Suzette Macedo. Cambridge: Cambridge University Press. Latin America has tons of destinations to offer, but perhaps what makes it unique is its many gorgeous colonial towns.
The world has many spots and wonders to offer: from amazing landscapes to marvelous cities with long and deep historical backgrounds, the options are endless. Latin America in Colonial Times presents that story in an engaging but scholarly new package, revealing how a new civilization – Latin America – emerged from that encounter.
The authors give equal attention to the Spanish and Portuguese conquerors and settlers, to the African slaves they brought across the Atlantic, and to the indigenous peoples. Liberalism was the dominant political discourse in Latin America during most of the nineteenth century.
Initially, in the first half of the century, it was a discourse of liberation from colonial rule in Hispanic America.
|
India as a super power
Why India will not become a superpower
Indian cinema transcended its boundaries from the days of the film Awara, a great hit in Russia. These prehistoric human activities had been in continuation since the Indus Valley Civilization.
'The Election that Changed India' provides a perspective on how elections in India are now much more complex than the conventional tussle of ballots. The appointment of Lord Dalhousie as Governor General of the East India Company set the stage for changes essential to a modern state.
Democracy has given the weakest and the poorest a stake in the system. Once a member of the Rajya Sabha, the Indian upper house of parliament, he has had his diplomatic passport cancelled.
The quality and effectiveness of service delivery today is directly linked to good governance practices and use of modern technology, especially ICT. The power sector has reached critical levels of coal stock on account of slowdown in domestic mining. While India is a democracy with relevant institutional arrangements and experience stretching over half a century, the fact of the matter is that Indian democracy is still plagued by numerous lacunae.
It is, however, steadily combating its energy issues. It is going to be the biggest superpower of the 21st century. Indian politics has, since independence, been dominated by vote-bank politics based on religion, caste and other class factors and forces.
Governance in India has always been a critical issue for the governments since independence. India should develop strength and capacities to become a superpower.
For example, if you improve water supply, everyone benefits. Similarly, one year saw the growth rate dip to 1… And in terms of labour market efficiency, India does not even make it into the top… There is a perceptible lack of internal cohesion and unity in Indian society.
Presently, the Navy maintains a fleet of vessels which includes 3 stealth warships, recently inducted. While we boast of our economic development and attendant technological capacities, the downside is that our infrastructure is not developed.
The Indian government has said that much of the rise in inflation recently can be attributed to short-term supply constraints, such as a shortage of key foodstuffs thanks to an erratic summer monsoon.
Can India join China as an economic superpower?
Technological changes—among them, railways, canals, and the telegraph—were introduced not long after their introduction in Europe. The elections clearly showed that the youth were no longer ready to digest the blunders of the previous UPA regimes.
However, mere potential will not make India one of the most important powers of the world. When decisions are taken, the nation moves forward. Superpower in a broader sense means a state which has the ability to influence events and project power worldwide, or which has immense potential to become one.
India is the name given to the vast peninsula which the continent of Asia throws out to the south of the magnificent mountain ranges that stretch in a sword-like curve across the southern border.
Nov 02 · India has seen a meteoric rise in its economy, population, and military ability. Some have even suggested it will hit superpower status by… Just how likely…
Nuclear power is the fourth-largest source of electricity in India after thermal, hydroelectric and renewable sources of electricity. As of…, India has 21 nuclear reactors in operation at seven sites, with an installed capacity of … MW, producing a total of 30,… GWh of electricity; 11 more reactors are under construction to generate an additional 8,… MW.
In nations such as India, the size of the population alone pulls them toward superpower status. Bible prophecy describes global power blocs—superpowers, or groups of superpowers—that will be prominent at the end of the…
|
A utility for finding the largest directories and/or files in a given directory hierarchy. Biggest supports pretty printed and colorized output to the terminal:
Example of Biggest being run
All pretty printing and ANSI color codes are printed to STDERR, while the actual file paths are printed to STDOUT. Therefore, suppressing STDERR output will only print the file paths with no ANSI escape sequences, pretty printing, or file sizes, which can be used for scripting:
Use Biggest in scripts by suppressing STDERR
Simply run
$ pip3 install biggest
Unlike similar utilities, Biggest will only return a directory if the sum of the sizes of its immediate files, minus the sizes of any files included in the Biggest output, makes it large enough to be included. That’s a complicated, self-referential definition, so here’s an example:
• Say the directory /tmp/ looks like this:
├── [25 MB]
├── [15 MB]
└── /tmp/foo
├── [20 MB]
└── [10 MB]
• Running biggest /tmp/ -n 3 will yield,, and The directory /tmp/foo did not make the cut because its file was large enough to be included, and therefore its 20 megabytes were not included in the size of /tmp/foo
• Now let’s say a new file that is 50 megabytes is added to /tmp. Now biggest /tmp/ -n 3 will return, /tmp/foo, and. /tmp/foo was included this time because none of its files were large enough to make the cut, and therefore its size was interpreted as 30 megabytes.
Why does Biggest use this algorithm? A directory is always at least as large as its largest file, so if Biggest didn’t use this algorithm, it would always return the directories containing the largest files!
Biggest uses a very efficient (and perhaps optimal) implementation of this algorithm. The worst case runtime is roughly O(𝑛 log 𝑚), where 𝑛 is the -n argument passed to Biggest and 𝑚 is the number of files and directories being analyzed, but validating that is left as an exercise for the reader.
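To make the selection rule concrete, here is a rough Python sketch of one way to implement it. This is inferred from the two examples above, not taken from Biggest’s source, and it is deliberately unoptimized: it searches for a self-consistent top-n list by fixed-point iteration, starting from full directory sizes.

    import os

    def biggest(root, n):
        """Pick the n largest items under root, counting a directory only
        by its immediate files that are not themselves in the output."""
        file_size, dir_files = {}, {}
        for dirpath, _, names in os.walk(root):
            here = {}
            for name in names:
                path = os.path.join(dirpath, name)
                try:
                    here[path] = os.path.getsize(path)
                except OSError:
                    continue  # unreadable entry; skip it
            file_size.update(here)
            dir_files[dirpath] = set(here)  # immediate files only
        dir_files.pop(root, None)  # don't report the search root itself

        selected = set()
        while True:  # fixed-point iteration; a production version would guard against cycling
            def effective(item):
                if item in file_size:
                    return file_size[item]
                # directory: immediate files not already in the output
                return sum(file_size[f] for f in dir_files[item]
                           if f not in selected)
            ranked = sorted(list(file_size) + list(dir_files),
                            key=effective, reverse=True)
            top = ranked[:n]
            if set(top) == selected:
                return top
            selected = set(top)  # re-evaluate with this tentative output

On the /tmp example this converges in a couple of passes: the first pass tentatively includes /tmp/foo at its full 30 megabytes, and the second notices that its 20-megabyte file made the cut and demotes it. The real implementation presumably achieves the quoted runtime with a cleverer data structure.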
• -f ignore directories and only find the largest files
• -h prints file sizes with human-readable unit suffixes like "MB" and "GB"
Biggest is licensed and distributed under the AGPLv3 license. Contact us if you’re looking for an exception to the terms.
|
Jesus Taught Baptism
After these things Jesus and His disciples came into the land of Judea, and there He remained with them and baptized. (John 3:22)
Jesus Taught Baptism
The religious world denies salvation by baptism. Very few people who believe in Jesus Christ will affirm that baptism has anything to do with salvation, teaching instead a faith-only doctrine of redemption. They will cite the “Sinner’s prayer” as the means of washing away sins. Accepting Jesus as a personal Savior is how many believe a person becomes a child of God. Protestant churches deny baptism as essential, teaching that a person can receive eternal life without it. For example, the Hiscox Standard Manual for Baptist Churches states that “Baptism is not essential to salvation, for our churches utterly repudiate the dogma of ‘baptismal regeneration’; but it is essential to obedience since Christ has commanded it.” The Lutheran, Presbyterian, and Methodist churches teach justification by faith alone. For the most part, the religious world denies baptism can save.
Followers of Jesus Christ are people who abide by the teaching of Jesus Christ. It is remarkable (and sad) that so many religious folks dedicate their lives to serving the Lord while denying what He does and what He teaches. Jesus was baptized in the River Jordan by John the Baptist. His baptism was not to wash away sin but to fulfill all righteousness. God the Father accepted what Jesus did as part of the divine plan when He spoke from Heaven that He was pleased with the baptism of Jesus. If baptism were not an essential part of the coming kingdom, why did Jesus go into the land of Judea and baptize? It seems incredible that followers of Christ would deny something Jesus preached regularly during His ministry.
Many believed Jesus to be the Son of God but refused to give their allegiance to Him lest they fall out of favor with men. Many of the Pharisees and lawyers who heard the teaching of Jesus rejected the will of God for themselves, not having been baptized by John the Baptist. They denied it had anything to do with salvation. Holding fast to the Law, they rejected the teaching of Jesus and the prophetic word of John the Baptist. It does not seem remarkable the attitudes of the Pharisees and lawyers remain today for those who firmly deny the essential nature of baptism for salvation. And yet, Jesus went into Judea, and many were baptized.
The moment of salvation is a crucial part of a person’s life. It is the single moment in time when the soul darkened by the stain of sin is washed clean by the blood of Jesus Christ. Through the grace, mercy, and love of a kind Father, redemption is granted that places a person into a covenant with God for eternal life. That moment is not when a person accepts Christ as their personal Savior. It is not the moment a good feeling comes over a person. The eternal moment in a person’s life that grants him acceptance into the body of Christ is when they are baptized in water for the remission of their sins. Not a moment sooner. There is no hope of salvation without Biblical baptism (immersion; not pouring or sprinkling or infant baptism). The final words of Jesus to the eleven were unambiguous and demonstrative: if a person believes and is baptized, they will be saved. If they do not believe and refuse to accept baptism as essential for salvation, they will be lost to perdition. Jesus taught baptism throughout His ministry and left the Father’s word for all those who seek eternal life to obey His command. Baptism is essential. Jesus said so.
|
The Torah, A Single Line Of One Commandment
The Torah is one continuous sentence that begins from the beginning of creation in its initial state (the desire of something from nothing) and ends when that desire becomes equal to the Creator in size and attributes. It is all a single line of one commandment.
Question: Who divided it into words, expressions, and sentences?
Answer: It was done by Kabbalists right after Moses wrote the Torah as one undivided text. In order for us to understand the specific actions that we must perform, this single program was divided into smaller sub-programs: sentences, words, letters, and even parts of letters, up to dots. It is only by such a consistent, discrete method that we can correct ourselves.
When we open the Torah scroll, we can see that there are no punctuation marks and that, in fact, the words practically are fused together. Only a person who understands the text is able to separate them from each other.
From the Kabbalah Lesson in Russian 9/4/16
Related Material:
Torah For All Times
The Torah Is A Program And A Guide
The Torah As An Engine For Change
|
What is a sedative drug?
Sedative drugs are helpful for treating anxiety and sleep problems, but using them can lead to dependence or addiction. Sedatives are a category of drugs that slow brain activity. Also known as tranquilizers or depressants, sedatives have a calming effect and can also induce sleep.
What is an example of a sedative?
Common sedatives include barbiturates, benzodiazepines, gamma-hydroxybutyrate (GHB), opioids, and sleep-inducing drugs such as zolpidem (Ambien) and eszopiclone (Lunesta). Sedatives are central nervous system depressants and vary widely in their potency. They usually come in the form of a pill or liquid.
What are sedating medications?
Sedatives are a type of prescription medication that slows down your brain activity. They’re typically used to make you feel more relaxed. Doctors commonly prescribe sedatives to treat conditions like anxiety and sleep disorders. They also use them as general anesthetics.
What type of drug offers sedative effects?
Sedative drugs include benzodiazepines, barbiturates, and other sleeping pills (see Table 1). These are commonly prescribed for insomnia and other sleep problems and are also used for anxiety, either generalized or for panic attacks [1].
What are good sedatives?
Drugs mentioned in this article (generic name - brand name):
• eszopiclone - Lunesta
• lorazepam - Ativan
• diazepam - Valium
• zolpidem - Ambien
What is the strongest sedative pill?
High-potency Benzodiazepine List
• alprazolam (Xanax)
• lorazepam (Ativan)
• triazolam (Halcion)
What are the side effects of sedation?
Some common side effects of conscious sedation may last for a few hours after the procedure, including:
• drowsiness.
• feelings of heaviness or sluggishness.
• loss of memory of what happened during the procedure (amnesia)
• slow reflexes.
• low blood pressure.
• headache.
• feeling sick.
How long does a sedative last?
The effects of local anesthetic typically last for anywhere from four to six hours, though you may still feel some numbness and tingling for up to 24 hours after the procedure has been completed. It is often safe to eat and chew after a few hours and once you begin to regain feeling in your lips and mouth.
How do you get sedatives?
Prescription sedatives and tranquilizers are central nervous system depressants that can be obtained only with a prescription from a doctor. There are two primary types of tranquilizers and sedatives: benzodiazepines and barbiturates.
What sedatives do hospitals use?
Medications Commonly Used for Sedation
• Midazolam. Midazolam (brand name: Versed) is a medication used to help ease anxiety. …
• Pentobarbital. Pentobarbital (brand name: Nembutal) is a sedative medication generally given intravenously. …
• Fentanyl. …
• Additional medications used.
|
Companies Are Trading CO2 Emissions to Tackle Global Warming. But the Market Isn’t Ready Yet – The Swaddle
India is among the top five most polluting countries in the world, releasing more than 7% of the world’s carbon dioxide. Though India ratified the Kyoto Protocol in 2002, an international agreement under which several countries came together to pledge reductions in the greenhouse gas emissions that drive global warming, India is not obliged to commit to reducing emissions due to its classification as a “developing” country.
Despite this, India has continued to put in voluntary efforts to reduce emissions. One way in which countries around the world, including India, are attempting to reduce emissions is through carbon trading — a process in which countries voluntarily buy and sell the rights to emit carbon dioxide to respond to the climate crisis. However, carbon trading, prevalent across European Union (EU) countries, has not worked out globally as well for India due to a lack of binding international regulations.
Carbon trading involves something called a cap-and-trade scheme, wherein an overall limit (cap) is set on the amount of emissions allowed from sources like the automobile and power industries, which are known for releasing high amounts of carbon. The government in a particular region or country issues permits up to the agreed limit — either for free or via an auction system (trade). Think of these as carbon credits: if any company succeeds in reducing its own emissions below the limit it agreed to, it can sell the excess permits to other companies for cash. If it exceeds the limit, the company needs to buy more permits — a sort of check to manage the extra emissions.
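To make the bookkeeping concrete, here is a minimal sketch of how credits change hands under such a scheme; the firms, caps, and permit price below are invented for illustration, not drawn from any real market.

    # Minimal cap-and-trade bookkeeping sketch; all numbers are invented.
    cap = {"AutoCo": 100, "PowerCo": 80}       # permitted tonnes of CO2
    emitted = {"AutoCo": 70, "PowerCo": 95}    # actual tonnes of CO2
    permit_price = 12.0                        # market price per tonne

    for firm, limit in cap.items():
        balance = limit - emitted[firm]
        if balance >= 0:
            print(f"{firm}: {balance} spare permits to sell "
                  f"(worth {balance * permit_price:.2f})")
        else:
            print(f"{firm}: must buy {-balance} permits "
                  f"(cost {-balance * permit_price:.2f})")

In this toy run AutoCo can sell 30 spare permits while PowerCo must buy 15, so money flows from the over-emitter to the under-emitter — which is the intended incentive.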
Projects similar to carbon trading have shown positive results in the past. For example, trading in sulfur dioxide permits helped reduce the amount of acid rain in the U.S. The sulfur trading program came under the U.S. Clean Air Act Amendments of 1990. This suggests trading programs can succeed in reducing environmental damage if they operate under a robust legal framework.
When it comes to carbon trading, the emissions system has not managed to get a similar legal backing in the U.S. yet. Without legal safeguards, a system like carbon trading is easily cheated — a company or country can reduce emissions in one area and release them in another, gather credits fraudulently, and even over-allocate free carbon credits to the biggest polluters.
Plus, there’s no clear way to measure how much a company or country is actually polluting the environment, making the entire process work on assumptions and rendering its effect on climate change minimal. Thus, experts warn that in some cases, carbon trading can make emissions worse instead of reducing them.
From an economic perspective, carbon trading is also plagued by certain limitations. Trading in commodities that are not driven by demand — such as carbon dioxide — can have volatile consequences. Free markets work under the basic rule of demand and supply: if there isn’t enough supply, demand increases, making the product valuable. With respect to carbon trading, there’s enough supply and no reason for the demand. This could theoretically be solved by tightening the legal cap on emissions, which would boost the need for carbon emissions certificates — but such a mechanism doesn’t exist as of now. Moreover, the lack of stringent regulations governing the trading system undermines its overall efficacy.
Take India’s case as an example. The country invested in several low-carbon technologies, switching to renewable energy and more, in order to rack up hundreds of millions of emission reduction certificates — 750 million as of 2020, after being issued 1.95 billion certificates. The Indian government primarily did this because it was repeatedly assured that it could sell these certificates. But not many bought them, due to lack of enforcement — including buyers in the compliance market, i.e., developed countries that had pledged emission reductions under the Kyoto Protocol. The price of carbon collapsed from USD 25 (INR 1,800) to a few cents, and Indian entities holding these certificates were left with practically nothing.
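To put rough numbers on that collapse (a back-of-the-envelope calculation, not an official accounting): at USD 25 apiece, 750 million unsold certificates would have represented about 750,000,000 × 25 = USD 18.75 billion; at a few cents apiece, the same holdings are worth on the order of USD 40 million or less.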
When countries around the world met for the United Nations Climate Change Conference in 2019, debate centered on the ability to sell these carbon credits. While many countries argued that carbon trading is an outdated process from the Kyoto Protocol era, countries like India and Brazil argued that discarding the Kyoto-era system entirely would erode trust in how countries like the U.S. carry out climate change policies. The talks ended without a proper resolution.
The only reasonable solution to the continuance of carbon trading is maintaining strict international legal regulations that include taxing carbon and enforcing commitments to reduce emissions. Without this, the planet remains at the mercy of powerful fossil fuel companies and the few countries that enable — and benefit from — them.
|
Types of Dental Anesthesia
Anesthesia means a lack or loss of sensation, with or without consciousness. Today there are numerous options available for dental anesthetics. At Dentist Subiaco, drugs can be used alone or combined for better effect, and anesthesia is individualized for a safe and effective procedure. The type of anesthetic used also depends on the age of the individual, their health condition, the length of the procedure, and any negative reactions to anesthetics in the past. Anesthetics work in different ways depending on what is used: they can be short-acting when applied directly to an area, or work for longer periods when more involved surgery is required.
Other things that can affect dental anesthesia include the timing of the procedure. Research indicates that inflammation can have a negative effect on the success of anesthetics. Also, for local anesthesia, teeth in the lower jaw (mandibular) section of the mouth are harder to anesthetize than the upper jaw (maxillary) teeth. There are three main types of anesthesia: local, sedation, and general. Each has specific uses, and they can also be combined with other drugs.
1. Local Anesthesia
Local anesthesia is used to numb a specific region of the teeth and gums to prevent the patient from feeling any discomfort during a dental procedure. There are two types of local anesthesia: topical and injectable. Topical anesthetics desensitize the surface of the gums. This form is applied with a swab, spray, or adhesive patch and usually precedes the administration of the injectable form; topical anesthetics help suppress the sensation of a prick or sting, such as from an injection. Injectable anesthetics are used to stop pain in the region of the mouth where the dental procedure is to be conducted, such as for a filling, preparing teeth for crowns, or restorative treatments. The injectable form blocks nerve endings and numbs mouth tissues in the region where the procedure will be done.
2. General Anesthesia
General anesthesia differs from local anesthesia in that the general form induces a loss of consciousness and a deep sleep in the patient. This form of anesthesia may be used for prolonged or complicated dental procedures, such as surgical tooth extractions. It may also be used for patients who suffer from extreme (or even uncontrollable) anxiety, as well as for children or people with disabilities who are unable to control their movements. The choice between local and general anesthesia depends on a range of factors concerning the type of procedure, the mental disposition of the patient, and the physical state of the patient, including their health profile and whether they are using any drugs or tranquilizers.
3. Sedation
In addition to general and local anesthesia, options sometimes exist for “conscious sedation,” which is meant to calm a patient and reduce pain before, during, and after a dental procedure. Such means (such as anti-anxiety agents or tranquilizers) can induce a feeling of drowsiness while the patient remains awake and responsive. The two methods of conscious sedation are nitrous oxide (often referred to as “laughing gas”) and intravenous sedation. Here, too, either of these forms may be used based on the profile of the patient and the practitioner’s recommendations for safety and effectiveness.
|
Scientists create water splitter that runs on a single AAA battery
The Stanford University water splitter could save hydrogen producers billions of dollars (Photo: Mark Shwartz)
The water splitter is made from the relatively cheap and abundant metals nickel and iron. It works by sending an electric current from a single-cell AAA battery through two electrodes.
"This is the first time anyone has used non-precious metal catalysts to split water at a voltage that low," chemistry professor and lead researcher Hongjie Dai says. "It's quite remarkable, because normally you need expensive metals like platinum or iridium to achieve that voltage."
The technology has huge potential as a source for powering hydrogen fuel cells, long held as a likely successor to gasoline. Unlike gasoline combustion, which emits large quantities of the greenhouse gas carbon dioxide, fuel cells combine stored hydrogen gas with oxygen from the air to produce electricity, leaving only water as a byproduct.
Fuel cell vehicles have been around since the 1960s, albeit mostly as research projects and demonstration cars and buses. But we may soon see them in commercial production, with Toyota and Honda both committed to selling fuel cell cars in 2015 and Hyundai already leasing fuel cell vehicles in Southern California.
Fuel cell vehicles have been widely criticized for their high cost, the lack of infrastructure around their fuel delivery, and their low energy efficiency after accounting for the effort it takes to produce compressed hydrogen (often involving large industrial plants that use an energy-intensive process that combines steam and natural gas).
But the new Stanford research, which rests on a previously unknown method for splitting water, could help address all these issues.
"It's been a constant pursuit for decades to make low-cost electrocatalysts with high activity and long durability," Dai explains. "When we found out that a nickel-based catalyst is as effective as platinum, it came as a complete surprise."
The nickel-metal/nickel-oxide catalyst, discovered by Stanford graduate student Ming Gong, also requires significantly lower voltages to split water when compared to pure nickel or pure nickel oxide. This new technique is not quite ready for commercial production, though.
"The electrodes are fairly stable, but they do slowly decay over time," Gong says. "The current device would probably run for days, but weeks or months would be preferable. That goal is achievable based on my most recent results."
The next step is to improve that decay rate and to test a version that runs on electricity produced by solar energy instead of the AAA battery.
The researchers believe that their water splitter could save hydrogen producers billions of dollars, and the electrolytic device could be used to make chlorine gas and sodium hydroxide as well as hydrogen for fuel cells.
A paper published in the journal Nature Communications describes the research in more detail.
You can see Dai himself demonstrating the device in the video below.
Source: Stanford University
Stanford scientists develop low-cost water splitter
Are you kidding me? This is straight out of a grade-school science book.
Joris van den Heuvel
@slam_to: me too, but not at this voltage, not at this efficiency and not with non-precious metal electrodes.
I created an account just to say that this is ridiculous. I did this when I was 13-14. Is this what they spend their time on? Elementary school science fair level.
Hal Guernsey
So what! I did this as a kid over fifty years ago--this must be a stupid April Fool's joke. That electrolysis can be done is not the issue--you always have to put more energy into the system to isolate the hydrogen atoms than can be obtained from burning them. This is just. Dumb.
@slam_to, @GabrielMarshman
So... You guys had access to nickel oxide heterostructures on the sides of carbon nanotubes in high school, set up in a previously unknown configuration that nearly bypasses Ostwald ripening?... That's pretty freaking impressive for an elementary school science fair.
Joris van den Heuvel
Doesn't anyone read the article anymore before commenting? This is nothing short of groundbreaking. A clean way of splitting water, with almost no loss of energy and no need for precious metals, at a voltage like that of solar panels, so it doesn't require inverters. I am by no means an expert, but I think that's a significant breakthrough.
Jim Vanus
The actual news is that these Stanford researchers have discovered an economical electrode material which MAY be useful for large scale electrolysis production of hydrogen.
However, the principal barriers to hydrogen fuel cell use are hydrogen storage and distribution, not electrolysis electrode cost & efficiency.
In terms of energy efficiency, hydrogen produced by electrolysis requires more energy to produce than the hydrogen fuel yields. Efficiency is further lost during the fuel cell's conversion of hydrogen into electricity.
Unless the electricity powering the electrolysis is produced by "green" & "sustainable" means, the net effect on the environment is arguably worse than that of battery-powered cars recharged on the existing coal & gas fired power grid.
If these new electrodes can be scaled to industrial use, then this is a significant discovery. However, it is not a game changer because it doesn't solve the problems of "green" electricity for electrolysis and hydrogen storage & distribution.
Michiel Mitchell
This is SOOOOO cool.... We take electricity from a Duracell AAA.. we split water with it... we feed the gas into a fuel-cell and what do you know... we get electricity out the back of it... SUWEEEEEEEEET!
Jim Vanus...exactly. You cannot get more energy out of a system than you put into it.
"Eureka! After the manufacturing process to procure raw materials (or even recycled) and refine them into usable products and using all the energy there required....we've produced a few bubbles storing a fraction of the energy required in the process to produce them! Praise Gore!"
"Green! Green! Buy! Buy!"
As was already mentioned above, using a 1.5 volt battery for electrolysis is basic middle school science, but only if the water being split has impurities such as salt to allow it to conduct electricity. This article, and the linked one for Stanford, do not mention the water having salt in it. If this is the case and the water is pure, then this is a significant discovery, as electrolysis of pure water requires much higher voltages or expensive catalysts like platinum.
|
Sharon J. Peacock and David A. B. Dance
24 September 2021
Glanders is a serious zoonotic disease that primarily affects equids (horses, mules and donkeys). A disease eradication programme based on case detection and destruction of infected domestic animals has been highly successful, and the number of reported glanders cases in animals worldwide is now very low. Human glanders is extremely rare and associated with occupations involving extensive contact with equids. Glanders is caused by Burkholderia mallei, a Gram-negative, non-motile, facultative intracellular organism that is an obligate parasite of equids with no other known natural reservoir. B. mallei is transmitted by direct contact with infected animals, or indirectly via communal food and water sources that have become contaminated by an infected animal. The clinical presentation in equids can be acute or chronic and has been categorized into nasal, pulmonary and cutaneous forms. Diagnosis is based on culturing B. mallei from lesions or exudates, together with skin or serological testing. Infected animals are usually euthanized. Optimal antimicrobial therapy for human glanders is unknown, and current advice is to adopt antimicrobial treatment guidelines for human melioidosis. There is no vaccine available for either humans or other animals. B. mallei is considered a potential biological weapon and is a Centers for Disease Control and Prevention category B select agent.
|
Why is design thinking important as an entrepreneurial skill?
It’s a useful mindset for entrepreneurs and small-business owners because it teaches them how to become focused on seeing the world through their customers’ eyes; how to uncover those customers’ real needs; how to productively generate ideas for solving them; and how to quickly learn which of those ideas are viable in …
Why is design thinking important as an entrepreneurial skill?
Design thinking is a process used mostly by designers to solve complex problems, navigate uncertain environments, and create something that is new to the world. … This factor supports the need for design thinking by today’s entrepreneurs.
Why is design thinking important to the entrepreneur?
Design thinking allows you, as an entrepreneur, to follow different thinking styles and explore open-ended options to come up with an actionable solution. Design thinking tools such as empathy are great for this. Prototyping also plays an integral role in this aspect.
Why is design thinking so important?
One reason for the proliferation of design thinking in industries is that it’s useful to break down problems in any complex system, be it business, government, or social organizations. … Employing a design-thinking process makes it more likely a business will be innovative, creative, and ultimately more human.
How entrepreneurs can use design thinking?
For entrepreneurs, I use design thinking to generate bold ideas. … Ideation: developing new ideas based on observations to address latent needs. Takeaway: don’t depend on your customers for the big ideas. Implementation: testing assumptions of new ideas to continuously shape them into viable opportunities.
What are the 5 stages of design thinking?
In the widely taught Stanford d.school model, the five stages are Empathize, Define, Ideate, Prototype, and Test.
What are the principles of design thinking?
• User-centricity and empathy. Design thinking is all about finding solutions that respond to human needs and user feedback. …
• Collaboration. …
• Ideation. …
• Experimentation and iteration. …
• A bias towards action.
What is the most important skill of a design thinking leader?
“Design-thinking leaders know how to act as a catalyst for creativity.” Such a leader deeply understands the process of creative problem solving and knows how to catalyze it. Within the creative process, leaders should seek to be conduits, provocateurs, shepherds, and motivators.
|
Matrix printing, casting
Hello. My name is Sergey.
I often use a 3D printer in my work and often try something new. I came across the technique of printing matrices (molds) and casting into them. It interested me, and I decided to try this method of small-scale production.
To implement the plan, the following are very useful:
1. object to pour
2. 3D printer RK-1
3. plastic casting
4. autoclave (very desirable)
The object in our case will be some kind of cover. Yes, it could be printed directly, but then each part would need to be cleaned of its supports. For a one-off this would be fine, but not now: we need 65 finished caps. You will say that you could make a silicone mold, and you would be right.*
* Except that you would need not one mold but roughly three, and a master model as well.
The model with the gating system looks like this.
I want to note right away that such a gating system is suitable for “long” plastics with a working life of about 20-30 minutes. If the working life is 2-5 minutes, then the part should be positioned horizontally.
Based on the model with the gating system, we can easily and naturally get a matrix. 3D model of the matrix.
Here, she, beautiful.
It remains to send it to the 3D printer and wait a while (hours, in this case), and the matrix is ready.
But it would be wrong to pour right away; the matrix must first be prepared. The first step is to post-cure the matrix under light (so that polymerization is completed). The second is to treat the matrix with a release agent. For this, I use Blue WAX.
Well, that’s all; assemble the matrix.
And pour plastic.
We wait about 10 minutes. During this time, the plastic fills the entire cavity.
Once filled, put the matrix in the autoclave and pressurize it to 6 atm.
After the material polymerizes, extract the matrix and open it.
We see that there is flash at the parting line of the matrix, but this is not very critical, because it is extremely thin and is removed literally at the touch of a finger.
All that remains is to remove the gating system.
Front side of the cover.
In place of a conclusion, a costing:
• The cost of the matrix - 700 rubles. (weight of both halves is about 100 g)
• The cost of casting - 6 rubles. (at the cost of plastic 700rub / kg)
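A quick back-of-the-envelope check (assuming a single printed matrix survives all 65 castings): amortizing the 700-ruble matrix over 65 caps adds 700 / 65 ≈ 10.8 rubles per part, so each cap comes out to roughly 10.8 + 6 ≈ 17 rubles in materials, with no support removal or long per-part print times.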
|
How 3D printing is revolutionizing healthcare as we know it
In 1983, Chuck Hull, the father of 3D printing, created something that was equal parts simple and earth-shattering. He manufactured the world’s first-ever 3D printer and used it to print a tiny eye wash cup.
It was just a cup. It was small and black and utterly ordinary looking. But that cup paved the way for a quiet revolution, one that today is changing the healthcare industry in dramatic ways.
As healthcare costs in America continue to skyrocket, with no political solution in sight, this technology could offer some direly needed relief.
Here are just some the ways in which 3D printing is already revolutionizing the healthcare industry.
Personalized prosthetics
I love to tell the story of Amanda Boxtel, who came to me a few years ago complaining that her robotic suit, a gorgeous piece of design from Ekso Bionics, was uncomfortable to wear. Amanda is paralyzed from the waist down, and while this suit gave her the gift of movement, it couldn’t give her the symmetry and freedom of range of motion that she, like all humans, craved.
Source: Scott Summit, Charles Engelbert Photography
Unlike traditional prosthetics, which are mass-manufactured like any other traditional factory-produced good, 3D-printed prosthetics are custom-tailored for each individual user. By digitally capturing Amanda’s unique measurements, I was able to build her a custom-fit suit, much like a tailor would, creating a beautiful, lightweight design that fit Amanda’s body down to each distinct millimeter.
Bioprinting and tissue engineering
Writing in a recent issue of the Medical Journal of Australia, the surgeon Jason Chuen alerted his colleagues to a major technological breakthrough that could eventually do away with the need for human organ transplants. Here’s how it works:
3D printing is performed by telling a computer to apply layer upon layer of a specific material (quite often plastic or metal powders), molding them one layer at a time until the final product — be it a toy, a pair of sunglasses or a scoliosis brace — is built. Medical technology is now harnessing this technology and building tiny organs, or “organoids,” using the same techniques, but with stem cells as the production material. These organoids, once built, will in the future be able to grow inside the body of a sick patient and take over when an organic organ, such as a kidney or liver, fails.
3D-printed skin for burn victims
It may sound like something out of Mary Shelley’s “Frankenstein,” but the implications — and cost savings — make this technological breakthrough in 3D printing particularly immense. For centuries, burn victims have had incredibly limited options for healing their disfigured skin. Skin grafts are painful and produce terrible aesthetics; hydrotherapy solutions offer limited results. But researchers in Spain have now taken the mechanics of 3D printing — that same careful layer-upon-layer approach in which we can make just about anything — and revealed a 3D bioprinter prototype that can produce human skin. The researchers, working with a biological ink that contains both human plasma as well as material extracts taken from skin biopsies, were able to print about 100 square centimeters of human skin in the span of about half an hour. The possibilities for this technology, and the life-changing implications for burn victims, are endless.
Finally, 3D printing also has the potential to upend the pharmaceutical world and vastly simplify daily life for patients with multiple ailments. So many of us take dozens of pills each day or week, and the organization, timing and monitoring of these multiple medications and their diverse drug interactions and requirements (morning, night, with or without food) is utterly exhausting.
But 3D printing is the epitome of precision. A 3D-printed pill, unlike a traditionally manufactured capsule, can house multiple drugs at once, each with different release times. This so-called “polypill” concept has already been tested for patients with diabetes and is showing great promise.
The bottom line
The medical world, in which treatments, organs and devices are an integral part, stands to be revolutionized by the vast promises of 3D printing. With precision, speed and a major slash in cost, the way we treat and manage the health of our bodies will never be the same. And that’s something to celebrate.
|
The Hunt for the lost city of Paititi
Robert Leave a Comment
What Paititi may have looked like.
We all love a good tale of a once incredibly rich city long since swallowed up by the jungle. Such stories have motivated explorers, treasure hunters and archaeologists for centuries, along with providing entertainment for everyone else. One of the most famous of all such jungle-clad lost cities is Paititi, the legendary lost city of the Incas that’s said to lie somewhere in the Peruvian Amazon east of the Andes.
I recently wrote about Paititi as part of an article on four legendary lost cities. The nearly five-hundred-year hunt for Paititi really deserves its own article though, so here it is in all its mosquito-riddled, snake-bitten glory.
Origins of the Legend
Paititi has been the stuff of legend for nearly five hundred years. Incan traditions mention a city deep in the jungle that became the last refuge of their empire after the Spanish conquest of Cusco. It’s said a vast amount of gold and other treasure was stored here, which sounds all too familiar to anyone who knows the legend of El Dorado. However, only a small amount of the Incas’ treasure was ever recovered from Cusco, with most of it vanishing. This makes it possible it was transported to a hidden city – far out of reach of the conquistadors.
Early Expeditions
Rumours of the lost city of Paititi spread in the decades after the Spanish conquest, and naturally led to expeditions being launched to find this city and claim its vast riches. The first expedition was led by Pedro de Candia in 1538.
He was a friend of Francisco Pizarro and the former Mayor of Lima, and had naturally already made a fortune during the conquest of the Incan Empire. He heard about a fabulously wealthy land called Ambaya east of the Andes, where the last Incan capital was rumoured to be hidden.
The expedition proved to be a disaster. The heavily laden conquistadors had little experience of true jungle travel and weren’t prepared for the challenges they would face. Their armour weighed them down and made the humid heat even worse, and many men ended up contracting malaria. To add to their troubles, the area they were travelling through was home to extremely hostile tribes (some were cannibals). Dozens of men died, and they never discovered any treasure, let alone a city in the Amazon. This set the tone for numerous expeditions over the next 150 years. Historical records indicate there were at least seven expeditions during this period, but the true number is probably much higher.
The lust for gold and glory led many explorers to an untimely demise within the steamy, dangerous confines of the Amazon Rainforest.
20th Century Expeditions
Paititi imagined in Shadow of the Tomb Raider
The discovery of Machu Picchu in 1911 helped to renew enthusiasm for discovering other lost cities in South America like Paititi. British explorer Colonel Fawcett went on several expeditions to discover the lost city of Z, another city he claimed was hidden in the Amazon. Although Fawcett’s expeditions were in a completely different area of the Amazon, his search and subsequent mysterious disappearance helped keep the hunt for lost cities in the Amazon firmly in the public’s mind.
In 1954 Hitler’s photographer (a man clearly no stranger to danger) Hans Ertl discovered ancient sites in Bolivia and claimed to have found Atlantis there.
Starting in the late 1950s, Peruvian explorer Carlos Landa led several expeditions to find Paititi. He discovered the Incan stone path in the Andes and was the first person to document the Huella fortress. He also led a number of expeditions in the Manu national park. He didn’t find the lost city, but wrote an interesting book about his search. I recommend reading about this real-life Indiana Jones.
In 1971 a French-American expedition led by Bob Nicolls travelled up the treacherous Rio Pantiacolla. After a month the group’s guide left, and the three carried on deeper into the jungle, never to be seen again. In 1972 another explorer found out they had been killed by a local tribe. Once again the hunt for Paititi had claimed more victims.
In 1979 an expedition did have some success when it discovered the ruins of Mameria. This was the first time Incan ruins had been found in Amazonia; at last there was scientific proof that Paititi could exist.
Late 20th Century to Present day
What secrets are hidden within this vast sea of trees?
Starting in 1994, a group of explorers led by Greg Deyermenjian and Dr Neuenschwander undertook the most thorough search yet for the city. In 1999 they discovered Incan ruins that were the furthest yet found directly north of Cusco. They also documented other ruins and signs of civilisation, including petroglyphs.
Then, two years later, an amazing discovery shed more light on the legend: an Italian archaeologist found a report about Paititi in the Vatican archives. The report was written by the Jesuit missionary Andrés López and dated 1600. It described how López had heard of an Incan city filled with gold and precious stones deep in the jungle, a city hidden from the outside world.
Some people dismissed the report as López simply recording tall tales he had heard while in South America. However, a number of other documents from the colonial period have been found that refer to Paititi.
In the past 20 years there have been many attempts to find the lost city. Some have been spur-of-the-moment expeditions where the adventure was the main motivation, while others have been incredibly professional. In 2002 Jacek Palkiewicz, who had discovered the source of the Amazon, launched one of the best-equipped expeditions to date. His team explored a remote region but discovered nothing apart from part of an old Incan road. On the expedition he even attempted to contact Russian cosmonauts on the International Space Station to help with plotting his next move! Interestingly, he later claimed the team did find the remains of some buildings, but he never provided any evidence of this.
Perhaps the closest to discovering Paititi so far is archaeologist Thierry Jamin and his team. They have used technology such as drones and LiDAR to help in the hunt, which is certainly easier than relying solely on traditional methods. In recent years they have found over 40 new archaeological sites in Peru, ranging from the remains of villages and forts through to small cities. They also think they know where the lost city of Paititi is located: on top of a remote, strange mountain with a flat top. In 2014 Expedition Unknown’s Josh Gates joined them to document part of their search. As of today, Jamin and his team are still carrying out research on the mountain.
Other people think Paititi lies in a different area, and there are currently other teams such as Paititi Research searching for the city. Since 2017 they have done very extensive research using modern methods such as GIS and data analysis to narrow down the possible location of the city to a few sites. It will be interesting to see what they discover.
Final Thoughts
Paititi is one of the most enduring and intriguing legendary lost cities. It has captivated countless people for nearly 500 years and provided the inspiration for numerous books, films and games. It has also led to many people losing their lives fruitlessly searching the jungle for the gold-filled last city of the Incas.
Based upon the evidence, it seems likely that Paititi is a real place, and a large amount of treasure could still be there. It’s possible that some Incas did retreat there after the Spanish conquest with part of their empire’s treasure. After the fall of Vilcabamba in 1572, Paititi would have been the last Incan city left. I suspect disease killed many of the city’s inhabitants, with the survivors eventually abandoning the city to the jungle.
Time to dust off the old fedora and book a flight to Peru. Who’s with me?
Adventure awaits.
Placing Chocolate on the Map
Does chocolate prove an ancient connection between pre-Columbian South American and Central American societies?
Of all the connections between societies on the planet, the one across the Darién Gap separating modern-day Central America and South America has always been among the most contentious. Many claim a complete disconnect between the two cultures. Chocolate, in fact, provides some evidence of pre-Columbian contact between the two cultural regions.
The origins of cacao growing
The wide variety of chocolate treats enjoyed all over the map of the world testifies to the world’s love of chocolate. Historians are united in their agreement that chocolate made its way to Europe from Mexico following the conquest of the Aztec Empire. What is less often considered is how the Aztecs and their ancestors came to acquire cacao beans in the first place.
Jumping back in time, it is useful to look at where the cacao tree originates. It is widely held that cacao trees originate from the Amazon region, which has led Peru, Ecuador, Colombia and Brazil to claim to be the original home of chocolate. A map of the Amazon basin reveals the large region, incorporating tracts of all four countries, that is the accepted area of origin for the cacao plant.
At the time of writing, the claim from Ecuador is backed up by the fact that the earliest known evidence of cacao use comes from the Santa Ana region bordering Peru. Whilst we do not know what the people of this culture called themselves, they are referred to as the Mayo-Chinchipe culture (not to be confused with the Mayan culture of Central America). The use of cacao in the region has been dated to over 5,000 years ago, well over a thousand years before Central American use.
The combination of first domestic use and evidence that the plant is indigenous to the Amazon basin makes the case for import to Central America almost certain. This greatly strengthens the argument for contact between the two cultures. Central American use came later and quickly grew in popularity, along with other South American natives such as chillies.
A look at a map of the Americas reveals that, whilst travel between the two regions is difficult, it is at least possible. Exactly how cacao got to Central America is a mystery, but it is impossible to deny that it happened.
Our World of Chocolate map shows the growing regions of cacao trees, and it is well documented how the crop has spread to other continents. Today a huge percentage of all chocolate production comes from Africa, within the same bean belt around the tropics. This was only possible after beans were exported from Central America.
So far we know of no explanation for how Europeans could have encountered chocolate in Central America without any pre-Columbian contact between Central and South America. Unless the aliens did it! Even then, they would be bad aliens for not delivering it to the rest of the planet.
All the artwork in this article is taken from our World of Chocolate map.
Click here to view the full map.
Ancient Publicity Wars in Cartography
A tale of two kings: the hidden propaganda within classic maps.
One of our motivations for drawing maps was a desire to avoid politics and simply dedicate ourselves to the production and redrawing of classic map images. When we started to redraw many classic images for our modern-day map of the Americas, we soon realized this was not going to happen.
The 1562 Gutierrez map of the Americas is one of the most iconic maps of the Renaissance era and is well known for its incredible sea monsters and classic imagery of sea battles. However, it also contains a publicity campaign designed to bestow divine status on one nation’s king, along with a rather offensive representation of his rival.
Maps do not get made for free, and the more we learnt about the Gutierrez map of 1562, the more we saw how biased it is. The map was commissioned for the royal court of Spain, and this influence can be seen in the Latin texts declaring the vast majority of the Americas as rightfully belonging to Spain. The inscriptions include declarations of the ‘discovery’ of the land by Columbus in the name of the Spanish royal family. A further clear indication of the political bias within the map can be seen in how two kings are represented.
The King of Spain is portrayed riding across the Atlantic in a sea chariot pulled by horses and the sea god Neptune; for good measure, he has angels above protecting him. The portrayal of the King of Portugal is not so complimentary: he is represented as the old man of the sea, riding a sea monster and holding a shield of Portugal. His position on the map is designed to indicate the thin sliver of the Americas assigned to him by the Pope along the line of demarcation from the Treaty of Tordesillas.
We are sure you can imagine that this map was not a favorite at the royal court of Portugal. Whilst the images are now admired as some of the most popular of all map imagery, their original significance has been overlooked by many map collectors.
Along with the sea monsters and treasure ships, these two portrayals of the kings of Spain and Portugal are among the best-liked images, according to customers buying our map of the Americas.
How the Name America Came into Being
Have you ever wondered why the Americas came to be called the Americas and not the Columbias?
Amerigo Vespucci, after whom America is named
The LESR® principle
The LESR® 14 (pronounced “Laser 14”) is a ring of 10 LEDs with a total diameter of 40 mm, slightly resembling a clock. 4 more LEDs are placed in the middle of the ring, arranged like a “+”, which makes a total of 14 LEDs. The LEDs are RGB and can produce a theoretical 16.7 million colours.
Inside the “+” – so small that you can hardly see it – is an infrared-based gesture sensor that comes from smartphone technology. It can analyse ambient light, proximity, and gestures.
The sensor data is collected by a small processor unit (the LESR® controller), which is completely programmable and uses the LEDs to give the user instant feedback.
We are used to feedback. The first feedback we get is our nerves telling us that a finger has touched a surface; at this point we expect a reaction. Likewise, a good touch-screen application gives the user instant feedback on what has been selected, normally by highlighting the selection, for example by changing the colour or size of an element. This highlighting has two vital functions: it shows where (and what) the user touched, and it signals us to wait for a reaction that does not always happen immediately.
On a computer with a mouse, we also use the mouseover event to make the selection visible before we click. When a finger moves towards an item on the screen, the brain might still be deciding what exactly to touch. The highlighting gives the user time to reconsider and finalize the decision.
LESR® uses colour, forms and signs to match the content on the screen. This way the user can easily make a connection between the content and which LESR® ring to use. You can easily imagine this with a rating system of smileys in the typical three (or five) colours of red, green and yellow; the association for the user is easy.
But when using a sensor, where does the feedback come from? Where exactly does the user have to put a finger to initiate an action? And did the touch register or miss? This is where the LEDs shine, in multiple roles. Apart from linking to the content by colour, the ring form automatically makes the user direct a finger or hand towards the middle of the ring. The LEDs also give immediate feedback as soon as the sensor is triggered, by changing their brightness or colour or even by playing an animation. Because of this feedback, the user knows they are interacting at the right position.
In our examples we like to use an animation we call “clock”. As soon as the user triggers the sensor, the LEDs intensify their brightness one by one, building a clockwise circle and letting the user know to keep a finger in this position to complete the action. The animation also works like the highlighting function: the user has time to reconsider and can remove the finger to cancel the action. A minimal sketch of this logic follows below.
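To make the behaviour concrete, here is a minimal C++ sketch of such a hold-to-confirm “clock” loop. This is an illustration only, not the LESR® firmware: sensorTriggered(), ledSetBrightness() and delayMs() are hypothetical hardware-abstraction stubs standing in for whatever driver calls the real controller provides.

    #include <chrono>
    #include <cstdint>
    #include <thread>

    constexpr int RING_LEDS = 10;   // LEDs in the outer ring
    constexpr int STEP_MS   = 100;  // 10 steps x 100 ms = 1 s hold time

    // Hypothetical hardware-abstraction stubs; the real LESR controller
    // API is not shown here, so replace these with actual driver calls.
    bool sensorTriggered() { return true; }                 // proximity still active?
    void ledSetBrightness(int /*index*/, uint8_t /*v*/) {}  // 0 = off, 255 = full
    void delayMs(int ms) {
        std::this_thread::sleep_for(std::chrono::milliseconds(ms));
    }

    // Returns true when the user holds long enough to confirm the action,
    // false when the finger is withdrawn early (cancel).
    bool runClockAnimation() {
        for (int i = 0; i < RING_LEDS; ++i) {
            if (!sensorTriggered()) {                       // finger removed: cancel
                for (int j = 0; j <= i; ++j) ledSetBrightness(j, 0);
                return false;
            }
            ledSetBrightness(i, 255);                       // light next LED clockwise
            delayMs(STEP_MS);
        }
        return true;                                        // full circle: confirm
    }

    int main() { return runClockAnimation() ? 0 : 1; }

The cancel branch mirrors the highlighting idea above: lifting the finger before the circle completes simply switches the lit LEDs off again and aborts the action.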
The result is a fully guided, easy-to-understand, intuitive and cancellable sensor interaction.
Moreover, the 4 LEDs inside the ring can be used to add further feedback or even more precise guidance to the sensor.
Revision of the Climate Change Act: An ambitious mitigation path to climate neutrality in 2045
On 24 June 2021, the German Bundestag adopted the revised Federal Climate Change Act. This revision became necessary as the result of a ruling by the Federal Constitutional Court and the European Union’s new 2030 climate target. The revised Climate Change Act sets higher national mitigation targets for the years 2030 (65 percent) and 2040 (88 percent) and the goal of net climate neutrality by 2045. It also adjusts the maximum permissible annual emission budgets for sectors (for example energy, buildings, transport, industry and agriculture) and lays down annual cross-sectoral mitigation targets between 2030 and 2040. Additionally, the new Act contains provisions on the contribution of the land-use sector (for example peatlands and forests) to climate change mitigation and assigns supplementary tasks to the Council of Experts on Climate Change. The Climate Change Act provides the legal framework for climate policy in Germany. By intensifying efforts to tackle climate change up to 2045 and sharing the burden of these efforts more equitably, it secures the fundamental rights to freedom of younger generations called for by the Federal Constitutional Court.